\section{Introduction} Experimental studies of the magnetic and electronic properties of the cuprates continue to produce new and unexpected results that spur on theorists in their attempts to understand and describe the underlying orderings and excitations that may be involved in the pairing leading to high-temperature superconductivity. One experiment, which motivated the work that we present in this paper, concerned the (zero-field) magnetic susceptibility of undoped La$_2$CuO$_4$ -- it was found \cite{Lavrov} that the magnetic response of undoped La$_2$CuO$_4$~was highly anisotropic, and that this anisotropy persisted well above the N\'eel ordering temperature. Further, the authors found that this anisotropy persisted in the weakly doped state. The importance of this result can be recognized if one notes the ongoing efforts of various researchers in understanding the origin and nature of the so-called stripe correlations that are found in some cuprates (for a recent review of this problem, see reference \cite{jtran05}). That is, if the undoped state has a highly anisotropic magnetic susceptibility, can it really be that surprising that ``spin stripes'' are also present when the system is doped, and if not, what role does the anisotropic magnetic response play in the formation of stripe-like structures? Previously, we examined \cite{Tabunshchyk} the origin of this magnetic anisotropy by considering a single CuO$_2$ plane, utilizing a magnetic Hamiltonian that contains spin-orbit generated Dzyaloshinskii-Moriya (DM) interactions \cite{Dzyaloshinskii,Moriya} and near-neighbour superexchange. If one includes both the symmetric and anti-symmetric DM interactions, one finds that a true phase transition (at a non-zero temperature) occurs to an antiferromagnetic (AFM) state, wherein the AFM moment lies in the plane, with a weak parasitic ferromagnetic moment generated by a small canting of the moments out of the plane. Within mean-field theory, linear spin-wave theory, and the RPA utilizing the Tyablikov decoupling scheme, we determined the magnetic susceptibility, and found (i) that it was indeed highly anisotropic, even when the DM interactions were small compared to the near-neighbour intraplanar superexchange, and (ii) that quantum fluctuations produced a substantial modification of the susceptibility as one used a more and more ``sophisticated'' theory \cite{Tabunshchyk}. Other potentially important terms (\emph{e.g.}, cyclic ring exchange \cite{Katanin}) that could have been included in that paper are discussed at the end of this paper. In this report we focus on an augmented model that now includes the third dimension and the full body-centred orthorhombic structure of La$_2$CuO$_4$. Our motivations for doing so are as follows. I -- Although one can produce a true phase transition within a model that accounts for only a single plane, the interplanar exchange interactions can also produce a phase transition in approximately the same temperature range. That is, some researchers have suggested that the (unfrustrated) interplanar exchange and the DM interactions are of comparable strength, and thus there is no good reason to exclude either of these terms from our model Hamiltonian (see \cite{Johnston} and references therein). II -- As we discuss below, there are two different ``near''-neighbour interplanar exchange constants, and these differ between crystallographic directions.
Thus, this difference will be a source of magnetic anisotropy, and it is necessary to determine the extent of this anisotropy through a calculation that includes both the DM interactions and the interplanar exchange. III -- As mentioned above, our previous work noted the strong effect of quantum fluctuations in a two-dimensional model. Since one expects such effects to be larger the lower the dimensionality of the system, it is possible that this behaviour is reduced in a full three-dimensional model. In this paper, again with the RPA utilizing the Tyablikov decoupling scheme, we have completed the requisite calculations for this more complicated but also more realistic magnetic Hamiltonian. Our paper is organized as follows. In the next section we summarize the formalism necessary to analyze this problem; although somewhat similar formalism is presented in our previous paper \cite{Tabunshchyk}, when going from 2D to 3D the analysis is much more complicated, and it is thus necessary to present the required equations that must be solved. (Some aspects of the calculations have been put into various appendices.) In the subsequent section we present the results of a detailed and exhaustive numerical study of the resulting formalism for reasonable parameter values. Then we suggest a simpler model Hamiltonian, one for a simple tetragonal structure which avoids the frustrated interplanar AFM interactions of the original body-centred orthorhombic structure. Finally, we conclude the paper by discussing the key results that we have obtained, and then provide a comparison between the predictions of our theory and the experiments of Lavrov, Ando and co-workers \cite{Lavrov}. \section{Model and Methods} \label{sec:Model} \subsection{Model Hamiltonian} \label{subsec:Model_IR} We describe the magnetic structure of the La$_2$CuO$_4$~crystal in the low-temperature orthorhombic (LTO) phase by using an effective spin-$\frac12$ Hamiltonian for the Cu$^{2+}$ magnetic ions of the CuO$_2$~planes defined by \numparts \begin{eqnarray} \label{eq:H_DMa} \fl H&=& J\sum_{\langle i_1,j_1\rangle}{\bf S}_{i_1}\cdot{\bf S}_{j_1} +\sum_{\langle i_1,j_1\rangle}{\bf D}_{i_1j_1}\cdot({\bf S}_{i_1}\times{\bf S}_{j_1}) +\sum_{\langle i_1,j_1\rangle}{\bf S}_{i_1}\cdot \tilde{\Gamma}_{i_1j_1}\cdot{\bf S}_{j_1}\\ \label{eq:H_DMb} \fl &+&J\sum_{\langle i_2,j_2\rangle}{\bf S}_{i_2}\cdot{\bf S}_{j_2} +\sum_{\langle i_2,j_2\rangle}{\bf D}_{i_2j_2}\cdot({\bf S}_{i_2}\times{\bf S}_{j_2}) +\sum_{\langle i_2,j_2\rangle}{\bf S}_{i_2}\cdot \tilde{\Gamma}_{i_2j_2}\cdot{\bf S}_{j_2}\\ \label{eq:H_DMc} \fl &+&J_{\perp}\bigg\{\! \sum_{\langle i_1,i_2\rangle}\!{\bf S}_{i_1}{\cdot}{\bf S}_{i_2} {+}\sum_{\langle j_1,j_2\rangle}\!{\bf S}_{j_1}{\cdot}{\bf S}_{j_2} \!\bigg\} +J'_{\perp}\bigg\{\! \sum_{\langle i_1,j_2\rangle}\!{\bf S}_{i_1}{\cdot}{\bf S}_{j_2} {+}\sum_{\langle j_1,i_2\rangle}\!{\bf S}_{j_1}{\cdot}{\bf S}_{i_2} \!\bigg\}~~. \end{eqnarray} \endnumparts In this equation ${\bf S}_{i}$ denotes a spin at site $i$; sites labelled $i_1$ and $j_1$ are in the ``first'' plane while $i_2$ and $j_2$ are in the ``second'' (neighbouring) plane, and the notation $\langle i_\alpha,j_\beta\rangle$ refers to near-neighbour sites. This Hamiltonian is written within the $xyz$ orthorhombic coordinate system shown in figure~\ref{fig:lattice} (see right-hand side) and in figure~\ref{fig:vectors}(a), in what we refer to as the ``initial representation'' in the LTO phase.
The various terms in the magnetic Hamiltonian given in equation (1) correspond to the following interactions. As was mentioned in the introduction, the orthorhombic distortion in the La$_2$CuO$_4$~crystal, together with the spin-orbit coupling, leads to the antisymmetric Dzyaloshinskii-Moriya ($\bf D$~term) and the symmetric pseudo-dipolar ($\tilde{\Gamma}$~term) interactions within each CuO$_2$~plane \cite{Coffey,Aharony,Koshibae}. These interactions, together with the superexchange ($J$), can give rise to an ordered phase within a \emph{single} CuO$_2$~plane at some nonzero temperature \cite{Tabunshchyk}. In this long-range ordered state the Cu spins are aligned antiferromagnetically in the $y$-direction, with a small canting out of the plane. Therefore, each CuO$_2$~plane in a La$_2$CuO$_4$~crystal exhibits a net ferromagnetic moment, so-called weak ferromagnetism (WF), in the direction parallel to the $c$-axis of the {\it{Bmab}} space group ($z$-axis in the initial coordinates). Due to the weak antiferromagnetic coupling between the planes, the net ferromagnetic moments of adjacent CuO$_2$~planes are antiferromagnetically aligned and the system possesses no net moment. Each Cu spin has four near-neighbour sites above it and four below it in the neighbouring planes. If all of these distances were equal the system would be frustrated, because the ordering in one plane would not lift the degeneracy of the possible orderings in adjacent planes. However, in the LTO phase these distances are not all equal, and thus the interplanar coupling between nearest-neighbour spins depends on which pair of neighbouring sites is considered. That is, due to the small orthorhombic distortion (relative to the high-temperature body-centred tetragonal phase) some near-neighbour sites are closer together than other pairs (which are, technically, next-near-neighbour sites). In what follows we refer to the sites shown in figure~\ref{fig:lattice}, which allows for these ideas and the interplanar terms in equation~(\ref{eq:H_DMc}) to be made clear. The distance between the $j_1$ and $j_2$ sites is slightly less than the distance between $j_1$ and $i_2$, and thus the superexchange couplings are different; in this paper we thus specify that neighbouring spins in the $x-z$ plane ($J_{\perp}$) have a larger superexchange than do neighbouring spins in the $y-z$ plane ($J'_{\perp}$): $|J_{\perp}|>|J'_{\perp}|$ (see, for example, the discussion in reference \cite{Xue}). As discussed in the introduction, and quantified in the next section, this difference immediately leads to an enhanced anisotropy of the magnetic susceptibility. We schematically illustrate the magnetic structure of the La$_2$CuO$_4$~crystal within the ordered state ($T<T_N$) in figure~\ref{fig:lattice}, where the arrows represent the Cu spin structure; this ordered state is quadripartite, as we now explain. In our notation we label the sites in a plane with the spin canting up as $i_1$ and $j_1$, and correspondingly the sites of the nearest-neighbour planes with the spin canting down are labelled by the indices $i_2$ and $j_2$. In each plane, sites with label $i$ differ from sites with label $j$ by the spin orientation within the antiferromagnetic order. Clearly, the magnetic structure of the La$_2$CuO$_4$~crystal in the ground state can be represented by four different sublattices with different spin orientations, and in our calculations we will follow the notation that $i_1$-sites belong to sublattice~1, $j_1$-sites to sublattice~2, $i_2$-sites to sublattice~3, and $j_2$-sites to sublattice~4.
The interactions of the spins of sublattices 1 and 2 with the nearest-neighbour spins of sublattices 3 and 4, respectively, are described by the $J_{\perp}$ term, and the interactions with the spins of sublattices 4 and 3, respectively, are described by the $J'_{\perp}$ term. Each magnetic ion interacts with the four nearest-neighbour sites within its plane and with eight ions (four above and four below) from the neighbouring planes. \begin{figure}[h] \centerline{\epsfxsize 1.\textwidth\epsfbox{lattice_3D_fig2.eps}} \caption{\label{fig:lattice}(Colour online) Magnetic structure of the La$_2$CuO$_4$~crystal. Sites having different spin orientations are labelled by the indices $i_1$, $j_1$ in the plane with the WF moment in the positive $z$ direction, and $i_2$, $j_2$ in the plane with the WF moment in the negative $z$ direction. For each set of sites the $\sigma$ spin coordinate system within the characteristic representation (CR) is shown. The thin net is shown only to simplify the visualization of the canting in the spin structure.} \end{figure} To summarize, the magnetic Hamiltonian presented above describes the magnetic interactions within each CuO$_2$~plane through its first and second parts (equations~(\ref{eq:H_DMa},~\ref{eq:H_DMb})), while the third part (equation~(\ref{eq:H_DMc})) takes into account the weak interplanar superexchange couplings. The structures of the Dzyaloshinskii-Moriya (DM) and the pseudo-dipolar interactions for the LTO phase are given by \begin{eqnarray} \label{eq:DM} {\bf D}_{ab}=\frac d{\sqrt{2}}(-1,1,0),\qquad {\bf D}_{ac}=\frac d{\sqrt{2}}(-1,-1,0), \end{eqnarray} and \begin{eqnarray} \label{eq:Gamma} \tilde{\Gamma}_{ab}= \left( \begin{array}{ccc} \Gamma_1 & \Gamma_2 & 0 \\ \Gamma_2 & \Gamma_1 & 0 \\ 0 & 0 & \Gamma_3 \end{array} \right),\qquad \tilde{\Gamma}_{ac}= \left( \begin{array}{ccc} \Gamma_1 & -\Gamma_2 & 0 \\ -\Gamma_2 & \Gamma_1 & 0 \\ 0 & 0 & \Gamma_3 \end{array} \right), \end{eqnarray} within the initial coordinate system \cite{Tabunshchyk}. The DM vector given in equation~(\ref{eq:DM}) alternates in sign on successive bonds in the $a-b$ and in the $a-c$ direction of each plane, as is represented schematically by the double arrows in figure~\ref{fig:vectors}(b). \begin{figure}[h] \centerline{ \epsfxsize 0.8\textwidth\epsfbox{vectors_fig1.eps}} \hspace*{4cm}(a)\hspace*{4cm}(b) \caption{\label{fig:vectors} (a) Coordinates in the initial representation. (b) Thin arrows --- the Cu spins, and open arrows --- the DM vectors.} \end{figure} The thin arrows in this figure describe the in-plane antiferromagnetic order of the Cu spins, which are canted up/down out of the planes by a small angle. In the classical ground state of the LTO phase the absolute values of the canting angles are equal on all sites and are given by the expression \begin{eqnarray} \label{eq:canting-angle} \theta = \frac 12\tan^{-1}\Big\{\frac{d/\sqrt{2}}{J+\frac 12(\Gamma_1+\Gamma_3)-\frac 12J'_{\perp}}\Big\}. \end{eqnarray} Following the scheme described in our earlier work \cite{Tabunshchyk}, we perform rotations of the spin coordinate system in such a way that the new quantization axis ($\sigma^z$) is in the direction of a classical moment characterizing the ground state. Hereinafter we will refer to such a representation as the ``characteristic representation'' (CR).
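As a concrete numerical illustration of equations~(\ref{eq:DM})-(\ref{eq:canting-angle}), the following minimal Python sketch builds the DM vectors and pseudo-dipolar tensors and evaluates the classical canting angle; the parameter magnitudes are representative values only, and the equal split of the pseudo-dipolar anisotropy between $\Gamma_1$ and $\Gamma_3$ is an assumption made purely for illustration.
\begin{verbatim}
import numpy as np

# Representative magnitudes in units of J (illustrative only)
J = 1.0
d = 0.02 * J                    # DM strength
Gamma1, Gamma2, Gamma3 = 0.21e-3 * J, 0.0, -0.21e-3 * J  # assumed split
Jprime_perp = 0.25e-3 * J       # J'_perp, the weaker interplanar coupling

# DM vectors on the ab and ac bonds (eq:DM)
D_ab = d / np.sqrt(2) * np.array([-1.0,  1.0, 0.0])
D_ac = d / np.sqrt(2) * np.array([-1.0, -1.0, 0.0])

# Pseudo-dipolar tensors (eq:Gamma)
Gamma_ab = np.array([[Gamma1,  Gamma2, 0.0],
                     [Gamma2,  Gamma1, 0.0],
                     [0.0,     0.0,    Gamma3]])
Gamma_ac = np.array([[ Gamma1, -Gamma2, 0.0],
                     [-Gamma2,  Gamma1, 0.0],
                     [ 0.0,     0.0,    Gamma3]])

# Classical canting angle (eq:canting-angle)
theta = 0.5 * np.arctan((d / np.sqrt(2)) /
        (J + 0.5 * (Gamma1 + Gamma3) - 0.5 * Jprime_perp))
print(theta, d / (2 * np.sqrt(2) * J))  # exact vs small-angle estimate
\end{verbatim}
The printed comparison confirms that for anisotropies of this size the canting angle is tiny and well approximated by $d/(2\sqrt{2}J)$.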
Since four different types of spin orientations are present in the magnetic structure of the La$_2$CuO$_4$~crystal, we introduce four different spin coordinate systems (see the left-hand side of figure~\ref{fig:lattice}) given by the transformations in equations~(\ref{eq:1_rot1}-\ref{eq:1_rot4}) of Appendix~A. Thus, each sublattice has its own spin coordinate system. The model Hamiltonian in terms of these spin operators $\sigma$ in the CR is given by \begin{eqnarray} \nonumber \fl H_{\rm CR} &=& \sum_{\langle i_1,j_1\rangle_{ab}} \left\{ A(\sigma^+_{i_1}\sigma^-_{j_1}+\sigma^-_{i_1}\sigma^+_{j_1}) -B^*\sigma^+_{i_1}\sigma^+_{j_1} -B\sigma^-_{i_1}\sigma^-_{j_1} -J_2\sigma^z_{i_1}\sigma^z_{j_1} \right\}\\ \nonumber \fl &+& \sum_{\langle i_1,j_1\rangle_{ac}} \left\{ A(\sigma^+_{i_1}\sigma^-_{j_1}+\sigma^-_{i_1}\sigma^+_{j_1}) +B\sigma^+_{i_1}\sigma^+_{j_1} +B^*\sigma^-_{i_1}\sigma^-_{j_1} -J_2\sigma^z_{i_1}\sigma^z_{j_1} \right\}\\ \label{eq:H_CR} \fl &+&\sum_{\langle i_2,j_2\rangle_{ab}} \left\{ A(\sigma^+_{i_2}\sigma^-_{j_2}+\sigma^-_{i_2}\sigma^+_{j_2}) +B\sigma^+_{i_2}\sigma^+_{j_2} +B^*\sigma^-_{i_2}\sigma^-_{j_2} -J_2\sigma^z_{i_2}\sigma^z_{j_2} \right\}\\ \nonumber \fl &+& \sum_{\langle i_2,j_2\rangle_{ac}} \left\{ A(\sigma^+_{i_2}\sigma^-_{j_2}+\sigma^-_{i_2}\sigma^+_{j_2}) -B^*\sigma^+_{i_2}\sigma^+_{j_2} -B\sigma^-_{i_2}\sigma^-_{j_2} -J_2\sigma^z_{i_2}\sigma^z_{j_2} \right\}\\ \nonumber \fl &+& \!\!\sum_{\langle i_1,j_2\rangle} \left\{\frac14({J'_{\perp}}{+}J_p)(\sigma^+_{i_1}\sigma^-_{j_2}+\sigma^-_{i_1}\sigma^+_{j_2}) +{\rm {i}}\frac14({J'_{\perp}}{-}J_p)(\sigma^+_{i_1}\sigma^+_{j_2}-\sigma^-_{i_1}\sigma^-_{j_2}) +J_p\sigma^z_{i_1}\sigma^z_{j_2} \right\}\\ \nonumber \fl &+& \!\!\sum_{\langle j_1,i_2\rangle} \left\{\frac14({J'_{\perp}}{+}J_p)(\sigma^+_{j_1}\sigma^-_{i_2}+\sigma^-_{j_1}\sigma^+_{i_2}) +{\rm {i}}\frac14({J'_{\perp}}{-}J_p)(\sigma^+_{j_1}\sigma^+_{i_2}-\sigma^-_{j_1}\sigma^-_{i_2}) +J_p\sigma^z_{j_1}\sigma^z_{i_2} \right\}\\ \nonumber \fl &+& J_{\perp}\!\!\sum_{\langle i_1,i_2\rangle} \left\{\frac{\rm {i}}{2}(\sigma^+_{i_1}\sigma^+_{i_2}{-}\sigma^-_{i_1}\sigma^-_{i_2}){-} \sigma^z_{i_1}\sigma^z_{i_2}\right\} + J_{\perp}\!\!\sum_{\langle j_1,j_2\rangle} \left\{\frac{\rm {i}}{2}(\sigma^+_{j_1}\sigma^+_{j_2}{-}\sigma^-_{j_1}\sigma^-_{j_2}){-} \sigma^z_{j_1}\sigma^z_{j_2}\right\}, \end{eqnarray} where we have used the following definitions: \begin{eqnarray} \label{eq:AB} \fl && A = \frac{J_1-J_3}4,\qquad B = \frac{J_4}2+{\rm i} \frac{J_1+J_3}4,\\ \label{eq:J1} \fl &&J_1=J+\Gamma_1, \qquad J_p= J'_{\perp}\cos2\theta,\\ \label{eq:J2} \fl && J_2=\hphantom{-} \frac {\Gamma_1{-}\Gamma_3}2+\left(J+\frac {\Gamma_1{+}\Gamma_3}2\right)\cos2\theta +\frac{d}{\sqrt{2}}\sin2\theta,\\ \label{eq:J3} \fl && J_3=-\frac {\Gamma_1{-}\Gamma_3}2+\left(J+\frac {\Gamma_1{+}\Gamma_3}2\right)\cos2\theta +\frac{d}{\sqrt{2}}\sin2\theta,\\ \label{eq:J4} \fl && J_4=-\Gamma_2\sin\theta+\frac d{\sqrt{2}}\cos\theta. \end{eqnarray} The subscripts $\langle i,j\rangle_{ab}$ and $\langle i,j\rangle_{ac}$ in the summations of equation~(\ref{eq:H_CR}) denote the nearest neighbours in the $ab$ and $ac$ directions, respectively, as shown in figure~\ref{fig:vectors}(b). \subsection{Mean field analysis} In this subsection we present the results of the mean field approximation (MFA) for the above Hamiltonian, obtained by following the standard decoupling.
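For the numerical work that follows it is convenient to have the effective couplings of equations~(\ref{eq:AB}-\ref{eq:J4}) in explicit numerical form. The minimal sketch below, which continues the illustrative (not fitted) parameter values of the previous sketch, evaluates them; the interplanar values are again assumptions chosen only to satisfy $|J_{\perp}|>|J'_{\perp}|$.
\begin{verbatim}
import numpy as np

# Illustrative values in units of J, as in the previous sketch
J, d = 1.0, 0.02
Gamma1, Gamma2, Gamma3 = 0.21e-3, 0.0, -0.21e-3   # assumed split
J_perp, Jprime_perp = 0.75e-3, 0.25e-3            # assumed values

theta = 0.5 * np.arctan((d / np.sqrt(2)) /
        (J + 0.5 * (Gamma1 + Gamma3) - 0.5 * Jprime_perp))
c2, s2 = np.cos(2 * theta), np.sin(2 * theta)

# Effective couplings of the CR Hamiltonian (eq:J1)-(eq:J4)
J1 = J + Gamma1
Jp = Jprime_perp * c2
J2 = (Gamma1 - Gamma3) / 2 + (J + (Gamma1 + Gamma3) / 2) * c2 \
     + d / np.sqrt(2) * s2
J3 = -(Gamma1 - Gamma3) / 2 + (J + (Gamma1 + Gamma3) / 2) * c2 \
     + d / np.sqrt(2) * s2
J4 = -Gamma2 * np.sin(theta) + d / np.sqrt(2) * np.cos(theta)

# Coefficients of the transverse terms (eq:AB)
A = (J1 - J3) / 4
B = J4 / 2 + 1j * (J1 + J3) / 4
print(J1, J2, J3, J4, A, B)
\end{verbatim}
For anisotropies of this size $A$ is tiny while $\Im B\approx J/2$, making explicit that the anisotropies enter $H_{\rm CR}$ only as small corrections to a dominant isotropic scale.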
In the MFA, each product of spin operators in equation (1) is decoupled according to \begin{eqnarray} \label{eq:MFA_decoupling} \sigma_i^a\sigma_j^b \rightarrow \langle \sigma_i^a\rangle ~ \sigma_j^b ~+~ \sigma_i^a ~ \langle \sigma_j^b\rangle ~-~ \langle \sigma_i^a\rangle ~ \langle \sigma_j^b\rangle, \end{eqnarray} where $a$ and $b$ can be any of $x,y,z$. Then the equation for the order parameter, to be denoted by $\eta$, within the MFA reads \begin{eqnarray} \label{eq:sigma_MFA} \eta\equiv\langle\sigma^z\rangle = \frac 12 \tanh \left\{\frac{\beta}2 {\cal Z}[J_2+J_{\perp}-J_p]\langle\sigma^z\rangle\right\}, \end{eqnarray} where ${\cal Z}=4$ is the in-plane coordination number, and $\beta = 1/T$. From this equation the N\'eel temperature at which $\eta$ vanishes can be written down immediately as \begin{eqnarray} \label{eq:T_N^MFA} T^{MFA}_{N} = J_2+J_{\perp}-J_p~~. \end{eqnarray} By applying a magnetic field sequentially in the $x$, $y$, and $z$ directions of each coordinate system within the CR we can find the transverse and longitudinal components of the susceptibility within all four sublattices. Using the relation between the components of the susceptibility in the initial and characteristic representations given in equations~(\ref{eq:CRtoINx}-\ref{eq:CRtoINz}), we obtain the final result for the zero-field uniform susceptibility within the MFA below the ordering temperature ($T<T^{MFA}_N$): \begin{eqnarray} \label{eq:MFA_SxSx} \fl\chi^{x~ MFA}&=&\frac 14 \frac 1{J_1 {+} J_2 {+} 2J_{\perp} {+} (J'_{\perp} {-} J_p)},\\ \fl\chi^{y~ MFA}&=&\frac 14 \frac {\sin^2(\theta)} {J_2 {-} J_3 {+} 2J_{\perp}{-} 2J_p} +\frac{\cos^2\theta}{4} \frac{{\rm {sech}}^2\left\{\frac{\beta}2{\cal Z}\eta J_{mfa}\right\}} {T+[J_2{+}J_{\perp}{+}J_p]\: {\rm {sech}}^2\left\{\frac{\beta}2{\cal Z}\eta J_{mfa}\right\}}, \label{eq:MFA_SySy}\\ \fl\chi^{z~ MFA}&=&\frac 14 \frac {\cos^2(\theta)} {J_2 {+} J_3 {+} 2J_{\perp}} +\frac{\sin^2\theta}{4} \frac{{\rm {sech}}^2\left\{\frac{\beta}2{\cal Z}\eta J_{mfa}\right\}} {T-[J_2{-}J_{\perp}{+}J_p]\: {\rm {sech}}^2\left\{\frac{\beta}2{\cal Z}\eta J_{mfa}\right\}}, \label{eq:MFA_SzSz} \end{eqnarray} where we define \begin{eqnarray} \label{J_mfa} J_{mfa}= J_2+J_{\perp}-J_p~~, \end{eqnarray} and the order parameter $\eta$ is given by equation~(\ref{eq:sigma_MFA}). We used the ``\emph{mfa}'' subscript in equation~(\ref{J_mfa}) because this combination determines the effective interaction, and thus the N\'eel temperature, within the mean field theory (see equation~(\ref{eq:sigma_MFA})). The final results for the components of the susceptibility in the initial representation at high temperatures, that is above the ordering temperature ($T>T^{MFA}_N$), are \begin{eqnarray} \label{eq:MFA_SxSx_para} \chi^{x~ MFA}&=&\frac 14 \frac 1{T + J_1 + J_{\perp} + J'_{\perp}},\\ \label{eq:MFA_SySy_para} \chi^{y~MFA}&=&\frac 14 \frac {\sin^2(\theta)} {T - J_3 + J_{\perp} - J_p} +\frac 14 \frac{\cos^2(\theta)}{T+J_2+J_{\perp}+J_p},\\ \label{eq:MFA_SzSz_para} \chi^{z~MFA}&=&\frac 14 \frac {\cos^2(\theta)} {T + J_3 + J_{\perp} + J_p} +\frac 14 \frac{\sin^2(\theta)}{T-J_2+J_{\perp}-J_p}. \end{eqnarray} In the limit $T\to T^{MFA}_{\rm N}$ we find that the $x$ component of the susceptibility is continuous at the transition and is given by equation~(\ref{eq:MFA_SxSx}). The $y$ component of the susceptibility at the transition temperature reads \begin{eqnarray} \chi^{y~ MFA}\bigg|_{T\to T^{MFA}_{\rm N}} =\frac 14 \frac {\sin^2(\theta)}{J_2 - J_3 + 2J_{\perp} - 2J_p} +\frac{\cos^2\theta}{8}\frac{1}{J_{\perp} + J_2}.
\label{eq:MFA_SySy_Tn} \end{eqnarray} Note that, in contrast to the pure 2D case \cite{Tabunshchyk}, the $z$ component of the susceptibility does not diverge at the N\'eel point and is also continuous at the transition: \begin{eqnarray} \chi^{z~ MFA}\bigg|_{T\to T^{MFA}_{\rm N}} =\frac 14 \frac {\cos^2(\theta)}{J_2 + J_3 + 2J_{\perp}} +\frac 18 \frac{\sin^2(\theta)}{J_{\perp} - J_p}. \label{eq:MFA_SzSz_Tn} \end{eqnarray} \subsection{Random phase approximation} In this part of the paper we use the technique of double-time temperature-dependent Green's functions within the framework of the random-phase approximation (RPA). In the imaginary-time formalism, the temperature-dependent Green's function and the corresponding equation of motion for two Bose-type operators read \begin{eqnarray} \label{eq:defG} \fl G_{AB}(\tau) = \langle T_\tau A(\tau)B(0)\rangle,\quad \frac{{\rm d}G_{AB}(\tau)}{{\rm d}\tau}=\delta(\tau)\langle[A,B]\rangle +\langle T_\tau[H(\tau),A(\tau)]B(0)\rangle, \end{eqnarray} where $A(\tau)={\rm e}^{H\tau}A{\rm e}^{-H\tau}$ is the operator in the Heisenberg representation for imaginary time argument $\tau$, and $T_\tau$ is the time-ordering operator. By using the method proposed originally by Liu \cite{Liu}, we employ the perturbed Hamiltonian \begin{eqnarray} \label{eq:H_pert} H^f_1 = H_{\rm CR} - f \sum_{i'_1}\sigma^z_{i'_1},\qquad i'_1 \in \mbox{ sublattice 1} \end{eqnarray} to find the longitudinal components of the susceptibility in the CR. In equation~(\ref{eq:H_pert}), $f$ is a small fictitious field applied to the spins of \emph{sublattice 1 only}. In this paper we are studying the zero-field uniform magnetic susceptibility; therefore we restrict $f$ to be constant and static. The Green's functions to be used in the present calculations are \begin{eqnarray} \nonumber \fl &&G^f_{i_1j_1}(\tau)=\langle T_{\tau}\sigma^+_{i_1}(\tau)\sigma^-_{j_1}(0)\rangle^f,\quad G^{f-}_{i_1j_1}(\tau)=\langle T_{\tau}\sigma^-_{i_1}(\tau)\sigma^-_{j_1}(0)\rangle^f,\\ \label{eq:GGG} \fl &&G^f_{j'_1j_1}(\tau)=\langle T_{\tau}\sigma^+_{j'_1}(\tau)\sigma^-_{j_1}(0)\rangle^f,\quad G^{f-}_{j'_1j_1}(\tau)=\langle T_{\tau}\sigma^-_{j'_1}(\tau)\sigma^-_{j_1}(0)\rangle^f,\\ \nonumber \fl &&G^f_{i_2j_1}(\tau)=\langle T_{\tau}\sigma^+_{i_2}(\tau)\sigma^-_{j_1}(0)\rangle^f,\quad G^{f-}_{i_2j_1}(\tau)=\langle T_{\tau}\sigma^-_{i_2}(\tau)\sigma^-_{j_1}(0)\rangle^f,\\ \nonumber \fl &&G^f_{j'_2j_1}(\tau)=\langle T_{\tau}\sigma^+_{j'_2}(\tau)\sigma^-_{j_1}(0)\rangle^f,\quad G^{f-}_{j'_2j_1}(\tau)=\langle T_{\tau}\sigma^-_{j'_2}(\tau)\sigma^-_{j_1}(0)\rangle^f, \end{eqnarray} where $\langle...\rangle^f$ means that all expectation values are taken with respect to the perturbed Hamiltonian of equation~(\ref{eq:H_pert}). After an expansion in a power series in $f$, the Green's function, \emph{e.g.} $G^f_{i_1j_1}(\tau)$, reads \begin{eqnarray} \label{eq:def_Gf} G^f_{i_1j_1}(\tau) = G^{(0)}_{i_1j_1}(\tau) + f G^{(1)}_{i_1j_1}(\tau) + O(f^2). \end{eqnarray} Since $G^{(0)}_{i_1j_1}(\tau)=G_{i_1j_1}(\tau)$, from now on we drop the superscript and use \begin{eqnarray} \label{eq:explanation1} G^f_{i_1j_1}(\tau) = G_{i_1j_1}(\tau) + f G^{(1)}_{i_1j_1}(\tau) + O(f^2). \end{eqnarray} Also, we introduce \begin{eqnarray} \label{eq:explanation2} \langle \sigma^z_{i_1}(\tau)\rangle^f = \langle \sigma^z_{i_1}\rangle + f{\rm v}_{i_1}+O(f^2), \end{eqnarray} where, due to the translational periodicity, $\langle \sigma^z_{i_1}\rangle=\eta$ is the order parameter at $f=0$.
Now let us find the equation of motion for the Green's function $G^f_{i_1j_1}(\tau)$; the equations for the other functions can be found in the same way. Starting from equation~(\ref{eq:defG}) we can write \begin{eqnarray} \label{eq:eqom_Gf} \fl \frac{{\rm d}G^f_{i_1j_1}(\tau)}{{\rm d}\tau}=2\delta(\tau)\delta_{i_1j_1} \langle \sigma^z_{i_1}\rangle^f +\langle T_\tau[H_{\rm CR}(\tau),\sigma^+_{i_1}(\tau)]\sigma^-_{j_1}(0)\rangle^f -fG^f_{i_1j_1}. \end{eqnarray} In order to solve this equation of motion we follow the RPA scheme and use the so-called Tyablikov decoupling \cite{Tyablikov}, which is given by \begin{eqnarray} \label{eq:Tyablikov_decoupling} \fl\langle T_{\tau} \sigma^z_l(\tau)\sigma^+_{i_1}(\tau)\sigma^-_{j_1}(0) \rangle^f \to \langle \sigma^z_l(\tau)\rangle^f \langle T_{\tau} \sigma^+_{i_1}(\tau)\sigma^-_{j_1}(0) \rangle^f =\langle \sigma^z_l(\tau)\rangle^f G^f_{i_1j_1}(\tau). \end{eqnarray} After this decoupling is introduced, equation~(\ref{eq:eqom_Gf}) is found to be \begin{eqnarray} \label{eq:eqom_GF_2} \fl \frac{{\rm d}G^f_{i_1j_1}(\tau)}{{\rm d}\tau}&=& 2\delta(\tau)\delta_{i_1j_1}\langle \sigma^z_{i_1}\rangle^f -fG^f_{i_1j_1}(\tau)\\ \nonumber \fl &&\hspace*{-.9cm} -\!\!\sum_{\delta_{ab}}\left\{\!2\langle \sigma^z_{i_1}(\tau)\rangle^f [AG^f_{(i_1{+}\delta)j_1}(\tau)-BG^{f-}_{(i_1{+}\delta)j_1}(\tau)] +J_2\langle \sigma^z_{i_1{+}\delta}(\tau)\rangle^fG^f_{i_1j_1}(\tau)\!\right\}\\ \nonumber \fl &&\hspace*{-.9cm}-\!\!\sum_{\delta_{ac}}\left\{2\langle \sigma^z_{i_1}(\tau)\rangle^f [AG^f_{(i_1{+}\delta)j_1}(\tau){+}B^*G^{f-}_{(i_1{+}\delta)j_1}(\tau)] +J_2\langle \sigma^z_{i_1{+}\delta}(\tau)\rangle^fG^f_{i_1j_1}(\tau)\right\}\\ \nonumber \fl &&\hspace*{-.9cm}-\!\!\sum_{\langle j'_2\rangle_{i_1}}\left\{2\langle \sigma^z_{i_1}(\tau)\rangle^f \bigg[ \frac{J'_{\perp}{+}J_p}{4}G^f_{j'_2j_1}(\tau) -{\rm {i}}\frac{J'_{\perp}{-}J_p}{4}G^{f-}_{j'_2j_1}(\tau)\bigg] -J_p\langle \sigma^z_{j'_2}(\tau)\rangle^fG^f_{i_1j_1}(\tau)\!\right\}\\ \nonumber \fl &&\hspace*{-.9cm}-\!\!\sum_{\langle i'_2\rangle_{i_1}}\left\{-{\rm {i}}J_{\perp}\langle \sigma^z_{i_1}(\tau)\rangle^fG^{f-}_{i'_2j_1}(\tau) +J_{\perp}\langle \sigma^z_{i'_2}(\tau)\rangle^fG^{f}_{i_1j_1}(\tau)\!\right\}, \end{eqnarray} where $\sum_{\delta_{ab}}$ refers to a summation over the nearest neighbours of the site $i_1$ in the $ab$ direction of the same CuO$_2$~plane, and similarly for $\sum_{\delta_{ac}}$ --- see figure~\ref{fig:vectors}(b). Thus, in equation~(\ref{eq:eqom_GF_2}) all sites $i_1+\delta$ belong to sublattice 2. The notation $\sum_{\langle i'_2\rangle_{i_1}}$ means a sum over all sites $i'_2$ of sublattice 3 that are nearest neighbours of the site $i_1$ of sublattice 1, and similarly for $\sum_{\langle j'_2\rangle_{i_1}}$.
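As an explicit illustration of how the decoupling generates the terms above, consider the single longitudinal bond contribution $-J_2\sigma^z_{i_1}\sigma^z_{i_1+\delta}$ of equation~(\ref{eq:H_CR}). Using the standard spin-operator relation $[\sigma^z_{i_1},\sigma^+_{i_1}]=\sigma^+_{i_1}$, one has \begin{eqnarray*} \fl [-J_2\sigma^z_{i_1}\sigma^z_{i_1+\delta},\,\sigma^+_{i_1}] = -J_2\,\sigma^+_{i_1}\sigma^z_{i_1+\delta},\qquad \langle T_\tau \sigma^z_{i_1+\delta}(\tau)\sigma^+_{i_1}(\tau)\sigma^-_{j_1}(0)\rangle^f \to \langle\sigma^z_{i_1+\delta}(\tau)\rangle^f G^f_{i_1j_1}(\tau), \end{eqnarray*} which, upon insertion into the equation of motion of equation~(\ref{eq:defG}), is precisely the origin of the $J_2\langle \sigma^z_{i_1{+}\delta}(\tau)\rangle^fG^f_{i_1j_1}(\tau)$ terms (inside the overall minus sign) of equation~(\ref{eq:eqom_GF_2}); the remaining terms arise in the same way from the transverse and interplanar bonds.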
Next, we perform the transformation into the momentum-frequency representation for the Green's functions and the spin operators: \begin{eqnarray} \label{eq:Fourier1} \fl G^f_{i_1j_1}(\tau)&=&\frac 4{N\beta}\sum_{\boldsymbol {k}, m}G^f_{12}(\boldsymbol {k},\omega_m) {\rm e}^{{\rm i}\boldsymbol {k}\cdot({\bf R}_{i_1}-{\bf R}_{j_1})}{\rm e}^{-{\rm i}\omega_m\tau},\\ \label{eq:Fourier2} \fl \langle\sigma^z_{i_1}(\tau)\rangle^f&=& \frac 1{\beta}\sum_{\boldsymbol {k}, m}\langle \sigma^z_1(\boldsymbol {k},\omega_m)\rangle^f {\rm e}^{-{\rm i}\boldsymbol {k}\cdot{\bf R}_{i_1}}{\rm e}^{-{\rm i}\omega_m\tau} = \sum_{\boldsymbol {k}}\delta(\boldsymbol {k})[\eta + f{\rm v}_1]{\rm e}^{-{\rm i}\boldsymbol {k}\cdot{\bf R}_{i_1}}, \end{eqnarray} where the expansion of equation~(\ref{eq:explanation2}) and the linear response to the uniform perturbation, expressed by ${\rm v}_1(\boldsymbol {k})=\delta(\boldsymbol {k}){\rm v}_1$, were taken into account. In the transformation given by equations~(\ref{eq:Fourier1},~\ref{eq:Fourier2}), the sum over $\boldsymbol {k}$ runs over $\frac 14 N$ points of the first Brillouin zone, and $\omega_m = 2\pi m/\beta$ for $m\in\mathbb{Z}$ are the Bose Matsubara frequencies. The equation of motion for the Green's function $G^f_{i_1j_1}(\tau)$ in the momentum-frequency representation reads \begin{eqnarray} \label{eq:eq3} \fl -{\rm i}\omega_m G^f_{12}(\boldsymbol {k},\omega_m)&=& -2{\cal Z}A_{\boldsymbol {k}}[\eta + f{\rm v}_1]G^f_{22}(\boldsymbol {k},\omega_m) +2{\cal Z}B_{\boldsymbol {k}}[\eta + f{\rm v}_1]G^{f-}_{22}(\boldsymbol {k},\omega_m)\\ \nonumber \fl && -2{\cal Z}\:a_{\boldsymbol {k}}[\eta + f{\rm v}_1]G^f_{42}(\boldsymbol {k},\omega_m) +2{\cal Z}\:{\rm {i}}b_{\boldsymbol {k}}[\eta + f{\rm v}_1]G^{f-}_{42}(\boldsymbol {k},\omega_m)\\ \nonumber \fl &&+2{\cal Z}\:{\rm {i}}d_{\boldsymbol {k}}[\eta + f{\rm v}_1]G^{f-}_{32}(\boldsymbol {k},\omega_m) -fG^f_{12}(\boldsymbol {k},\omega_m)\\ \nonumber \fl && -{\cal Z}\bigg\{ J_2 [\eta + f{\rm v}_2] +J_{\perp}[\eta + f{\rm v}_3] -J_p [\eta + f{\rm v}_4] \bigg\} G^f_{12}(\boldsymbol {k},\omega_m), \end{eqnarray} where, as before, ${\cal Z}$ is the in-plane coordination number, and we introduce \begin{eqnarray} \label{eq:notationAB} && A_{\boldsymbol {k}} = A\gamma_{\boldsymbol {k}},\quad B_{\boldsymbol {k}} = (\Re B) \gamma_{\boldsymbol {k}}'+{\rm {i}}(\Im B) \gamma_{\boldsymbol {k}}, \\ \label{eq:notationabd} && a_{\boldsymbol {k}} = \frac{J'_{\perp}+J_p}4\xi_{\boldsymbol {k}}, \quad b_{\boldsymbol {k}} = \frac{J'_{\perp}-J_p}4\xi_{\boldsymbol {k}}, \quad d_{\boldsymbol {k}} = \frac{J_{\perp}}2\xi'_{\boldsymbol {k}},\\ && \gamma_{\boldsymbol {k}}=\frac 12(\cos k_x+\cos k_y),\quad \xi_{\boldsymbol {k}}= \cos k_z\cos\bigg(\frac{k_x+k_y}2\bigg),\\ && \gamma'_{\boldsymbol {k}}=\frac 12(\cos k_x-\cos k_y),\quad \xi'_{\boldsymbol {k}}= \cos k_z\cos\bigg(\frac{k_x-k_y}2\bigg).
\end{eqnarray} Now we can write down the final equations for the Green's function $G_{12}(\boldsymbol {k},\omega_m)$ at zero order in $f$ and for the first-order one, $G^{(1)}_{12}(\boldsymbol {k},\omega_m)$: \begin{eqnarray} \fl \frac {{\rm i}\omega_m}{2{\cal Z}\eta}G_{12}&=& \bigg\{\frac{J_2}2{+}\frac{J_{\perp}}2{-}\frac{J_p}2\bigg\}G_{12} +A_{\boldsymbol {k}}G_{22}-B_{\boldsymbol {k}}G^-_{22} +a_{\boldsymbol {k}}G_{42}-{\rm {i}}b_{\boldsymbol {k}}G^-_{42}-{\rm {i}}d_{\boldsymbol {k}}G^-_{32},\\ \nonumber \fl \frac {{\rm i}\omega_m}{2{\cal Z}\eta}G^{(1)}_{12}&=& \frac{G_{12}}{2{\cal Z}\eta}+ \bigg\{\frac{J_2}2 \frac{{\rm {v}}_2}{\eta} {+}\frac{J_\perp}2 \frac{{\rm {v}}_3}{\eta} {-}\frac{J_p}2 \frac{{\rm {v}}_4}{\eta}\bigg\}G_{12} +\frac{{\rm {v}}_1}{\eta}\bigg(\frac{{\rm {i}}\omega_m}{2{\cal Z}\eta} -\bigg\{\frac{J_2}2{+}\frac{J_{\perp}}2{-}\frac{J_p}2\bigg\} \bigg)G_{12}\\ \nonumber \fl &+& \bigg\{\frac{J_2}2{+}\frac{J_{\perp}}2{-}\frac{J_p}2\bigg\}G^{(1)}_{12} +A_{\boldsymbol {k}}G^{(1)}_{22}-B_{\boldsymbol {k}}G^{(1)-}_{22} +a_{\boldsymbol {k}}G^{(1)}_{42}-{\rm {i}}b_{\boldsymbol {k}}G^{(1)-}_{42}-{\rm {i}}d_{\boldsymbol {k}}G^{(1)-}_{32}\\ \fl && \end{eqnarray} where in these equations we drop the wave vector and frequency dependencies of the Green's functions; that is, $G=G(\boldsymbol {k},\omega_m)$ and $G^{(1)}=G^{(1)}(\boldsymbol {k},\omega_m)$. In order to obtain a closed set of equations for the zero- and first-order Green's functions we apply the scheme described above to all the other functions in equation~(\ref{eq:GGG}); the final systems of equations for the zero- and first-order Green's functions are given in Appendix~B, equations~(\ref{eq:system_g}-\ref{eq:coef_G1}). The structure of the system for the zero-order functions is identical to that for the first-order ones, except for the free terms.
Hence, the poles of the zero-order Green's functions $G(\boldsymbol {k},\omega_m)$ (which determine the spectrum of the spin-wave excitations) are equal to the poles of the first-order ones $G^{(1)}(\boldsymbol {k},\omega_m)$, and are found to be \begin{eqnarray} \label{eq:varepsilon} \fl&&\varepsilon_{1,\boldsymbol {k}} = 2{\cal Z}\eta\,\omega_{1,\boldsymbol {k}},\quad \omega_{1,\boldsymbol {k}}=\sqrt{\alpha_{1,\boldsymbol {k}}+\sqrt{\beta_{1,\boldsymbol {k}}}}~~,\qquad \varepsilon_{2,\boldsymbol {k}} = 2{\cal Z}\eta\,\omega_{2,\boldsymbol {k}},\quad \omega_{2,\boldsymbol {k}}=\sqrt{\alpha_{1,\boldsymbol {k}}-\sqrt{\beta_{1,\boldsymbol {k}}}}~~,\\ \nonumber \fl&&\varepsilon_{3,\boldsymbol {k}} = 2{\cal Z}\eta\,\omega_{3,\boldsymbol {k}},\quad \omega_{3,\boldsymbol {k}}=\sqrt{\alpha_{2,\boldsymbol {k}}+\sqrt{\beta_{2,\boldsymbol {k}}}}~~,\qquad \varepsilon_{4,\boldsymbol {k}} = 2{\cal Z}\eta\,\omega_{4,\boldsymbol {k}},\quad \omega_{4,\boldsymbol {k}}=\sqrt{\alpha_{2,\boldsymbol {k}}-\sqrt{\beta_{2,\boldsymbol {k}}}}~~,\\ \label{eq:alpha} \fl&&\alpha_{1,\boldsymbol {k}} = a_{\boldsymbol {k}}^2+(A_{\boldsymbol {k}}{-}J_{mfa}/2)^2 -(b_{\boldsymbol {k}}{-}d_{\boldsymbol {k}})^2-|B_{\boldsymbol {k}}|^2,\\ \nonumber \fl&&\alpha_{2,\boldsymbol {k}} = a_{\boldsymbol {k}}^2+(A_{\boldsymbol {k}}{+}J_{mfa}/2)^2 -(b_{\boldsymbol {k}}{+}d_{\boldsymbol {k}})^2-|B_{\boldsymbol {k}}|^2,\\ \label{eq:beta} \fl&&\beta_{1,\boldsymbol {k}} = 4[a_{\boldsymbol {k}}(A_{\boldsymbol {k}}{-}J_{mfa}/2)-(b_{\boldsymbol {k}}{-}d_{\boldsymbol {k}})\Im B_{\boldsymbol {k}}]^2 -(2\Re B_{\boldsymbol {k}})^2[a_{\boldsymbol {k}}^2-(b_{\boldsymbol {k}}{-}d_{\boldsymbol {k}})^2],\\ \nonumber \fl&&\beta_{2,\boldsymbol {k}} = 4[a_{\boldsymbol {k}}(A_{\boldsymbol {k}}{+}J_{mfa}/2)-(b_{\boldsymbol {k}}{+}d_{\boldsymbol {k}})\Im B_{\boldsymbol {k}}]^2 -(2\Re B_{\boldsymbol {k}})^2[a_{\boldsymbol {k}}^2-(b_{\boldsymbol {k}}{+}d_{\boldsymbol {k}})^2], \end{eqnarray} within the notation of equations~(\ref{eq:notationAB},~\ref{eq:notationabd}), and with the MFA-inspired definition $J_{mfa}= J_2+J_{\perp}-J_p$. The free terms in the first-order systems (see equation~(\ref{eq:coef_G1})) are determined by the zero-order Green's functions, and thus the first-order quantities $G^{(1)}$ can be written down in terms of the solution for the zero-order system, \ref{eq:solutionG}, and the as-yet-unknown quantities ${\rm v}_1$, ${\rm v}_2$, ${\rm v}_3$, and ${\rm v}_4$. To calculate ${\rm v}_{1,2,3,4}$ we use a relation that connects ${\rm v}$ and the Green's functions $G^{(1)}(\boldsymbol {k},\tau=0^-)$, \emph{viz.} \begin{eqnarray} \label{eq:V} -{\rm v}_l&=&\frac 4N\sum_{\boldsymbol {k}}G^{(1)}_{ll}(\boldsymbol {k},0^-),\qquad l=1,2,3,4.
\end{eqnarray} After the substitution of the solutions of the systems of equations for the first-order Green's functions $G^{(1)}(\boldsymbol {k},\omega_m)$ in \ref{eq:solutionG1} into the system of equations for ${\rm v}_{l}$ in equation~(\ref{eq:V}), the results are found to be \begin{eqnarray} \label{eq:SSaz} \fl&&{\rm {v}}_1-{\rm {v}}_2-{\rm {v}}_3+{\rm {v}}_4=\frac{\frac1{\beta}\sum\limits_m{\rm {e}}^{-{\rm {i}}\omega_m0^-}{\cal N}^y(\omega_m)} {2\eta - \frac1{\beta}\sum\limits_m{\rm {e}}^{-{\rm {i}}\omega_m0^-}\left\{ {\rm {i}}\omega_m{\cal D}^y(\omega_m)/\eta - 2{\cal Z}(J_{\perp}{+}J_2){\cal N}^y(\omega_m)\right\}},\\ \label{eq:SSbz} \fl&&{\rm {v}}_1+{\rm {v}}_2-{\rm {v}}_3-{\rm {v}}_4=\frac{\frac1{\beta}\sum\limits_m{\rm {e}}^{-{\rm {i}}\omega_m0^-}{\cal N}^z(\omega_m)} {2\eta - \frac1{\beta}\sum\limits_m{\rm {e}}^{-{\rm {i}}\omega_m0^-}\left\{ {\rm {i}}\omega_m{\cal D}^z(\omega_m)/\eta - 2{\cal Z}(J_{\perp}{-}J_p){\cal N}^z(\omega_m)\right\}}, \end{eqnarray} where \begin{eqnarray} \nonumber \fl&&{\cal N}^y(\omega_m)=\frac 4N\sum_{\boldsymbol {k}}\bigg\{ |G_{22}|^2{-}|G_{12}|^2{+}|G^-_{22}|^2{-}|G^-_{12}|^2 {-}|G_{42}|^2{+}|G_{32}|^2{-}|G^-_{42}|^2{+}|G^-_{32}|^2\bigg\}~~,\\ \nonumber \fl&&{\cal D}^y(\omega_m)=\frac 4N\sum_{\boldsymbol {k}}\bigg\{ |G_{22}|^2{-}|G_{12}|^2{-}|G^-_{22}|^2{+}|G^-_{12}|^2 {-}|G_{42}|^2{+}|G_{32}|^2{+}|G^-_{42}|^2{-}|G^-_{32}|^2\bigg\}~~,\\ \fl&&\\ \nonumber \fl&&{\cal N}^z(\omega_m)=\frac 4N\sum_{\boldsymbol {k}}\bigg\{ |G_{22}|^2{+}|G_{12}|^2{+}|G^-_{22}|^2{+}|G^-_{12}|^2 {-}|G_{42}|^2{-}|G_{32}|^2{-}|G^-_{42}|^2{-}|G^-_{32}|^2\bigg\}~~,\\ \nonumber \fl&&{\cal D}^z(\omega_m)=\frac 4N\sum_{\boldsymbol {k}}\bigg\{ |G_{22}|^2{+}|G_{12}|^2{-}|G^-_{22}|^2{-}|G^-_{12}|^2 {-}|G_{42}|^2{-}|G_{32}|^2{+}|G^-_{42}|^2{+}|G^-_{32}|^2\bigg\}~~,\\ \fl&& \end{eqnarray} and all zero-order Green's functions $G(\boldsymbol {k},\omega_m)$ are given in \ref{eq:solutionG}. Now let us find the quantities which determine the linear response to a magnetic field applied to one of the sublattices (\emph{e.g.}, see equation~(\ref{eq:iterpret1})). The longitudinal $z$ components of the susceptibility in the characteristic representation are given by \begin{eqnarray} \label{eq:chi_z} \fl&&\chi^{\sigma^z\sigma^z}_{11} =\frac{\partial \langle\sigma^z_1\rangle^f}{\partial f}\Big|_{f=0}={\rm v}_1,\qquad \chi^{\sigma^z\sigma^z}_{12} =\frac{\partial \langle\sigma^z_2\rangle^f}{\partial f}\Big|_{f=0}={\rm v}_2,\\ \nonumber \fl&& \chi^{\sigma^z\sigma^z}_{13} =\frac{\partial \langle\sigma^z_3\rangle^f}{\partial f}\Big|_{f=0}={\rm v}_3,\qquad \chi^{\sigma^z\sigma^z}_{14} =\frac{\partial \langle\sigma^z_4\rangle^f}{\partial f}\Big|_{f=0}={\rm v}_4, \end{eqnarray} where the expansion of equation~(\ref{eq:explanation2}) was used.
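To make the structure of the spectrum concrete, the following sketch evaluates the four branches $\omega_{l,\boldsymbol{k}}$ of equation~(\ref{eq:varepsilon}) numerically; the coupling values are the same illustrative (not fitted) choices used in the earlier sketches.
\begin{verbatim}
import numpy as np

# Illustrative couplings in units of J, as in the earlier sketches
J, d, dG = 1.0, 0.02, 0.42e-3            # dG = Gamma1 - Gamma3, Gamma2 = 0
G1, G3 = dG / 2, -dG / 2                 # assumed split of Delta Gamma
J_perp, Jprime_perp = 0.75e-3, 0.25e-3   # assumed interplanar values
th = 0.5 * np.arctan((d / np.sqrt(2)) /
                     (J + (G1 + G3) / 2 - Jprime_perp / 2))
Jbar = (J + (G1 + G3) / 2) * np.cos(2 * th) + d / np.sqrt(2) * np.sin(2 * th)
J1, J2, J3 = J + G1, dG / 2 + Jbar, -dG / 2 + Jbar
J4 = d / np.sqrt(2) * np.cos(th)
Jp = Jprime_perp * np.cos(2 * th)
A, ReB, ImB = (J1 - J3) / 4, J4 / 2, (J1 + J3) / 4
Jmfa = J2 + J_perp - Jp

def omegas(kx, ky, kz):
    """omega_{1..4,k} of (eq:varepsilon)-(eq:beta); the physical
    energies are eps_l = 2*Z*eta*omega_l with Z = 4."""
    g, gp = 0.5 * (np.cos(kx) + np.cos(ky)), 0.5 * (np.cos(kx) - np.cos(ky))
    xi = np.cos(kz) * np.cos((kx + ky) / 2)
    xip = np.cos(kz) * np.cos((kx - ky) / 2)
    Ak, ReBk, ImBk = A * g, ReB * gp, ImB * g
    ak, bk = (Jprime_perp + Jp) / 4 * xi, (Jprime_perp - Jp) / 4 * xi
    dk = J_perp / 2 * xip
    B2 = ReBk ** 2 + ImBk ** 2
    a1 = ak ** 2 + (Ak - Jmfa / 2) ** 2 - (bk - dk) ** 2 - B2
    a2 = ak ** 2 + (Ak + Jmfa / 2) ** 2 - (bk + dk) ** 2 - B2
    b1 = 4 * (ak * (Ak - Jmfa / 2) - (bk - dk) * ImBk) ** 2 \
         - (2 * ReBk) ** 2 * (ak ** 2 - (bk - dk) ** 2)
    b2 = 4 * (ak * (Ak + Jmfa / 2) - (bk + dk) * ImBk) ** 2 \
         - (2 * ReBk) ** 2 * (ak ** 2 - (bk + dk) ** 2)
    s1, s2 = np.sqrt(b1), np.sqrt(b2)
    return (np.sqrt(a1 + s1), np.sqrt(a1 - s1),
            np.sqrt(a2 + s2), np.sqrt(a2 - s2))

print(omegas(1e-4, 1e-4, 0.0))  # the four long-wavelength gaps, units of J
\end{verbatim}
For these values the two smallest long-wavelength gaps are controlled by $d$ and by $\Delta\Gamma$ respectively, in line with the approximate formulae presented in the numerical-results section below.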
The transverse $x$ and $y$ components of the susceptibility tensor are determined in terms of the Green's functions by \begin{eqnarray} \label{eq:def} \fl&& \chi^{\sigma^\alpha\sigma^{\alpha'}}_{11}\!\!= \frac 4N\sum_{i_1,i'_1}\int^{\beta}_0\!\!\langle T_{\tau}\sigma_{i_1}^{\alpha}(\tau) \sigma_{i'_1}^{\alpha'}(0)\rangle{\rm d}\tau,\quad \chi^{\sigma^{\alpha}\sigma^{\alpha'}}_{12}\!\!= \frac 4N\sum_{i_1,j_1}\int^{\beta}_0\!\!\langle T_{\tau}\sigma_{i_1}^{\alpha}(\tau) \sigma_{j_1}^{\alpha'}(0)\rangle{\rm d}\tau,\\ \nonumber \fl&& \chi^{\sigma^\alpha\sigma^{\alpha'}}_{13}\!\!= \frac 4N\sum_{i_1,i'_2}\int^{\beta}_0\!\!\langle T_{\tau}\sigma_{i_1}^{\alpha}(\tau) \sigma_{i'_2}^{\alpha'}(0)\rangle{\rm d}\tau,\quad \chi^{\sigma^{\alpha}\sigma^{\alpha'}}_{14}\!\!= \frac 4N\sum_{i_1,j_2}\int^{\beta}_0\!\!\langle T_{\tau}\sigma_{i_1}^{\alpha}(\tau) \sigma_{j_2}^{\alpha'}(0)\rangle{\rm d}\tau, \end{eqnarray} where $\alpha=x,y$. By substituting the solutions in \ref{eq:solutionG} into the definition in equation~(\ref{eq:def}) for the transverse components of the susceptibility, we obtain the result given in \ref{trans_comp}. This result for the transverse components in the CR is \emph{exactly the same} as that of the MFA calculations for the transverse components. Then, using equations~(\ref{eq:CRtoINx})-(\ref{eq:CRtoINz}), the components of the susceptibility in the initial coordinate system of equation~(1) below the transition temperature are found to be \begin{eqnarray} \label{eq:SxSx} \fl\chi^{x}&=&\frac 14 \frac 1{J_1 {+} J_2 {+} 2J_{\perp} {+} (J'_{\perp} {-} J_p)},\\ \label{eq:SySy} \fl\chi^{y}&=&\frac 14 \frac {\sin^2(\theta)} {J_2 {-} J_3 {+} 2J_{\perp} {-} 2J_p} +\cos^2(\theta)[{\rm v}_1-{\rm v}_2-{\rm v}_3+{\rm v}_4],\\ \label{eq:SzSz} \fl\chi^{z}&=&\frac 14 \frac {\cos^2(\theta)} {J_2 {+} J_3 {+} 2J_{\perp}} +\sin^2(\theta)[{\rm v}_1+{\rm v}_2-{\rm v}_3-{\rm v}_4]~~. \end{eqnarray} These expressions for the components of the susceptibility include the as-yet-unknown value of the order parameter $\eta$. It can be found directly; from the definition of the Green's functions we have \begin{eqnarray} \label{eq:temp} \fl G_{nn}(\tau=0^-)=\langle \sigma^-_n\sigma^+_n\rangle=\frac 12 - \eta,\quad G_{nn}(\tau=0^-)=\frac 2N\sum_{\boldsymbol {k}}G_{22}(\boldsymbol {k},\tau=0^-).
\end{eqnarray} Substituting $G_{22}(\boldsymbol {k},\omega)$ from equation~(\ref{eq:G22zero}), and performing the summation over the Matsubara frequencies, the equation for the order parameter turns out to be \begin{eqnarray} \label{eq:sigma^z} \fl\frac 1{\eta} =\frac12\frac 4N\sum_{\boldsymbol {k}} \bigg\{\!\!&\hphantom{+}& \bigg(y_{1,\boldsymbol {k}}{+}\frac{x_{1,\boldsymbol {k}}}{\sqrt{\beta_{1,\boldsymbol {k}}}}\bigg) \frac{2n(\varepsilon_{1,\boldsymbol {k}}){+}1}{\omega_{1,\boldsymbol {k}}} +\bigg(y_{1,\boldsymbol {k}}{-}\frac{x_{1,\boldsymbol {k}}}{\sqrt{\beta_{1,\boldsymbol {k}}}}\bigg) \frac{2n(\varepsilon_{2,\boldsymbol {k}}){+}1}{\omega_{2,\boldsymbol {k}}}\\ \nonumber\fl \!\!&+&\bigg(y_{2,\boldsymbol {k}}{+}\frac{x_{2,\boldsymbol {k}}}{\sqrt{\beta_{2,\boldsymbol {k}}}}\bigg) \frac{2n(\varepsilon_{3,\boldsymbol {k}}){+}1}{\omega_{3,\boldsymbol {k}}} +\bigg(y_{2,\boldsymbol {k}}{-}\frac{x_{2,\boldsymbol {k}}}{\sqrt{\beta_{2,\boldsymbol {k}}}}\bigg) \frac{2n(\varepsilon_{4,\boldsymbol {k}}){+}1}{\omega_{4,\boldsymbol {k}}}\bigg\}~~, \end{eqnarray} where \begin{eqnarray} \nonumber \fl x_{1,\boldsymbol {k}}&=&-2a_{\boldsymbol {k}}[a_{\boldsymbol {k}}(A_{\boldsymbol {k}}{-}J_{mfa}/2)-(b_{\boldsymbol {k}}{-}d_{\boldsymbol {k}})\Im B_{\boldsymbol {k}}],\qquad y_{1,\boldsymbol {k}}=-(A_{\boldsymbol {k}}{-}J_{mfa}/2),\\ \nonumber \fl x_{2,\boldsymbol {k}}&=&\hphantom{-} 2a_{\boldsymbol {k}}[a_{\boldsymbol {k}}(A_{\boldsymbol {k}}{+}J_{mfa}/2)-(b_{\boldsymbol {k}}{+}d_{\boldsymbol {k}})\Im B_{\boldsymbol {k}}],\qquad y_{2,\boldsymbol {k}}=\hphantom{-}(A_{\boldsymbol {k}}{+}J_{mfa}/2),\\ \nonumber \fl n(\varepsilon_{l,\boldsymbol {k}})&=&[\exp(\beta\varepsilon_{l,\boldsymbol {k}})-1]^{-1},\qquad l=1,2,3,4. \end{eqnarray} Since the order parameter (\emph {viz.}, the sublattice magnetization) is temperature dependent, it follows that the spectrum of elementary excitations (equation~(\ref{eq:varepsilon})) is also temperature dependent. The N\'eel temperature at which $\eta$ vanishes within the adopted RPA approximation is determined by \begin{eqnarray} \label{eq:T_n} \fl T_{\rm N} =\bigg[\frac1{2{\cal Z}}\frac 4N\sum_{\boldsymbol {k}} \bigg\{\!\!&\hphantom{+}& \bigg(y_{1,\boldsymbol {k}}{+}\frac{x_{1,\boldsymbol {k}}}{\sqrt{\beta_{1,\boldsymbol {k}}}}\bigg) \frac{1}{\omega^2_{1,\boldsymbol {k}}} +\bigg(y_{1,\boldsymbol {k}}{-}\frac{x_{1,\boldsymbol {k}}}{\sqrt{\beta_{1,\boldsymbol {k}}}}\bigg) \frac{1}{\omega^2_{2,\boldsymbol {k}}}\\ \nonumber\fl \!\!&+&\bigg(y_{2,\boldsymbol {k}}{+}\frac{x_{2,\boldsymbol {k}}}{\sqrt{\beta_{2,\boldsymbol {k}}}}\bigg) \frac{1}{\omega^2_{3,\boldsymbol {k}}} +\bigg(y_{2,\boldsymbol {k}}{-}\frac{x_{2,\boldsymbol {k}}}{\sqrt{\beta_{2,\boldsymbol {k}}}}\bigg) \frac{1}{\omega^2_{4,\boldsymbol {k}}}\bigg\}\bigg]^{-1}. \end{eqnarray} By putting $\eta\to 0$ we find that the $z$-component of the susceptibility $\chi^{z}$ in equation~(\ref{eq:SzSz}) does not diverge at the N\'eel temperature, whereas it does diverge for the pure 2D model ($J_\perp=J_\perp^\prime=0$). At the N\'eel temperature all components of the susceptibility within the RPA are equal to the MFA results, the latter of which are given in equations~(\ref{eq:MFA_SxSx},~\ref{eq:MFA_SySy_Tn},~\ref{eq:MFA_SzSz_Tn}). For completeness, we mention that the investigation of the model of equation~(1) within linear spin-wave (LSW) theory leads to the same structure of the susceptibility expressions as we found within the RPA in equations~(\ref{eq:SxSx}-\ref{eq:SzSz}).
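Before comparing with spin-wave theory, we note how equation~(\ref{eq:T_n}) is evaluated in practice; the sketch below performs the zone average on a coarse midpoint grid as a simple stand-in for the $\frac 14 N$ points of the first Brillouin zone, again with the illustrative couplings used earlier. A converged calculation requires a much finer mesh, particularly near the gap minima, together with the proper magnetic-zone geometry.
\begin{verbatim}
import numpy as np

# Illustrative couplings in units of J, as in the earlier sketches
J, d, dG = 1.0, 0.02, 0.42e-3
G1, G3 = dG / 2, -dG / 2                 # assumed split of Delta Gamma
J_perp, Jprime_perp = 0.75e-3, 0.25e-3   # assumed interplanar values
th = 0.5 * np.arctan((d / np.sqrt(2)) /
                     (J + (G1 + G3) / 2 - Jprime_perp / 2))
Jbar = (J + (G1 + G3) / 2) * np.cos(2 * th) + d / np.sqrt(2) * np.sin(2 * th)
J1, J2, J3 = J + G1, dG / 2 + Jbar, -dG / 2 + Jbar
A, ReB, ImB = (J1 - J3) / 4, d / np.sqrt(2) * np.cos(th) / 2, (J1 + J3) / 4
Jp = Jprime_perp * np.cos(2 * th)
Jmfa, Z = J2 + J_perp - Jp, 4

# Midpoint grid standing in for the (4/N) zone sum of (eq:T_n);
# n = 24 is far from converged and is used only for illustration
n = 24
k = -np.pi + (np.arange(n) + 0.5) * 2 * np.pi / n
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')

g, gp = 0.5 * (np.cos(kx) + np.cos(ky)), 0.5 * (np.cos(kx) - np.cos(ky))
xi = np.cos(kz) * np.cos((kx + ky) / 2)
xip = np.cos(kz) * np.cos((kx - ky) / 2)
Ak, ReBk, ImBk = A * g, ReB * gp, ImB * g
ak, bk = (Jprime_perp + Jp) / 4 * xi, (Jprime_perp - Jp) / 4 * xi
dk = J_perp / 2 * xip

a1 = ak**2 + (Ak - Jmfa/2)**2 - (bk - dk)**2 - ReBk**2 - ImBk**2
a2 = ak**2 + (Ak + Jmfa/2)**2 - (bk + dk)**2 - ReBk**2 - ImBk**2
s1 = np.sqrt(4*(ak*(Ak - Jmfa/2) - (bk - dk)*ImBk)**2
             - (2*ReBk)**2 * (ak**2 - (bk - dk)**2))
s2 = np.sqrt(4*(ak*(Ak + Jmfa/2) - (bk + dk)*ImBk)**2
             - (2*ReBk)**2 * (ak**2 - (bk + dk)**2))
x1, y1 = -2*ak*(ak*(Ak - Jmfa/2) - (bk - dk)*ImBk), -(Ak - Jmfa/2)
x2, y2 =  2*ak*(ak*(Ak + Jmfa/2) - (bk + dk)*ImBk),  (Ak + Jmfa/2)

summand = ((y1 + x1/s1)/(a1 + s1) + (y1 - x1/s1)/(a1 - s1) +
           (y2 + x2/s2)/(a2 + s2) + (y2 - x2/s2)/(a2 - s2))
T_N = 2 * Z / np.mean(summand)
print(T_N)   # coarse-grid RPA estimate of T_N, in units of J
\end{verbatim}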
The main difference between the results in the RPA and LSW theory comes from the calculation of the longitudinal components of the susceptibility in the CR. The spin-wave theory gives unity in the denominators of the expressions in equations~(\ref{eq:SSaz},~\ref{eq:SSbz}), and $S=1/2$ instead of the order parameter $\eta$ everywhere in the numerators ${\cal N}^y$ and ${\cal N}^z$. The transverse components of the susceptibility in the CR are equal within all of the MFA, RPA, and LSW theories. When the temperature of the system is above the N\'eel temperature, $T_N$, there still exists short-range antiferromagnetic order. To model such order we follow reference \cite{Liu} and introduce a fictitious field $h$ pointing in the direction of the sublattice magnetization, that is, the $z$ direction in the characteristic representation. To this end, the Hamiltonian \begin{eqnarray} \label{eq:H_para} H_h = H_{\rm CR} - h \sum_i\sigma^z_i - h \sum_j\sigma^z_j \end{eqnarray} is used, and the limit $h\to 0$ is taken after the calculation is carried out. Above the N\'eel temperature, we define a (different) order parameter by \begin{eqnarray} \label{eq:y_def} {\cal Y} = \lim_{h\to 0}(2{\cal Z}\eta/h). \end{eqnarray} By a procedure similar to that presented above \cite{Tabunshchyk} (that is, the RPA scheme below $T_N$) we have found the equation for the order parameter and all components of the magnetic susceptibility in the paramagnetic phase. It is then possible to show that the paramagnetic version of the equation for the order parameter, equation~(\ref{eq:sigma^z}), leads to \begin{eqnarray} \label{eq:eq_y} \fl \beta =\frac 1{2{\cal Z}}\frac 4N\sum_{\boldsymbol {k}} \bigg\{\!\!&\hphantom{+}& \bigg(y_{1,\boldsymbol {k}}{+}\frac{x_{1,\boldsymbol {k}}}{\sqrt{\beta_{1,\boldsymbol {k}}}}\bigg) \frac{1}{\omega^2_{1,\boldsymbol {k}}} +\bigg(y_{1,\boldsymbol {k}}{-}\frac{x_{1,\boldsymbol {k}}}{\sqrt{\beta_{1,\boldsymbol {k}}}}\bigg) \frac{1}{\omega^2_{2,\boldsymbol {k}}}\\ \nonumber\fl \!\!&+&\bigg(y_{2,\boldsymbol {k}}{+}\frac{x_{2,\boldsymbol {k}}}{\sqrt{\beta_{2,\boldsymbol {k}}}}\bigg) \frac{1}{\omega^2_{3,\boldsymbol {k}}} +\bigg(y_{2,\boldsymbol {k}}{-}\frac{x_{2,\boldsymbol {k}}}{\sqrt{\beta_{2,\boldsymbol {k}}}}\bigg) \frac{1}{\omega^2_{4,\boldsymbol {k}}}\bigg\}, \end{eqnarray} where in \emph{all expressions} for $x_{1,2}$, $y_{1,2}$, $\beta_{1,2}$ and $\omega_{1-4}$ which determine equation~(\ref{eq:eq_y}) in the paramagnetic phase, we use a new definition of $J_{mfa}$ (which we now call $\tilde J_{mfa}$) that reads \begin{eqnarray} \label{eq:newJ_mfa} \tilde J_{mfa}=J_2+J_{\perp}-J_p+\frac1{\cal Y}. \end{eqnarray} The quantity ${\cal Y}$ approaches infinity as the temperature is lowered to $T_{\rm N}$. Indeed, putting ${\cal Y}\to\infty$ in equation~(\ref{eq:eq_y}) we find the temperature at which ${\cal Y}$ diverges, and it is identically equal to the N\'eel temperature. We have found (see below for numerical results) that for the model of equation~(1) all components of the susceptibility are continuous at the N\'eel point within the RPA. \section{Numerical Results} In this section we present the results of numerical calculations for the system modelled by the Hamiltonian of equation~(1), based on the above-presented analytical formulae. \subsection{Parameter regimes} Firstly, let us consider the set of model parameters that appears in the Hamiltonian of equation~(1), \emph{viz.} the in-plane parameters $J$, $d$ and $\Gamma$, and the out-of-plane parameters $J_{\perp}$ and $J'_{\perp}$.
The in-plane parameter $d$, which describes the antisymmetric DM interaction, and the parameters $\Gamma_{1,2,3}$, which give the pseudo-dipolar anisotropy, are of order $10^{-2}$ and $10^{-4}$, respectively, in units of $J$ \cite{Aharony,Koshibae}, and it has been shown that the only combination of the pseudo-dipolar terms that affects the behaviour of the system is $\Delta\Gamma\equiv\Gamma_1-\Gamma_3$ \cite{Tabunshchyk,Neto}. Thus, the in-plane part of the model, that is equations~(\ref{eq:H_DMa},\ref{eq:H_DMb}), can be completely described by the AFM Heisenberg model with the DM antisymmetric exchange interaction ${\bf D}$ and XY-like pseudo-dipolar anisotropy given by $\Delta\Gamma$. In order to examine the behaviour of the system with respect to the out-of-plane parameters, we introduce the combination \begin{eqnarray} \Delta J_{\perp}{\equiv}J_{\perp}-J'_{\perp}~~, \end{eqnarray} which describes the anisotropy of the interplanar interaction between nearest-neighbour spins and which we refer to as the net interplanar coupling, and the combination \begin{eqnarray} \tilde{J}_{\perp}{\equiv}J_{\perp}{+}J'_{\perp}~. \end{eqnarray} In our calculations we take $\Delta J_{\perp}$ to be of the order $10^{-5}-10^{-4}$ in units of $J$ (see \cite{Johnston} and references therein). In this subsection we focus on the behaviour of the order parameter $\eta$, the N\'eel temperature $T_{\rm N}$, and the susceptibility $\chi$ with respect to the parameter $\tilde{J}_{\perp}$ within the RPA method (we present a detailed consideration of the dependence on $\Delta J_{\perp}$ in a subsequent subsection). Firstly, we find that the order parameter and the N\'eel temperature are almost independent of $\tilde{J}_{\perp}$ within a wide range of the model parameters. In figure~\ref{Fig01} we show two representative plots for the order parameter and the susceptibility for certain values of the in-plane parameters. \emph{In each line} of figure~\ref{Fig01}a (that is, the solid, dotted, and dashed lines) we have simultaneously plotted five data sets, each with a different value of the parameter $\tilde{J}_{\perp}$, which has been varied from zero up to $0.5J$ ($\tilde{J}_{\perp}/J=0, 0.01, 0.1, 0.2, 0.5$). As one can see, over such a wide range of the parameter $\tilde{J}_{\perp}$ there is virtually no difference in the absolute values of the N\'eel temperature and the order parameter, whereas the relatively small changes of the net interplanar coupling $\Delta J_{\perp}$ in figure~\ref{Fig01}a strongly affect these quantities. \begin{figure}[h] \begin{center} \epsfxsize 0.47\textwidth\epsfbox{Fig_01_Sz.eps}\quad \epsfxsize 0.47\textwidth\epsfbox{Fig_01_Chi.eps} \end{center} \caption{\label{Fig01}(Colour online) (a) The order parameter \emph{vs.} $T/J$ for different values of $\Delta J_{\perp}$: $\Delta J_{\perp}{=}0$ -- black solid line, $\Delta J_{\perp}{=}0.1{\times} 10^{-3}J$ -- blue dotted line, $\Delta J_{\perp}{=}0.4{\times} 10^{-3}J$ -- red dashed line; each line consists of five data sets with $\tilde{J}_{\perp}$ varying from zero up to $0.5J$. (b) The susceptibility, in units of $1/J$, \emph{vs.} $T/T_N$ for $\Delta J_{\perp}{=}0$, and for the following values of $\tilde{J}_{\perp}$: $\tilde{J}_{\perp}{=}0$ up to $0.02$ --- upper plot (many curves are superimposed on top of one another), $\tilde{J}_{\perp}{=}0.2$ --- middle plot, and $\tilde{J}_{\perp}{=}0.5$ --- lower plot.
In both figures (a) and (b) we have fixed $d/J=0.02$ and $\Delta\Gamma/J=0.42\times10^{-3}$.} \end{figure} In the case of the susceptibility, its dependence on $\tilde{J}_{\perp}$ differs from that discussed above for the order parameter $\eta$. Now, as is seen in figure~\ref{Fig01}b, the parameter $\tilde{J}_{\perp}$ generates a constant shift in the $\chi^x$ and $\chi^z$ components of the susceptibility, as well as a constant shift in $\chi^y$ near the N\'eel temperature and within the paramagnetic region. However, for reasonable values of the out-of-plane model parameters (that is, $J_{\perp},\; J'_{\perp} < 10^{-3}J$) the parameter $\tilde{J}_{\perp}$ does \emph{not} affect the temperature dependence of the susceptibility. This latter result for the susceptibility can be shown in various limits from the above analytical results by taking into account the small magnitude of $\Delta J_{\perp}$ and the in-plane parameters $d$ and $\Delta \Gamma$ with respect to $J$. In the zero temperature limit one can write down the susceptibility in the following form: \begin{eqnarray} \label{eq:Zero_SxSz} \fl\chi^{x,z}_{T\to 0} \approx\frac 14 \frac 1{2J + \tilde{J}_{\perp}},\quad \chi^{y}_{T\to 0} \approx\frac 1{32} \frac 1{\{J - \tilde{J}_{\perp}/4\}^2} \frac {d^2}{\Delta\Gamma+2\Delta J_{\perp}}. \label{eq:Zero_SySy} \end{eqnarray} Also, near the N\'eel temperature one finds \begin{eqnarray} \fl\chi^{y}_{T\to T_{\rm N}} &\approx&\frac 1{32} \frac 1{\{J - \tilde{J}_{\perp}/4\}^2} \frac {d^2}{\Delta\Gamma+2\Delta J_{\perp}} +\frac 14\frac 1{2J +\tilde{J}_{\perp}}, \label{eq:Neel_SySy}\\ \fl\chi^{z}_{T\to T_{\rm N}} &\approx&\frac 14 \frac 1{2J + \tilde{J}_{\perp}}+\frac 18 \frac 1{\tilde{J}_{\perp}}. \label{eq:Neel_SzSz} \end{eqnarray} Almost everywhere within equations~(\ref{eq:Zero_SxSz}-\ref{eq:Neel_SzSz}) one can ignore the contribution of $\tilde{J}_{\perp} < 10^{-3}J$ with respect to $J$; only the $z$-component of the susceptibility $\chi^z$ at the N\'eel temperature is strongly affected by $\tilde{J}_{\perp}$, as shown in equation~(\ref{eq:Neel_SzSz}). In fact, the upper plot in figure~\ref{Fig01}b consists of data sets with different values of the parameter $\tilde{J}_{\perp}$ over the range from zero up to $0.02J$, but these curves cannot be distinguished from one another. Similarly, the parameter $\tilde{J}_{\perp}$ can be ignored in the expressions for the spin-wave gaps, as can be clearly seen from the following approximate formulae \begin{eqnarray} \label{eq:Omega1_appr} \omega_{1,\boldsymbol {k}\to 0}^2&\approx&\bigg\{J-\frac{\tilde{J}_{\perp}}2\bigg\} \bigg(\Delta J_{\perp}+\frac d{\sqrt{2}}\theta-\big\{J-\frac{\tilde{J}_{\perp}}2\big\}\theta^2\bigg),\\ \label{eq:Omega2_appr} \omega_{2,\boldsymbol {k}\to 0}^2&\approx&\bigg\{J+\frac{\tilde{J}_{\perp}}2\bigg\} \bigg(\frac d{\sqrt{2}}\theta-\big\{J-\frac{\tilde{J}_{\perp}}2\big\}\theta^2\bigg),\\ \label{eq:Omega3_appr} \omega_{3,\boldsymbol {k}\to 0}^2&\approx&\bigg\{J-\frac{\tilde{J}_{\perp}}2\bigg\} (\Delta\Gamma/2 +\Delta J_{\perp}),\\ \label{eq:Omega4_appr} \omega_{4,\boldsymbol {k}\to 0}^2&\approx&\bigg\{J+\frac{\tilde{J}_{\perp}}2\bigg\} \Delta\Gamma/2. \end{eqnarray} Therefore, one can conclude that the effect of the parameter $\tilde{J}_{\perp}$ on the physics of the model is negligibly small.
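This conclusion is easily checked numerically; the following minimal sketch evaluates equations~(\ref{eq:Zero_SxSz}) and (\ref{eq:Neel_SzSz}) for two values of $\tilde{J}_{\perp}$, with the in-plane parameters of figure~\ref{Fig01} and an illustrative $\Delta J_{\perp}$.
\begin{verbatim}
# Zero-temperature limits (eq:Zero_SxSz) and the Neel-point value
# (eq:Neel_SzSz); all couplings in units of J
J, d, dG, dJ = 1.0, 0.02, 0.42e-3, 0.42e-3

def chi_T0(Jt):                       # Jt = tilde J_perp
    chi_xz = 0.25 / (2 * J + Jt)
    chi_y = (1 / 32) / (J - Jt / 4) ** 2 * d ** 2 / (dG + 2 * dJ)
    return chi_xz, chi_y

for Jt in (0.0, 1e-3):                # tilde J_perp barely matters here ...
    print(Jt, chi_T0(Jt))

for Jt in (1e-4, 1e-3):               # ... but dominates chi^z at T_N
    print(Jt, 0.25 / (2 * J + Jt) + 0.125 / Jt)
\end{verbatim}
Changing $\tilde{J}_{\perp}$ from $0$ to $10^{-3}J$ alters the first two limits only in the fourth significant digit, while $\chi^z$ at $T_N$ changes by an order of magnitude.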
Consequently, the system can be studied using a fixed, representative value of this parameter, \emph{e.g.} $\tilde{J}_{\perp}=10^{-3}J$, without concern that this choice changes our results. At the end of this subsection we discuss briefly the case of isotropic interplanar coupling ($\Delta J_{\perp}=0$). In such a case the only difference in the susceptibility, with respect to a pure 2D model \cite{Tabunshchyk}, is the finite value of $\chi^z$ at the N\'eel temperature. Then, only the in-plane anisotropies are responsible for the anisotropic magnetic properties of such a system, \emph{viz.} the behaviour of the susceptibility, order parameter, and spin-wave excitations. We emphasize the perhaps expected result that, for the 3D case with isotropic interplanar coupling, due to the frustration of the interplanar coupling within the body-centred lattice, the effects of 2D quantum fluctuations and short-range correlations are very important, whereas the interplanar coupling is not. \subsection{N\'eel temperature and spin-wave excitations} Now we present the results of our numerical investigations of the N\'eel temperature and the spin-wave excitations, and their dependence on the parameters of the in-plane anisotropies, $d/J$ and $\Delta\Gamma/J$, and the out-of-plane anisotropy, $\Delta J_{\perp}/J$. Figure~\ref{Fig02}a shows the N\'eel temperature obtained within the RPA scheme as a function of both $\Delta\Gamma/J$ and $\Delta J_{\perp}/J$. We found that the transition temperature $T_N$ depends on both the in-plane XY-like pseudo-dipolar anisotropy parameter $\Delta\Gamma$ and the out-of-plane anisotropy parameter $\Delta J_{\perp}$, and changes of the same order ($\sim 10^{-4}J$) in $\Delta J_{\perp}$ and/or in $\Delta\Gamma$ produce considerable changes in the N\'eel temperature, $T_N$. Further, the dependence of the N\'eel temperature on the DM parameter is shown in figure~\ref{Fig02}b: $T_N$ decreases as $d$ increases for small $d$, but for larger values of the DM interaction, i.e. $d\gg\Delta J_{\perp},\Delta\Gamma$, the N\'eel temperature $T_N$ increases nearly linearly with $d$. The transition temperature into the long-range ordered state, $T_N$, increases as the parameter $\Delta J_{\perp}$ increases, since the net interplanar coupling that each spin feels favours the AFM state. \begin{figure}[h] \begin{center} \epsfxsize 0.47\textwidth\epsfbox{Fig_02_GmDJ.eps}\quad \epsfxsize 0.47\textwidth\epsfbox{Fig_02_Dm.eps} \end{center} \caption{\label{Fig02}(Colour online) The N\'eel temperature $T_{N}$, in units of $J$, \emph{vs.} (a) $\Delta J_{\perp}/J$ -- black solid line (fixed $\Delta \Gamma /J=0.42\times10^{-3}$) and $\Delta\Gamma/J$ -- blue dotted line (fixed $\Delta J_{\perp}/J=0.42\times10^{-3}$), both for fixed $d/J=0.02$, as well as \emph{vs.} (b) the DM parameter $d/J$ (fixed $\Delta\Gamma/J=0.42\times10^{-3}$ and $\Delta J_{\perp}/J=0.42\times10^{-3}$).} \end{figure} Figure~\ref{Fig03} shows the zero-temperature energy gaps in the long wavelength limit as a function of the in- and out-of-plane anisotropy parameters, and the resulting behaviours can be understood immediately from equations~(\ref{eq:Omega1_appr})-(\ref{eq:Omega4_appr}). Two modes, $\varepsilon_1$ and $\varepsilon_2$, are almost independent of $\Delta\Gamma$ (see figure~\ref{Fig03}a, and equations~(\ref{eq:Omega1_appr},\ref{eq:Omega2_appr})), but they show a strong dependence on the DM parameter, $d$, as seen in figure~\ref{Fig03}b.
Since the canting angle goes as $\theta\approx(d/\sqrt{2})/(2J)$, the modes $\varepsilon_1$, $\varepsilon_2$ are nearly linear in $d$. In the limit of zero DM interaction the mode $\varepsilon_2$ goes to zero and a Goldstone mode appears in the spin-wave spectrum, while the mode $\varepsilon_1$ goes to a finite value, which is about $2{\cal Z}\eta\sqrt{J\Delta J_{\perp}}$ (see equations~(\ref{eq:Omega1_appr},\ref{eq:Omega2_appr})). The two other modes in the spectrum, $\varepsilon_3$ and $\varepsilon_4$, are almost independent of the DM anisotropy parameter, while they vary strongly with $\Delta\Gamma$. In the limit $\Delta\Gamma=0$ the mode $\varepsilon_4$ goes to zero and another Goldstone mode appears in the spectrum, while the mode $\varepsilon_3$ goes to a finite value of about $2{\cal Z}\eta\sqrt{J\Delta\Gamma/2}$ (equations~(\ref{eq:Omega3_appr},\ref{eq:Omega4_appr})). Since in the case of the 3D model thermal fluctuations do not destroy the long-range ordering for $T\ne 0$, the N\'eel temperature does not go to zero when a Goldstone mode $\varepsilon_2$ or $\varepsilon_4$ appears in the spectrum. \begin{figure}[h] \begin{center} \epsfxsize 0.47\textwidth\epsfbox{Fig_03_Gm.eps}\quad\quad \epsfxsize 0.47\textwidth\epsfbox{Fig_03_Dm.eps}\\ \epsfxsize 0.47\textwidth\epsfbox{Fig_03_DJ.eps} \end{center} \caption{\label{Fig03}(Colour online) The energy gaps, in units of $J$, as a function of\\ (a) the XY-like anisotropy parameter $\Delta\Gamma/J$ ($d/J=0.02$, $\Delta J_{\perp}/J=0.42\times10^{-3}$),\\ (b) the DM parameter $d/J$ ($\Delta\Gamma/J=0.42\times10^{-3}$, $\Delta J_{\perp}/J=0.42\times10^{-3}$), and\\(c) the out-of-plane anisotropy parameter $\Delta J_{\perp}/J$ ($d/J=0.02$, $\Delta\Gamma/J=0.42\times10^{-3}$). } \end{figure} Lastly, we note that the plots of figure~\ref{Fig03}c show that two modes ($\varepsilon_2$ and $\varepsilon_4$) are independent of the net interplanar coupling $\Delta J_{\perp}$, while two modes ($\varepsilon_1$ and $\varepsilon_3$) demonstrate a square-root dependence on $\Delta J_{\perp}$. When the net interplanar coupling goes to zero, $\Delta J_{\perp}=0$, the two modes $\varepsilon_1$ and $\varepsilon_2$ become equal and describe the in-plane mode of the spin-wave excitations \cite{Keimer}. Similarly, the mode $\varepsilon_3$ coincides with $\varepsilon_4$ at $\Delta J_{\perp}=0$, and they correspond to the out-of-plane magnon mode \cite{Keimer}. \subsection{Susceptibility} Now we consider the temperature behaviour of the susceptibility and examine its dependence on different values of the in-plane and out-of-plane anisotropy parameters. Our results for the $y$ and $z$ components of the susceptibility within the different approximation schemes, \emph{viz.} the RPA, LSW theory, and the MFA, are presented in figure~\ref{Fig05} ($T<T_N$). We do not present a similar comparison for the $x$ component of the susceptibility because the purely transverse component $\chi^x$ (see equation~(\ref{eq:SxSx})) has the same value within all of the aforementioned approximations (below the transition temperature). On the other hand, the longitudinal (in the characteristic representation, equations~(\ref{eq:CRtoINy},\ref{eq:CRtoINz})) components of the susceptibility enter the equations for the components $\chi^y$ and $\chi^z$, which leads to their different temperature behaviours within the different approximation methods (see below). Our results for the $y$ component of the susceptibility, $\chi^y$, are shown in figure~\ref{Fig05}a.
We find that at low temperatures the RPA analytical scheme, as was also found in the pure 2D case \cite{Tabunshchyk}, is in good agreement with linear spin-wave theory. The plots in figure~\ref{Fig05}a also show that the RPA results agree with the MFA as one nears the transition temperature $T_N$, and both RPA and MFA lead to the same magnitude of the susceptibility at the N\'eel temperature. (It is worth noting here that the transition temperature $T_N$ within the MFA approach, where $T_N=J_2+J_{\perp}-J_p\approx J$, is almost independent of the anisotropy, in contrast to the RPA scheme where $T_N$ is very sensitive to the anisotropy parameters (see figure~\ref{Fig02}).) One can also see that in the zero-temperature limit all approximations used in this paper converge to the same value of the susceptibility, approximately given by equation~(\ref{eq:Zero_SxSz}). \begin{figure}[h] \begin{center} \epsfxsize 0.47\textwidth\epsfbox{Fig_05_chi_y.eps}\quad \epsfxsize 0.47\textwidth\epsfbox{Fig_05_chi_z.eps} \end{center} \caption{\label{Fig05}(Colour online) The (a) $\chi^y$ and (b) $\chi^z$ components of the susceptibility, in units of $1/J$ -- a comparison of the RPA (black solid line), LSW (blue dotted line) and MFA (red dashed line) results below $T_N$ (for $d/J=0.02$, $\Delta\Gamma/J=0.42\times10^{-3}$, $\Delta J_{\perp}/J=0.4\times10^{-3}$).} \end{figure} Figure~\ref{Fig05}b shows the $z$ component of the susceptibility, $\chi^z$, and again we obtain good agreement between the RPA and LSW methods at low $T$, and the coincidence of all results in the zero-temperature limit. On the other hand, in the vicinity of the transition temperature, the RPA scheme gives qualitatively different behaviour of $\chi^z$ with respect to the MFA and LSW formalisms. Thus, we can answer one of the motivating questions of this study: does the extension of the model of reference \cite{Tabunshchyk} from 2D to 3D lead to a reduction of the strong effects of quantum fluctuations? The answer is no: there are strong effects of quantum fluctuations in our 3D Heisenberg model with the anisotropies. This statement holds for magnitudes of the net interplanar coupling $\Delta J_{\perp}$ up to $\sim 10^{-3}J$. Now let us find the correlation between the ratio of the spin-wave modes of the magnon excitation spectrum in the long wavelength limit and the behaviour of the components of the susceptibility in the zero-temperature limit (similar to the correlation that we found in the pure 2D model \cite{Tabunshchyk}). Firstly, from the analytical results, equation~(\ref{eq:Zero_SxSz}), we obtain that the ratio between the components of the susceptibility is given approximately by \begin{eqnarray} \label{eq:ratio_chi} \frac{\chi^{y}}{\chi^{x,z}}\bigg|_{T\to 0}\approx\frac{d^2}{4J(\Delta\Gamma+2\Delta J_{\perp})}. \end{eqnarray} Next, by taking into account that the canted angle is $\theta\approx(d/\sqrt{2})/(2J)$ and $\tilde{J}_{\perp}\ll J$, we can rewrite the expressions (\ref{eq:Omega1_appr})-(\ref{eq:Omega4_appr}) that specify the spin-wave gaps, which we write in a scaled form using $\varepsilon_l=2{\cal Z}\eta\omega_l$ as \begin{eqnarray} \label{eq:Omega_appr_all} \fl\omega_1^2 \approx J\Delta J_{\perp}+ d^2/8,\quad \omega_2^2 \approx d^2/8,\quad \omega_3^2 \approx J(\Delta\Gamma/2{+}\Delta J_{\perp}),\quad \omega_4^2 \approx J \Delta\Gamma/2.
\end{eqnarray} Thus, the ratio between the components of the susceptibility turns out to be \begin{eqnarray} \label{eq:ratio_chi_Omega} \frac{\chi^{y}}{\chi^{x,z}}\bigg|_{T\to 0}\approx\bigg(\frac{\varepsilon_2}{\varepsilon_3}\bigg)^2_{\boldsymbol {k}\to 0}. \end{eqnarray} \begin{figure}[h] \begin{center} \epsfxsize 0.47\textwidth\epsfbox{Fig_06_chi_RPA_12.eps} \epsfxsize 0.47\textwidth\epsfbox{Fig_07_E_DJ_12.eps} \end{center} \caption{\label{Fig06}(Colour online) (a) All three components of the susceptibility within the RPA analytical scheme for different values of the interplanar anisotropy $\Delta J_{\perp}$: 2D result -- upper curve, $\Delta J_{\perp}/J=0.1\times 10^{-3}$ -- middle curve, and $\Delta J_{\perp}/J=1.0\times 10^{-3}$ -- lowest curve; and the (b) $T=0$ energy gaps \emph{vs.} the interplanar anisotropy $\Delta J_{\perp}$, for $d/J=0.058$, $\Delta\Gamma/J=0.42\times10^{-3}$.} \end{figure} We also find that the gap $\varepsilon_1$ is always greater than $\varepsilon_2$, and $\varepsilon_3$ is always greater than $\varepsilon_4$, when $\Delta J_{\perp}=J_{\perp}-J'_{\perp}>0$ (indeed, that is the only case considered in this paper -- see the discussion of figure 1 in section 2). As was mentioned above, the two modes $\varepsilon_1$, $\varepsilon_2$ describe the in-plane modes of the spectrum, while the modes $\varepsilon_3$, $\varepsilon_4$ describe the out-of-plane spin-wave excitations. Therefore, we find that the observed ordering of the $x$ and $y$ components, $\chi^x<\chi^y$ (in the $T=0$ limit) \cite{Lavrov}, in any of the MFA, LSW theory, or the RPA, takes place only if the spin-wave gaps have the following hierarchy: \begin{eqnarray} \label{eq:hierarchy} \varepsilon_1>\varepsilon_2>\varepsilon_3>\varepsilon_4, \end{eqnarray} \emph{i.e.} the in-plane modes ($\varepsilon_{1,2}$) are greater than the out-of-plane ones ($\varepsilon_{3,4}$). This situation is presented in figure~\ref{Fig06}, which shows the susceptibility for different values of the interplanar parameter $\Delta J_{\perp}$. The upper curve was obtained for the pure 2D case and corresponds to the situation with the observed order of the susceptibility components, $\chi^x<\chi^y$, and the following ordering of the gaps: $\varepsilon_1=\varepsilon_2 > \varepsilon_3=\varepsilon_4$. As the magnitude of the interplanar parameter $\Delta J_{\perp}$ increases, the two modes $\varepsilon_1$ and $\varepsilon_3$ increase and the hierarchy of the gaps (\ref{eq:hierarchy}) remains unchanged (figure~\ref{Fig06}b). As we can see from the middle curve in figure~\ref{Fig06}a, the ratio $\chi^y/\chi^x$ decreases as the ratio between the gaps $\varepsilon_2/\varepsilon_3$ decreases. The magnitude of the out-of-plane mode $\varepsilon_3$ becomes equal to the in-plane one $\varepsilon_2$ when $\Delta J_{\perp}/J\approx \frac 12 \big(\frac 14 (d/J)^2-\Delta \Gamma/J \big) \approx 2.1\times 10^{-4}$, where the ratio $\chi^y/\chi^x$ goes to unity. A further increase of $\Delta J_{\perp}$ changes the ratio between the modes $\varepsilon_2/\varepsilon_3$ and, according to equation~(\ref{eq:ratio_chi}), changes the order of the susceptibility components $x,z$ and $y$ at zero temperature (see the lowest curve in figure~\ref{Fig06}a and the corresponding value of the gaps at $\Delta J_{\perp}=0.001$ in figure~\ref{Fig06}b).
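The numbers quoted above are easily checked. As a simple illustration (a sketch of ours, not part of the analytical development), the following Python lines verify equations~(\ref{eq:ratio_chi}) and (\ref{eq:ratio_chi_Omega}) and the crossing condition for the parameters of figure~\ref{Fig06}:
\begin{verbatim}
# Check of Eqs. (ratio_chi), (ratio_chi_Omega) and the eps2 = eps3
# crossing, with the parameters of figure 6; all in units of J.
d, dGamma = 0.058, 0.42e-3
dJperp = 0.5*(d**2/4 - dGamma)     # crossing condition eps2 = eps3
print(dJperp)                      # ~2.1e-4, as quoted in the text

w2sq = d**2/8                      # scaled gaps, Eq. (Omega_appr_all)
w3sq = dGamma/2 + dJperp
chi_ratio = d**2/(4*(dGamma + 2*dJperp))   # Eq. (ratio_chi)
print(chi_ratio, w2sq/w3sq)        # both equal unity at the crossing
\end{verbatim}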
\begin{figure}[h] \begin{center} \epsfxsize 0.47\textwidth\epsfbox{Fig_06_chi_RPA_10.eps} \epsfxsize 0.47\textwidth\epsfbox{Fig_07_E_DJ_10.eps} \end{center} \caption{\label{Fig07}(Colour online) (a) All three components of the susceptibility within the RPA analytical scheme for different values of the interplanar anisotropy $\Delta J_{\perp}$: 2D result -- upper curve, $\Delta J_{\perp}/J=0.1\times 10^{-3}$ -- middle curve, and $\Delta J_{\perp}/J=1.0\times 10^{-3}$ -- lowest curve; and the (b) $T=0$ energy gaps \emph{vs.} the interplanar anisotropy $\Delta J_{\perp}$, for $d/J=0.02$, $\Delta\Gamma/J=0.42\times10^{-3}$.} \end{figure} For completeness, in figure~\ref{Fig07} we present the susceptibility and the behaviour of the gaps {\emph{vs.}} $\Delta J_{\perp}$ for smaller magnitudes of the in-plane anisotropy. We find that the interplanar coupling introduced into the problem leads to a suppression of the 2D quantum fluctuations caused by {\it intraplanar} anisotropies. For the largest magnitude shown, $\Delta J_{\perp}/J=10^{-3}$, {\emph{i.e.}} $\Delta J_{\perp}\sim 2\Delta\Gamma \ll d$, the net interplanar coupling $\Delta J_{\perp}$ dominates over the DM and XY-like pseudo-dipolar anisotropies (see the lowest curve in figure~\ref{Fig07}a). Finally, we conclude the presentation of these numerical results by returning to a discussion of the correlation between the magnitude of the zone-centre spin-wave gaps and the behaviour of the components $\chi^{x,z}$, $\chi^y$. One can see that only when $\varepsilon_3$ is greater than $\varepsilon_2$ at zero interplanar coupling, $\Delta J_{\perp}=0$, does the $y$ component of the susceptibility at $T=0$ become less than the components $\chi^{x,z}$, since the ordering of the gaps remains unchanged for all values of $\Delta J_{\perp}$ (see equation~(\ref{eq:ratio_chi_Omega})). \section{Approximate simple tetragonal model} For the model parameters of interest our initial Hamiltonian~(1) can be approximated by a simple tetragonal model Hamiltonian which includes the intraplanar isotropic Heisenberg interaction $J$, the anisotropic DM term ${\bf D}$ that alternates in sign from bond to bond, the XY-like pseudo-dipolar anisotropy $\Delta\Gamma$, and an effective interplanar interaction $J^{eff}_{\perp}$. This effective model is defined by \begin{eqnarray} \label{eq:H_cub} \fl H&=& \sum_{\langle i,j\rangle}[J{\bf S}_{i}\cdot{\bf S}_{j} -\Delta\Gamma S^z_iS^z_j +{\bf D}_{ij}\cdot({\bf S}_{i}\times{\bf S}_{j})] +\sum_{\langle k,k'\rangle}J^{eff}_{\perp}{\bf S}_{k}\cdot{\bf S}_{k'}~~. \end{eqnarray} The single-plane effective Hamiltonian was proposed by Peters {\em et al.} \cite{Peters} long ago, and its reliability was demonstrated in our previous work \cite{Tabunshchyk}. Here the interplanar coupling $J^{eff}_{\perp}$ is added phenomenologically for a simple tetragonal lattice (see figure~\ref{fig:lattice2}); $i$ and $j$ are nearest-neighbour sites in the same CuO$_2$ plane (indexes $i_1,j_1$ and $i_2,j_2$ in figure~\ref{fig:lattice2}) and $k$, $k'$ are nearest-neighbour sites in adjacent planes (indexes $i_1,i_2$ and $j_1,j_2$ in figure~\ref{fig:lattice2}). Since the interplanar coordination number within the simple tetragonal model is half the corresponding one for the coupling $\Delta J_{\perp}$, we can approximate $J^{eff}_{\perp}=2\Delta J_{\perp}$.
\begin{figure}[h] \centerline{\epsfxsize .4\textwidth\epsfbox{lattice_3D_simple_tetr.eps}} \caption{\label{fig:lattice2} (Colour online) Magnetic structure of the simple tetragonal lattice described by the effective model Hamiltonian of equation~(\ref{eq:H_cub}). Use of this Hamiltonian for this lattice accurately approximates the magnetic response of the La$_2$CuO$_4$ crystal in the LTO phase.} \end{figure} We performed calculations of the order parameter, spin-wave excitations and susceptibility within the RPA scheme for the effective simple tetragonal model of equation~(\ref{eq:H_cub}), and found that the transition temperature, the spectrum, and the behaviour of the order parameter and susceptibility are almost identical to those of the initial model of equation~(1). In figure~\ref{Fig04} we show representative data for the susceptibility obtained for the initial body-centred orthorhombic model as well as for the effective simple tetragonal one with $J^{eff}_{\perp}=2\Delta J_{\perp}$ -- clearly, the agreement between the predictions of these two models is excellent, so when studying the magnetic properties of the model~(1) on the 3D body-centred orthorhombic lattice one can utilize the effective Hamiltonian of the simple tetragonal lattice. Consequently, the magnetism of the La$_2$CuO$_4$ system in the LTO phase can be modelled by the Hamiltonian of equation~(\ref{eq:H_cub}). \begin{figure}[h] \begin{center} \epsfxsize 0.47\textwidth\epsfbox{Fig_04_chi.eps} \end{center} \caption{\label{Fig04}(Colour online) The susceptibility, in units of $1/J$, as a function of $T/T_N$. We present both results from the RPA calculation on the initial model Hamiltonian of equation~(1) with $\Delta J_{\perp}/J=0.1\times 10^{-3}$ and the simple tetragonal model with the Hamiltonian of equation~(\ref{eq:H_cub}) with $J^{eff}_{\perp}/J=0.2\times 10^{-3}$ (for $d/J=0.02$ and $\Delta\Gamma/J=0.42\times10^{-3}$). The curves from these two models essentially coincide, and no differences can be seen on this scale. } \end{figure} \section{Conclusions and Discussion} In this paper we presented a theoretical investigation of the body-centred orthorhombic lattice Heisenberg antiferromagnet with in-plane symmetric and anti-symmetric anisotropies, and a weak anisotropic AFM interlayer coupling. Our study focused on the role of the different interactions in explaining the magnetic properties of a La$_2$CuO$_4$~crystal in the low-temperature orthorhombic (LTO) phase. Due to the transition into the orthorhombic phase, the AFM interplanar coupling between nearest-neighbour spins in adjacent CuO$_2$~planes exhibits a small anisotropy. We have found that such a small anisotropy plays an important role in the magnetic properties of the system. In figures~\ref{Fig06}a,~\ref{Fig07}a one sees a significant change in the temperature dependence of the magnetic susceptibility upon varying the magnitude of the net interplanar coupling $\Delta J_{\perp}$. We also found that the (larger) individual superexchange interaction between any two nearest-neighbour spins in adjacent planes does not affect the physics of the model (figure~\ref{Fig01}).
Our results have shown that in the case of an isotropic interplanar coupling, 2D quantum fluctuations dominate over the effects of the 3D interaction, and the transition to the long-range magnetically ordered state, as well as the behaviour of the susceptibility, order parameter and magnon excitation spectrum, are not influenced by the interplanar exchange coupling (however, for a 3D model the $z$ component of the susceptibility will not diverge, as it does in a 2D model). Thus, in the case of the body-centred lattice model of equation~(1) with an isotropic interplanar coupling ($J'_{\perp}=J_{\perp}$) one can analyze the system using a 2D square lattice model with intraplanar anisotropies only. We have also shown that the initial model Hamiltonian (1) can be effectively replaced by a simpler one with fewer model parameters, namely by the AFM Heisenberg Hamiltonian with DM interaction, XY-like pseudo-dipolar anisotropy, and an effective interplanar interaction $J^{eff}_{\perp}$ (added phenomenologically for a simple tetragonal lattice). Here $J^{eff}_{\perp}/2\sim 10^{-4}J$ describes the small anisotropy of the AFM interplanar coupling in the initial system. We emphasize an important conclusion that can be drawn from our results. We have found that the in-plane anisotropy introduced into the problem by the symmetric XY-like pseudo-dipolar and antisymmetric DM interactions largely determines the behaviour of the magnetic susceptibility, the transition temperature into the long-range ordered state, and the spin-wave gaps in the case of a \emph{3D model} (within the wide range of model parameters of interest). Further, even when one studies a 3D model, the effect of quantum fluctuations is very strong in all temperature regions below the transition temperature, and cannot be ignored. Similar to the results of our previous paper \cite{Tabunshchyk}, we also find large short-range correlations in a broad temperature region above the N\'eel temperature. Now we comment on the comparison of our results to the experimentally observed anisotropies of the susceptibility \cite{Lavrov} and spin-wave gaps \cite{Keimer} that motivated our work. We can state that \emph{all anisotropic interactions} involved in the model, \emph{i.e.} the DM, XY-like pseudo-dipolar, and interplanar ones, are responsible for the unusual anisotropy in the magnetic susceptibility and the appearance of gaps in the spin-wave excitation spectrum. More concretely, by comparing to a purely 2D model, the inclusion of interplanar anisotropy leads to a splitting of both the in-plane and out-of-plane zone-centre spin-wave modes. While the neutron-scattering experiments find only two gaps, one in-plane mode $\varepsilon_i\approx2.3$~meV and one out-of-plane mode $\varepsilon_o\approx 5$~meV, we can infer the following possible situation predicted by our results: the in-plane mode $\varepsilon_1$ (which is always larger than $\varepsilon_2$) has a gap with a magnitude of about 10~meV. Indeed, such an in-plane gap can be seen in the spin-wave spectra measured in the neutron-scattering experiments \cite{Keimer}; the other observed gaps correspond to the out-of-plane mode $\varepsilon_3\approx 5$~meV and the in-plane mode $\varepsilon_2\approx 2.3$~meV. The magnitude of the gap of the remaining out-of-plane mode, $\varepsilon_4$, is relatively small and apparently has not been seen in experiment. Therefore, the hierarchy $\varepsilon_1 > \varepsilon_3 > \varepsilon_2 > \varepsilon_4$ agrees with experiment.
In this paper we established the correlation between the ratio of the in-plane and out-of-plane gaps of the excitation spectrum and the behaviour of the $\chi^{x,z}$ and $\chi^{y}$ components of the susceptibility in the zero-temperature limit. However, the proposed hierarchy of the spin-wave gaps takes place only if the ratio between the $x$ and $y$ components is opposite to that observed in experiment ($\chi^x<\chi^y$). This necessarily leads to the question: would other interactions, \textit{e.g.} ring exchange and/or interactions between next-nearest-neighbour sites \cite{Katanin}, lead to an accurate explanation of the susceptibility data within the RPA scheme? In order to answer this question we have performed calculations for the square lattice AFM Heisenberg model with the DM and XY-like pseudo-dipolar anisotropies, additionally taking into account the ring exchange and the interactions between next-nearest-neighbour sites (for the energy scales of these additional interactions see reference \cite{Katanin}). Our RPA calculations have established that ring exchange, together with the second- and third-nearest-neighbour in-plane exchanges, {\emph{does not change}} the results presented in our earlier paper \cite{Tabunshchyk} regarding the correlation between the ratio of the in-plane and out-of-plane spin-wave gaps and the behaviour of the $\chi^x$ and $\chi^y$ components of the susceptibility in the zero-$T$ limit, {\emph{viz.}} $\varepsilon_o^2/\varepsilon_i^2\approx\chi^x/\chi^y$. Thus, physics beyond that presented in our previous paper and in this manuscript is important, but this does not imply that a more complicated Hamiltonian with more interactions is necessarily required. A potential resolution of this dilemma can be found in studies based on the quantum non-linear sigma model \cite{Neto}. However, we will show in a future publication how, within a theory that accounts for short-wavelength behaviour, the ``next'' approximation beyond that used in our previous \cite{Tabunshchyk} and present papers fits the experimental data. This then allows the important next problem, namely the coupling of the anisotropic AFM state to either localized or mobile holes, to be examined. \ack We thank Alexander Lavrov, Yoichi Ando, and Marcello Barbosa da Silva Neto for helpful comments. This work was supported by the NSERC of Canada, and NATO. \newpage
\section{INTRODUCTION} Understanding the non--equilibrium statistical properties of driven particles in disordered media is a challenging question relevant to many experimental situations. A prominent example is provided by the moving phases of driven vortex lattices in superconductors \cite{GLD}. A key feature of these systems is that the disorder induces anisotropic response and fluctuations which are strongly controlled by the velocity \cite{GLD,KV,kolton}. In spite of its relevance for understanding situations of incoherent or plastic vortex flow, the simple case of an isolated vortex driven in a $d$-dimensional random potential with $d>1$ has been tackled analytically only by perturbation theory \cite{KV}, valid at high velocities, or by mean field theory \cite{Horner}, valid for $d \gg 1$. In this paper we propose a simple model whose long time behaviour can be computed analytically at $d=2$ for any finite velocity. \section{MODEL} Let us consider the equation of motion, in two dimensions, of a driven isolated vortex at zero temperature, \begin{equation} \eta {\bf v}(t) = {\bf F} + \sum_i {\bf f}_p({\bf r}-{\bf r}_{i}) \label{eq:ecmov1} \end{equation} where ${\bf v}=d{\bf r}/dt$ is the instantaneous velocity of the vortex located at ${\bf r}(t)$, ${\bf F}$ is the driving force and $\eta$ the friction coefficient. We model the disorder as a random arrangement of hard disks with center ${\bf r}_i$ and radius $\xi$. Outside the disks the vortex has a free motion and inside it feels a pinning force, \begin{equation} {\bf f}_p({\bf r}) = -\frac{A_p}{\xi^2} {\bf r} \Theta(1-r^2/\xi^2) \end{equation} where $A_p$ is the amplitude, and $\Theta$ the step function. This disorder models a diluted distribution of pinning centers separated by a distance $d > 2\xi$. In the following we use dimensionless variables: $\xi$ is the length unit, $A_p$ the energy unit, and $\xi^2\eta/A_p$ the time unit. Above the depinning transition the motion of the vortex consists of straight segments of free motion interrupted by collisions with the different pinning centers. At each collision the vortex is delayed and deflected with respect to the free motion. The equation describing the motion of the vortex inside the trap centered at ${\bf r}_i=0$ is, \begin{equation} \frac{d \bf r}{dt} = - {\bf r}+{\bf F} \label{eq:eqcol} \end{equation} The collision starts with the vortex at some initial position ${\bf r}(0)={\bf r}_0$ on the border of the trap, $r^2_0=1$. The solution of Eq.(\ref{eq:eqcol}) is \begin{equation} {\bf r}(t)=({\bf r}_0-{\bf F})e^{-t}+{\bf F} \label{eq:soltrap} \end{equation} After a time interval $\delta t$ the vortex will exit from the trap, therefore $r^2(\delta t)=1,\,\delta t>0$. Using this condition in Eq.(\ref{eq:soltrap}) we obtain the following expression for $\delta t$, \begin{eqnarray} e^{-\delta t} = \frac{{\bf f}_p^0.{\bf F} - \sqrt{[{\bf f}_p^0.{\bf F}]^2- (F^2-1)({\bf f}_p^0)^2}}{({\bf f}_p^0)^2} \label{eq:expdt} \end{eqnarray} where ${\bf f}_p^0={\bf F}+{\bf f}_p({\bf r}_0)$ is the total force on the vortex at the entry point. The displacement $\delta {\bf r}$ induced by the collision is then given by, \begin{equation} \delta {\bf r} \equiv {\bf r}(\delta t)-{\bf r}_0 = ({\bf r}_0-{\bf F})(e^{-\delta t}-1) \label{eq:dxi} \end{equation} Due to the random distribution of pinning centers, the motion can be considered as a random walk in the long time limit. The fluctuations of ${\bf r}(t)$ are thus induced by the uncorrelated sequence of collisions.
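It is worth noting that since the entry point satisfies $r_0^2=1$, $u=e^{-\delta t}=1$ is always one root of the quadratic in $u$ implied by $r^2(\delta t)=1$, so the exit root of Eq.(\ref{eq:expdt}) follows in closed form as $u=(F^2-1)/|{\bf r}_0-{\bf F}|^2$. A short Python sketch of the single-collision map (ours, for illustration only; sampling entry points via a uniform impact parameter is our assumption, appropriate for straight free flight between traps):
\begin{verbatim}
import numpy as np

def collision(r0, F):
    # Exit time dt and displacement dr for one trap, from
    # Eqs. (soltrap) and (dxi); reduced units, F > 1 along x.
    # u = 1 is always a root of r^2(dt) = 1; the exit root is
    # the product of the roots, u = (F^2-1)/|r0 - F|^2.
    g = r0 - np.array([F, 0.0])
    u = (F**2 - 1.0)/np.dot(g, g)
    return -np.log(u), g*(u - 1.0)

# head-on check: the vortex crosses the whole disk, dr = (2, 0)
print(collision(np.array([-1.0, 0.0]), F=2.0))

# averages over entry points (uniform impact parameter b)
rng = np.random.default_rng(1)
b = rng.uniform(-1.0, 1.0, 20000)
r0 = np.column_stack([-np.sqrt(1.0 - b**2), b])
out = [collision(x, 2.0) for x in r0]
dt = np.array([o[0] for o in out])
dr = np.array([o[1] for o in out])
print(dt.mean(), dr.mean(axis=0))   # <dt> and <dr>
\end{verbatim}
With such averages in hand, the transport properties derived below follow by direct substitution.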
Assuming identical pinning centers, the randomness comes exclusively from the random initial conditions of each collision, described by Eq.(\ref{eq:soltrap}). In a long time interval $t$ the vortex collides with a large number $N_c(t)$ of pinning centers. Since the core covers an area $\sim V t \xi$, where $V$ is the mean velocity, we get, \begin{equation} N_c(t)\approx n_p \xi V t. \label{eq:nrocollisions} \end{equation} By symmetry, the long time displacement is along the direction of ${\bf F}$, $\Delta {\bf r}(t)=\Delta r_{\parallel}(t) \hat{F}$, and \begin{equation} \Delta r_{\parallel}(t) \equiv V t \approx N_c(t)\langle \delta r_{\parallel} \rangle + F(t- N_c(t)\langle \delta t \rangle ) \label{eq:Dxi} \end{equation} where $\langle ... \rangle$ denotes an average over the random distribution of initial conditions ${\bf r}_0$ in Eqs.(\ref{eq:expdt}) and (\ref{eq:dxi}). From Eq.(\ref{eq:Dxi}) we get the mean velocity, \begin{eqnarray} V &=& \frac{F}{1-n_p \xi[\langle \delta r_{\parallel}\rangle - F \langle \delta t\rangle]} \label{eq:V} \end{eqnarray} We can now define longitudinal $D_{\parallel}$ and transverse $D_{\perp}$ diffusion constants, \begin{eqnarray} D_{\parallel} &\equiv& \langle [\Delta r_{\parallel}(t)-Vt]^2 \rangle /t \\ D_{\perp} &\equiv& \langle \Delta r_{\perp}^2(t) \rangle /t =\frac{N_c(t)}{t}\langle \delta r_{\perp}^2 \rangle \label{eq:difutrans} \end{eqnarray} To calculate the longitudinal diffusion constant in terms of the single-collision displacement we use the fact that $d- V (F \delta t + d - \delta r_{\parallel})/F$ is the random longitudinal displacement with respect to the average longitudinal motion in a single collision, with $d = n_p^{-1/2}$ the mean longitudinal distance between the pinning centers of two consecutive collisions. We thus get, \begin{equation} D_{\parallel}= \frac{N_c(t)}{t} \langle \bigl[ d- \frac{V}{F}(F \delta t + d - \delta r_{\parallel}) \bigr]^2 \rangle \label{eq:difulong} \end{equation} We also define the longitudinal mobility as $\mu_{\parallel} \equiv [\frac{d V}{d F}]$. In terms of single-collision quantities we get, \begin{equation} \mu_{\parallel} = \frac{V}{F}\biggl\{ 1 - V n_p \xi \biggl( \langle \delta t \rangle + F \frac{d \langle \delta t \rangle}{dF} - \frac{d \langle \delta r_{\parallel} \rangle}{dF} \biggr)\biggr\} \label{eq:mupara} \end{equation} To define the transverse mobility we need to introduce a small perturbative force $f_{\perp}$. The velocity induced by this force is, \begin{eqnarray} v_{\perp} &=& V n_p \xi [\langle \delta r_{\perp} \rangle - f_{\perp} \langle \delta t\rangle] + f_{\perp} \label{eq:Vtrans} \end{eqnarray} and thus we can define the transverse mobility $\mu_{\perp} \equiv [\frac{d v_{\perp}}{d f_{\perp} } ]_{f_{\perp}\rightarrow 0}$.
In terms of single-collision quantities we get, \begin{equation} \mu_{\perp} = 1 + V n_p \xi \biggl( \frac{d \langle \delta r_{\perp} \rangle}{df_{\perp}} - \langle \delta t \rangle \biggr)_{f_{\perp}\rightarrow 0} \label{eq:muperp} \end{equation} Finally, we can define effective temperatures using generalized Einstein relations, \begin{eqnarray} T_{\tt eff}^{\perp}&=&D_{\perp}/2 \mu_{\perp}\\ T_{\tt eff}^{\parallel}&=&D_{\parallel}/2 \mu_{\parallel} \end{eqnarray} In order to calculate the transport properties defined above ($V$, $D_{\perp}$, $D_{\parallel}$, $\mu_{\perp}$, $\mu_{\parallel}$, $T_{\tt eff}^{\perp}$, and $T_{\tt eff}^{\parallel}$) we need to calculate the first moments of the distributions of $\delta t$, $d (\delta t) /d F$, $\delta {\bf r}$, $\delta r_{\parallel} \delta t$, $d (\delta r_{\perp}) /d f_{\perp}$, and $d (\delta r_{\parallel}) /d F$ by performing simple integrals. \begin{figure} \centerline{\includegraphics[height=8.0cm]{FIG1.eps}} \caption{ (a) VF characteristics of the driven particle. The dashed line is the free flow solution. (b) Correction to the free flow velocity. Lines show the asymptotic forms at small and large velocities. } \label{fig:VF} \end{figure} \section{RESULTS} In Fig. \ref{fig:VF}(a) we show the velocity--force (VF) characteristics of our model, calculated from Eq.(\ref{eq:V}). The critical depinning force is $F_c=1$. At zero temperature, for $F<F_c$, the particle is trapped after a transient, and thus $V=0$. At low velocities, $F \rightarrow F_c^+$, the VF curve is strongly nonlinear with $V \sim [n_p \log ( (F-F_c)/2F_c )]^{-1}$ \cite{kolton_unpublished}, and at large velocities free flux flow is approached with corrections that scale as $F-V \sim V^{-1}$, as shown in Fig.\ref{fig:VF}(b). \begin{figure} \centerline{\includegraphics[height=8.0cm]{FIG2.eps}} \caption{ (a) Diffusion constants in the longitudinal ($\triangle$) and transverse ($\Box$) directions. (b) Mobility in the transverse ($\Box$) and longitudinal ($\triangle$) directions. Lines show the small and large velocity asymptotic forms in both figures. } \label{fig:difyres} \end{figure} In Fig. \ref{fig:difyres}(a) we show the longitudinal and transverse diffusion constants. $D_{\parallel}$ and $D_{\perp}$ are both non--monotonic functions of the velocity $V$. At small velocity the diffusion constants grow linearly with $V$, while at large velocity $D_{\parallel} \sim V^{-3}$ and $D_{\perp} \sim V^{-1}$. Let us note that $D_{\parallel}$ and $D_{\perp}$ cross at a characteristic velocity $V_{\circ}$. This crossing means that the long time diffusion front changes aspect ratio at $V_{\circ}$. For $V<V_{\circ}$ the diffusion front is elongated in the driving direction, while for $V>V_{\circ}$ it is elongated in the transverse direction. At $V_{\circ}$ diffusion is isotropic. Interestingly, the same behavior is observed in numerical simulations of interacting vortices in two dimensions \cite{kolton}. In Fig. \ref{fig:difyres}(b) we show the longitudinal and transverse mobilities. We observe that both are velocity dependent and approach the free flux flow response, $\mu \sim 1$, at large velocities. Since $\mu_{\parallel}$ is the differential resistance, $dV/dF$, the divergence observed at small velocity is a signature of the depinning transition. $\mu_{\parallel}$ near depinning is high since any small force can strongly reduce the waiting time $\delta t \sim 1/V$ inside the trap. On the contrary, $\mu_{\perp} \sim V$ at small velocity.
A small transverse force has only a small effect on the trapping time $\delta t$, compared with the linear $V$ dependence (Eq.(\ref{eq:nrocollisions})) of the number of collisions per unit time. In Fig. \ref{fig:teff} we show the effective temperatures in the longitudinal and transverse directions. Since at large velocities the mobilities saturate, the velocity dependence of the effective temperatures is dominated by the diffusion constants, and thus $T^{\perp}_{\tt eff}\sim 1/V$ and $T^{\parallel}_{\tt eff}\sim 1/V^3$. At low velocities the effective temperatures are instead strongly determined by the velocity dependence of the mobilities. $T^{\parallel}_{\tt eff}$ reaches a maximum at an intermediate velocity and decreases quickly to zero with decreasing $V$. On the other hand, $T^{\perp}_{\tt eff}$ saturates at a finite value as the velocity vanishes, due to the linear velocity dependence of both the diffusion constant and the mobility. It is worth noting that only at large velocities and in the transverse direction is the effective temperature found to be identical, apart from numerical factors, to the shaking temperature \cite{KV}. This confirms that the shaking temperature is equivalent to an effective temperature defined from a generalized fluctuation relation only in the limit of large velocity of non--interacting vortices \cite{kolton}. It is important to point out here that the thermodynamic nature of the effective temperatures of this model is still unclear, since the system is non--interacting and strongly driven \cite{CKP}. \begin{figure} \centerline{\includegraphics*[height=8.0cm]{FIG3.eps}} \caption{Effective temperatures in the transverse (a) and longitudinal (b) directions. Lines indicate the asymptotic forms at large velocity.} \label{fig:teff} \end{figure} \section{CONCLUSIONS} We have studied the pinning--induced anisotropic diffusion of driven non--interacting vortices. We find that the diffusion front is elongated in the direction of the driving force at low velocities and in the transverse direction at large velocities. This implies the existence of isotropic diffusion at a characteristic velocity $V_{\circ}$. The analysis of the anisotropic low frequency voltage noise in superconductors could be a possible experimental probe of this result, since diffusion constants and velocity fluctuations are related by generalized Green-Kubo relations \cite{kolton}. Even if the depinning transition does depend on the peculiarities of our model, we find that the main features of the long time fluctuations we obtain at intermediate and large velocities are in agreement with perturbation theory predictions \cite{KV} and with numerical simulations of non--interacting vortices in the non--simplified model \cite{kolton_unpublished}. Furthermore, it is easy to show that our model can be solved in any dimension $d>1$ and generalized to more complicated short-range random potentials \cite{kolton_unpublished}. We acknowledge discussions with A. Rosso, A. Iucci, T. Giamarchi, and D. Dom\'{\i}nguez. This work was supported in part by the Swiss National Fund under Division II.
\section{Introduction} The presence of quenched randomness leads to many differences in statistical behavior compared to ``pure systems''. This is true for many phenomena, such as transport properties in, for instance, superconductors, or a rather wide range of cases in magnetism. Consider a domain wall in a magnet, which gets pinned due to impurities. The scenario may vary according to the symmetries of the system and to the character of the disorder, but is described, in most general terms, by an ``energy landscape'' which develops a rich structure due to the presence of pinning defects \cite{generic}. The most usual and convenient example of such magnets is given by the Ising model universality class. Disorder is normally introduced as frozen ``random bond'' and ``random field'' impurities, which can change dramatically the nature of the phases of the model and the character of the phase transition. Strong enough bond disorder creates a spin-glass state, while the random fields couple directly to the order parameter, the magnetization. The criticality in such models is usually studied by finite size scaling, to extract the thermodynamic behavior. However, real (experimental) systems are finite and have boundaries. These break the translational invariance and create differences in the critical behavior between the boundary region and the bulk. The related phenomenon is called ``surface criticality'', and the essential point is that a whole set of new critical exponents arises to describe the behavior of various quantities at and close to surfaces \cite{ptcp8,ptcp10}. Here, we investigate this phenomenon by scaling arguments and exact numerical methods in the case of the random field Ising model (RFIM), in three dimensions (3d). In this case, the RFIM has a bulk phase transition separating ferromagnetic and paramagnetic states. The central question that we want to tackle is: how do disorder and the presence of boundaries combine, in a system where the critical bulk properties are already different from pure systems? Though disordered magnets have been investigated earlier for the case of weak bond-disorder \cite{selke,pleimling}, both spin-glasses -- a possible future extension of our work -- and the RFIM have not been studied \cite{heiko}. One general problem of the 3d RFIM has been how to observe the critical behavior, and understanding the boundary critical behavior provides an independent, novel avenue for such purposes \cite{belanger,antifm,kleemann}. Such experiments are done on a number of systems, from diluted antiferromagnets in a field \cite{belanger,antifm} to binary liquids in porous media \cite{dierker} and relaxor ferroelectrics \cite{kleemann}. The particular characteristic of the RFIM is a complicated energy landscape, which manifests itself e.g. in the violation of the usual hyperscaling relation of thermodynamics, and in the existence of an associated violation exponent $\theta$ and several consequences thereof. This is analogous to, for instance, spin glasses, and furthermore for surface criticality presents the question of how the broken translational invariance combines with the energy scaling. Our results imply that this can be understood by scalings that include both the bulk correlation length exponent $\nu$ and the bulk $\theta$ and novel surface exponents.
Moreover, though the bulk RFIM 3d phase transition has been notoriously difficult to observe experimentally, the boundary order parameter, say, should be quite sensitive to the control parameter (temperature in experiments, and disorder here), and thus promises to make the surface criticality experimentally observable. In the next section we overview the theoretical picture, as applied to the RFIM. Section 3 presents the numerical results, where the emphasis is two-fold. We discuss the surface criticality on one hand, and on the other hand the decay of a surface-field-induced perturbation is analyzed, since it has characteristics peculiar to a disordered magnet, in contrast to pure systems. Finally, Section 4 finishes the paper with a discussion of the results and future prospects. \section{Surface criticality} The RFIM Hamiltonian with a free surface $S$ reads \begin{equation} \label{RFIM_Ham} H_{RFIM} = -J\sum_{\langle i,j\rangle \notin S}\sigma_i \sigma_j -J_1\sum_{\langle i,j\rangle \in S}\sigma_i \sigma_j - \sum_{i}h_i\sigma_i, \end{equation} where $J$ is the bulk (nearest neighbour) interaction strength while $J_1$ describes the strength of the {\em surface interaction}, in general different from $J$. The $\sigma_i$ take the values $\pm 1$. For simplicity, the random fields $h_i$ obey a Gaussian probability distribution $ P(h_i) = \frac{1}{\sqrt{2\pi}\Delta} \exp{\left[-\frac{1}{2}\left(\frac{h_i}{\Delta}\right)^2\right]}$, with zero mean and standard deviation $\Delta$. One might also have external fields, such as a bulk magnetic field $h$ and a surface magnetic field $h_1$ at $S$. Being governed by a zero temperature fixed point, the phase transition of the 3d RFIM can also be studied at $T=0$, where it takes place at a critical $\Delta_c$. The transition is of second order, though it also exhibits some first-order characteristics: the order parameter exponent $\beta$ is very close to zero \cite{middleton,rieger,hartmann_m}. The surface criticality of the 3d RFIM is simplified by the fact that the lower critical dimension is two \cite{aizenman,uusi}; thus, in the absence of a surface magnetic field $h_1$, only an {\em ordinary transition} can take place. The surface orders only because the bulk does so, and the transition point is the bulk critical point. Even in this case, there is a wide variety of surface quantities. Derivatives of the {\em surface free energy} $f_s$ (the surface ground state energy at $T=0$) with respect to surface fields, such as the surface magnetic field $h_1$, yield {\em local quantities} (e.g. the surface layer magnetization $m_1=-\partial f_s/\partial h_1$), while derivatives of $f_s$ with respect to bulk fields produce {\em excess quantities}, such as the excess magnetization $m_s = -\partial f_s/\partial h$, defined by \begin{equation} \frac{1}{V}\int d^dx \ m({\bf x}) = m_b + \frac{S}{V}m_s + O(L^{-2}), \end{equation} where $m({\bf x})$ is the (coarse grained) magnetization at ${\bf x}$, and $V \sim L^d$ and $S$ are the sample volume and its surface area, respectively. One also obtains {\em mixed quantities} by taking second or higher derivatives of $f_s$. We focus on the critical behavior of the local and the excess magnetization ($m_1$ and $m_s$) as well as the excess specific heat $C_s$. The RFIM bulk critical exponents are related via the usual thermodynamic scaling relations, see Table \ref{table1}.
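As an aside (an illustrative sketch of ours, not part of the analysis proper), the excess magnetization defined above can be extracted in practice by fitting the size dependence of the mean magnetization to the form of the equation above, with $S/V = 2/L$ for one pair of free surfaces:
\begin{verbatim}
import numpy as np

# Illustrative extraction of the excess magnetization m_s:
# <m>(L) = m_b + (S/V) m_s + O(L^-2), with S/V = 2/L for one
# pair of free surfaces. The arrays below are placeholders
# standing in for measured ground-state data.
L = np.array([20., 30., 40., 60., 80., 100.])
m = np.array([0.910, 0.930, 0.940, 0.950, 0.955, 0.958])

x = 2.0/L                        # surface-to-volume ratio S/V
m_s, m_b = np.polyfit(x, m, 1)   # slope = m_s, intercept = m_b
print(m_b, m_s)
\end{verbatim}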
The hyperscaling relations, however, have the modified form \begin{equation} \label{hyper} 2-\alpha = \nu(d-\theta), \end{equation} with the additional exponent $\theta$ \cite{braymoore}. The usual way to relate the surface excess exponents to bulk exponents is to note that from conventional hyperscaling (Eq. (\ref{hyper}) with $\theta = 0$) it follows that the singular part of the bulk free energy $f_b^{(sing)}$ scales with the correlation length $\xi$ as $f_b^{(sing)} \sim \xi^{-d}$. By making the analogous assumption for the surface free energy, $f_s^{(sing)} \sim \xi^{-(d-1)}$, one finds \cite{ptcp10} \begin{equation} \label{pure_scaling} \alpha_s = \alpha + \nu, \quad \beta_s = \beta - \nu. \end{equation} In the case of the RFIM the above becomes less clear: does the $\theta$-exponent get modified? We assume that the exponent $\theta'$ in $f_s^{(sing)} \sim \xi^{-(d-1-\theta')}$ may in general be different from the bulk exponent $\theta$, and obtain \begin{eqnarray} \label{alpha_s} \alpha_s &=& \alpha + \nu - \nu(\theta-\theta'), \\ \label{beta_s} \beta_s &=& \beta - \nu + \nu(\theta-\theta'). \end{eqnarray} To derive Eq. (\ref{beta_s}), the scaling form $\frac{E_s^{(sing)}}{J} \sim t^{2-\alpha_s} \tilde{E}_s [h/Jt^{-(\gamma+\beta)}]$ is used for the singular part of the excess ground state energy density $E_s^{(sing)}$ (which takes the role of the excess free energy at $T=0$), with $t \equiv (\Delta-\Delta_c)/J$, together with Eq. (\ref{alpha_s}) and the Rushbrooke scaling law $\alpha + 2\beta + \gamma = 2$. Here $\gamma$ is the exponent describing the critical behavior of the bulk susceptibility. Scaling relations relating $\beta_1$ to other `local' surface exponents can also be derived, but $\beta_1$ cannot be expressed in terms of bulk exponents alone. \begin{table}[fht] \begin{tabular}{lll} \hline Quantity & Definition & Exponent\\ \hline excess magnetization & $m_s=-\frac{\partial f_s}{\partial h}$ & $m_s \sim (-t)^{\beta_s}$ \\ excess specific heat & $C_s=\frac{\partial^2 f_s}{\partial \Delta \partial J}$ & $C_s \sim |t|^{-\alpha_s}$ \\ surface magnetization & $m_1=-\frac{\partial f_s}{\partial h_1}$ & $m_1 \sim (-t)^{\beta_1}$ \\ \hline \end{tabular} \caption{Surface quantities in terms of the surface free energy $f_s$, and the corresponding critical exponents ($t \equiv (\Delta-\Delta_c)/J$). Note that at $T=0$ one uses the ground state energy in place of the free energy.} \label{table1} \end{table} \section{Numerical results} The exact ground state (GS) calculations are based on the equivalence of the $T=0$ RFIM with the maximum flow problem in a graph \cite{alavaptcp}; we use a polynomial push-relabel preflow-type algorithm \cite{goldberg,seppala_vk}. If not stated otherwise, we study cubic systems of size $L^3$, $L\leq100$. Free boundary conditions are used in one direction (the free surface under study) while in the remaining ones periodic boundary conditions are imposed. The maximal statistical error in what follows is of the order of the symbol size used, so error bars are omitted. Note that since in the present case only the ordinary transition is possible, the critical exponents should be independent of the surface interaction $J_1$. Complications arise, however, since in 2d the RFIM is effectively ferromagnetic below the break-up length scale $L_b$, which scales as $L_b \sim \exp{[A(J/\Delta)^2]}$ (see Fig. \ref{lb}) \cite{seppala_Lb, binder}. This means that the surfaces have a tendency to be intrinsically ordered, and to see the true ordinary transition behavior one needs $L > L_b$.
Thus, we use substantially weakened surface interactions $J_1 \ll J$ to circumvent this problem. \begin{figure}[ht] \includegraphics[width=7c ]{./lb.eps} \caption{The break-up length scale $L_b$ of the 2d surface layer of the 3d RFIM with a strongly paramagnetic bulk, $J=0.05 \Delta$, vs $(J_1/\Delta)^2$. $L_b$ is estimated by looking for a value of $J_1$ such that the surface will be totally ordered with probability $1/2$ while keeping $\Delta$ and $L$ fixed. The solid line corresponds to $A=2.1$.} \label{lb} \end{figure} \begin{figure}[ht] \includegraphics[width=7c ]{./f1.eps} \caption{Mean absolute value of the surface layer magnetization $m_1$ as a function of $\Delta/J$ for various $L$, $J_1=J$. The dashed vertical line corresponds to the critical point of the infinite system, $\Delta/J=2.27$.} \label{m1} \end{figure} \subsection{Surface layer magnetization} Fig. \ref{m1} shows an example of the magnetization $m_1$ of the surface layer close to $\Delta_c$, obtained directly from the spin structure of the GS. We assume the finite size scaling ansatz \begin{equation} \label{m1_ansatz} m_1 = L^{-\beta_1/\nu} \tilde{m}_1[(\Delta-\Delta_c)L^{1/\nu}], \end{equation} where $\tilde{m}_1$ is a scaling function. At the critical point $\Delta=\Delta_c$, Eq. (\ref{m1_ansatz}) reduces to $m_1 \sim L^{-\beta_1/\nu}$. Fig. \ref{m1_crit} is a double logarithmic plot of $m_1$ versus $L$ at $\Delta_c/J = 2.27$ for three $J_1$-values. All three are consistent with \begin{equation} \beta_1/\nu = 0.17 \pm 0.01. \end{equation} Using the bulk value $\nu=1.37 \pm 0.09$ \cite{middleton}, one obtains \begin{equation} \beta_1 = 0.23 \pm 0.03. \end{equation} Fig. \ref{collapse} depicts $m_1L^{\beta_1/\nu}$ versus $(\Delta-\Delta_c)L^{1/\nu}$, and with $\beta_1/\nu=0.17$, $\nu=1.37$ and $\Delta_c/J = 2.27$ one indeed obtains a decent data collapse. With $J_1 \approx J$, however, plotting $m_1(\Delta_c)$ versus $L$ produces a slightly different exponent, $\beta_1/\nu \approx 0.15$, and we could not obtain good data collapses, probably due to the fact that $L_b$ is large. \begin{figure}[ht] \includegraphics[width=7c ]{./f2.eps} \caption{A log-log plot of the surface layer magnetization $m_1$ as a function of the system size $L$ at criticality, $\Delta/J = 2.27$, for various $J_1/J \ll 1$. The solid lines depict fits, with $\beta_1/\nu = 0.17 \pm 0.01$ for all three cases shown.} \label{m1_crit} \end{figure} \begin{figure}[ht] \includegraphics[width=7c ]{./f3.eps} \caption{A scaling plot of the surface layer magnetization $m_1$ in the case $J_1=0$, $J=1$, using $\Delta_c=2.27$, $\nu =1.37$ and $\beta_1=0.23$.} \label{collapse} \end{figure} \subsection{Surface excess magnetization} For the surface excess magnetization $m_s$, we use the finite size scaling ansatz \begin{equation} m_s = L^{-\beta_s/\nu}\tilde{m}_s[(\Delta-\Delta_c)L^{1/\nu}], \end{equation} where $\tilde{m}_s$ is a scaling function. Since $\beta_1$ was found to be independent of $J_1/J$ as long as $J_1/J \ll 1$ (in the limit $L\rightarrow \infty$, the independence of the exponents of $J_1/J$ should hold for {\em any} $J_1/J$), one expects the same to apply for the other exponents as well, and we thus consider here only the case $J_1/J=0.1$. At the critical point, $m_s$ grows almost linearly with $L$ (Fig. \ref{ms_crit}), with the exponent $-\beta_s/\nu = 0.99 \pm 0.02$. This yields, by again using $\nu = 1.37 \pm 0.09$, \begin{equation} \beta_s = -1.4 \pm 0.1.
\end{equation} \begin{figure}[ht] \includegraphics[width=7c ]{./f4.eps} \caption{A log-log plot of the excess magnetization $m_s$ as a function of the system size $L$ for $\Delta/J=2.27$, $J_1/J=0.1$. A background term of magnitude 1.07 has been subtracted from $m_s$ to see the power-law behavior. The solid line is a power-law fit, with $-\beta_s/\nu = 0.99$.} \label{ms_crit} \end{figure} \subsection{Surface specific heat} In GS calculations, the specific heat is computed (recall $T=0$) by replacing the second derivative of the free energy $f$ with respect to the temperature by the second derivative of the GS energy density $E$ with respect to $\Delta$ or $J$ \cite{hartmann}. $\partial E / \partial J$ is the bond part of $E$, $E_J = L^{-d}\sum_{\langle i,j \rangle} \sigma_i \sigma_j$. The excess specific heat exponent $\alpha_s$ is estimated according to Ref. \cite{middleton} (where the bulk one was considered). The singular part of the excess specific heat obeys \begin{equation} C_s^{(sing)} = L^{\alpha_s/\nu}\tilde{C}_s[(\Delta-\Delta_c)L^{1/\nu}], \end{equation} from which it follows by integration that the singular part of the excess bond energy at criticality obeys \begin{equation} \label{bondE_form} E_{J,s}^{(sing)}(L,\Delta=\Delta_c) = c_1 + c_2 L^{(\alpha_s-1)/\nu}, \end{equation} where $c_1$ and $c_2$ are constants. Fig. \ref{Cs_crit} is a plot of the excess bond energy, with $J_1/J=0.1$, at the bulk critical point. The fit using Eq. (\ref{bondE_form}) results in $(\alpha_s-1)/\nu=0.22 \pm 0.03$, corresponding to \begin{equation} \alpha_s = 1.30 \pm 0.05. \end{equation} \begin{figure}[ht] \includegraphics[width=7c ]{./f5.eps} \caption{A plot of the absolute value of the excess bond energy $E_{J,s}$ as a function of $L$ for $\Delta/J=2.27$, $J_1/J=0.1$. The solid line corresponds to a fit of the form of Eq. (\ref{bondE_form}), with $c_1=1.1292$, $c_2=0.9756$ and $(\alpha_s-1)/\nu=0.22.$} \label{Cs_crit} \end{figure} \subsection{Magnetization decay close to the surface} Finally, we discuss the behavior of the magnetization profiles $m(z)$ (i.e. the magnetization as a function of the distance $z$ from the surface) in the case where the spin orientation of the surface layer is fixed. This corresponds to applying a strong surface field $h_1$. These are of interest as they reflect spin-spin correlations close to the surface, as studied in Ref.~\cite{parisisourlas} in the slightly different context of comparing two replicas with opposite $h_1$. For the RFIM close to the infinite-system bulk critical point, $m(z)$ is affected by the fact that for numerically feasible system sizes the bulk magnetization is close to unity and decreases very slowly with increasing system size (due to the small value of $\beta$) \cite{middleton}. This is demonstrated in the inset of Fig. \ref{interface}, where the distribution of the bulk magnetization $m_b$ at the critical point can be seen to be strongly peaked around $m_b = \pm 1$. One can now distinguish three scenarios from sample to sample: if $|m_b| \approx 1$, the applied strong surface field $h_1$ may have the same or opposite orientation, or finally the bulk magnetization $m_b$ may be close to zero. In the first case, the $h_1$-induced spin configuration will be close to the one in the absence of the field. In the second case, $h_1$ will either force $m_b$ to change sign altogether (producing again a flat profile with $m(z) \approx \pm 1$) or induce an \emph{interface} between the two regions of opposite magnetization, as in Fig. \ref{interface}.
The third case has a small probability, and thus will not contribute much to the ensemble-averaged magnetization profile. The \emph{average} magnetization profile $\langle m(z) \rangle$ can then (for a finite system, at the infinite-system critical point) be well approximated by writing \begin{equation} \label{mz_ansatz} \langle m(z) \rangle \approx a + b \langle m_{if}(z) \rangle. \end{equation} Here $a$ and $b$ are weight factors, constant here but in general functions of $L$, that give the relative weight of samples in which the magnetization changes in the interior due to $h_1$. \begin{equation} \label{if_integral} \langle m_{if}(z) \rangle = \int dw dz_0 P_w(w) P_{z_0}(z_0)m(z,z_0,w) \end{equation} is the profile one would obtain by averaging only over ``single sample'' profiles $m(z,z_0,w)$, corresponding to an interface of width $w$ and position $z_0$ (with probability distributions $P_w$ and $P_{z_0}$, respectively). A simplified model for $m(z,z_0,w)$ is shown in Fig. \ref{interface_model}. From the exact ground state calculations, we identify the profiles corresponding to such interface configurations. This is done by demanding that such profiles have a region where $m(z) < -0.9$ (when $h_1 \gg 0$). The interface width is defined as $w = z_2-z_1$, where $z_1$ and $z_2$ are the smallest $z$'s such that $m(z_1)<0.9$ and $m(z_2)<-0.9$, respectively. The interface position $z_0$ is then given by $z_0 = (z_1+z_2)/2$. By counting the fraction of such profiles, we can estimate $a$ and $b$ in Eq. (\ref{mz_ansatz}). These have the approximate values of $0.39$ and $0.61$, respectively (for a system of size 40x40x80). By using Eqs. (\ref{mz_ansatz}) and (\ref{if_integral}) with $m(z,z_0,w)$ as presented in Fig. \ref{interface_model}, as well as the distributions $P_w$ and $P_{z_0}$ measured from the ground state calculations, one indeed obtains an average profile $\langle m(z) \rangle$ that is in reasonable agreement with the true one, see Fig. \ref{profile_comparison}. The \emph{average} magnetization profile $\langle m(z) \rangle$ decays slowly with the distance $z$, not quite reaching zero at the opposite edge of the system in the case at hand. However, a \emph{typical} value of $m(z)$ will be close to $\pm 1$ for all $z$, which persists for accessible system sizes due, again, to the small value of $\beta$. One may thus observe effects reminiscent of a violation of self-averaging, and this would also be true if one were to measure the averaged difference $\langle |m(z)-m_{GS} (z)|\rangle$ between the field-perturbed and GS configurations, and the higher moments thereof. These results simply illustrate how the quasi-ferromagnetic character of the 3d RFIM ground state influences such perturbation studies, a consequence of the limited system sizes one can in practice access in simulations. \begin{figure}[ht] \includegraphics[width=7c ]{./interface.eps} \caption{Main figure: A typical example of a magnetization profile, taken from a single sample, where due to a strong positive surface field $h_1$ at $z=0$ an interface has formed between two regions of opposite magnetization. Inset: Distribution of the bulk magnetization $m_b$ with periodic boundary conditions, 2000 samples. $\Delta/J=2.27$, system size 40x40x80.} \label{interface} \end{figure} \begin{figure}[ht] \includegraphics[width=7c ]{./interface_sketch.eps} \caption{A simple model for a single-sample magnetization profile $m(z,z_0,w)$.
The interface is characterized by its position $z_0$ and width $w$.} \label{interface_model} \end{figure} \begin{figure}[ht] \includegraphics[width=7c ]{./profile_comparison2.eps} \caption{Main figure: A comparison between the numerical $\langle m(z) \rangle$ (solid line, averaged over $3000$ samples) and that obtained by using Eqs. (\ref{mz_ansatz}) and (\ref{if_integral}) with $m(z,z_0,w)$ as in Fig. \ref{interface_model} (dashed line). Inset: Distributions of the interface position $P_{z_0}(z_0)$ (solid line) and width $P_w(w)$ (dashed line) obtained from the simulations. $\Delta/J=2.27$, system size 40x40x80.} \label{profile_comparison} \end{figure} \section{Conclusions} In this work we have studied, using combinatorial optimization and scaling arguments, surface criticality in a random magnet, the 3d RFIM. The surface layer magnetization exponent $\beta_1$ is more than an order of magnitude larger than the extremely small bulk value \cite{middleton, rieger, hartmann_m}. Experimentalists have reported much larger values for $\beta$ \cite{belanger,antifm,kleemann}, which in fact are rather close to our estimate for $\beta_1$. An intriguing possibility in this respect is the direct observation of the surface order parameter in relaxor ferroelectrics via piezoelectric force microscopy \cite{kleemann2}. The excess exponents $\alpha_s$ and $\beta_s$, when inserted into the scaling relations (\ref{alpha_s}) and (\ref{beta_s}), both yield very small values for the correction term $\nu(\theta-\theta')$, assuming $\alpha \approx 0$, $\beta \approx 0.02$ and $\nu \approx 1.37$ \cite{middleton}. This suggests that in fact $\theta' = \theta$, and the excess exponents are related to bulk exponents by the usual scaling laws valid for pure systems, Eq.~(\ref{pure_scaling}). The numerically obtained description of the ordinary surface transition uses the bulk correlation length exponent, as in pure systems. All this would merit further theoretical consideration, and could also be checked in the four-dimensional RFIM \cite{4drfim}, whose phase diagram is also more complex due to the 3d surfaces, which can have independent phase transitions. The spin-spin correlations close to the surface and the magnetization profiles in the presence of boundary perturbations have been studied, similarly to the context of looking for self-averaging violations \cite{parisisourlas}. It would be interesting to investigate this aspect in more detail, but in our numerics the most transparent features are due to the two-peaked magnetization distribution of the ground states, without a perturbing field. On a final note, the observations here concerning surface criticality in a disordered magnet -- with a complicated energy landscape -- extend directly, for instance, to spin glasses \cite{spinglasses} and to a wide class of non-equilibrium systems (see \cite{fran}, also for experimental suggestions). Two evident possibilities are looking for the same phenomenology in 3d Ising spin glasses, and in the 3d zero-temperature non-equilibrium RFIM. In the former case, the free surface of a system at $T>0$ is, in analogy to the zero-temperature 3d RFIM case, inherently disordered (the 2d spin glass has a $T=0$ phase transition). In the second case, the situation is much more akin to the one at hand (\cite{fran}) and one should consider as the order parameter the remanent surface magnetization after a demagnetization procedure. {\bf Acknowledgments} A. Hartmann (G\"ottingen), D. Belanger (Santa Cruz) and W.
Kleemann (Duisburg) are thanked for useful comments, and the Center of Excellence program of the Academy of Finland for financial support.
\section{Introduction} The encapsulation of atomic nitrogen within a fullerene shield has provided a uniquely robust molecular electron spin~\cite{knapp97}. Its unique relaxation properties have enabled the observation of a novel type of electron spin echo envelope modulation (ESEEM)~\cite{eseem05} and attracted attention as a potential embodiment of a bit of quantum information~\cite{harneit}. In high spin systems ($S\ge1$) in liquid solution, a fluctuating zero field splitting (ZFS) has habitually been cited as the dominant relaxation mechanism since transition metal ions were first studied by EPR~\cite{mcgarvey, bloemmorgan}. When relaxation in N@C$_{60}$~(which has electron spin $S=3/2$) was first studied, it was therefore natural to assume that the same ZFS mechanism applied~\cite{knapp98}. However, to date there has been little evidence to support this hypothesis. For example, no temperature dependence has been reported for N@C$_{60}$~in solution; such a study is critical in determining unambiguously which relaxation mechanisms are relevant. Measurements have been reported in CS$_2$~and toluene solutions~\cite{dietel99}; however, the analysis of these results ignored the effects of magnetic nuclei in toluene, which we have found to contribute significantly to the relaxation~\cite{mortontolrelax}. Finally, the previous measurements were performed using fullerene solutions that were sufficiently concentrated for (C$_{60}$)$_n$ aggregates to form, so it is difficult to conclude which phase (liquid or solid) the reported T$_1$/T$_2$~times correspond to~\cite{bokare03}. Consequently, the favoured relaxation model of a zero-field splitting (ZFS) fluctuation has little direct evidence to support it, and must be critically re-evaluated. In this letter we report relaxation times for both N@C$_{60}$~and N@C$_{70}$~in CS$_2$~solution, which, conveniently, lacks nuclear spins in the dominant isotopes of its constituents. We find that the temperature dependence of the relaxation times is inconsistent with the previously proposed ZFS mechanism, and suggest an alternative Orbach relaxation mechanism. We extract an energy gap which matches well the first excited vibrational state of the fullerene cage. \section{Materials and Methods} \begin{figure}[t] \centerline {\includegraphics[width=3.2in]{013548JCP1.eps}} \caption{Continuous wave EPR spectrum of N@C$_{60}$ in CS$_{2}$ at room temperature. Each line in the triplet signal is labeled with the corresponding projection $M_I$ of the $^{14}$N nuclear spin. Measurement parameters: microwave frequency, 9.67~GHz; microwave power, 0.5~$\mu$W; modulation amplitude, 2~mG; modulation frequency, 1.6~kHz.}\label{cwEPR} \end{figure} High-purity endohedral N@C$_{60}$ was prepared~\cite{mito}, dissolved in CS$_{2}$ to a final fullerene concentration of 1-2$\cdot 10^{15}$/cm$^3$, freeze-pumped in three cycles to remove oxygen, and finally sealed in a quartz EPR tube. The fullerene concentration used ($\approx1~\mu$M) was well below the cluster formation threshold~\cite{bokare03}. Samples were 0.7-1.4~cm long, and contained approximately $5\cdot 10^{13}$ N@C$_{60}$ spins. Pulsed EPR measurements were performed using an X-band Bruker Elexsys580e spectrometer, equipped with a nitrogen-flow cryostat. T$_2$~and T$_1$~times were measured using 2-pulse (Hahn) electron spin echo (ESE) and inversion recovery experiments, respectively. The $\pi/2$ and $\pi$ pulse durations were 56 and 112~ns respectively.
Phase cycling was used to eliminate the contribution of unwanted free induction decay (FID) signals. \Fig{cwEPR} shows the continuous-wave EPR spectrum of N@C$_{60}$~in CS$_2$~at room temperature. The spectrum is centered on the electron g-factor $g=2.0036$ and comprises three narrow lines (linewidth $<0.3~\mu$T) resulting from the hyperfine coupling to $^{14}$N \cite{Murphy1996}. The relevant isotropic spin Hamiltonian (in angular frequency units) is \begin{equation}\label{Hamiltonian} \mathcal{H}_0=\omega_e S_z - \omega_I I_z + a \!\cdot\! \vec{S} \!\cdot\! \vec{I}, \end{equation} where $\omega_e=g\beta B_0/\hbar$ and $\omega_I=g_I\beta_n B_0/\hbar$ are the electron and $^{14}$N nuclear Zeeman frequencies, $g$ and $g_I$ are the electron and nuclear g-factors, $\beta$ and $\beta_n$ are the Bohr and nuclear magnetons, $\hbar$ is the reduced Planck constant and $B_0$ is the magnetic field applied along the $z$-axis in the laboratory frame. Each hyperfine line (marked in Fig.~\ref{cwEPR} with $M_I=0$ and $\pm 1$) involves the three allowed electron spin transitions $\Delta M_S=1$ within the $S=3/2$ multiplet. These electron spin transitions remain degenerate for $M_I=0$ but split into three lines for $M_I=\pm 1$. This additional splitting of 0.9~$\mu$T originates from the second-order hyperfine corrections and leads to a modulation of the electron spin echo decay~\cite{eseem05}. \section{Relaxation of N@C$_{60}$~in CS$_2$} \label{relaxcstwo} Spin relaxation times T$_1$~and T$_2$~for N@C$_{60}$~in CS$_2$, measured on the central $M_I=0$ hyperfine line, are shown on a logarithmic scale in \Fig{tempcs2} for a range of temperatures (160~K to 300~K), demonstrating an exponential temperature dependence and a roughly constant ratio T$_2$~$\approx(2/3)$T$_1$~over the full temperature range. This contrasts with previous findings, which reported no temperature dependence for T$_2$~\cite{harneit}. Below 160~K, the CS$_2$~solvent freezes as a polycrystal, leaving regions of high fullerene concentration around grain boundaries. This dramatically increases the local spin concentration, and T$_2$~becomes extremely short due to dipolar spin coupling (the so-called instantaneous diffusion effect~\cite{klauder62,mims68,salikhov81}). \begin{figure}[t] \centerline {\includegraphics[width=3.3in]{013548JCP2.eps}} \caption{Electron spin relaxation times (T$_1$~and T$_2$) of N@C$_{60}$~in CS$_2$, measured using the central $M_I=0$ line. The ratio T$_2$~$\approx(2/3) $T$_1$~is maintained over the full temperature range for which the solvent remains liquid. } \label{tempcs2} \end{figure} As this is an $S=3/2$ spin system, one might expect several different relaxation times corresponding to the different $\Delta M_S=1$ transitions. However, in the experiments presented in \Fig{tempcs2}, all decays were well described by monoexponentials. Given two similar exponential decays, it is notoriously difficult to extract anything other than a single, average decay constant from an exponential fit. Here, we take advantage of a recently reported mechanism for electron spin echo envelope modulation (ESEEM)~\cite{eseem05} to separate the relaxation times for different electron transitions. This modulation generates an echo intensity for transitions on the $M_I=\pm1$ lines which varies as a function of the delay time, $\tau$, as \begin{equation} \label{eq:eseem} V_{M_I=\pm1}(\tau)= 2 + 3\cos2\delta\tau.
\end{equation} The oscillating component arises from the `outer' coherences (from the $M_S=\pm3/2:\pm1/2$ transitions), whilst the unmodulated component arises from the `inner' coherences (from the $M_S=+1/2:-1/2$ transition). If T$_2$~relaxation is included, \Eq{eq:eseem} transforms to: \begin{equation} \label{eq:eseemt2} V_{M_I=\pm1}(\tau)= 2\exp{(-2\tau/\rm{T}_{2,\emph{i}})} + 3\exp{(-2\tau/\rm{T}_{2,\emph{o}})} \cos2\delta\tau , \end{equation} where T$_{2,i}$~and T$_{2,o}$~are the relaxation times of the `inner' and `outer' coherences, respectively. Thus, by fitting to the modulated ESEEM decay, the individual relaxation times T$_{2,i}$~and T$_{2,o}$~can be extracted. T$_1$~and T$_2$~times measured for the high-field ($M_I=-1$) hyperfine line are shown in \Fig{t2t2c60}. T$_1$~was measured in the standard way (inversion recovery), and so only one (average) value was obtained. \begin{figure}[t] \centerline {\includegraphics[width=3.3in]{013548JCP3.eps}} \caption{Electron spin relaxation times (T$_1$~and T$_2$) of N@C$_{60}$~in CS$_2$, measured using the high-field $M_I=-1$ line. ESEEM is used to resolve the individual decay rates of the inner and outer coherences (see \Eq{eq:eseemt2}). Dashed curves show corresponding data taken for the central $M_I=0$ line, for comparison. } \label{t2t2c60} \end{figure} The behaviour of T$_1$~appears identical for both central and high-field lines, indicating that relaxation caused by the hyperfine interaction with the nitrogen nuclear spin is negligible. The T$_{2,i}$~measured on the high-field $M_I=-1$ hyperfine line correlates closely with the T$_2$~measured on the central $M_I=0$ line. Remarkably, both of these T$_2$~times remain approximately 2/3 of T$_1$~over the full temperature range studied. For the high-field line, the ratio of T$_{2,o}$~to T$_{2,i}$~also stays constant at about 2/3. The fact that certain ratios between T$_1$, T$_{2,i}$~and T$_{2,o}$~remain constant over a broad temperature range is a strong indication that all of these relaxation times are limited by the same mechanism. In the following subsections, we review different relaxation mechanisms which might account for the observed temperature dependence. \subsection{ZFS fluctuations} \label{ZFS} Spin relaxation arises from fluctuating terms in the spin Hamiltonian, produced by fluctuating magnetic dipoles (either nuclear or electronic) and by other motions that modulate the interactions between the spin and its environment. The trapping of endohedral nitrogen in a high symmetry environment suppresses most of the conventional spin relaxation mechanisms (zero-field splitting (ZFS) interaction, anisotropic $g$ matrix, electron-nuclear dipolar coupling and nuclear quadrupole interaction). Indeed, it has been proposed that the dominant relaxation process arises from small deviations from this ideal symmetric environment, caused by cage deformations from collisions with solvent molecules~\cite{knapp97}. For example, the modulation of the hyperfine interaction through such collisions is a possible relaxation pathway. This was dismissed in earlier reports on the basis that the $M_I$-dependent linewidth this mechanism predicts is not observed~\cite{knapp97}. However, as all linewidths are likely to be instrumentally limited, this observation did not constitute a rigorous refutation. The mechanism favoured in the literature is that of a ZFS fluctuation, again caused by deformation of the spherical C$_{60}$~cage through solvent collisions~\cite{knapp98}.
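Before evaluating this mechanism quantitatively, we note that the separation of T$_{2,i}$~and T$_{2,o}$~via \Eq{eq:eseemt2} amounts in practice to a two-component least-squares fit. A minimal sketch of such a fit follows (in Python; the synthetic data and all parameter values are arbitrary illustrative stand-ins for the measured echo intensities, not results from this work):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Echo intensity on the M_I = +/-1 lines, Eq. (3):
# V(tau) = 2 exp(-2 tau/T2i) + 3 exp(-2 tau/T2o) cos(2 delta tau)
def eseem_decay(tau, t2i, t2o, delta):
    return (2.0 * np.exp(-2.0 * tau / t2i)
            + 3.0 * np.exp(-2.0 * tau / t2o) * np.cos(2.0 * delta * tau))

# Synthetic "data" (times in microseconds; illustrative values only).
rng = np.random.default_rng(0)
tau = np.linspace(0.1, 60.0, 400)
data = (eseem_decay(tau, 20.0, 13.0, 0.5)
        + 0.05 * rng.standard_normal(tau.size))

# The fit resolves the inner and outer coherence decay times separately.
popt, pcov = curve_fit(eseem_decay, tau, data, p0=(15.0, 10.0, 0.4))
print("T2,i = %.1f us, T2,o = %.1f us" % (popt[0], popt[1]))
\end{verbatim}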
Given the concentrations of fullerene solution that were reported in these earlier studies, a large amount of fullerene aggregation is expected~\cite{bokare03} and so it is unlikely that the N@C$_{60}$~molecules being studied had any direct contact with solvents. Nevertheless, deformations of the cage, through whichever mechanism (such as collisions with other C$_{60}$~molecules in the cluster), will give rise to some time-varying ZFS. Alternatively, ZFS fluctuations may result from rotational tumbling in molecules that have a permanent non-zero ZFS (such as in N@C$_{70}$). In the case of a degenerate $S=3/2$ system, a fluctuating ZFS term leads, in general, to two different decoherence times~\cite{carrington}, \begin{equation} \label{zfst2t2dega} \left(\rm{T}_{2,\emph{i}}\right)^{-1}=\frac{4}{5}D_{eff}^2\left[\frac{\tau_c}{1+\omega_e^2\tau_c^2}+\frac{\tau_c}{1+4\omega_e^2\tau_c^2}\right] \end{equation} \begin{equation} \label{zfst2t2degb} \left(\rm{T}_{2,\emph{o}}\right)^{-1}=\frac{4}{5}D_{eff}^2\left[\tau_c+\frac{\tau_c}{1+\omega_e^2\tau_c^2}\right], \end{equation} for the transitions that we refer to here as `inner' and `outer', respectively. Here $D_{eff}^2=D^2+3E^2$, where $D$ and $E$ are the ZFS coupling and rhombicity parameters, $\tau_c$ is the correlation time of the fluctuations, and $\omega_e$ is the electron spin transition frequency. The predicted T$_1$~times arising from the same mechanism are: \begin{equation}\label{zfst1t1dega} \left(\rm{T}_{1,\emph{i}}\right)^{-1}=\frac{8}{5}D_{eff}^2\left[\frac{\tau_c}{1+\omega_e^2\tau_c^2}\right] \end{equation} \begin{equation} \label{zfst2t1degb} \left(\rm{T}_{1,\emph{o}} \right)^{-1}=\frac{8}{5}D_{eff}^2\left[\frac{\tau_c}{1+4\omega_e^2\tau_c^2}\right] \end{equation} The individual values of T$_{1,i}$~and T$_{1,o}$~cannot be resolved in a simple inversion recovery experiment, and thus only their average can be determined (with respective weights 2 and 3). In the fast tumbling limit ($\omega_e \tau_c \ll 1$), each bracketed factor in Eqs.~\ref{zfst2t2dega} and \ref{zfst2t2degb} reduces to $2\tau_c$ and each in Eqs.~\ref{zfst1t1dega} and \ref{zfst2t1degb} to $\tau_c$, so that all four rates become $\frac{8}{5}D_{eff}^2\tau_c$: the theory predicts the two T$_1$~times to be identical, and equal to both types of T$_2$, contrary to our observed ratio of 2/3. Moving away from the fast-tumbling limit, values for $D_{eff}$ and $\tau_c$ can be derived given any values for T$_1$~and T$_2$. Since the ratio between these times is dictated purely by $\tau_c$, the fact that the ratios stay fixed implies that $\tau_c$, the correlation time of the ZFS fluctuations, stays fixed over the broad temperature range (160 to 300~K). This would be surprising, as the viscosity of CS$_2$~changes by an order of magnitude over this temperature range~\cite{kayelaby}. Thus, we conclude that the previously suggested ZFS fluctuation mechanism can explain neither the observed temperature dependence of T$_1$~and T$_2$~nor their mutual correlation, and we therefore seek alternative explanations for the behaviour observed. \subsection{Orbach relaxation process} \begin{figure}[t] \centerline {\includegraphics[width=3.1in]{013548JCP4.eps}} \caption{The temperature dependence of T$_1$~of N@C$_{60}$~is linear in Arrhenius coordinates, consistent with the Orbach relaxation mechanism. An energy gap $\Delta = 32(1)~$meV $\equiv~375$~K can be extracted. Because we cannot make a low-temperature approximation in this case, the standard Orbach plot of log(1/T$_1$) vs. 1/T must be adjusted to include the constant of proportionality, $A$ (see Eq.~\ref{orbeq}). The plot is then recursively fit to fine-tune $A$ and obtain the slope, $\Delta/k$.
T$_1$~is given in microseconds.} \label{t1orbach} \end{figure} The temperature dependence of T$_1$~is well described by an Orbach relaxation mechanism (see \Fig{t1orbach}). This is a two-phonon relaxation process in which the phonon energies are resonant with a transition to an excited electronic state (i.e.~a vibrational or orbital state lying outside the space described by the spin Hamiltonian). The T$_1$~temperature dependence is dictated by the distribution of phonon energies, and is of the form: \begin{equation} \label{orbeq} \rm{T}_1 = A~(e^{\Delta/kT}-1), \end{equation} where $\Delta$ is the energy gap to the excited state and $A$ is a constant which involves terms associated with spin-orbit coupling (and therefore with the ZFS, $^{14}$N~hyperfine coupling and g-tensor in the excited state)~\cite{atkins72}. A fit to the data in \Fig{t1orbach} yields $\Delta=32(1)$~meV. This is a close match to the energy of the first vibrational mode of C$_{60}$~(273~cm$^{-1}$, or 34~meV), which has been theoretically calculated and observed by Raman spectroscopy of C$_{60}$~in CS$_2$~solution at 300~K~\cite{chase92, meilunas,vibc60}, indicating that this may be a vibrational spin-orbit Orbach process~\cite{kivelson1,kivelson2}. This first excited vibrational mode, termed $H_g(1)$, breaks the spherical symmetry of the molecule, reducing it to axial. The small difference between the $\Delta$ observed here and that seen in the Raman spectroscopy of C$_{60}$~could be due to a shift in vibrational energies caused by the presence of the endohedral nitrogen atom. The strong correlations observed in the temperature dependence of T$_1$, T$_{2,i}$~and T$_{2,o}$~indicate that the T$_2$~times are also limited by the Orbach mechanism. However, no detailed Orbach theory has been developed for high-spin systems --- developing such a theory lies beyond the scope of the current work. \section{Relaxation of N@C$_{70}$~in CS$_2$} The Raman spectrum of C$_{70}$~is very similar to that of C$_{60}$, while its rugby-ball shape provides a permanent non-zero ZFS to an endohedral spin. N@C$_{70}$~is therefore an ideal candidate to further compare the mechanisms of a vibrational Orbach relaxation with one induced by ZFS fluctuations (here, caused by molecular rotations). Using the methods outlined above, we measured T$_2$~(for both the inner and outer coherences) and T$_1$, shown in \Fig{relaxnc70}. \begin{figure}[t] \centerline {\includegraphics[width=3.3in]{013548JCP5.eps}} \caption{Temperature dependence of T$_1$~and T$_2$~times for N@C$_{70}$~in CS$_2$. For comparison, dashed lines show linear fits to the corresponding data for N@C$_{60}$~in CS$_2$~(from \Fig{t2t2c60}).} \label{relaxnc70} \end{figure} \begin{figure}[t] \centerline {\includegraphics[width=3.3in]{013548JCP6.eps}} \caption{Comparison of T$_2$~times for N@C$_{70}$~in CS$_2$~solution with the model described in the text. The curves labeled `ZFS' are derived from Eqs.~\ref{zfst2t2dega} -- \ref{zfst2t1degb}. The `Total' fit to T$_{2,o}$~is achieved by combining the relaxation rate from the fluctuating ZFS model with an intrinsic decay taken to be 2/3 of T$_{2,i}$. The only free parameter in the model was a constant ZFS parameter, $D$. The contribution of the ZFS model to both T$_{2,i}$~and T$_1$~is shown to be negligible (top panel). } \label{nc70fit} \end{figure} The temperature dependence of T$_1$~is similar to that seen for N@C$_{60}$~in CS$_2$.
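(As an aside, the Orbach fit of Eq.~\ref{orbeq} used above is straightforward to reproduce numerically. The sketch below, in Python, uses synthetic T$_1$~values generated from assumed parameters rather than the measured data; fitting $A$ and $\Delta$ simultaneously is equivalent to the recursive adjustment of $A$ described in the caption of \Fig{t1orbach}.)
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

KB = 8.617e-5  # Boltzmann constant [eV/K]

# Orbach law, Eq. (8): T1 = A (exp(Delta/kT) - 1)
def orbach(T, A, Delta):
    return A * (np.exp(Delta / (KB * T)) - 1.0)

# Synthetic "data": assumed A = 50 us, Delta = 32 meV, 3% noise.
rng = np.random.default_rng(1)
T = np.linspace(160.0, 300.0, 15)    # temperatures [K]
t1 = orbach(T, 50.0, 0.032) * (1.0 + 0.03 * rng.standard_normal(T.size))

# No low-temperature approximation is made: Delta/kT is of order
# unity over this range, so A and Delta must be fitted together.
popt, pcov = curve_fit(orbach, T, t1, p0=(30.0, 0.025))
print("Delta = %.1f meV" % (1e3 * popt[1]))
\end{verbatim}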
The first excited vibrational mode of C$_{70}$~is only about 1.7~meV lower in energy than the equivalent mode in C$_{60}$~\cite{dresselrev}. Consistent with this, the T$_1$~temperature dependence seen for N@C$_{70}$~is slightly weaker than that measured for N@C$_{60}$, though the difference falls within experimental error. While T$_{2,i}$~here bears a strong resemblance to that seen for N@C$_{60}$, T$_{2,o}$~for N@C$_{70}$~shows a non-monotonic temperature dependence, peaking around 230~K. We now show that this behaviour can be explained by the presence of the built-in ZFS in N@C$_{70}$, and by the change in rotational mobility of the molecule as the temperature drops. An estimate of the built-in ZFS parameter in N@C$_{70}$~has been reported by aligning the molecules in a liquid crystal, and was found to be $D=2.5$~MHz (0.8~G)~\cite{jakes02}. However, due to the uncertainty in the order parameter ($O_{33}$), this value should be considered as a lower limit of the true ZFS parameter. At higher temperatures (i.e.~in the fast-tumbling regime) this ZFS is averaged out sufficiently so that all relaxation times are identical to those for N@C$_{60}$. However, upon cooling below 250~K, the viscosity of CS$_2$~rises sharply~\cite{kayelaby}, thus slowing the N@C$_{70}$~tumbling rate and resulting in incomplete averaging of the ZFS. We simulate this effect using Equations~\ref{zfst2t2dega} and \ref{zfst2t2degb} and find that T$_{2,o}$~is affected by this mechanism while T$_{2,i}$~and T$_1$~are not. In this simulation we assume that two relaxation mechanisms are involved. One is the Orbach mechanism, which produces the correlations $\rm{T}_{2,\emph{i}}/\rm{T}_1~=~\rm{T}_{2,\emph{o}}/\rm{T}_{2,\emph{i}}=2/3$ over the full temperature range studied, as observed for N@C$_{60}$. The second is the mechanism due to ZFS fluctuation, described above. The Stokes-Einstein-Debye model, \begin{equation} \tau_r=\frac{4\pi \eta a^3}{3k T}, \end{equation} and experimental values for the viscosity of CS$_2$~\cite{kayelaby} are used to obtain the rotational correlation time, $\tau_r$, as a function of temperature. The effective radius of C$_{70}$~was taken to be $5.4~\mathring{\mathrm{A}}$~\cite{khudiakov95}. The experimental data were well fit by this model, using only one fitting parameter, $D$ (given the axial symmetry of C$_{70}$, we assume $E=0$). The result is shown in \Fig{nc70fit}, where the best-fit value for $D$ is 5.5~MHz (2~G). This value is large compared with estimates described in the literature~\cite{jakes02}; however, it is consistent with values of $D$ measured for other modifications of N@C$_{60}$~(for example, $D$ was measured in N@C$_{60}$O to be 2.4~G~\cite{oxidepaper}). \Fig{nc70fit} also shows that the ZFS mechanism affects only T$_{2,o}$, and does not produce a noticeable effect on T$_{2,i}$~and T$_1$. \section{Conclusions} In summary, we have reported the temperature dependences of electron spin relaxation in nitrogen-doped fullerenes, using ESEEM to resolve the relaxation rates of different coherences of this $S=3/2$ spin. Our findings contradict the previously suggested mechanism of a fluctuating ZFS, which is often assumed to be the dominant mechanism in all high spin ($S\ge1$) systems. Instead, the temperature dependences we observe are strongly suggestive of an Orbach relaxation mechanism, via the first excited vibrational state of the fullerene molecule.
The study of electron spin relaxation in the asymmetric N@C$_{70}$~molecule permits us to distinguish this Orbach relaxation mechanism from a fluctuating ZFS mechanism. Additionally, the observation of a coherence time (T$_2$) in N@C$_{60}$~of up to 0.25~ms, the longest for any molecular electron spin, further emphasises the importance of this molecule for quantum information processing. Such times allow in excess of $10^4$ high-fidelity quantum gate operations to be performed~\cite{mortonbb1}, thus meeting the requirements for quantum error correction~\cite{steane03}. \section{Acknowledgements} We acknowledge helpful discussions with Richard George, and thank Wolfgang Harneit's group at F.U. Berlin for providing nitrogen-doped fullerenes, and John Dennis at QMUL, Martin Austwick and Gavin Morley for the purification of N@C$_{60}$. We thank the Oxford-Princeton Link fund for support. This research is part of the QIP IRC www.qipirc.org (GR/S82176/01). GADB thanks EPSRC for a Professorial Research Fellowship (GR/S15808/01). AA is supported by the Royal Society. Work at Princeton was supported by the NSF International Office through the Princeton MRSEC Grant No. DMR-0213706 and by the ARO and ARDA under Contract No. DAAD19-02-1-0040.
\section{Introduction} Radio galaxies (RGs) with extended lobes on opposite sides of their nuclei, or the classical double sources, constitute a significant population of active galaxies. \citet{FR74} classified these objects as Class II (FR II) sources. These are the edge-brightened population of radio sources (typically with bright hotspots), and also the more powerful ones, with luminosities $P_{178 MHz} > 10^{25}$ W Hz$^{-1}$ sr$^{-1}$. Flux-limited samples indicate that the comoving densities of RGs were higher during the {\it quasar era} (i.e., between redshifts $\simeq$ 1.5 and 3) as compared to the present epoch \citep*[e.g.,][]{jackson99, willott01, grimes04}. Optical and hard x-ray observations of powerful active galactic nuclei reveal a similar trend for the quasar era \citep[e.g.,][]{ueda03}. The star and galaxy formation rate was also considerably higher in the quasar era. \citet{lilly96} inferred that the observed luminosity density (and hence the star formation rate) of the universe in the UV, optical and near-infrared increases markedly with redshift over $0 < z < 1$. Similarly, from Hubble Deep Field studies \citet*{connolly97} and \citet*{madau98} found a sharp rise in the comoving luminosity density and global star formation rate with redshift, finding that it peaked at $z \simeq 1.5$, and decreased monotonically at higher $z$ out to $z \simeq 3 - 4$. More recently, \citet{bouwens06} found an apparent decrease in the rest-frame UV luminosity function and the cosmic star formation rate density from the peak redshift of $z \sim 3$ up to $z \sim 6 - 10$. Studies made with the Spitzer Space Telescope \citep[e.g.,][]{perez05} also indicate that the infrared luminosity function and the cosmic star formation rate increase with redshift until the quasar era. Submillimeter surveys claimed that the comoving luminosity density has a peak at $z \sim 2-5$ \citep{blain99, archibald01}. This redshift range is somewhat higher than what optical surveys (possibly affected by dust obscuration) infer. At the same time, a more recent sub-mm study \citep{rawlings04a} indicates no compelling evidence that the far-infrared luminosity of radio sources rises with redshift. The above observations have prompted investigations of the effect of RGs on cosmological evolution and the distribution of large scale structures in the universe. Preliminary work indicates that RGs can have substantial impacts on the formation, distribution and evolution of galaxies and large scale structures of the universe (e.g., \citealt{GKW01}, hereafter GKW01; \citealt{kronberg01}; \citealt{GKW03}, hereafter GKW03; \citealt*{GKWO}, hereafter GKWO; \citealt*{GKWB04}, hereafter GKWB; \citealt{rawlings04, levine05}). One important aspect of this process is the role played by the huge expanding RG lobes in triggering extensive star formation in a multi-phase intergalactic medium. This idea has been discussed by several authors in order to explain the alignment between large scale optical emission and radio source direction \citep[e.g.,][]{begelman89, deYoung89}. \citet{chokshi97} proposed that RG expansion could trigger much star formation in host galaxies. GKW01 stressed that RGs could impact a large fraction of the filamentary structures in which galaxies form, thus hastening their birth. Similar conclusions were drawn from different lines of argument by \citet{kronberg01} and \citet{furlanetto01}. Recently, \citet{silk05} also argued that efficient ultraluminous starbursts can be triggered by AGN jets.
A very significant fraction of the volume of the universe in which star formation has occurred was impinged upon by the growing radio lobes during the quasar era (GKW01 and references therein). When these radio lobes propagating through the protogalactic medium envelop cooler clumps of gas embedded within the hotter gas which fills most of the volume, the initial bow shock compression triggers large-scale star formation, which is sustained by the persistent overpressure from the engulfing radio cocoon. This cocoon pressure is likely to be well above the equipartition estimate \citep{blundell00}. This scenario is supported by many computations, analytical \citep[e.g.,][]{rees89, daly90}, hydrodynamical \citep*[e.g.,][]{deYoung89, mellema02, fragile04, saxton05}, and magnetohydrodynamical \citep[e.g.,][]{fragile05}. This triggered star formation provides an explanation for much of the remarkable radio-optical alignment effect exhibited by high-$z$ radio galaxies \citep*[e.g.,][]{mcCarthy87, chambers88b}. Additional support for jet or lobe-induced star formation comes from the {\it Hubble Space Telescope} images of $z \sim 1$ radio galaxies \citep*{best96} and of some radio sources at higher $z$ \citep[e.g.,][]{bicknell00}. Keck observations \citep{dey97} and sub-mm observations \citep*{greve05} of high $z$ RGs also give evidence for this phenomenon. Clustered Lyman $\alpha$ emitters have been found at high redshifts $(z \sim 2-5)$ close to RGs \citep{venemans05, roderik05}, indicating that RGs form in high density regions and could have significant impact by accelerating star formation. Deep optical HST imaging gives evidence of star formation and a starburst driven superwind induced by AGN jet activity in a $z=4.1$ RG \citep{zirm05}. The expanding RG lobes could also have infused magnetic fields of significant strengths ($\sim 10^{-8}$ Gauss, e.g., \citealt*{ryu98}) into the cosmic web portion of the IGM (GKW01; \citealt{kronberg01}; GKWO; GKWB). Evidence of substantial metallicity in underdense regions of the IGM at $z \sim 4$ \citep[e.g.,][]{schaye03} requires a strong mechanism of spreading metals widely (metalization) at early cosmic epochs. The huge radio lobes could contribute substantially to spreading metals into the IGM, by sweeping out the metal-rich ISM of some young galaxies which they encounter while expanding (GKW03, GKWB). Ascertaining the importance of these processes of star formation, magnetization and metalization via RGs requires addressing the question of what fraction of the relevant volume of the universe the radio lobes occupied during the quasar era (GKW01). The ``relevant universe'' refers to the volume containing most of the baryons, the majority of which exist as a filamentary structure of warm/hot gas, the WHIM (warm/hot intergalactic medium) with $10^5 < T < 10^7$~K \citep[e.g.,][]{cen99, cen06, dave01}. For the radio lobes to have an important role in impacting star formation and spreading magnetic fields and metals, they need to occupy a significant portion of this relevant volume of baryons, which, however, was a small fraction of the total volume of the universe during most of the quasar epoch. A prerequisite for a more accurate computation of this RG-impacted volume is a good model of the evolution of radio sources, both individually and as a function of $z$. Many analytical models have been published which characterize radio sources in terms of their dynamics and power evolution.
\citet[ hereafter KA]{KA} showed that the cocoons can have a self-similar growth. Although numerical hydrodynamical studies \citep[e.g.,][]{carvalho02} indicate that radio source sizes grow in a more complex way than self-similar predictions, they are still reasonable approximations overall. The power evolutions in these models are dominated by adiabatic losses as the lobe expands, synchrotron radiation losses in the lobe magnetic field and inverse compton (IC) losses off the cosmic microwave background (CMB) photons. The three models of radio lobe power evolution with time which are considered in detail in this paper are those given by \citet*[ hereafter KDA]{KDA}, \citet*[ hereafter BRW]{BRW} and \citet[ hereafter MK]{MK}. The source linear size evolution in BRW and MK essentially follow the KDA prescription. They differ in the way the relativistic particles are injected from the jet to the lobe, and in treatments of loss terms and particle transport. So there are some significant differences in their predictions for observed powers ($P$) as functions of source size ($D$) and redshift ($z$). The simplest method to study the power evolution of RGs is to examine their radio power -- linear size, or $P$--$D$, diagram. These $P$--$D$ tracks have been used (KDA; MK; \citealt*{machalski04a, machalski04b}) to look for consistency between data and models. These papers compare model tracks with $P$--$D$ diagrams of observed radio sources to evaluate the qualitative success of the models. The innovative radio sky simulation prescription in BRW adds new dimensions to the observed parameter space. Using the RG redshift distribution estimated by BRW from the work of \citet{willott01} on the radio luminosity function (RLF) and any lobe power evolution model, one can get $P$, $D$, $z$, and spectral index $\alpha$ ($P_{\nu} \propto \nu^{-\alpha}$) values for simulated model radio sources. The distributions of these simulated RGs can then be compared to observational data to test the success of the model. In BRW, slices through the [$P$, $D$, $z$, $\alpha$]-space generated by their model are {\it qualitatively} compared with observations for two data sets (3CRR and 7CRS); those authors claim good results, except for plots involving $\alpha$. However, to properly claim success for a theoretical model a {\it quantitative} statistical test is required; we present some in this paper. A quantitative comparison of cosmological radio source evolution model predictions with an observational data sample (the 3C data from \citealt{laing83}) has been done by \citet{kaiser99a}. They considered a progenitor FR II source population being born over cosmic epochs, and evolving according to assumed distribution functions of the model parameters of the KDA and KA models. Constructing simulated samples, they then compared the models' predictions with observations. They used $\chi^2$ statistics in the $[P - D]$ and $[P - z]$ planes to constrain the models. However the binning they used was somewhat arbitrary and the bins appear to be based on the concentration of sources in the observed $[P - D - z]$ planes. Our approach (based on 1- and 2-dimensional Kolmogorov-Smirnov (KS) statistics and correlation coefficients) may be as good as can be done since we are dealing with source characteristics in four dimensions ($P$, $D$, $z$, $\alpha$) and over three observational surveys (3CRR, 6CE and 7CRS) with only a few hundred sources in total. 
We tried to perform multi-dimensional KS-like tests (discussed in \S5.2.1) but the limited sizes of the observational samples precluded any useful results from being obtained. In \S2, we summarize the BRW simulation prescription. We apply this prescription to the KDA, BRW and MK models in \S3. In \S4 we discuss the observational samples to which we will compare the model distributions, and describe how our multi-dimensional Monte Carlo simulations are done. We perform statistical tests comparing the distributions of radio source parameters predicted by each model and those of observational samples in \S5. We vary the parameters of the models, aiming to find the parameters which give the best statistical fit for each model to all three surveys simultaneously. A discussion and conclusions are given in \S6 and \S7, respectively. \section {Initial Population Generation} We follow the prescription given in detail in BRW to generate the initial radio source population. Here we summarize the initial distributions of source ages, redshifts and beam powers; these produce the redshift, beam power and the age at which each model RG will be intercepted by our light cone. This summary and update of the BRW prescription is necessary to define the model parameters. One key difference from BRW is that we assume a consensus cosmology, i.e., a flat universe with $H_0 = 71$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$ and $\Omega_\Lambda = 0.7$ \citep{spergel03}. The cosmological equations are taken from \citet*{carroll92} and \citet{peacock99}. From some initial high redshift, $z_{start}$, well above the peak of the RLF, sources are assumed to be born at an interval, $\Delta T_{start}$, which is short compared to their lifetimes. From the cosmology assumed, the redshifts are translated to cosmic times (epochs) and vice versa \citep{weinberg89}. We use $z_{start} = 10$ and take $\Delta T_{start} = 10^6$ years, but the results should be insensitive to values of $z_{start} > 6$ and $\Delta T_{start} < 10^7$ years. After a source is born at a redshift $z_{birth}$, its active lifetime is denoted as $T_{MaxAge}$. A default value of $T_{MaxAge} = 5 \times 10^8$ years is taken. This value is used by BRW, and more recent investigations involving X-ray activity in AGN \citep{barger01}, SDSS optical studies of active galaxies \citep{miller03} and black hole demographics arguments \citep[e.g.,][]{marconi04} all support values of over $10^8$~yr. In order to observe a radio galaxy when its nucleus is still actively feeding its jet, it must be intercepted by our light cone at some epoch between the cosmic time of its birth and the time when its beam is switched off, i.e., within an interval of $T_{MaxAge}$ after its birth. For this interception to occur the source must lie inside a certain cosmic volume shell, the ``Relevant Comoving Volume Element'', $V_C$ (BRW). For a spatially flat ($k = 0$) universe, if $r$ is the radial comoving coordinate, $V_C = 4 \pi R^3(t) \left( r_2^3 - r_1^3 \right) / 3$, where $R(t)$ is the scale factor of the universe at cosmic time $t$, and $r_1$ and $r_2$ are the inner and outer radial coordinates of the volume shell (at $t_{birth}$). The value of $V_C$ is the relevant volume at the epoch $z_{birth}$ (or $t_{birth}$).
The corresponding proper volume now is $ V_C (z=0) = \left (1+z_{birth}\right)^3~V_C (z_{birth}).$ The sources are assumed to be distributed in redshift according to a gaussian RLF with (BRW Eq.~24) \begin{equation} \rho(z) \propto {\rm exp}\left[- \frac{1}{2} \left( \frac{z-z_0}{\sigma_z} \right)^{2}\right], \end{equation} a distribution that peaks at $z_0$ and has standard deviation of $\sigma_z$. According to the RLF of \citet{willott01}, $z_0 = 2.2$, $\sigma_z = 0.6$, and we use these values in our simulations. \citet{grimes04} have given a more recent computation of the RLF where the values are $z_0 = 1.7, \sigma_z = 0.45$ (see their Table 5). The number of sources born at some cosmic time ($t$), per unit cosmic time, per unit comoving volume element at redshift zero is found from the relation $\rho(t)dt$ = $\rho(z)dz$. For a homogeneous and isotropic universe, this distribution is valid at all epochs throughout the space. At a particular redshift $z_{birth}$, the comoving volume ($V_C$) is found. Then multiplying $V_C$ by $\rho(z_{birth})$ gives the number of sources born at $z_{birth}$ (per solid angle) in the chosen interval in cosmic time which are intercepted by our light cone: $ N_{born} \propto V_C(z=0) ~ \rho(z_{birth}).$ The total number, $N_{born}$, is obtained by using a normalization factor in the above proportionality which takes into account the sky area of the observed data sample. Homogeneity of the universe implies that the sources are randomly distributed within the comoving volume shell. The age of a source, $T_{age}$, is the time after $t_{birth}$ it is intercepted by our light cone; in our computations it is drawn randomly from $0$ to $T_{MaxAge}$, but weighted so that sources are distributed uniformly in volume within the comoving volume shell. In each simulation (run) we have generated a very large number of sources, over a wide range of cosmic time. We find the number of sources born at some $z_{birth}$ which will intercept our light cone, the age $T_{age}$ (denoted by $t$ henceforth) of each source, and the redshift at which we observe it (denoted by $z$ henceforth), which is derived from $T_{obs}$, the cosmic time at which the light we see was emitted from the source. As very powerful sources are much rarer than weaker ones, each of the sources generated is assigned a jet power $Q_0$ (which is assumed to remain constant throughout its age) according to the probability distribution (BRW Eq.~38) \begin{eqnarray} p(Q_0)dQ_0 & \propto & Q_0^{-x}~dQ_0 ~\textrm{ if $Q_{min} < Q_0 < Q_{max}$}, \nonumber \\ & = & 0 ~~~~~\textrm{ if $Q_0 > Q_{max}$ or $Q_0 < Q_{min}$}. \end{eqnarray} Here the power index $x$ is positive, and we initially adopted the values used by BRW: $x = 2.6, Q_{min} = 5 \times 10^{37}$ W, and $Q_{max} = 5 \times 10^{42}$ W. Our best fit values of $x$ are higher and are discussed in \S5. An initial Monte Carlo population generation is completed when $t$, $z$ and $Q_0$ are randomly assigned to each source of the population according to the above prescriptions. Each source in that population is then allowed to evolve according to a model described in the following section, giving the observable quantities other than $z$: $P$, $D$ and $\alpha$. \section {Models of Radio Lobe Evolution} A standard basic model of FR II extragalactic radio sources \citep[e.g.,][]{scheuer74, blandford74} is widely accepted. A powerful RG consists of the central active nucleus and two jets emerging from opposite sides of it. 
After traveling substantial distances the plasma in these jets collides with a tenuous environment. There the jets terminate in a shock where relativistic electrons are accelerated and hotspots are formed; the plasma passing through the terminal shocks inflates the huge lobes of energetic particles. A bow shock propagates into the surrounding gas ahead of the jets. The radio power evolution models that we compare are those given by KDA, BRW and MK. In brief, the physics of these models differ mainly in the ways in which particles are assumed to be transported from the jet through the hotspot and into the lobe. KDA assume a constant injection index, $p$, for the energy distribution, $N(E) \propto E^{-p}$, of the radiating relativistic particles as they are injected from the hotspots into the lobes. BRW assume that the injection index varies between the different energy regimes, as governed by the break frequencies discussed below. MK assume a constant injection index but also argue that the particles are re-accelerated by some turbulent process in the head during transport to the lobes. Several key points of each model and additional differences are noted in \S\S 3.2 -- 3.4, although the reader should refer to the original papers for a thorough understanding of each model's details. Table 1 lists the default values of the major model parameters (those used by the authors). We varied these parameters in our extensive simulations described in \S4.3. The only parameter whose variation was not considered is the adiabatic index of the external environment, which was fixed at the usual value $\Gamma_x=5/3$. \begin{table} \caption{Default Values of the Model Parameters\label{tab1}\tablenotemark{a}} \begin{tabular}{cccc} \hline Parameter & KDA & BRW & MK \\ \hline $\beta$ & 1.9 & 1.5 & 1.5 \\ $a_0$ (kpc) & 2 & 10 & 10 \\ $\rho_0$ (kg m$^{-3}$) & $7.2\times10^{-22}$ & $1.67\times10^{-23}$ & $1.7\times10^{-23}$ \\ $\Gamma_x$ & 5/3 & 5/3 & 5/3 \\ $\Gamma_c$ & 4/3 & 4/3 & \\ $\Gamma_B$ & 4/3 & & \\ $R_T$ & 1.3 & & \\ $\gamma_{min(hs)}$ & 1 & 1 & 10 \\ $\gamma_{max(hs)}$ & $\infty$ & $10^{14}$ & $10^7$ \\ $p$ & 2.14 & 2.14 & 2.23 \\ $r_{hs}$ (kpc) & & 2.5 & 2.5 \\ $t_{bs}$ (yr) & & $10^5$ & \\ $t_{bf}$ (yr) & & 1 & \\ $\eta$ & & & 0.4 \\ $\epsilon$ & & & 1.0 \\ $\tau$ & & & $2\times10^{-3}$ \\ \hline \tablenotetext{a}{See text and the original papers for parameter definitions.} \end{tabular} \end{table} \subsection {Dynamical Expansion and Emission} In all of the models we consider here the ambient gas around the double radio sources, into which the lobes propagate, is taken to have a power-law radial density distribution scaling with distance $r > a_0$ from the center of the host galaxy (Eq.~2 of \citealt{KA}), \begin{equation} \rho(r) = \rho_0 \left( \frac{r}{a_0} \right) ^ {-\beta} \end{equation} where the central density, $\rho_0$, scale length, $a_0$, and radial density index, $\beta$, are given by the particular model. We follow BRW and assume that the external density profile is invariant with redshift. While such a typical radial density distribution is appropriate on average for small $z$, this may not be a good approximation at the redshifts corresponding to the quasar era, which witnessed a $10^2$--$10^3$ times higher co-moving density of powerful radio-loud ellipticals \citep[e.g.,][]{jackson99}.
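The default atmospheres just described are easy to compare directly. The following minimal sketch (in Python; the parameter values are simply the defaults of Table 1) evaluates the density profile above for the three models. Such a check shows that, despite the rather different parameter choices, the three default atmospheres imply broadly similar densities, a few $\times 10^{-25}$~kg~m$^{-3}$, at a typical lobe scale of $r \sim 100$~kpc:
\begin{verbatim}
# Ambient density profiles with the defaults of Table 1:
# (rho_0 [kg m^-3], a_0 [kpc], beta) for each model.
ATMOSPHERES = {
    "KDA": (7.2e-22, 2.0, 1.9),
    "BRW": (1.67e-23, 10.0, 1.5),
    "MK":  (1.7e-23, 10.0, 1.5),
}

def rho(r_kpc, rho0, a0, beta):
    """rho(r) = rho_0 (r/a_0)^-beta, valid for r > a_0."""
    return rho0 * (r_kpc / a0) ** (-beta)

for name, (rho0, a0, beta) in ATMOSPHERES.items():
    print(name, "%.1e kg m^-3 at 100 kpc" % rho(100.0, rho0, a0, beta))
\end{verbatim}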
We note that for very large sources the density will depart from a single power-law with radius and eventually approach a constant value appropriate to the intergalactic medium at that redshift (e.g., \citealt{GKW87}; \citealt{furlanetto01}). If the ambient density approaches a constant value at radial length scales of $\sim$ 100 kpc, then radio sources grow to somewhat smaller sizes and have larger lobe powers. We will consider this more complicated situation in future work. From dimensional arguments (\citealt[][ or KA]{KA}; \citealt{komissarov98}) the total linear size (from one hotspot to the other) of a radio source at an age $t$ can be expressed as \begin{equation} D(t) = 2 c_1 a_0 \left( \frac {t^3 Q_0} {a_0^5 ~ \rho_0} \right)^{1/(5-\beta)}, \end{equation} where $c_1 \sim 1$ is model dependent but weakly varying, as discussed below. The jump conditions at the external bow shock and the expression for linear size give the pressure of the head plasma immediately downstream of the bow shock as (KA Eq.~12) \begin{equation} p_h(t) = \frac {18 c_1^{2-\beta}} {\left(\Gamma_x +1\right) \left(5-\beta\right)^2} \left( \frac { \rho_0^3 a_0^{3\beta} Q_0^{2-\beta} } {t^{4+\beta}} \right)^{1/(5-\beta)}. \end{equation} An ensemble of $n(\gamma)$ relativistic electrons with Lorentz factor $\gamma$ in a volume $V$ with magnetic field $B$ emits synchrotron power per unit frequency, per unit solid angle given by (KDA Eq.~2) \begin{equation} P_{\nu} = \frac{\sigma_T c}{6 \pi} \frac{B^2}{2 \mu_0} \frac{\gamma^3}{\nu} n\left(\gamma\right) V \end{equation} in units of W Hz$^{-1}$ sr$^{-1}$, with $\sigma_T$ the Thomson cross-section and $\mu_0$ the permeability of free space. These relativistic electrons are injected into the lobe from the hotspot via the head, an extended region of turbulent acceleration around the hotspot. \subsection {The KDA Model} For the density profile of the external atmosphere this model uses $\rho_0 = 7.2 \times 10^{-22}$ kg m$^{-3}$, $a_0 = 2$ kpc and $\beta = 1.9$. These values are argued to be typical for an elliptical galaxy out to $\approx 100$ kpc from its center \citep*{forman85, canizares87}. The factor $c_1$ (Eq.~4) is given by Eq.\ (32) of KA, which by their Eqs.\ (37) and (38) depends weakly on $R_T$, the axial ratio, defined as the ratio of the length of the source to its width. For $R_T=1.3$, the value adopted by the authors and us, $c_1=1.23$. The ratio of pressure in the head to that in the cocoon was taken by KDA to be (Eq.~38 of KA) $p_h / p_c = 4 R_T ^ 2 $. We follow this prescription; however, the hydrodynamical simulations of \citet{kaiser99b} found this ratio to be an overestimate. The ``improved'' KDA model \citep{kaiser00} obtains an empirical formula for this ratio as (Eq.~7 of \citealt{kaiser00}) $p_h / p_c = \left(2.14 - 0.52 \beta \right) R_T^{2.04-0.25\beta}$. We are exploring this alternative approach and will give results using it in the next paper in this series. The electrons are assumed to be accelerated in the hotspot at time $t_i$, with corresponding initial Lorentz factor $\gamma_i$. The energy distribution of the electrons injected into the lobe is a power law function of $\gamma_i$, $n(\gamma_i) \propto \gamma_i ^ {-p}$; $p$ is taken to be constant. The electron energies evolve in time according to (Eq.~4 of KDA) \begin{equation} \frac{d\gamma}{dt} = -\frac{a_1 \gamma}{3t} - \frac{4 \sigma_T}{3 m_e c} \gamma^2 \left(u_B + u_c\right).
\end{equation} Here the lobe electrons undergo energy losses via adiabatic expansion ($V \propto t^{a_1}$ with $a_1 = \left( 4+\beta \right) / \left[ \Gamma_c \left( 5-\beta \right) \right]$ and $\Gamma_c$ the adiabatic index in the cocoon, KDA), IC scattering off the CMB photons and synchrotron losses. The magnetic field (assumed to be completely tangled), with energy density $u_B$ and adiabatic index $\Gamma_B = 4/3$, satisfies $ u_B \propto B^2(t) \propto t^{- \Gamma_B a_1} $. The energy density of the CMB, $u_c$, is taken to be constant for an individual radio source as each source evolves for only a few times $10^8$ years. The KDA model does not distinguish between the head and hotspot, and considers self-similar expansion of the head, where the jet terminates. The cocoon is split into many small volume elements, each of which is allowed to evolve by expanding adiabatically (changing the pressure from head pressure $p_h(t_i)$ to cocoon pressure $p_c(t_i)$) and undergoing the various loss processes. The energy of each volume element in the lobe is equated to the energy it had while in the head minus the work done by the volume in adiabatically expanding from the head to the lobe. The radio emission from such a volume element is calculated using the expressions for the cocoon pressure and the energy distribution function. The total emission at a frequency $\nu$ is then obtained by summing over the contributions from all such small elements in the lobe. The expression for $P_{\nu}$, given by Eq.\ (16) of KDA, is a complicated integral over injection time $t_i$. As this integral is analytically intractable, we evaluated $P_{\nu}$ numerically. \subsection {The BRW model} The ambient gas density parameters adopted by BRW are $\rho_0 = 1.67 \times 10^{-23}$ kg m$^{-3}$, $a_0 = 10$ kpc and $\beta = 1.5$. These are based on polarization measurements of lobe synchrotron emission \citep{garrington91}, and X-ray images of massive ellipticals \citep[e.g.,][]{sarazin88, mulchaey98}. A value of $c_1 = 1.8$ is adopted in Eqs.~(4) and (5), as BRW found it to give the best fit between models and data. This model assumes the hotspot to be a compact region (the working surface moving around as in Scheuer's \citeyearpar{scheuer82} ``dentist's drill'' model) within the whole head region. Considering the expansion of the head and its bow shock \citep[also][]{begelman89}, the environmental ram pressure is related to the average internal pressure in the head (Eq.~5). The pressure in the lobe is taken to be a constant factor (1/6) of the head pressure. The jet, of constant bulk power $Q_0$, terminates at the hotspot, which is taken to be of constant radius, $r_{hs} = 2.5$ kpc, in BRW. The pressure in the hotspot, $p_{hs}$, is given by the stagnation pressure in the post-jet shock, $p_{hs} = Q_0 / (c A_{hs})$. Here $A_{hs}$ ($= \pi r_{hs}^2$) is the area normal to the jet over which the jet thrust operates. The hotspot magnetic field, assumed to be tangled, is given by $B_{hs}^2 = 3 \mu_0 Q_0 / (c A_{hs})$, where the equipartition assumption has been made. The break frequency for synchrotron radiation in the hotspot is (Eq.~12 of BRW) \begin{equation} \nu_{bh} = \frac {9 c_7 B_{hs}} {4 \left( B_{hs}^2 + B_{CMB}^2 \right)^2 t_s^2}, \end{equation} where $c_7$ is $1.12 \times 10^3$ nT$^3$ Myr$^2$ GHz \citep{leahy91}, and the equivalent magnetic field due to the CMB is $B_{CMB} = 0.318 (1+z)^2$ nT.
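The field scales entering this break-frequency expression are easily evaluated; the short sketch below (in Python; the jet power chosen is an arbitrary illustrative value within the $Q_{min}$--$Q_{max}$ range of \S2) computes the hotspot pressure and equipartition field, and the equivalent CMB field. For these numbers $B_{hs}$ far exceeds $B_{CMB}$ even at $z=2$, so synchrotron losses dominate in the hotspot itself:
\begin{verbatim}
import numpy as np

C = 3.0e8            # speed of light [m/s]
MU0 = 4e-7 * np.pi   # permeability of free space [H/m]
KPC = 3.086e19       # metres per kiloparsec

def hotspot_pressure_field(Q0, r_hs_kpc=2.5):
    """p_hs = Q0/(c A_hs); equipartition B_hs^2 = 3 mu0 p_hs."""
    A_hs = np.pi * (r_hs_kpc * KPC) ** 2
    p_hs = Q0 / (C * A_hs)
    B_hs = np.sqrt(3.0 * MU0 * p_hs)
    return p_hs, B_hs

def B_cmb_nT(z):
    """Equivalent CMB field: B_CMB = 0.318 (1+z)^2 nT."""
    return 0.318 * (1.0 + z) ** 2

p_hs, B_hs = hotspot_pressure_field(Q0=1.0e39)  # illustrative Q0 [W]
print("p_hs  = %.1e Pa" % p_hs)
print("B_hs  = %.1f nT" % (1e9 * B_hs))
print("B_CMB = %.2f nT at z = 2" % B_cmb_nT(2.0))
\end{verbatim}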
The synchrotron age, $t_s$, of the electron population is determined by the length of its exposure to the hotspot magnetic field before it reaches the lobe. The longest dwell time in the hotspot is taken as $t_{bs} = 10^5$ yr, and the shortest dwell time as $t_{bf} = 1$ yr. In \S8.4.2 of BRW it is shown that this model roughly follows the KDA prescription for lobe luminosity, but with two main differences. First, while the particles are injected from the hotspot to the lobe, the injection index is governed by the breaks in the energy distribution of particles (unlike the constant injection index of KDA). Second, the constant hotspot pressure governs the adiabatic expansion losses out of the hotspot (for particles injected into the lobe), whereas in KDA the head pressure (which evolves with time) drives the adiabatic losses. In BRW the head pressure (Eq.~5) only drives the source expansion. The details of the energy distribution (as a function of the Lorentz factor) of particles in the hotspot are shown in Fig.\ 11 of BRW. Our calculations are usually done assuming the minimum and maximum values of the particle Lorentz factors in the hotspot quoted by BRW: $\gamma_{min(hs)} = 1$ and $\gamma_{max(hs)} = 10^{14}$, although we have examined variations in these parameters. A population in the lobe which emits at a time $T_{obs}$ (when it intercepts our light cone) consists of particles injected from the hotspot between a time $t_{min}$ (those with the largest Lorentz factors) and $T_{obs}$ (those with the smallest Lorentz factors). The time $t_{min}$, found for every $T_{obs}$ following the prescription in KDA, is the earliest injection time for which particles can still contribute to the radiation at $\nu$. The final expression for the power emitted ($P_{\nu}$) by a radio source at a frequency $\nu$ is given by the complicated Eq.\ (21) of BRW, which we will not reproduce here. We solved this equation numerically. \subsection {The MK model} The \citet{MK} paper employs the same external density profile and source linear size expansion as does BRW. The MK model essentially follows the common prescriptions of KDA and BRW for lobe luminosity evolution, with the key difference involving the particle transport mechanism. Two cases are considered for the propagation of particles from the termination shock through the hotspot and into the cocoon. In MK's Case A, the whole adiabatic loss between the hotspot and lobe (due to the pressure difference) is computed. However, the authors found that this produced $P - D$ tracks which conflicted with the observational data. So they considered a Case B, which involves some re-acceleration process in the turbulent head region, whereby the adiabatic losses are partially compensated; MK found such a model to be a qualitatively better fit to the data. Thus we consider only Case B (with re-acceleration) of the MK model in our present work. This model assumes that electrons are accelerated by the first-order Fermi mechanism at the jet termination shock and are injected into the plasma behind the shock following a power-law energy distribution with a constant injection index $p$. A fraction $\eta$ of the jet power is assumed to be transferred into the accelerated particles at the termination shock. If the bulk Lorentz factor of the jet is $\gamma_{jet} \sim 10$, then $ 2 < p < 2.3 $ \citep[e.g.,][]{achterberg01}. The correct upper and lower limits of the particle Lorentz factors, $\gamma_{min}$ and $\gamma_{max}$, are not obvious; MK adopt $\gamma_{min} = \gamma_{jet}$.
The authors say the results are not sensitive to $\gamma_{max}$; however, our different conclusions on this point are given in \S\S5-6. After being dumped in the primary hotspot by the jet, the electrons encounter turbulent motions of the plasma in transit through the head and finally reach the lobe. In this transition through the head the electrons are subject to synchrotron losses (in the strong magnetic field behind the termination shock) and IC losses off the CMB. The effects of the losses depend on the distribution of the ``escape times'', i.e., the probability distribution of how many particles escape after a certain time interval. A generalized transport process is considered, with $\epsilon$ (denoted as $\alpha$ in MK) being the transport parameter (or the diffusion index). The mean square distance traveled by a particle, $\langle \Delta r^2 \rangle \propto t^{\epsilon}$, with $0 < \epsilon < 2$. In the standard diffusive case, $\epsilon = 1$, with sub- (supra-) diffusive cases being $\epsilon < 1$ ($> 1$). Another new parameter in this model is $\tau$, the ratio of the diffusive transport time and cooling time of a particle at $\gamma_{min}$. During the transport of particles from hotspots to lobes, the details of re-acceleration by various processes have been considered by many previous authors \citep*[e.g.][]{spruit88, begelman90, manolakou99, gieseler00}. MK simply assume that in the presence of reacceleration, the distribution of electrons entering the lobe is described by a power law above a lower cut-off energy, and at higher energies is modified by synchrotron and IC losses. Once the electrons have reached the lobe, they undergo adiabatic, IC and synchrotron losses, similarly to the other models, and their energy evolution is given by Eq.~22 of MK. Similarly to the KDA model, for every time instant $t$, when radiating particles have Lorentz factor $\gamma$, there is an earliest time, $t_i$, at which particles injected into the lobe contribute to the radiation at $t$; $t_i$ can be obtained following the prescription given in Eqs.\ (25) and (26) of MK. The final expression for power emitted at a frequency $\nu$, $P_{\nu}$ is given in Eq.\ (27) of MK. The authors used $r_{hs} = 2.5$ kpc as the hotspot radius; however, we find that the MK model power results are actually independent of the hotspot area $A_{hs}$. From Eq.\ (2) in MK, $u_h \propto 1/A_{hs}$ and from their Eq.\ (6), $t_0 \propto A_{hs}^{1/a}$, where $a = \left(4 + \beta\right) / \left(5 - \beta\right)$. Hence $u_{lobe}$ (MK Eq.\ 5), $b_s$ (MK Eq.\ 22) and $P_{\nu}$ are independent of $A_{hs}$. \section {Observational and Simulated Samples} \subsection {Selection Criteria and Observed Characteristics} These models predict the emission from the radio lobes, which are taken to (and usually do) dominate the emission from extended FR II RGs. As is well known and is discussed in detail in BRW, at relatively low frequencies ($\sim 151$ MHz) the radio flux observed is predominantly the emission from the cocoon or the lobe (with negligible contribution from the hotspots, jets or nucleus), and so these evolutionary models should fit the data best at such frequencies. At GHz frequencies, substantial contributions from Doppler boosted core or jet emission would often be present, especially for old quasars, but the slowly advancing lobes will still emit nearly isotropically. In addition, at these higher frequencies the effects of synchrotron, adiabatic and IC losses are more severe. 
At very low frequencies ($< 100$ MHz), there are extra complications affecting the emission: synchrotron self-absorption, free-free absorption, and the poorly known low-energy cut-off of the relativistic synchrotron-emitting particle population. Therefore samples such as those produced by the Cambridge group over the past decades, which were observed between 151 and 178 MHz, and cover much of the northern sky, are most appropriate for this work. We adopt observational samples from complete radio surveys (Table 2), each of which contains all the radio sources above its flux limit within its sky area; the deeper surveys cover smaller sky areas. Redshifts have been obtained for the great majority of these radio sources. Each survey's lower flux limit introduces a $P - z$ correlation into the sample, since the minimum detectable $P$ rises with $z$. To decouple this $P - z$ correlation one must use multiple complete samples with increasingly faint flux limits. For an individual source in each survey, the following characteristics were considered: the redshift ($z$), the specific power at $151$ MHz ($P_{151}$) in W Hz$^{-1}$ sr$^{-1}$, the total projected linear size ($D$) in kpc, and the spectral index at $151$ MHz ($\alpha_{151}$) converted to the rest frame of the source. The redshifts in the samples are spectroscopically determined for the vast majority of the sources. For the 3CRR catalog the redshift completeness is $100 \%$, for 6CE it is $98 \%$ and it is $92 \%$ for 7CRS. \subsection {Observational Sample Details} Henceforth, 3C, 6C, and 7C refer to the refined surveys 3CRR, 6CE and 7CRS, respectively, as described below. We excluded FR I RGs from the following catalogs and considered only FR II sources (including quasars, weak quasars, high-excitation FR II RGs and low-excitation FR II RGs). \begin{table} \caption{Observational Samples\label{tab2}} \begin{tabular}{cccc} \hline Survey & Flux Limit & No. of Sources\tablenotemark{a} & Sky Area \\ & (Jy) & & (sr) \\ \hline 3CRR & $S_{178} \tablenotemark{b} ~ > 10.9 $ & 145 & 4.23 \\ & $S_{151} > 12.4 $ \\ \\ 6CE & $2 \leq S_{151} \leq 3.93 $ & 56 & 0.102 \\ \\ 7CRS & $S_{151} > 0.5 $ & 126 & 0.022 \\ 7CI & $S_{151} \geq 0.51 $ & 37 & 0.0061 \\ 7CII & $S_{151} \geq 0.48 $ & 37 & 0.0069 \\ 7CIII & $S_{151} > 0.5 $ & 52 & 0.009 \\ \hline \tablenotetext{a}{Only FR II RGs considered.} \tablenotetext{b}{Flux at 178 MHz, the frequency at which the 3CRR survey was performed. $S_{178}$ for these sources was converted to flux at 151 MHz, $S_{151}$, using a constant spectral index of 0.8.} \end{tabular} \end{table} 3CRR: This is the Third Cambridge Revised Revised sample of extragalactic radio sources \citep*{laing83}. We adopted the data from the online compilation of the list by Willott\footnote{http://www-astro.physics.ox.ac.uk/$\sim$cjw/3crr/3crr.html}. In 3CRR the observations were done at a frequency of $178$ MHz, so for each 3CRR source $P_{178}$ (the specific power at $178$ MHz) was obtained and then converted to $P_{151}$ using a standard average spectral index of $0.8$. Given the closeness of these two frequencies, any reasonable variations in $\alpha$ would make for only small differences in the derived $P_{151}$ values. 6CE: The Sixth Cambridge radio survey by \citet{eales85} is the original 6C survey. We adopt the sample from the reselected and updated version in \citet*{rawlings01}, along with the most recent redshifts, which have been updated online by Rawlings\footnote{http://www-astro.physics.ox.ac.uk/$\sim$sr/6ce.html}.
7CRS: The Seventh Cambridge Redshift Survey is a combination of parts I, II and III of the original 7C survey \citep{mcGilchrist90}. For 7C-I and II we adopt $P_{151}$ and $z$ from \citet{willott03} (their Tables 2 and 3, which use the present consensus cosmology). The values of $D$ were obtained from a web-site maintained by Steve Rawlings\footnote{http://www-astro.physics.ox.ac.uk/$\sim$sr/grimestable.ascii}; however, the $\alpha_{151}$ values are not available in a collated form in the literature, and only a few individual sources have these values published. Thus we used $\alpha_{151}$ for 7C-III only. For 7C-III, the reduced data, including redshift, flux density in Jy, angular size in arcsec and spectral index between 38 and 151 MHz, were kindly provided to us by Chris Willott; from these we computed the relevant observational parameters in the cosmology we use. The observed spectral index between 38 and 151 MHz was taken as the rest-frame 151 MHz spectral index (a fairly good estimate, at least for the higher $z$ sources). The relevant sample can be found in Table 9 (containing both the 7CIII and NEC samples) of \citet{lacy99} or online from the website of Oxford University\footnote{http://www-astro.physics.ox.ac.uk/$\sim$cjw/7crs/7crs.html} (but with a different cosmology). \subsection {The Simulated Surveys} Large radio source populations are randomly generated, according to the source age, redshift and beam power distributions given in \S2, for each choice of model parameters. Each simulated source in a population is then allowed to evolve in age according to a power evolution model discussed in \S3. The evolution must be done at the rest-frame frequency of the source: if the frequency of observation is $\nu_{obs} = 151$ MHz and a source is observed at redshift $z$, then it is evolved at a frequency $\nu_{rest} = 151 \times (1+z)$ MHz. The monochromatic power ($P_{151}$ in W Hz$^{-1}$ sr$^{-1}$) each source would emit at the $T_{obs}$ corresponding to it is calculated; this depends on the model as described in \S3. At this cosmic time ($T_{obs}$), its redshift, and hence its distance from us, is found. The flux (in units of Jy = $10^{-26}$ W Hz$^{-1}$ m$^{-2}$) of this source is then obtained (using Eqs.\ 3.87, 3.76 and 3.10 of \citealt{peacock99} for a flat universe), given that it emitted $P_{151}$ from the cosmic distance calculated. If the flux for a source is greater than a (lower) survey flux limit (or between two flux limits in the case of 6C) then that source is considered to be detected in the corresponding simulated survey, and counted for the later comparisons with real data. It is assumed in our simulations that the radio jets feeding the lobes (or the central AGN) stay ``on'' only for the time $T_{age}$ corresponding to each source (\S2), which is also taken as the lifetime of a source. After the time $T_{age}$, the relativistic plasma in the lobes continues to radiate, but the flux drops very rapidly once the central engine has stopped feeding the lobes. So the sources can be considered to be turned ``off'' instantaneously after $T_{age}$. This assumption is supported by the fact that the radio powers ($P_{151}$) drop substantially while the jets are still on (i.e., within the time $T_{age}$ after birth), as shown by the $P - D$ tracks in \S5.1. To perform our simulations we initially generate an ensemble of a huge number (a few $10^6$) of pseudo-radio sources.
After evolving each source by the above prescription, the ensemble is examined to see how many of them would actually be detected in a simulated complete survey. The population size for this parameter set is then chosen in the next run so that the number detected in the simulations is comparable to the number found in the real surveys. To do this, we usually had to generate such ``standard'' ensembles containing $\approx 10^6$ to $10^7$ radio sources. Assuming the observed regions are fair samples of the universe, the population size is proportional to the sky area of a survey. The populations needed to simulate the 6C and 7C surveys are generated from that of 3C by reducing the total 3C population size according to the corresponding sky area ratio. Given a 3C initial ensemble of size $S_{3C}$, the populations for 6C and 7C are created by plucking sources randomly from that initial ensemble, producing populations of sizes $S_{6C} = S_{3C} / 41.5$ and $S_{7C} = S_{3C} / 192.3$. The initial populations generated for comparison with the 6C and 7C data following the above procedure yielded numbers of detected sources in the simulations roughly comparable to those in the actual surveys. We compute the over- (under-)detection factors, defined as the ratios of the number of sources detected in the simulated 6C and 7C surveys to the numbers in the actual surveys, divided by the same ratio for the 3C survey. The deviation of these ratios from 1.0 (see discussion in \S6) may be considered a measurement of the statistical (sample) variance.

Each model gives the total linear size of a radio source, but we observe each one as projected on the plane of the sky. This is incorporated into the simulations as follows. Sources are considered to be randomly oriented in the sky, with the angle to the line of sight ($\theta$) of each source drawn from a distribution uniform in $(1-\cos \theta)$. The projected length of each simulated source is then $D_{proj} = D(t) \times \sin \theta = D(t) \times \sqrt{r_N(2-r_N)}$, where $r_N$ is a uniform random number between 0 and 1, and $D(t)$ is the total linear size of the source. For compactness, hereafter we denote the projected size $D_{proj}$ as just $D$.

The initial population generation and each lobe power evolution model were implemented in C, and the other supporting codes were written in IDL. Routines from Numerical Recipes in C \citep{press02} were used to speed up the calculations of lobe powers for the huge ensembles of sources. In doing the statistical tests (\S5.2 and \S5.3) we compared the model predictions with the observational samples as follows. In a single run, a random initial population of millions of sources was generated such that, after each source in the ensemble was evolved and compared to the flux limits, the ensemble produced simulated samples (for the 3C, 6C and 7C catalogs) of sizes comparable to or larger than the real surveys. The simulated samples were then reduced in size, if necessary, by uniformly selecting sources from them. In particular, this was done by selecting every ($N_{sim}/N_{samp}$)'th source from a simulated survey, where $N_{samp}$ is the number of sources in one of the real surveys 3C, 6C or 7C and $N_{sim}$ (usually $> N_{samp}$) is the number of sources in the simulated survey. Finally, statistical tests (whose results are tabulated) were done on the $[P, D, z, \alpha]$ data from the real surveys and similarly sized simulated samples generated from a single random seed.
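A minimal C sketch of the orientation and projection step described above; the random number generator and the ensemble size are illustrative choices, and the expected mean of $\sin\theta$ for random orientations, $\pi/4$, provides a simple check.
\begin{verbatim}
/* Sketch: projecting intrinsic sizes onto the plane of the sky
 * for randomly oriented sources, using D_proj = D sqrt(r(2-r))
 * with r uniform in (0,1), as described in the text. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double project_size(double D_total)
{
    double r = (double)rand() / ((double)RAND_MAX + 1.0);
    return D_total * sqrt(r * (2.0 - r));   /* = D sin(theta) */
}

int main(void)
{
    srand(42);
    const int n = 1000000;
    double mean = 0.0;
    for (int i = 0; i < n; i++)
        mean += project_size(1.0);
    /* for random orientations <sin(theta)> = pi/4 = 0.7854 */
    printf("<D_proj/D> = %.4f\n", mean / n);
    return 0;
}
\end{verbatim}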
Each of the $[P - D - z - \alpha]$ plane figures, described in \S6, shows the final simulated sample (after reduction to the actual data sample sizes) of the random run done using specific parameters for each semi-analytical model. The plotted cases were among the best overall fits for each model as determined by the KS tests.

\section {Results}
\subsection {$P- D$ Tracks}

\begin{figure}
\centerline{\epsfig{file=f1.eps, scale=0.5}}
\caption{$P - D$ tracks of three sources with jet powers ($Q_0$) in Watts and redshifts ($z$) of $[1.3 \times 10^{40}, 2.0]$, $[1.3 \times 10^{39}, 0.5]$, $[1.3 \times 10^{38}, 0.2]$ (from top to bottom). The {\it dashed}, {\it dotted} and {\it solid} curves correspond to the tracks predicted by the default versions of the BRW, KDA and MK models, respectively. The crosses on the tracks denote source lifetimes of 1, 10, 20, 30, ..., 90, 100, 150, 200, 250, 300 Myr.}
\label{fig1}
\end{figure}

As a radio source gets older, its power ($P$) vs. linear-size ($D$) track becomes steeper. While this is true for all models, the rate of steepening is different in the three models, as seen from Fig.\ 1. These $P$--$D$ tracks have been generated using the default parameters of each model (given in Table 1), by allowing each source (with beam powers and redshifts given in the plot) to evolve at frequency $\nu=151$ MHz. For this Figure (alone) the total linear sizes were converted to the projected sizes assuming an average viewing angle to the line of sight of $39.5^{\circ}$ (following KDA). These tracks are in agreement with the conclusion drawn by MK that their $P$--$D$ tracks are more akin to those presented by KDA than to those of BRW. Crude evaluations of the quality of different models, and of the allowable ranges of parameters for them, can be found by comparing the regions in the $P$--$D$ diagram that are actually populated with those that are accessible to models with those parameters (e.g., KDA, MK). On examining different tracks, it is found that the luminosity falls off faster for sources with high power and high redshift. The higher the redshift of a radio source, the smaller the fraction of its life during which it can be detected by flux-limited radio surveys. This point was noted by \citet*{GKW89} and was stressed by \citet{blundell99}, who coined the phrase {\it youth-redshift degeneracy} to describe it.

\subsection{1-Dimensional Kolmogorov-Smirnov Tests}
\subsubsection{Statistical Tests and Default Parameter Results}

For our first attempt to quantitatively compare the simulated radio surveys to the actual data, we used 1-dimensional Kolmogorov-Smirnov (1-D KS) statistical tests. Based on the results of such tests we chose some parameter variations for the models on which two more statistical tests were done (\S5.3). In the 1-D KS tests, the distributions of each of the key characteristics $[P, D, z, \alpha]$ of the radio sources detected in the simulated surveys were compared to those of the sources in the real radio surveys 3C, 6C, and 7C, according to the procedures given at the end of \S4.3. The KS probabilities, $\cal P$, that the two data sets being compared are drawn from the same distribution function were taken to be a figure of merit of each model used in the simulation. High values of ${\cal P}$ (close to 1.0) indicate a good fit, and very small values of ${\cal P}$ imply that the model and data distributions are significantly different.
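For reference, a minimal two-sample KS routine of the kind used for these comparisons, modeled on the standard Numerical Recipes implementation cited in \S4.3, is sketched below; the two small input arrays are purely illustrative, and the truncated series in the tail probability is adequate only away from very small $\lambda$.
\begin{verbatim}
/* Sketch: two-sample Kolmogorov-Smirnov test returning the
 * probability P that both samples share a parent distribution. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static int cmp(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

static double prob_ks(double lambda)       /* Q_KS(lambda) */
{
    if (lambda < 0.05) return 1.0;         /* series unreliable */
    double sum = 0.0, sign = 1.0;
    for (int j = 1; j <= 100; j++) {
        sum  += sign * 2.0 * exp(-2.0 * j * j * lambda * lambda);
        sign  = -sign;
    }
    return sum;
}

static double ks_two(double *a, int na, double *b, int nb)
{
    qsort(a, na, sizeof(double), cmp);
    qsort(b, nb, sizeof(double), cmp);
    int i = 0, j = 0;
    double d = 0.0;
    while (i < na && j < nb) {             /* max CDF difference */
        double x = (a[i] <= b[j]) ? a[i] : b[j];
        while (i < na && a[i] <= x) i++;
        while (j < nb && b[j] <= x) j++;
        double diff = fabs((double)i / na - (double)j / nb);
        if (diff > d) d = diff;
    }
    double ne = sqrt((double)na * nb / (na + nb));
    return prob_ks((ne + 0.12 + 0.11 / ne) * d);
}

int main(void)
{
    double sim[]  = {0.1, 0.4, 0.5, 0.8, 1.2, 1.9, 2.3};
    double data[] = {0.2, 0.3, 0.9, 1.1, 1.6, 2.0, 2.8};
    printf("P_KS = %.3f\n", ks_two(sim, 7, data, 7));
    return 0;
}
\end{verbatim}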
We consider twelve test statistics in total (the twelve probabilities found from the KS statistics for comparisons of each of $P, D, z, \alpha$ for each of the three radio surveys), which together quantify the closeness of the model fits to the data. In order to quantify the overall success of a model we would prefer to have a single figure-of-merit instead of twelve individual ones, but there is no obvious way to produce such a statistic, particularly since the three surveys have significantly different numbers of objects. A likelihood type test would involve the product instead of the sum of the KS probabilities, but given the extremely small values of these products we rejected this figure of merit as not providing a useful discrimination between the models. Here we have combined the 1-D KS probabilities in two ways. First, we add the KS probability statistics for comparisons of $P, D, z, \alpha$ (i.e. ${\cal P}(P)+{\cal P}(D)+{\cal P}(z)+{\cal P}(\alpha)$) for the three surveys, weighting the statistic of a survey by the square-root of the number of simulated sources detected in that survey. So the first overall figure of merit of a model, which we denote as ${\cal P}_{[P, D, z, \alpha]}$, is given by:
\begin{eqnarray}
{\cal P}_{[P, D, z, \alpha]} = \left[ {\cal P}(P)+{\cal P}(D)+{\cal P}(z)+{\cal P}(\alpha) \right]_{3C} + \nonumber \\
\sqrt{\frac{N_{6C}}{N_{3C}}}\left[ {\cal P}(P)+{\cal P}(D)+{\cal P}(z)+{\cal P}(\alpha) \right]_{6C} + \nonumber \\
\sqrt{\frac{N_{7C}}{N_{3C}}}\left[ {\cal P}(P)+{\cal P}(D)+{\cal P}(z)+{\cal P}(\alpha) \right]_{7C},
\end{eqnarray}
where $N_{3C}$, $N_{6C}$ and $N_{7C}$ are, respectively, the number of sources detected in each of the simulated surveys with a particular parameter set used in the model. As noted above, if the simulations ``detect'' too many sources as compared to the data, then each of the resulting simulation survey samples for 3C, 6C, 7C are reduced by uniformly removing sources to make the final simulation sample sizes equal to those of the data samples. The second figure of merit we employ adds the KS statistic probabilities for $P$, $z$ and $\alpha$ to twice the probability for $D$, i.e. ${\cal P}(P)+2{\cal P}(D)+{\cal P}(z)+{\cal P}(\alpha)$ for the three surveys, using the same weighting method. We denote this as ${\cal P}_{[P, 2D, z, \alpha]}$. This second choice was considered because the results for $P$ and $z$ usually correlate (due to flux-limit arguments); thus double weighting the probability for $D$ dilutes the impact of the $[{\cal P}(P), {\cal P}(z)]$ correlation. Unsurprisingly, in most of the runs we have performed the combined test statistics ${\cal P}_{[P, D, z, \alpha]}$ and ${\cal P}_{[P, 2D, z, \alpha]}$ behaved in a similar fashion.

Unfortunately, for the complicated problem of radio source cosmological evolution, which involves many parameters and several dimensions, any figure of merit based upon 1-D KS tests is a crude approach to comparing models with observations. We attempted to use multi-dimensional statistical tests \citep*[e.g.,][]{holmstrom95, loudin03} which, in principle, could yield a more robust single figure of merit for the fit of our distributions to the data in $\left[P, D, z, \alpha \right]$ space. Unfortunately, the limited sizes of the observational samples ($< 150$ sources) preclude obtaining reliable results from such generalizations of the KS test.
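As a simple illustration, the two figures of merit defined above can be evaluated as in the following C sketch; the probability values and the sample sizes in the example are invented for illustration.
\begin{verbatim}
/* Sketch: combined figures of merit defined in the text,
 * weighting each survey's summed KS probabilities by
 * sqrt(N_survey / N_3C).  p[s] holds P(P), P(D), P(z),
 * P(alpha) for survey s. */
#include <stdio.h>
#include <math.h>

static double combined_P(double p[3][4], const int n[3],
                         int double_weight_D)
{
    double total = 0.0;
    for (int s = 0; s < 3; s++) {          /* 3C, 6C, 7C */
        double sum = p[s][0] + p[s][1] + p[s][2] + p[s][3];
        if (double_weight_D) sum += p[s][1];
        total += sqrt((double)n[s] / n[0]) * sum;
    }
    return total;
}

int main(void)
{
    double p[3][4] = {{0.30, 0.10, 0.25, 0.01},   /* 3C */
                      {0.50, 0.40, 0.45, 0.02},   /* 6C */
                      {0.20, 0.05, 0.15, 0.01}};  /* 7C */
    int n[3] = {145, 56, 126};
    printf("P_[P,D,z,a]  = %.3f\n", combined_P(p, n, 0));
    printf("P_[P,2D,z,a] = %.3f\n", combined_P(p, n, 1));
    return 0;
}
\end{verbatim}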
Here we are trying to fit four variables, namely $\left[P, D, z, \alpha \right]$; in practice the minimum useful sample size required would be $\sim 10^4$ for a four-dimensional test. In future work we plan to expand our method to include simulations of large scale radio surveys containing many thousands of sources, such as NVSS and FIRST, which can be made adequately complete in $z$ through optical identifications from SDSS \citep{ivezic04}. Then we might successfully incorporate a multi-dimensional test.

We take the parameters governing the power evolution as given in the KDA, BRW and MK papers (including the parameters used for the initial radio source population generation) to be the default ones. For KDA and MK the authors discuss some alternate parameter sets in the respective papers; our defaults are their first and favored parameter sets. To save space, the statistics of only a subset of all the runs we have performed are shown in the present work. We include the cases which give the best 1-D KS statistical results.

We present the KS test results in tables grouped by radio source evolution model, with each entry illustrating a different parameter set. Table 3 gives our results for the KDA model, Table 4 for the BRW model, and Table 5 for the MK model. The tables for each model follow the same format and pattern. Hence we describe only the table entries for the KDA model (Table 3). Each of Tables 3, 4 and 5 gives the individual KS statistic probabilities ${\cal P}(P)$, ${\cal P}(D)$, ${\cal P}(z)$ and ${\cal P}(\alpha)$ for some of the initial runs. The results for each run are given in three consecutive rows. The first column lists the values of the RG source distribution index $x$ and $T_{MaxAge}$ (in Myr) used for the initial population generation in that model run; these two parameters were expected to be most important in governing the numbers of acceptable sources each Monte Carlo simulation would generate. The second column first lists the parameter(s) which has (have) been varied from the default case in the top row(s), and then gives the initial population (ensemble) size used for the 3C simulation in that model. The third column notes the survey to which the KS probabilities given in the next columns correspond. The first row in columns 3, 4, 5, 6 and 7 gives the 3C results, with the second and third rows giving the 6C and 7C results, respectively. The fourth, fifth, sixth and seventh columns show the values of ${\cal P}(P)$, ${\cal P}(D)$, ${\cal P}(z)$ and ${\cal P}(\alpha)$, respectively, for each of the surveys, 3C, 6C and 7C. The final, eighth column lists the combined probabilities, ${\cal P}_{[P, D, z, \alpha]}$ and ${\cal P}_{[P, 2D, z, \alpha]}$, in two consecutive rows, for each particular parameter set.

To begin with, an initial population, generated using the default parameters from BRW for RG population generation, was evolved according to the three different default models discussed before. The simulated sources detected (according to the prescription in \S4.3) were compared to the actual data in the 3C, 6C, and 7C catalogs. As shown by the KS test statistics of the first three rows of Tables 3, 4 and 5, the model fits are all very poor. The main problem is that too many high-$z$ and too few low-$z$ sources were produced by the models as compared to the data.
\subsubsection {Dependence on Source Slope Parameter, $x$}

To look for improved agreement between simulation and data, we decided to steepen the beam power distribution function of the sources generated in the initial population. This significant modification was expected to produce fewer high $P$ -- high $z$ sources, and the exponent in the power law distribution of the jet powers, $x$, was increased from $x=2.6$ (as used by BRW) in intervals of $\Delta x=0.2$ or $0.3$. For the KDA and MK models the overall statistics improved the most at $x=3.0$, but were worse for $x=3.3$ or $3.6$. For BRW the $P$ and $z$ fits were best for $x=3.6$, making the overall performance look fairly good, but the $D$ fits were all very bad. As will be discussed further below, the former modification ($x=3.0$) produced a clear overall improvement for the BRW model too. The initial population generated with $x=3$ (but otherwise using the BRW prescription) was evolved according to the KDA and MK models. The corresponding KS statistics are given as the third entries in Tables 3, 4 and 5. For the BRW model, the large population generated using $x=3.6$ had a very strong $P-D$ anti-correlation, producing too many small sources and too few large ones. The combined KS statistics were also much worse than those of the KDA and MK models, so we do not list any BRW model results with $x>3.0$. Some of the 12 KS probabilities for the KDA and MK models (albeit very few for BRW) provide acceptable fits to the data. To search for possible further improvements we varied the other parameters describing the power evolution in the models as described below.

Accepting $x=3.0$ as a tentative value for the exponent of the beam power distribution for the generated initial population, we then varied the parameters governing the lobe power evolution of the KDA and MK models. For BRW the exponent $x=3.6$ was initially accepted as it gave good fits for $P$ and $z$, though as noted above, the $D$ fit was very poor, and we do not display these results. Simulations were done by setting the parameter values at the end points of physically reasonable ranges; for example, we might perform two additional runs using the same initial population but with a parameter set to half or twice its default value. Simulated surveys were constructed using the parameter listing given in Table 3 (each variation done one at a time) for the KDA power evolution model. Simulations done with higher axial ratios ($R_T = 2.0, 2.5, 3.0, 4.0, 5.0$), which are favored by morphological data, all yielded severe underdetections when compared to the actual number of sources in the catalogs. Hence we adopted the default value of the axial ratio, $R_T = 1.3$, as did KDA. The results for the changes of parameters considered for the MK power evolution model are given in Table 5. As seen from the tables, several of the 12 KS probabilities for some cases give acceptable fits, but it is difficult to find a single model where all are really good fits. In other words, none of the models discussed here simultaneously provide good fits to the data from all of the three radio surveys considered. As noted above, the $P$ and $z$ fits tend to correlate in most cases, because the two quantities are linked once sources are selected by imposing a flux limit. In some cases the $P$ and/or $z$ fits are good while those to $D$ are bad, and vice versa. The fits to $\alpha$ are almost always poor.
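The steepened jet-power distribution discussed above can be drawn by inverse-transform sampling of a power law $p(Q_0) \propto Q_0^{-x}$; in this hedged sketch only the exponent $x=3.0$ is taken from the text, while the lower and upper limits on $Q_0$ are placeholders.
\begin{verbatim}
/* Sketch: inverse-transform sampling of jet powers from
 * p(Q) ~ Q^{-x} between Q_min and Q_max (x != 1 assumed). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double draw_Q(double x, double qmin, double qmax)
{
    double u = (double)rand() / ((double)RAND_MAX + 1.0);
    double a = 1.0 - x;
    return pow(pow(qmin, a) + u * (pow(qmax, a) - pow(qmin, a)),
               1.0 / a);
}

int main(void)
{
    srand(1);
    for (int i = 0; i < 5; i++)           /* placeholder limits */
        printf("Q0 = %.3e W\n", draw_Q(3.0, 5.0e37, 5.0e42));
    return 0;
}
\end{verbatim}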
The KS statistics for model runs which gave any further improvement over the ``improved'' default case ($x=3$, default model parameters) can be found in Tables 3 -- 5.

\subsubsection {Dependence on RG Maximum Age}

An important parameter for the generation of the initial population of radio sources according to the BRW prescription is $T_{MaxAge}$. It sets the maximum active lifetime of the RG central engine, and hence how long the radio lobes (being fed by jets powered by AGN activity) continue to expand. It is therefore one of the most important parameters to constrain if we are to estimate the fraction of the relevant volume of the universe occupied by radio galaxies during the quasar epoch (\S1). As our ultimate goal involves this relevant volume fraction, we aim to find the value of $T_{MaxAge}$ which gives the best fit to the data for each of the RG evolution models. We performed simulation runs with default parameters for each of the models, using initial populations with $x=3$ (which gives the least bad fits); we then set $T_{MaxAge}$ to values in the range 50--600 Myr (in intervals of 50 Myr), and obtained the following results. For the KDA model, the combined KS probabilities, ${\cal P}_{[P, D, z, \alpha]}$ or ${\cal P}_{[P, 2D, z, \alpha]}$, lacked a single maximum over the range of maximum ages considered, peaking at both 150 Myr and 500 Myr. The higher of the two peaks was adopted, and hence the adopted best maximum age is $T_{MaxAge}=150$ Myr. In the other two models, the combined KS probabilities, ${\cal P}_{[P, D, z, \alpha]}$ or ${\cal P}_{[P, 2D, z, \alpha]}$, varied smoothly over the range of maximum ages considered. In the BRW model the single peak was at $T_{MaxAge}$ = 250 Myr, and in the MK model it was at $T_{MaxAge}$ = 150 Myr; hence these values were adopted for the subsequent runs. Monte Carlo runs were also done with the above best $T_{MaxAge}$ for each model and with $x=2.6$ (the default from BRW), to check if that was better. For BRW the best $T_{MaxAge}$, when combined with $x=3.0$, produced better statistics (a less bad $D$ fit) and was hence adopted for later runs. In all cases $x=3.0$ was better than $x=2.6$ by a significant margin. The supporting KS statistics are given in Tables 3, 4 and 5 for KDA, BRW and MK, respectively. Hence we used initial populations with $x=3.0$ and the above ``optimal'' $T_{MaxAge}$ values for each model in subsequent runs.

During the simulation runs we found that some very small sources ($D < 1$ kpc) were being ``detected'' in the three modeled surveys (mostly in 3C): a few for the KDA and MK models, and rather more for BRW. The actual survey data contain negligible numbers of such small sources, which would not normally be classified as FR II types; we therefore imposed a linear size cut-off in our simulations. For the KDA and MK models a cut-off of 1 kpc was adopted. For the BRW model we found that a cut-off of 10 kpc gave much better fits than did a 1 kpc cut-off. So in BRW we considered sources only with total linear sizes greater than 10 kpc. The KDA and MK simulations did not produce many sources with linear size $<10$ kpc, hence it did not make much of a difference whether we imposed a 10 kpc or a 1 kpc size cut-off. In the results presented henceforth, these $D$ cut-offs have been incorporated.
\subsubsection {Dependences on Other Model Parameters}

In order to further explore the parameter space of the models in search of better fits, initial ensembles were generated using $x=3.0$, $T_{MaxAge}=150$ Myr for the KDA and MK models, and $x=3.0$, $T_{MaxAge}=250$ Myr for the BRW simulations (see \S5.2.3). The following prescription was then followed for each model. The sources in one large random population were evolved several times, according to one of the three radio lobe power evolution models. During each evolution one of the model parameter values was varied around its default value (as in \S5.2.2). All the parameters of each model given in Table 1 were varied, with only one variation per evolution run.

Only those parameter variations that gave any improvement in statistics over the default parameter case of the same model, or were essentially as good as the default, were considered further. For these parameter sets three more initial populations were generated having the same size and the same $x$ and $T_{MaxAge}$ values for the different models, but with different pseudo-random seeds. These additional ensembles were then evolved using the ``improved'' parameter sets for each model, and the 1-D KS statistics were found. At this point we had the KS test results for a set of four simulations of each of the ``improved'' parameter variations. The three cases involving variations of a single parameter (previous paragraph) which gave the best statistics (highest mean ${\cal P}_{[P, D, z, \alpha]}$ of the 4 runs) were then found. Simulations were then performed in which two of those parameter changes giving better fits were simultaneously employed. If these ``2-change'' variations continued to give better performances, all three changes were incorporated together in a single run, to see if yet better fits could be obtained.

\subsubsection {Spectral Index ($\alpha$) Behavior}

The spectral index ($\alpha$) in the rest frame of a source at 151 MHz was estimated for each source in the simulated surveys by considering $\log$ [$\nu$ (MHz)] as the independent variable and $\log (P_{\nu})$ as the dependent one. The specific powers at the $T_{obs}$ corresponding to the source (\S4.3) were calculated at three frequencies, namely $151$, $151/(1+z)$ and $151(1+z)$ MHz. A quadratic polynomial was fitted to the $\log (P_{\nu})$ vs. $\log$ [$\nu$ (MHz)] data. The fit coefficients $a_1$ and $a_2$, where $\log P_{\nu} = a_0 + a_1 \log \nu + a_2 (\log \nu)^2$, were obtained. These were used to find the spectral index as $\alpha = -a_1 - 2 a_2 \log (151/(1+z))$.

The KS tests for the fits for the spectral index ($\alpha$) for all surveys employing each model are uniformly bad (as indicated by the ${\cal P}(\alpha)$ values in Tables 3 -- 5). The poor qualitative fits to the $\alpha$ distribution were already noted by \citet{BRW} for their models. Still, it is the BRW model which gives the least unsatisfactory KS statistics for the $\alpha$ fits. The KS statistics for the $\alpha$ fits were extremely bad for the KDA model. Here, the spectral index distributions consist of a dense cluster at $\alpha \sim 0.58$, with no sources having smaller $\alpha$, while some have steeper spectral indices up to $\alpha \sim 1.0$. There is a weak $\alpha - D$ anti-correlation until $D \sim 10^3$ kpc, after which there is a trend of increasing $\alpha$ as $D$ increases; but this involves only a few giant sources.
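Returning briefly to the computation itself: since the specific powers are evaluated at exactly three frequencies, the quadratic fit described above reduces to the unique parabola through three points in $\log \nu$--$\log P_{\nu}$ space, obtainable by divided differences. A minimal C sketch follows; the input powers are illustrative.
\begin{verbatim}
/* Sketch: rest-frame 151 MHz spectral index from the parabola
 * through three (log nu, log P) points, via divided differences;
 * alpha = -a1 - 2 a2 log10(151/(1+z)), as in the text. */
#include <stdio.h>
#include <math.h>

static double spectral_index(const double nu[3],
                             const double P[3], double z)
{
    double x[3], y[3];
    for (int i = 0; i < 3; i++) {
        x[i] = log10(nu[i]);
        y[i] = log10(P[i]);
    }
    double a2 = ((y[2]-y[0])/(x[2]-x[0])
               - (y[1]-y[0])/(x[1]-x[0])) / (x[2] - x[1]);
    double a1 = (y[1]-y[0])/(x[1]-x[0]) - a2*(x[0] + x[1]);
    return -a1 - 2.0*a2*log10(151.0/(1.0 + z));
}

int main(void)
{
    double z = 1.0;
    double nu[3] = {151.0, 151.0/(1.0+z), 151.0*(1.0+z)};
    double P[3]  = {1.00e27, 1.75e27, 0.55e27}; /* illustrative */
    printf("alpha_151 = %.3f\n", spectral_index(nu, P, z));
    return 0;
}
\end{verbatim}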
The BRW model also produced mostly very poor $\alpha$ fits, but occasionally it gave quasi-acceptable KS statistics, with ${\cal P}(\alpha) \sim 0.01$. Here, the spectral indices are almost uniformly distributed within $\alpha \sim 0.58 - 0.85$, with some sources at smaller $\alpha$. There is also a weak $\alpha - D$ anti-correlation in the BRW model, which extends throughout the simulated results. The MK model produced the worst KS statistics for $\alpha$. Here, the spectral indices came out very steep, with $\alpha > 0.9$ almost always. The distribution includes a cluster at $\alpha \sim 0.9 - 1.0$, and an extension to very steep spectra $(\alpha \sim 1.5)$. Here there is a clear trend for $\alpha$ to be higher as $D$ increases in the simulations for all three catalogs. Thus, it is clear that all of the models considered to date require modifications if they are to produce adequate representations of the observed radio spectral indices. Making such modifications is a key goal of our future work.

\subsection{Additional Statistical Tests}

In order to check the robustness of the quantitative tests we performed some additional statistical analyses. We selected the cases of parameter variations that gave the highest combined probability, ${\cal P}_{[P, D, z, \alpha]}$, of each model, according to the extended 1-D KS test results (described in \S5.2.4). We compared these nominally superior parameter sets for each model with the default versions (those with no parameter changes) by performing additional statistical tests on them.

\subsubsection{2-Dimensional Kolmogorov-Smirnov Tests}

We used the 2-dimensional (2-D) KS test procedure from \citet{press02}, which is based on the work of \citet{fasano87}, itself a variant of an earlier idea due to \citet{peacock83}. The relevant 2-dimensional 2-sample KS probabilities (or the significance levels indicating that the two populations are drawn from the same distribution), ${\cal P}$, give a quantitative measure of the model fits. The comparisons of the model simulated samples to the real data samples are done in a way analogous to that for the 1-D KS tests (\S4.3). The 2-D KS probabilities for comparisons of the properties $P, D, z$ and $\alpha$, taken two at a time, for the data and the models were computed. Table 6 shows results for both the default versions and the parameter sets giving the highest total 1-D KS probability, denoted as {\it varied} in the table. The results are listed in a similar way to the 1-D KS statistics in the previous tables. The first column gives the model and parameter variation (if any). The third, fourth, fifth, sixth, seventh and eighth columns list the KS probabilities for comparisons of $[P-z]$, $[P-D]$, $[z-D]$, $[P-\alpha]$, $[z-\alpha]$ and $[D-\alpha]$, respectively; in each case the three rows give results for 3C, 6C and 7C, respectively. It is non-trivial to compare the models as there are 18 values of ${\cal P}$ which must be considered. The general trends are discussed in \S6.

\subsubsection{Correlation Coefficient Analysis}

We considered the Spearman partial rank correlation coefficients between the four relevant source characteristics $P, D, z$ and $\alpha$.
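A minimal C sketch of this partial-correlation algebra, whose defining relations are given immediately below, is as follows; the pairwise Spearman coefficients fed to it, which in practice are computed from the ranked $[P, D, z, \alpha]$ data, are invented here for illustration.
\begin{verbatim}
/* Sketch: Spearman partial rank correlations with one or two
 * variables held fixed, and the associated significance
 * (following Macklin 1982). */
#include <stdio.h>
#include <math.h>

/* r_{ab,c}: correlation of a and b with c held fixed */
static double partial3(double r_ab, double r_ac, double r_bc)
{
    return (r_ab - r_ac * r_bc) /
           sqrt((1.0 - r_ac*r_ac) * (1.0 - r_bc*r_bc));
}

/* r_{ab,cd} from the three-variable partials with c fixed */
static double partial4(double r_ab_c, double r_ad_c,
                       double r_bd_c)
{
    return (r_ab_c - r_ad_c * r_bd_c) /
           sqrt((1.0 - r_ad_c*r_ad_c) * (1.0 - r_bd_c*r_bd_c));
}

static double significance(double r, int n_samp)
{
    return 0.5 * sqrt((double)(n_samp - 5))
               * log((1.0 + r) / (1.0 - r));
}

int main(void)
{
    /* illustrative pairwise coefficients among P, D, z, alpha */
    double r_PD = -0.30, r_Pz = 0.80, r_Dz = -0.35;
    double r_Pa =  0.20, r_Da = -0.10, r_za =  0.25;
    double r_PD_z = partial3(r_PD, r_Pz, r_Dz);
    double r_Pa_z = partial3(r_Pa, r_Pz, r_za);
    double r_Da_z = partial3(r_Da, r_Dz, r_za);
    double r = partial4(r_PD_z, r_Pa_z, r_Da_z);
    printf("r_PD,za = %+.3f  Sigma = %+.2f\n",
           r, significance(r, 327));    /* 145+56+126 sources */
    return 0;
}
\end{verbatim}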
Following \citet{macklin82}, we calculated the partial rank correlation coefficients with four variables, e.g.,
\begin{equation}
r_{PD, z\alpha} = \frac{r_{PD, z} - r_{P\alpha, z} r_{D\alpha, z}} {\left[ \left(1-r_{P\alpha, z}^2\right) \left(1-r_{D\alpha, z}^2\right) \right]^{1/2}},
\end{equation}
for the correlation between $P$ and $D$ independent of $z$ and $\alpha$. Here the three-variable partial correlation coefficient is
\begin{equation}
r_{PD, z} = \frac{r_{PD} - r_{Dz} r_{Pz}} {\left[ \left(1-r_{Dz}^2\right) \left(1-r_{Pz}^2\right) \right]^{1/2}},
\end{equation}
with $r_{PD}$ being the Spearman correlation coefficient between the two variables $P$ and $D$. The significance level associated with the 4-variable correlation is
\begin{equation}
\Sigma_{PD, z\alpha} = \frac { \left(N_{samp}-5\right)^{1/2} } {2} \ln \left( \frac{1+r_{PD, z\alpha}} {1-r_{PD, z\alpha}} \right),
\end{equation}
where $N_{samp}$ is the size of the sample considered. The relevant Spearman partial rank correlation coefficients involving $[P, D, z, \alpha]$ for the data, and for the models in those cases for which the 2-D KS tests were done, are given in Table 7. The four-variable correlation coefficients ($r_{PD, z\alpha}, r_{Pz, D\alpha}$, etc.) were computed by combining the observed data or the model ``simulated'' data for all the relevant surveys: 3C, 6C and 7C III. We do so in order to dilute the tight $[P-z]$ correlation present in any single flux-limited complete sample (BRW), and to detect the correlations which exist between the other characteristics. The 2-variable correlation, $r_{PD}$, was always negative; however, in the 4-variable case, with the effects of $z$ and $\alpha$ removed, $r_{PD, z\alpha}$ showed a small positive correlation. We also examined the correlation coefficients of the data and model simulations in each survey, 3C, 6C, or 7C, separately.

\begin{figure*}
\centerline{\epsfig{file=f2.eps, scale=0.9}}
\caption{The $[P - D - z - \alpha]$ planes for the observational samples 3CRR, 6CE and 7CRS. The symbols classify the sources into redshift bins as follows: {\it Plus}: $0 \leq z < 0.5$, {\it Triangle}: $0.5 \leq z < 1.0$, {\it Cross}: $1.0 \leq z < 1.5$, {\it Square}: $1.5 \leq z$. The $P-z$ correlations, arising from the flux limits, are clear in the first row. The $D-z$ plane shows the decreasing trend of average size as redshift increases. }
\label{fig2}
\end{figure*}

\begin{figure*}
\centerline{\epsfig{file=f3.eps, scale=0.9}}
\caption{The $[P - D - z - \alpha]$ planes for the 3C, 6C and 7C simulations of the KDA Model. The initial ensemble is formed using $x=3.0$, $T_{MaxAge}=150$ Myr; the power evolution is with parameter changes $\rho_0=\rho_{0~({\rm Default})}/2 = 3.6 \times 10^{-22}$ kg m$^{-3}$, $p_m=2.12$, for a case with initial source population size = 4861474. The symbols are as in Fig.~2. }
\label{fig3}
\end{figure*}

\begin{figure*}
\centerline{\epsfig{file=f4.eps, scale=0.9}}
\caption{The $[P - D - z - \alpha]$ planes for the 3C, 6C and 7C simulations of the BRW Model. The initial ensemble is formed using $x=3.0$, $T_{MaxAge}=250$ Myr; the power evolution is with parameter change $a_0=7.5$ kpc, for a case with initial source population size = 3355926. The symbols are as in Fig.~2. The upward arrow in the 3C panels implies that one data point exists outside the plotted range of the figure. }
\label{fig4}
\end{figure*}

\begin{figure*}
\centerline{\epsfig{file=f5.eps, scale=0.9}}
\caption{The $[P - D - z - \alpha]$ planes for the 3C, 6C and 7C simulations of the MK Model.
The initial ensemble is formed using $x=3.0$, $T_{MaxAge}=150$ Myr; the power evolution is with parameter change $\gamma_{max(hs)} = 3 \times 10^8$, for a case with initial source population size = 4861474. The symbols are as in Fig.~2. }
\label{fig5}
\end{figure*}

\section {Discussion}

We have performed quantitative tests of three detailed models for RG evolution. This is the first attempt to perform such statistical tests involving 4 radio source observables over 3 complete radio surveys. During our multi-dimensional Monte Carlo simulation procedure we found that it is very difficult to get acceptable simultaneous fits to the radio properties $P, D, z$ and $\alpha$ for all three redshift complete subsamples of the 3C, 6C and 7C radio catalogs. This is true both when using the default parameters suggested by each of these three leading models and when considering extensive variations upon them, involving changing one or more of the parameters to plausible different values. Usually the $P$ and $z$ fits were correlated, due to the flux-limit arguments discussed before. The fits to the 6C survey were generally better than those for 3C and 7C; this is because of the smaller number of sources in the 6C catalog and the nature of the KS test. Our weighting of the ``total 1-D KS probability'' by the square root of the number of sources helps to compensate for this. It was most difficult to get acceptable fits to the faintest sources cataloged in 7C.

When varying the model parameters from their default values, the greatest improvement came from steepening the power law index for the initial beam power distribution to $x=3$ from the $x=2.6$ used by \citet{BRW}. This change improved the KDA and MK model performances greatly (Tables 3 and 5). The KS statistics for the BRW models were never very good, especially the $D$ fits (Table 4). Varying the maximum age assumed for the sources from 500 Myr to 150 Myr for the KDA and MK models, and to 250 Myr for the BRW model, also produced better fits.

We found the following trends for the ratios of the number of sources detected in the 6C and 7C simulations to the number in the actual catalogs, normalized to the same ratio for 3C (Ratio$_{6C}$ and Ratio$_{7C}$, respectively). For the KDA and BRW models, the detection number ratio was more consistent for the 6C than for the 7C simulations; i.e., Ratio$_{6C}$ was closer to 1.0 (which it should equal ideally) than was Ratio$_{7C}$. For the MK model, the detection number ratios for 6C and 7C (which were in the range $0.7 - 1.2$) were equally consistent. Though we calculated the detection number ratios, we do not display them, nor did we formally consider them in comparing the models. These ratios can be made closer to 1 by varying the redshift birth function or the RLF (Eq.~1), and so they are not good tests of the models per se.

From the 2-D KS test results we find that the $[P-z]$, $[P-D]$ and $[z-D]$ planes can be reasonably fitted by the ``varied'' models, particularly those for KDA and MK. Most of those probabilities are $> 0.2$ for the KDA model and most exceed 0.05 for MK. All of the ${\cal P}$'s of the ``varied'' BRW model not involving $\alpha$ are higher than those of the default BRW model. Improvements are also seen for all of the non-$\alpha$ MK ${\cal P}$'s using the ``varied'' model. This is the case for only 7 of 9 ${\cal P}$'s of the KDA ``varied'' model. These models cannot fit any plane involving $\alpha$, with all the $\alpha$-related 2-D KS probabilities $\leq 0.01$ for every model.
These 2-D results provide support for the hypothesis that the ``varied'' models based on 1-D KS tests are indeed better fits. By comparing the values of the 2-D KS probabilities of the models in Table 6, we conclude that the KDA model is the best (having the highest number of ${\cal P}$'s close to 1) in fitting the observational data, very closely followed by MK, and finally BRW.

From the 4-variable correlation coefficient results (Table 7) we see that the KDA model is able to match the survey data correlations very closely (at least for $P, D, z$). The matches to the data correlations are less good for the BRW and MK models. The parameter variation cases which were the best fits (i.e., gave the highest combined ${\cal P}_{[P, D, z, \alpha]}$) when judged with respect to the 1-D and 2-D KS tests are not necessarily the better cases according to the correlation analyses. The KDA default performs better than the KDA {\it varied} (1-D KS best fit) case. For the BRW and MK models, the default and the {\it varied} cases perform comparably (i.e., sometimes the default version is a better match to the data correlations and sometimes the {\it varied} fit is better). Considering the signs of the four-variable coefficients, the MK model predicts a $[P - \alpha]$ anti-correlation and a $[D - z]$ correlation, trends opposite to those of the survey data and of the other models. The sign of the $[D -\alpha]$ correlation of the surveys is predicted only by MK, while the other models produce an anti-correlation; however, given the very poor $\alpha$ distribution for the MK model this advantage is meaningless. From the correlation coefficient analyses we conclude that the KDA model fits the data most closely, followed by BRW, and finally MK. Similar trends are also seen if we examine the coefficients obtained by considering each survey separately.

We plotted slices through the $[P-D-z-\alpha]$ volume ($P$ vs $z$, $P$ vs $D$, $D$ vs $z$, and $\alpha$ vs $z$) for each of the simulated surveys, and examined their consistency by comparing them with the overall trends in the $[P-D-z-\alpha]$ planes of the actual data. The actual data are shown in Fig.\ 2. The simulated data are shown in Figs.\ 3, 4 and 5 for the KDA, BRW and MK models, respectively. The plotted simulations are for one of the best (in a statistical sense) parameter sets for each model. Plots for other good parameter values appear similar, while those for worse parameters (according to our KS summary statistic) look less like the data. Sources are detected out to similar values of redshift, power and size in the 3C simulations as in the data. The KDA and MK models show very similar trends in $P$, $D$, and $z$. The unique features of the BRW model results are discussed below. Unsurprisingly, the values of $P$ and $z$ lie in a cluster in the $P-z$ plane, above a lower curve determined by the flux limit of the survey. All of our simulated surveys for all models miss many of the low-$z$, low-$P$ sources seen in the data. Very high $z$ sources ($z>2.5$) are underproduced in all the 7C simulations, and a similar, but less pronounced, trend is also present for 6C. All the 3C simulations present a greater scatter in $P$ for high $P$ -- high $z$ sources ($z>1$) when compared to the data. A few powerful, high $z$ sources are detected in the 3C simulations at $z>2.0$ which are not present in the data. The scatter in $P$ is naturally smaller in the 6C survey because of the upper (as well as lower) flux limit.
Examining the $P-D$ planes of the simulations, we find that the KDA and MK models overproduce small and large high power sources in 3C, and underproduce the large weaker sources. The underproduction of low $z$ sources is manifested in the $P-D$ planes of the 6C and 7C simulations as the absence of less powerful sources (due to the $P-z$ correlation). There is a strong $P-D$ evolution seen in the BRW model, which is most pronounced in the 3C and 6C simulations. The 3C simulation overproduces powerful smaller sources and misses several large ones. The 6C and 7C simulations underproduce less powerful, smaller sources. Again, the KDA and MK models show weaker $P-D$ anti-correlations than do the data (at least for 3C), whereas the BRW model shows too strong an anti-correlation.

The 6C and 7C simulations show a paucity of low $z$ and high $z$ sources in the $D-z$ planes of all the models. The KDA and MK models overproduce very small and very large 3C sources at all redshifts. The BRW simulation presents a stronger anti-correlation of linear size with redshift, especially for 3C, where there are no large sources at intermediate redshifts. The $D-z$ evolution (decrease of $D$ as $z$ increases) occurs due to the imposition of survey flux limits. This is a ramification of the ``youth-redshift degeneracy'' discussed in \S5.1. The high redshift sources show a very steep decline of their luminosities with age (seen from the $P-D$ tracks in Fig.\ 1) and fall below the survey flux limits at young ages, as their radiating particles undergo severe inverse Compton losses off the CMB and adiabatic expansion losses as they are transported from the high pressure hotspot to the lobes. Thus, we can only detect these high $z$ sources at an early age, when they are still above the limiting survey flux. These younger high $z$ sources are naturally smaller and yield the weak ``linear size evolution'' (seen in the $D-z$ plane). Neither the KDA nor the MK simulations show this effect as clearly as do the actual data. On the other hand, the BRW simulations show stronger $D-z$ anti-correlations than do the data.

There are several observational features (including trends in the $[P-D-z-\alpha]$ planes of the data samples) that cannot be explained by any models considered so far. The $[P-D]$ diagram for the 3CRR data shows a clear anti-correlation with large scatter. Another interesting feature is the clump of sources in the 6CE $[P-D]$ diagram near $D \sim 100$ kpc, $P_{151} \sim 10^{27.5}$ W Hz$^{-1}$ sr$^{-1}$ \citep{neeser95}. Neither of these is reproduced in the models. The KDA and MK model simulations predict too many very large ($D > 1$ Mpc), powerful sources (more in 3C, some in 7C), which are not present in the data. This feature has been discussed in \citet{kaiser99a}. The BRW ${\cal P}(D)$ values were very low for many cases (especially for 3C), and the BRW $[P-D]$ diagrams for all 3 simulated surveys showed too strong a $[P-D]$ anti-correlation. This arises because the BRW model simulations produce too many small but powerful sources. A possible explanation of this problem could be synchrotron self-absorption of the radiation emitted by such small powerful sources, which is not included in the model. Thus some small sources should fall below the survey flux limit at a frequency of $151$ MHz. Including this effect could improve the relative performance of the BRW model.
An important point to remember is that all three models considered here are incomplete, in the sense that they do not incorporate enough physics to predict the complete physical conditions prevailing in FR II radio sources. Consideration of additional factors may be necessary in these models. First, the environmental density ($\rho$) could vary with redshift, and it must eventually deviate from its power-law behavior with distance. Second, the beam power $(Q_0)$ distribution might vary with redshift, and the maximum lifetime of AGN activity ($T_{MaxAge}$) could vary with redshift and jet power. Also, the birth function of radio sources with redshift (RLF) could have a greater variation with luminosity.

\section {Conclusions and Future Work}

We have compared the leading models of radio lobe power evolution for FR II RGs, namely the KDA, BRW and MK models, using a simulated radio survey prescription (following BRW). Each of the dozens of simulated radio surveys we computed required the generation and analysis of a few $\times 10^6$ to $> 10^7$ radio sources, and hence substantial amounts of computing power. The total number of Monte Carlo simulations done exceeded $250$, and over a billion individual RGs were evolved; this was necessary to narrow down the set of parameters for each model to the ``best fit'' ranges described in the present work. One-dimensional KS tests were used to narrow down the parameters of the different models and locate more desirable values. These preferred parameter sets of the models were then compared with the data by using 2-D KS tests and correlation coefficient analyses.

Hydrodynamical modeling of classical double radio sources \citep*[e.g.,][]{hooda94, carvalho02} shows that the pressure in the nearly self-similarly growing lobes falls with time while the hotspot pressure does not vary much. The \citet{KDA} model examined here assumed that the head pressure falls with time (and is proportional to that of the cocoon), so this is a weakness of that model. \citet{BRW} adopted a constant hotspot pressure (implying more adiabatic losses for particles in the hotspots of older sources) while considering the adiabatic expansion of particles out of the hotspots to the lobes. They showed a rough qualitative agreement between their simulated and real 3C and 7C data in the $[P-D-z]$ space. \citet{MK} modified the BRW picture by proposing an acceleration mechanism occurring throughout the head region; they obtained $[P-D]$ tracks in somewhat better accord with 3CRR data, but did not consider $[P-D-z-\alpha]$ distributions. Our much more extensive simulations and statistical analyses, based on KS tests and correlation coefficients, provide a quantitative way to directly compare these three models.

We note that despite the hundreds of simulations we computed, which did employ substantial variations on the default parameters for each RG model (only a portion of which are displayed in this paper), we could not completely cover the entire plausible parameter space. We also note that other figures of merit could have been devised to distinguish between the goodness of fit of the various simulation results to the data, since no really suitable multi-parameter statistic is available for samples of this size. Keeping these caveats in mind, we believe both that we have covered the vast majority of the sensible parameter ranges and that our choice of combined KS probabilities is a good way to compare different simulations.
In this spirit, we now present our conclusions comparing how the models performed in different aspects of consistency between the simulations and data. Our key result is somewhat disappointing. Despite investigating a wide range of parameters, we find that no existing model gives excellent fits to all the data simultaneously. However, from the statistical test results, the KDA model appears to give better fits than do the BRW or MK models.

Judging from the 1-D KS test results, the MK model frequently produces acceptable statistics for $P$, $z$ and $D$. The KDA simulations also often give adequate statistics. The BRW simulations do not give as good statistics as do the MK and KDA models. After incorporating the 10 kpc linear size cut-off the statistics for some BRW models improve, but they are still not as good as those given by the other two models. According to the 2-D KS test results, the KDA model fits the data most closely, then comes MK, and finally BRW. From both the 1-D and 2-D KS test results, planes in the $[P-z-D]$ space can be reasonably fitted by some parameter variation of the models (fits determined by higher values of the KS probabilities), but it is difficult to get acceptable $\alpha$ fits. Both KS tests indicate that the ``varied'' model parameters of all the models (Tables 6 and 7) fit the data better than do the default parameter values. However, in terms of reproducing the correlations between the source properties (Spearman partial rank correlation coefficients), the default models perform better than (KDA) or comparably to (BRW, MK) the ``varied'' models. The KDA model correlations match the survey data correlations most closely, followed by BRW and then MK. Our analyses used the redshift birth function of radio sources from \citet{willott01}'s radio luminosity function. We conclude that, using \citet{willott01}'s RLF, the KDA and MK models perform better than BRW in fitting the 3CRR, 6CE and 7CRS survey data when compared with respect to KS-based statistical tests, and the KDA model provides the best fits to the correlation coefficients.

This is the first in a series of papers which aim to compare the performance of radio source evolution models. Our goal is to develop one which is a good fit to all the observational data. We are performing similar tests on a modified model we are developing, whose results will be presented elsewhere. This new model incorporates conical jet expansion for a fraction of a radio source's lifetime within the BRW and MK models. This allows us to incorporate a variable hotspot size in the models, which is supported by observations \citep{jeyakumar00}. Here the hotspot pressure varies as a function of the linear size or the source age, which is a more realistic possibility for the hotspots of RGs evolving over hundreds of Myr. In the future, we plan to extend this work by allowing redshift variations in the environmental density profile (in particular, we will allow for variations of $\rho_0$, $a_0$ and $\beta$ with cosmic epoch). We also will consider jet propagation through ambient media which change from a power law density decline to constant densities (which change with $z$) at scales around 100 kpc. \citet{barai04} presents preliminary work on the implications of the volumes attained by radio sources considering cosmological evolution of the ambient gas density.
Our final aim is to estimate the volume fraction of the relevant universe occupied by radio lobes, and hence to test the robustness of the exciting, but preliminary, conclusion that expanding radio galaxies play a significant role in the cosmological history of the universe. \section*{Acknowledgments} We thank the referee, Steve Rawlings, for several useful suggestions which substantially improved this paper. We thank Katherine Blundell for a helpful conversation, Christian Kaiser for conversations and clarifying correspondence, Konstantina Manolakou for correspondence and providing us with a version of her Fortran code and Chris Willott for correspondence and for sending us the 6C and 7C-III data. We are most grateful to Angela Osterman for her efforts on initial versions of some codes, and acknowledge conversations with Gopal-Krishna and Zeljko Ivezi{\'c}. We also thank Jim Loudin and Hannu Miettinen for correspondence and for a version of their multivariate statistics code. PJW is most grateful for continuing hospitality at the Department of Astrophysical Sciences at Princeton University. This work was supported in part by a subcontract to GSU from NSF grant AST-0507529 to the University of Washington and by Research Program Enhancement funds awarded to the Program in Extragalactic Astronomy at GSU.
\section{Introduction}

Pulsars are among the most exciting gamma-ray sources in the Universe and can serve as unique sites for the study of emission processes in extreme physical environments. The Gamma-ray Large Area Space Telescope (GLAST) will dramatically increase our knowledge of gamma-ray pulsar physics. In particular the Large Area Telescope (LAT), the main GLAST instrument, will provide more detailed observations of the known gamma-ray pulsars and potentially will discover many new pulsars that emit gamma rays. To better understand the capabilities of GLAST for pulsar science we developed {\em PulsarSpectrum}, a program that simulates gamma-ray emission from pulsars. This simulator can be easily interfaced with the Monte Carlo software that simulates the response of the LAT.

\subsection{Gamma ray pulsars in the GLAST era}

Pulsars have been associated with high energy gamma-ray astronomy since the first experiment capable of resolving point sources. The {\em Small Astronomy Satellite} (SAS-2), launched in 1972, identified three intense sources in the sky, which later turned out to be the Vela, Crab and Geminga pulsars. A signal modulated at the radio period was found for Vela [\refcite{Kniff74}] and Crab [\refcite{Thomp75}], but for Geminga no radio counterpart was found and this source remained quite mysterious until it was identified as an X-ray and gamma-ray pulsar. The {\em Energetic Gamma Ray Experiment} (EGRET) aboard the {\em Compton Gamma Ray Observatory} (CGRO) detected three more pulsars, B1706-44 [\refcite{Thomp92}], B1055-52 [\refcite{Fierro93}] and later B1951+32 [\refcite{Raman95}]. Another one, B1509-58, was observed at energies between 60 keV and 2 MeV by the three CGRO experiments, the {\em Burst and Transient Source Experiment} (BATSE) [\refcite{Wilson93}], the {\em Oriented Scintillation Spectrometer Experiment} (OSSE) [\refcite{Ulmer93}] and the {\em Imaging Compton Telescope} (COMPTEL) [\refcite{Bennet93}], but it was not seen by EGRET, because its spectrum shows a cutoff at energies lower than the instrument's threshold. Recently the millisecond pulsar PSR 0218+4232 was detected by analyzing the EGRET data [\refcite{Kuiper00}], increasing to eight the number of detected gamma-ray pulsars; seven of them are radio-loud pulsars and only Geminga seems to be radio quiet.

\begin{figure}[ht]
\epsfxsize=6cm
\centerline{\epsfxsize=5.5cm\epsfbox{procs-figlat.eps}}
\caption{The GLAST Large Area Space Telescope. An incident gamma ray converts into an $e^{-}e^{+}$ pair\label{lat}}
\end{figure}

The GLAST Large Area Telescope (LAT) (fig \ref{lat}) is a pair conversion telescope based on the most advanced high energy detectors. It consists of a precision silicon strip tracker, a hodoscopic calorimeter and a segmented anticoincidence shield for particle background rejection. The high sensitivity of the LAT ($2\times10^{-9}$ ph cm$^{-2}$ s$^{-1}$) and its large effective area ($>$8000 cm$^{2}$) will permit the discovery of many new pulsars: the estimates range from tens to hundreds depending upon the model adopted. Moreover the low dead time of the detector (20 $\mu$s) will allow the detailed reconstruction of pulsar lightcurves. One of the most exciting possibilities of the LAT will be the coverage of the energy window from 30 GeV up to 300 GeV, a still unexplored range. At these energies the theoretical models make different predictions for the high energy spectral cutoff, and the spectral coverage of the LAT will be of primary importance for constraining and discriminating among the models.
In order to study the LAT response to specific gamma-ray sources, various simulation packages have been developed. Here we present {\em PulsarSpectrum}, which can simulate the observed pulsars and create new synthetic pulsars for studies of the LAT detection threshold.

\section{The {\em PulsarSpectrum} simulator}
\subsection{Overview of the simulator}

The basic idea behind {\em PulsarSpectrum} is to construct a 2-dimensional histogram representing the differential flux vs. energy and pulsar phase. This histogram contains all the basic information about the lightcurve and the spectrum. How it is built depends upon the model we want to use: a phenomenological model, based only on observations, or a physical one, based on a specific theoretical model. At present only a phenomenological model has been implemented, because it is more flexible and completely independent of the chosen theoretical scenario. The input parameters of the simulator can be divided in two categories:
\begin{itemize}
\item {\em Observational parameters} (e.g., the flux or the ephemerides);
\item {\em Model-dependent parameters} (e.g., the spectral index);
\end{itemize}
These parameters are placed in two specific data files used by both {\em PulsarSpectrum} and the LAT simulation tools. {\em PulsarSpectrum} creates the lightcurve and the spectrum from these parameters and combines them to obtain a two-dimensional matrix that represents the flux in ph/m$^2$/s/keV. The photons are then extracted such that the interval between two subsequent photons is determined by the flux integrated over the energy range of interest.

\subsection{The phenomenological model}

The currently implemented phenomenological model allows the user to generate pulsar lightcurves in a general way using a single or double Lorentzian peak profile whose shape is determined from randomly generated numbers. Alternatively, the lightcurve can be generated from a user-provided profile. This is useful for simulating the already observed gamma-ray pulsars. The spectral shape is assumed to be a power law with an exponential cutoff (according to [\refcite{NelDj95}]), as in the observed gamma-ray pulsars, and can then be modeled as:
\begin{equation}
\frac{dN}{dE} = K\left(\frac{E}{E_{n}}\right)^{a}\exp\left[-\left(\frac{E}{E_0}\right)^{b}\right]
\end{equation}
The normalisation constant K is determined by the photon flux above 100 MeV, and the other four parameters can be varied; the values for the EGRET pulsars are obtained from real data by fitting procedures (e.g. [\refcite{NelDj95}]).

\begin{figure}[ht]
\centerline{\epsfxsize=7cm\epsfbox{procs-fignv.eps}}
\caption{An example of a 2-dimensional histogram created by PulsarSpectrum. \label{nv}}
\end{figure}

\subsection{Timing issues}

Once the differential flux histogram is created, the time interval between two subsequent photons is computed according to the flux. If the previous photon came at time t$_{0}$ the next photon will appear at $\tilde {t}$ such that:
\begin{equation}
A_{eff}\int_{t_0}^{\tilde t} \int_{E_1}^{E_2}\frac{dN}{dE\,dA\,dt}\,dE\,dt = 1
\end{equation}
The interval between two photons is computed assuming that the pulsar period does not change with time and that the photon arrival times are computed in a reference system fixed relative to the stars. This is not the ``real world''. Pulsar timing is affected by more complicated effects, such as (1) the motion of the spacecraft through the Solar System and the relativistic effects due to the gravitational well of the Sun (see \ref{barydecorr}), and (2) period changes with time (see \ref{pchange}).
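A toy C sketch of the photon-extraction step defined by the integral equation above: the next arrival time is the time at which the expected number of counts, accumulated from the energy-integrated photon flux times the effective area, reaches unity. The pulse shape, effective area and period below are invented for illustration and are not the values used by {\em PulsarSpectrum}.
\begin{verbatim}
/* Sketch: deterministic photon arrival times from a toy pulsed
 * flux, following A_eff * integral(flux dt) = 1 per photon. */
#include <stdio.h>
#include <math.h>

#define A_EFF   0.8       /* effective area, m^2 (assumed)   */
#define PERIOD  0.0893    /* rotation period, s (assumed)    */

/* toy energy-integrated flux in ph m^-2 s^-1, one peak/turn */
static double rate_ph(double t)
{
    double phase = fmod(t, PERIOD) / PERIOD;
    double peak  = exp(-pow((phase - 0.5) / 0.03, 2));
    return 0.5 * (1.0 + 20.0 * peak);
}

static double next_photon(double t0)
{
    const double dt = 1.0e-4;
    double t = t0, counts = 0.0;
    while (counts < 1.0) {                /* accumulate counts */
        counts += A_EFF * rate_ph(t) * dt;
        t += dt;
    }
    return t;
}

int main(void)
{
    double t = 0.0;
    for (int i = 0; i < 5; i++) {
        t = next_photon(t);
        printf("photon %d at t = %.4f s\n", i + 1, t);
    }
    return 0;
}
\end{verbatim}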
For pulsars in binary systems an additional modulation at the orbital period should be taken into account. For a precise pulsar simulator intended to produce a realistic list of photon arrival times, we need to include all these effects (to transform to the observational frame). All these procedures except the binary modulation are now implemented in the code. The real arrival time of a photon from a pulsar must first be barycentered and then assigned a phase.

\subsubsection{Barycentric effects}\label{barydecorr}

The first step in analyzing pulsar data is the conversion from the arrival times at the spacecraft, usually expressed in Terrestrial Time TT or TAI, to the arrival times at the Solar System barycenter, expressed in Barycentric Dynamical Time TDB. Taking into account both the motion of the spacecraft through space and the general relativistic effects due to the gravitational field of the Sun (i.e. the Shapiro delay), the simulator computes the inverse of the barycentric correction by considering the position of the Earth and of the spacecraft in the Solar System, and the position of the Sun. The accuracy of the computation of these ``de-corrections'' is hard-coded in the program.

\subsubsection{Period change and ephemerides}\label{pchange}

The rotational energy of a radio pulsar decreases with time and hence the period increases with time. For gamma-ray pulsar science the radio ephemerides are fundamental for assigning the correct phase to each photon. If we know the frequency $f(t_{0})$ and its derivatives $\dot{f}(t_{0})$ and $\ddot{f}(t_{0})$ at a certain time $t_{0}$, known as the {\em epoch}, the phase is then the fractional part of the accumulated number of turns:
\begin{equation}\label{phit}
\phi(t) = \mathrm{frac}\left[ f(t_{0})(t-t_{0}) + \frac{1}{2}\dot{f} (t_{0})(t-t_{0})^{2} + \frac{1}{6}\ddot{f} (t_{0})(t-t_{0})^{3}\right].
\end{equation}
The interval between two photons must be ``de-corrected'' for this effect. In the parameters file the user can specify a set of ephemerides with the corresponding epoch of validity expressed in Modified Julian Date. The simulator then computes the appropriate arrival time such that, after applying the barycentric corrections and Eq.~\ref{phit}, the correct phase is obtained.

\begin{figure}[ht]
\centerline{\epsfxsize=8cm\epsfbox{procs-figfv.eps}}
\caption{An example of the spectrum of a simulated pulsar created by PulsarSpectrum. The points represent the extracted photons with energies above 100 MeV. \label{fv}}
\end{figure}

\section{Conclusions}

{\em PulsarSpectrum} has been successfully used by the GLAST collaboration for testing the functionality of the LAT Science Analysis Tools, a set of analysis programs specifically designed to analyse the LAT data after launch. These tools are periodically evaluated through specific checkout phases. In the absence of real data there is a strong need for detailed simulated data, much of it provided by {\em PulsarSpectrum}. {\em PulsarSpectrum} is the most probable candidate to be used as the pulsar simulator in the GLAST Data Challenge 2, an analysis phase where some months of LAT data are simulated. An interesting new opportunity is now arising from the creation of the LAT Science Groups, which study GLAST science opportunities on specific topics. Our simulator has all the characteristics needed to meet the requirements of the LAT Pulsar Science Group.
\section{The Planarity of Milky Way Satellites} Holmberg (1969) reported that satellite galaxies of spiral primaries with projected separations $r_{\mathrm{p}} \lesssim 50$~kpc tend to lie near the short axes of the light distributions of their primaries. Zaritsky et al. (1997) revisited this issue and found evidence for alignment in the same sense as Holmberg for satellites within $r_{\mathrm{p}} \sim 200-500$~kpc. The angular distribution of galaxies has been of recent interest with the increased data from large surveys and the possibility of using these data to relate the orientations of galaxies to their halos in a statistical way (e.g., Sales \& Lambas 2004; Brainerd 2004; Azzaro et al. 2005, A05), as well as studies of satellites in the Local Group. Kroupa et al. (2005, K05) recently argued that the nearly planar distribution of Milky Way (MW) satellites is a serious challenge to the standard cold dark matter (CDM) paradigm of structure formation. Zentner et al. (2005, Z05) addressed this issue from a theoretical standpoint, and similar results were reported in the contemporaneous papers of Libeskind et al. (2005, L05) and Kang et al. (2005). They showed that the conclusions of K05 were incorrect for two reasons: first, the statistical analysis of K05 was not valid for small samples, such as the $11$ observed MW satellites, and for such samples the statistic they used is non-discriminatory; second, K05 incorrectly assumed that the null hypothesis for CDM should be an isotropic satellite distribution. \begin{figure}[t] \centering \includegraphics[width=11cm]{zentner_fig1.eps} \caption{ {\it Left:} The cumulative fraction of satellites with an angular position $<\vert \cos(\theta)\vert$ from the major axis of the host halo mass distribution as a function of $\vert \cos(\theta)\vert$. The {\em thin, solid} line represents an isotropic distribution. The {\em dashed, dotted}, and {\em dot-dashed} lines are the distributions of subhalos of $3$ simulated MW host halos, with $V_{\mathrm{max}}^{\mathrm{sat}} \ge 0.075V_{\mathrm{max}}^{\mathrm{host}}$. The {\em thick, solid} line represents the $11$ MW satellites. The MW satellites are placed on the plot by {\em assuming} that the rotation axis of the MW is aligned with the major axis of the halo. {\it Right:} The differential fraction of subhalos as a function of angular displacement from the major axis of the primary, cluster halo. An isotropic distribution is uniform in $\vert \cos(\theta)\vert$. The {\em triangles} show the results from $8$ dissipationless cluster simulations employing adiabatic gas physics. The {\em squares} show results from simulations of the same $8$ clusters including radiative gas cooling and star formation. } \label{f1} \end{figure} Z05 showed that the distribution of CDM subhalos is anisotropic. Subhalos or subsets thereof, the likely sites of galaxy formation, are preferentially aligned near the long axes of the triaxial mass distributions of their primary halos. This is shown explicitly in the left panel of Figure~\ref{f1} for a sample of $3$ simulated, approximately MW-sized CDM halos (see Z05 for details). The angle between the major axis of the primary halo and the position of the subhalo is $\theta$, and an isotropic distribution is uniform in the variable $\cos(\theta)$. The principal axes of the host halo were computed using only particles within $30\%$ of the halo virial radius, to focus on the region where the central galaxy resides. The satellites were selected to have maximum circular velocities $V_{\mathrm{max}}^{\mathrm{sat}} \ge 0.075V_{\mathrm{max}}^{\mathrm{host}}$, where $V_{\mathrm{max}}^{\mathrm{host}}$ is that of the host halo. These satellites are roughly the size required to host the observed MW dwarf satellites (e.g., Kravtsov et al. 2004). The Kolmogorov-Smirnov probability of selecting the simulated subhalo sample from an isotropic distribution is $P_{\mathrm{KS}} \sim 10^{-4}$.
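As a minimal illustration of this test (a sketch only: random placeholder values stand in for the measured $\vert \cos(\theta)\vert$ of the subhalo sample, and this is not the actual Z05 pipeline), the sample can be compared against the uniform distribution expected for isotropy:
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder |cos(theta)| values; an aligned (anisotropic)
# sample would cluster near 1.
cos_theta = rng.random(100)

# Isotropy implies |cos(theta)| is uniform on [0, 1].
D, p_ks = stats.kstest(cos_theta, "uniform")
print(f"KS statistic D = {D:.3f}, P_KS = {p_ks:.3g}")
\end{verbatim}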
In addition, Z05 demonstrated that planar distributions of subhalos, similar to that of the MW satellites, are not unlikely, largely due to accretion along preferred directions. Such planes are typically aligned with the major axis of the primary halo. Thus, the MW satellites are consistent with CDM predictions, provided that the pole of the MW is aligned with the major axis of the surrounding halo. The metal-poor globular clusters surrounding the MW and the satellites of M31 show evidence of a similar alignment, and new techniques may yield constraints on the orientation of the MW halo (e.g., Gnedin et al. 2005). However, such alignments present a challenge for simple scenarios of disk galaxy formation because the angular momenta of DM halos tend to be perpendicular to halo major axes. The results of Z05, Kang et al. (2005) and L05 are all based on dissipationless $N$-body simulations; however, one of the effects of baryonic dissipation is to make DM halos more spherical than their counterparts in dissipationless simulations (e.g., Kazantzidis et al. 2004). One may inquire whether the alignment of satellites along the principal axes of host halos is as prevalent in dissipational simulations. One might expect differences between dissipational and dissipationless simulations to be small in this regard, because both the major axis and the positions of satellites reflect the directions of recent accretion along filaments, and because subhalos are biased toward large halo-centric distances compared to DM ($r \gtrsim 0.3R_{\mathrm{vir}}$), where the change in shape is small. The right panel of Figure~\ref{f1} is an explicit demonstration that dissipational processes do not significantly alter the alignment of halo and satellites. The figure shows an analysis of the eight cluster halos of Kazantzidis et al. (2004) simulated once with dissipationless, adiabatic gas physics and a second time including radiative cooling and star formation. Though the inner halos in the cooling simulations are significantly rounder, the alignment of host halo and satellites remains pronounced. \section{Is There an Angular Bias Between Subhalos and Dark Matter?} \begin{figure}[t] \centering \includegraphics[width=11cm]{zentner_fig2.eps} \caption{ The shape distribution of DM compared to the distribution of satellite halos. The {\em left} panel shows the axis ratio $(b/a)$, while the {\em right} panel shows the axis ratio $(c/a)$. Each panel is a scatter plot of the axis ratios of {\em all} DM in each host halo on the {\em horizontal} axis against the axis ratios of each of the subhalo populations on the {\em vertical} axis. The {\em triangles} show number-weighted subhalo axis ratios and the {\em squares} represent mass-weighted subhalo axis ratios. All subhalos with $V_{\mathrm{max}}^{\mathrm{sat}} \ge 0.1V_{\mathrm{max}}^{\mathrm{host}}$ are included.
} \label{f2} \end{figure} It is interesting to quantify the relationship between the spatial distributions of the smooth, DM components of host halos and the subhalos that reside within them. Do the subhalos simply follow the triaxial mass distribution? There are several potential ways to address this issue, such as computing angular correlations, and I discuss two intriguing quantifications in this section. One way to address the relationship of subhalos and DM is through the ratios of the principal axes of inertia, denoted $a \ge b \ge c$. For subhalos, the inertia tensor can be computed in two ways: in the first, each subhalo is counted equally; in the second, each subhalo is weighted in proportion to its bound mass. The results are a ``number-weighted'' and a ``mass-weighted'' inertia tensor, respectively (a sketch of this computation is given below). Figure~\ref{f2} shows a comparison between the axis ratios of the host DM halos, computed as specified above, and the mass- and number-weighted axis ratios of their subhalo populations. The sample consists of $26$ hosts with $180\, \mathrm{km\,s}^{-1} \le V_{\mathrm{max}}^{\mathrm{host}} \le 400 \, \mathrm{km\,s}^{-1}$ and their subhalos, simulated with the ART code (Kravtsov et al. 1997). The particle mass is $m_{\mathrm{p}} = 4.9 \times 10^6 \, h^{-1}\mathrm{M}_{\odot}$, the spatial resolution is $\sim 150 \, h^{-1}\mathrm{pc}$, and each host contains $\gtrsim 2\times 10^5$ particles within its virial radius. The number-weighted axis ratios in Fig.~\ref{f2} show that the full number-weighted satellite populations broadly trace the DM distributions of their host halos. However, notice that the mass-weighted axis ratios are systematically smaller than those of the DM in the host halo. More massive subhalos are more strongly biased toward a flattened distribution than small subhalos, a result consistent with the studies of Z05, L05, and A05. The robustness of this result has been checked by randomly re-assigning the weights (masses in this case) among the subhalo populations. The axis ratios based on these randomized weights are generally similar to the number-weighted axis ratios shown in Figure~\ref{f2}.
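A minimal sketch of the weighted axis-ratio computation follows (illustrative names and conventions, not the actual analysis code; positions are assumed to be relative to the host center):
\begin{verbatim}
import numpy as np

def axis_ratios(pos, weights=None):
    """Axis ratios (b/a, c/a) from the weighted second-moment
    tensor I_ab = sum_k w_k x_ka x_kb / sum_k w_k.
    pos: (N, 3) subhalo positions relative to the host center.
    weights: None -> number-weighted; masses -> mass-weighted."""
    if weights is None:
        weights = np.ones(len(pos))
    I = np.einsum("k,ka,kb->ab", weights, pos, pos) / weights.sum()
    eig = np.sort(np.linalg.eigvalsh(I))[::-1]   # a^2 >= b^2 >= c^2
    a, b, c = np.sqrt(eig)
    return b / a, c / a
\end{verbatim}
Randomly permuting {\tt weights} among the subhalos implements the randomization check described above.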
This angular bias for large subhalos is not entirely surprising. The smallest subhalos have generally been accreted over an extended period of time and interact gravitationally as DM particles, adopting a self-consistent configuration with the host potential. The largest subhalos have typically been accreted more recently, so they more faithfully reflect the directions of recent infall, and they tend to be more strongly biased to form in overdense filaments. As a second comparison between substructure and smooth mass, consider the 2D projected fraction of mass in substructure, $f_{\mathrm{sub}}$. This quantity is constrained by measurements of flux ratio anomalies in multiply-imaged quasar systems (e.g., Dalal \& Kochanek 2002), and can be used as a probe of cosmological parameters that influence the growth of small-scale structure. Zentner \& Bullock (2003) and Mao et al. (2004) have made predictions for the mean projected substructure mass fractions, with the former considering a variety of primordial power spectra and dark matter properties. In what follows I show the projected substructure mass fraction as a function of the projection angle $\theta$ from the major axis of the host halo. Following Mao et al. (2004), I have computed $f_{\mathrm{sub}}(\theta)$ as a function of projection angle using all mass and substructures within $3$ virial radii of the center of each host, in order to include correlated material associated with each halo. I projected in cylinders of radius $r_{\mathrm{p}} = 0.03R_{\mathrm{vir}}$, comparable to the Einstein radii of strong-lens systems. The result is shown in Figure~\ref{f3}, along with the observed $90$\% confidence region for $f_{\mathrm{sub}}$ measured in quadruply-imaged systems by Dalal \& Kochanek (2002). The mean substructure mass fraction is approximately $f_{\mathrm{sub}} \approx 0.4$\%, with a large scatter, consistent with Mao et al. (2004). Interestingly, $f_{\mathrm{sub}}(\vert \cos(\theta)\vert)$ is $\sim 5-6$ times higher for projections near the long axis of the host. If elliptical galaxies are well aligned with their host halos, this result may have important consequences for determinations of $f_{\mathrm{sub}}$ in multiply-imaged quasar systems and several other observed properties of strong lenses. \begin{figure}[t] \centering \includegraphics[width=6cm]{zentner_fig3.eps} \caption{ Substructure mass fractions as a function of projection angle from the major axis of the host halo. The {\em squares} represent the average $f_{\mathrm{sub}}$ measured from all $26$ host halos, using two projections for each host. The {\em outer} errorbars represent the scatter among projections and the {\em inner} errorbars represent the estimated error in the mean of $f_{\mathrm{sub}}$. The shaded band represents the $90$\% confidence region of $f_{\mathrm{sub}}$ from the measurement of Dalal \& Kochanek (2002). } \label{f3} \end{figure} \newpage \acknowledgements I am grateful to my collaborators Brandon Allgood, Oleg Gnedin, Anatoly Klypin, Andrey Kravtsov, Daisuke Nagai, and Eduardo Rozo for their invaluable contributions to this research. I thank James Bullock, Neal Dalal, Stelios Kazantzidis, Chuck Keeton, Ben Metcalf, and Jeremy Tinker for helpful discussions. ARZ is funded by The Kavli Institute for Cosmological Physics at The University of Chicago and by the National Science Foundation under grant No. NSF PHY 0114422. \endacknowledgements
\section{Introduction} Estimates from the current pulsar birth rate and from the number of supernovae required to account for the heavy-element abundance suggest that in total 10$^8$ to 10$^9$ isolated neutron stars exist in our Galaxy. Only a small fraction is detectable as young neutron stars due to their emission of thermal X-rays (for $\sim$10$^6$ years) or pulsed radio emission (for $\sim$10$^8$ years). Proposals that an old ``recycled'' population of isolated neutron stars, re-heated by accretion from the interstellar medium (ISM), should be detectable by ROSAT triggered several projects to search for such objects in the ROSAT all-sky survey data. Over the last decade seven very soft X-ray sources with particular characteristics were discovered in ROSAT data. Extreme X-ray to optical flux ratios and low absorption column densities strongly suggest that these objects are nearby isolated neutron stars \citep[see reviews by][]{2000PASP..112..297T,2001xase.conf..244M,haberl2004COSPAR}. Using HST, parallax measurements yield a distance of 117$\pm$12 pc for RX\,J1856.5$-$3754 \citep{2002ApJ...576L.145W}. The detection of relatively high proper motion (PM) for the three brightest stars makes accretion from the ISM highly ineffective and favours the picture of cooling neutron stars with an age of $\sim$10$^5$--10$^6$ years to power the X-rays. Tracing back the apparent trajectories suggests that the brightest of the ROSAT-discovered isolated neutron stars were born in the Sco OB2 complex, which is the closest OB association \citep[see e.g.][~and references therein]{2005A&A...429..257M}. The X-ray spectra of the ``magnificent seven'', as they are sometimes called in the literature, are thermal and blackbody-like, without the hard power-law tail that is often observed in other isolated neutron stars \citep[e.g.][]{2002nsps.conf..273P}. Typical observed blackbody temperatures kT are in the range of 40$-$110 eV (see Table 1). X-ray pulsations were detected from five stars, with pulse periods between 3 s and 12 s and pulsed fractions between 4\% and 18\% (Fig.\,\ref{fig-pulses}). However, for the X-ray brightest star RX\,J1856.5$-$3754 no pulsations were found, with a stringent upper limit on periodic variations of 1.3\% (2$\sigma$ confidence level in the 0.02 - 1000 s range) \citep{2003A&A...399.1109B}. A surprising discovery was that the X-ray spectrum and the pulsed fraction observed from RX\,J0720.4$-$3125 change on a time-scale of years, which may be caused by precession of the neutron star \citep{2004A&A...415L..31D}.
\begin{table*} \caption[]{X-ray and optical properties of the magnificent seven} \begin{tabular}{lcccccc} \hline\noalign{\smallskip} \multicolumn{1}{l}{Object} & \multicolumn{1}{c}{kT} & \multicolumn{1}{c}{Period} & \multicolumn{1}{c}{Amplitude} & \multicolumn{1}{c}{Optical} & \multicolumn{1}{c}{PM} & \multicolumn{1}{c}{Ref.} \\ \multicolumn{1}{l}{} & \multicolumn{1}{c}{eV} & \multicolumn{1}{c}{s} & \multicolumn{1}{c}{\%} & \multicolumn{1}{c}{mag} & \multicolumn{1}{c}{mas/year} & \multicolumn{1}{c}{} \\ \noalign{\smallskip}\hline\noalign{\smallskip} RX\,J0420.0$-$5022 & 44 & 3.45 & 13 & B = 26.6 & & 1 \\ RX\,J0720.4$-$3125 & 85-95 & 8.39 & 8-15 & B = 26.6 & 97 & 2,3,4,5,6 \\ RX\,J0806.4$-$4123 & 96 & 11.37 & 6 & B $>$ 24 & & 7,1 \\ RBS\,1223$^{(1)}$ & 86 & 10.31 & 18 & m$_{\rm 50ccd}$ = 28.6 & & 8,9,10,11 \\ RX\,J1605.3+3249 & 96 & $-$ & $-$ & B = 27.2 & 145 & 12,13,14,15 \\ RX\,J1856.5$-$3754 & 60 & $-$ & $<$1.3 & V = 25.7 & 332 & 16,17,18 \\ RBS\,1774$^{(2)}$ & 101 & 9.44 & 4 & R $>$ 23 & & 19,20 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} $^{(1)}$ = 1RXS\,J130848.6+212708\\ $^{(2)}$ = 1RXS\,J214303.7+065419\\ References: (1) \citet{2004A&A...424..635H} (2) \citet{1997A&A...326..662H} (3) \citet{2001A&A...365L.302C} (4) \citet{2004A&A...419.1077H} (5) \citet{2004A&A...415L..31D} (6) \citet{2003A&A...408..323M} (7) \citet{2002A&A...391..571H} (8) \citet{1999A&A...341L..51S} (9) \citet{2002A&A...381...98H} (10) \citet{2002ApJ...579L..29K} (11) \citet{2003A&A...403L..19H} (12) \citet{1999A&A...351..177M} (13) \citet{2003ApJ...588L..33K} (14) \citet{2004ApJ...608..432V} (15) \citet{2005A&A...429..257M} (16) \citet{1997Natur.389..358W} (17) \citet{2002ApJ...576L.145W} (18) \citet{2003A&A...399.1109B} (19) \citet{2001A&A...378L...5Z} (20) \citet{2005astroph0503239} \end{table*} \begin{table*} \caption[]{Magnetic field estimates} \begin{tabular}{lcccc} \hline\noalign{\smallskip} \multicolumn{1}{l}{Object} & \multicolumn{1}{c}{dP/dt} & \multicolumn{1}{c}{E$_{\rm cyc}$} & \multicolumn{1}{c}{B$_{\rm db}$} & \multicolumn{1}{c}{B$_{\rm cyc}$} \\ \multicolumn{1}{l}{} & \multicolumn{1}{c}{10$^{-13}$ ss$^{-1}$} & \multicolumn{1}{c}{eV} & \multicolumn{1}{c}{10$^{13}$ G} & \multicolumn{1}{c}{10$^{13}$ G} \\ \noalign{\smallskip}\hline\noalign{\smallskip} RX\,J0420.0$-$5022 & $<$92 & 330? & $<$18 & 6.6? \\ RX\,J0720.4$-$3125 & 1.4$\pm$0.6 & 260 & 2.8$-$4.2 & 5.2 \\ RX\,J0806.4$-$4123 & $<$18 & & $<$14 & \\ RBS\,1223 & $<$9 & 100-300 & $<$10 & 2$-$6 \\ RX\,J1605.3+3249 & & 450-480 & & 9.1$-$9.7 \\ RX\,J1856.5$-$3754 & & & $\sim$1 & \\ RBS\,1774 & & $\sim$700 & & $\sim$14 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \end{table*} \begin{figure*} \centering \resizebox{\hsize}{!}{{\includegraphics[angle=-90,clip=true]{fhaberl_pulse0720.ps}} \hspace{5mm} \includegraphics[angle=-90,clip=true]{fhaberl_pulse0806.ps}} \vspace{5mm} \resizebox{\hsize}{!}{{\includegraphics[angle=-90,clip=true]{fhaberl_pulse1223.ps}} \hspace{5mm} \includegraphics[angle=-90,clip=true]{fhaberl_pulse0420.ps}} \caption{EPIC-pn light curves folded with the pulse period of four thermal isolated neutron stars. For direct comparison of the pulse profile the flux is normalized to the mean and plotted on the same scale. Except for RX\,J0420.0$-$5022, which shows the softest X-ray spectrum, data from the same energy band was used. 
To gain statistics, data from four observations of RX\,J0420.0$-$5022 were merged \citep{2004A&A...424..635H}.} \label{fig-pulses} \end{figure} \section{Broad absorption lines} XMM-Newton observations of the thermal isolated neutron stars revealed deviations from the Planckian shape in the X-ray spectra obtained by the EPIC-pn and RGS instruments. Fig.\,\ref{fig-bbfits} shows a comparison of the EPIC-pn spectra of the six best observed thermal isolated neutron stars fitted with an absorbed blackbody model. Large residuals are seen from RBS\,1223 \citep{2003A&A...403L..19H}, RX\,J0720.4$-$3125 \citep{2004A&A...419.1077H} and RX\,J1605.3+3249; in the latter case the deviations were discovered in RGS spectra \citep{2004ApJ...608..432V}. Non-magnetic neutron star atmosphere models \citep[e.g.][]{2002A&A...386.1001G,2002nsps.conf..263Z} cannot explain the X-ray spectra: iron and solar-mixture atmospheres produce too many absorption features and deviations from a blackbody model, in particular at energies between 0.5 and 1.0 keV, which are not seen in the measured spectra. On the other hand the spectrum of a pure hydrogen model is similar in shape to that of a blackbody and does not fit the data either. Moreover, hydrogen atmosphere models over-predict the actually observed optical fluxes by large factors \citep[$\sim$300, see ][]{2002nsps.conf..263Z}. The XMM-Newton spectra can best be modeled with a Planck continuum including a broad, Gaussian-shaped absorption line (Fig.\,\ref{fig-gaussfits}). Line centroid energies are summarized in Table\,2. In the EPIC-pn data of RBS\,1223 \citep[see also ][]{2005A&A...INS} and RX\,J0720.4$-$3125 the depth of the absorption line (or the equivalent width) was found to vary with pulse phase. In the cases of RX\,J0806.4$-$4123 and RX\,J0420.0$-$5022 it is not clear to what extent the residuals of the blackbody fits are caused by systematic calibration uncertainties \citep{2004A&A...424..635H}. In particular RX\,J0806.4$-$4123 shows a residual pattern similar to that of RX\,J1856.5$-$3754, which is believed to exhibit a pure blackbody spectrum, as seen from the high resolution Chandra LETGS spectrum \citep{2003A&A...399.1109B}. For RBS\,1774 an absorption feature at $\sim$0.7 keV was recently reported from the analysis of EPIC spectra \citep{2005astroph0503239}. At such high energies (the highest line energy reported from the thermal isolated neutron stars) the energy resolution is better and the calibration uncertainties are smaller than at lower energies. \begin{figure*} \centering \resizebox{\hsize}{!}{{\includegraphics[angle=-90,clip=true]{fhaberl_spec1856.ps}} \hspace{3mm} \includegraphics[angle=-90,clip=true]{fhaberl_spec0806.ps}} \vspace{3mm} \resizebox{\hsize}{!}{{\includegraphics[angle=-90,clip=true]{fhaberl_spec0420.ps}} \hspace{3mm} \includegraphics[angle=-90,clip=true]{fhaberl_spec1605.ps}} \vspace{3mm} \resizebox{\hsize}{!}{{\includegraphics[angle=-90,clip=true]{fhaberl_spec1223.ps}} \hspace{3mm} \includegraphics[angle=-90,clip=true]{fhaberl_spec0720.ps}} \caption{EPIC-pn and RGS spectra of thermal isolated neutron stars fitted with an absorbed blackbody model.
The fits represent the calibration status as available with SAS release 6.0.0.} \label{fig-bbfits} \end{figure*} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics[angle=-90,clip=true]{fhaberl_spec1223l.ps}} \vspace{3mm} \resizebox{\hsize}{!}{\includegraphics[angle=-90,clip=true]{fhaberl_spec0720l.ps}} \vspace{3mm} \resizebox{\hsize}{!}{\includegraphics[clip=true]{2004ApJ...608..432V_f1.eps}} \caption{Top two panels: EPIC-pn and RGS spectra of two thermal isolated neutron stars fitted with an absorbed blackbody model including a broad absorption line. Bottom panel: RGS spectrum of RX\,J1605.3+3249 modeled with a blackbody continuum and a broad absorption line. A possible additional narrow absorption line at 21.5\AA\ is also included \citep[from][]{2004ApJ...608..432V}.} \label{fig-gaussfits} \end{figure} \section{Strongly magnetized neutron stars} An H$\alpha$ emission line nebula was discovered around RX\,J1856.5$-$3754 \citep{2001A&A...380..221V}. Assuming that magnetic dipole braking powers this nebula, and using an age of the star of 5 $\times$ 10$^5$ years \citep{2002ApJ...576L.145W}, allows an estimate of the magnetic field strength of the neutron star of B $\sim$ 10$^{13}$ G \citep{2002ApJ...580.1043B,2005Truemper}. A similar magnetic field strength of (2.8$-$4.2) $\times$ 10$^{13}$ G was inferred from the pulse period history of RX\,J0720.4$-$3125 as observed with ROSAT, Chandra and XMM-Newton over a time span of 10 years \citep{2004MNRAS.351.1099C}. These were the first indications that the thermal isolated neutron stars possess strong magnetic fields of the order of $10^{13}-10^{14}$ G. Such strong fields are indeed required to spin down the neutron stars to their current long rotation periods within $10^{5}-10^{6}$ years (while still being sufficiently hot to be detected in X-rays) if they were born with millisecond periods. Cyclotron resonance absorption features in the 0.1$-$1.0 keV band are expected in X-ray spectra from magnetized neutron stars with field strengths in the range of 10$^{10} - 10^{11}$ G or 2 $\times$ 10$^{13} - 2 \times$ 10$^{14}$ G if caused by electrons or protons, respectively \citep[see e.g.][]{2001ApJ...560..384Z,2002nsps.conf..263Z}. Variation of the magnetic field strength over the neutron star surface (as expected for dipole fields) leads to a broadening of the line \citep{2004ApJ...607..420H}. The strong magnetic fields inferred from magnetic dipole braking effects in at least two of the stars suggest that the broad absorption features seen in the X-ray spectra of thermal isolated neutron stars originate from cyclotron resonance absorption by protons or highly ionized atoms of heavy elements. With a mass-to-charge ratio of $\sim$2 with respect to protons, the latter case would lead to a $B$ a factor of $\sim$2 higher than that derived for protons. Different ionization states would result in a series of lines with energies differing by only a few percent, leading to additional broadening of the lines. An alternative possibility for the origin of the absorption line is atomic bound-bound transitions. In strong magnetic fields atomic orbitals are distorted into a cylindrical shape and the electron energy levels are similar to Landau states, with the binding energies of atoms significantly increased. For example, for hydrogen in a magnetic field of the order of 10$^{13}$ G the strongest atomic transition is expected at energy E/eV $\approx$ 75(1+0.13ln(B$_{13}$))+63B$_{13}$, with B$_{13}$ = B/10$^{13}$ G \citep{2002nsps.conf..263Z}.
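As a simple numerical illustration (a sketch, not part of the original analysis), the proton cyclotron scaling of $\approx$63 eV per 10$^{13}$ G implied by the field range quoted above, and the hydrogen atomic-transition formula, can be evaluated as follows:
\begin{verbatim}
import math

def e_cyc_proton(B13):
    """Proton cyclotron energy in eV for B = B13 x 10^13 G,
    neglecting gravitational redshift: ~63 eV per 10^13 G."""
    return 63.0 * B13

def e_atomic_hydrogen(B13):
    """Strongest H transition: E/eV ~ 75(1+0.13 ln B13) + 63 B13."""
    return 75.0 * (1.0 + 0.13 * math.log(B13)) + 63.0 * B13

for B13 in (2.0, 5.2, 14.0):
    print(f"B13 = {B13:5.1f}: E_cyc = {e_cyc_proton(B13):5.0f} eV, "
          f"E_atomic = {e_atomic_hydrogen(B13):5.0f} eV")
\end{verbatim}
For example, B$_{13}$ = 5.2 gives E$_{\rm cyc}$ $\approx$ 330 eV at the surface, which a gravitational redshift factor of $\sim$0.8 (a typical neutron-star value, assumed here for illustration) brings close to the 260 eV listed for RX\,J0720.4$-$3125 in Table 2.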
For the line energies found in the spectra of thermal isolated neutron stars this would require field strengths similar to those derived assuming cyclotron absorption. Atomic line transitions are expected to be less prominent at higher temperatures because of the higher degree of ionization \citep{2002nsps.conf..263Z}. \section{Conclusions} Although the true origin of the broad absorption lines in the X-ray spectra of thermal isolated neutron stars is not yet clear, our current knowledge about the ``magnificent seven'' strongly suggests that they are highly magnetized ($10^{13} - 10^{14}$ G), slowly rotating cooling neutron stars. Further timing studies would be very useful to obtain more independent estimates of the magnetic field strength (as these currently exist only for RX\,J0720.4$-$3125). No radio emission is detected, probably because the radio beams are very narrow due to the large light-cylinder radii. The discovery of a few radio pulsars with similar magnetic field strengths and long periods \citep{2000ApJ...541..367C,2002MNRAS.335..275M,2003ApJ...591L.135M} shows that radio emission can still occur at inferred field strengths higher than the ``quantum critical field'' $B_{cr}= m_e^2c^3/e\hbar \simeq 4.4\times 10^{13}$ G. On the other hand, any non-thermal emission from the ``magnificent seven'' may so far just fall below the detection threshold of current instruments \citep{2005astroph0503239}. \begin{acknowledgements} The XMM-Newton project is supported by the Bundesministerium f\"ur Bildung und For\-schung / Deutsches Zentrum f\"ur Luft- und Raumfahrt (BMBF / DLR), the Max-Planck-Gesellschaft and the Heidenhain-Stif\-tung. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} This article presents a detailed numerical study of the thermodynamics and of the dynamics of a model introduced several years ago by Kolafa and Nezbeda\cite{Kol87a} as a primitive model for water (PMW). The model envisions a water molecule as a hard sphere (HS) whose surface is decorated by four short-ranged ``sticky'' spots, arranged according to a tetrahedral geometry, two of which mimic the protons and two the lone pairs. Beyond its original motivation, the Kolafa and Nezbeda model is representative of the larger class of particles interacting via localized and directional interactions, a class of systems which includes, besides network-forming molecular systems, also proteins\cite{Lom99a,Sea99c,Ker03a} and newly designed colloidal particles\cite{Man03a}. Indeed, recent developments in colloidal science are starting to provide particles with specific directional interactions\cite{Yet03aNature}. In the same way as sterically stabilized colloids have become the ideal experimental model for realizing the hard-sphere fluid, novel physico-chemical techniques will soon make colloidal analogs of several molecular systems available to the community. A colloidal water is probably not far from being realized. Recent work\cite{Zac05a} has focused on the dynamics of colloidal particles interacting with a restricted number of nearest neighbors. In Refs.~\cite{Zac05a, Mor05a} particles interact via a limited-valency square-well model\cite{Spe94aMP,Spe95aMP,Spe96aMP}, imposing a many-body constraint on the maximum number $n_{max}$ of bonded interactions. It has been found that when $n_{max} < 6$, a significant shrinking of the liquid-gas (or colloid-rich--colloid-poor) spinodal takes place. A window of packing fraction values opens up in which it is possible to reach very low temperatures (and hence states with extremely long bond lifetimes) without encountering phase separation. This favors the establishment of a spanning network of long-living bonds, which in the colloidal community provides an indication of gel formation but which, in the field of network-forming liquids, would rather be classified as glass formation. The study of the dynamics of the PMW provides a test of the $n_{max}=4$ results, in the absence of many-body interactions and in the presence of a geometric correlation between the bonding sites, while retaining the maximum valency. This article, by reporting results on a model which can at the same time be considered as a simple model for the new generation of patchy colloids or for network-forming liquids, starts to bridge the gap between these two fields. Thermodynamic and structural properties of several primitive models for water (and other bonded systems) have been studied in detail during the last 30 years\cite{Bra85a,Kol87a,Nez89a,Nez90a,Veg98a}, since this type of primitive model has become one of the landmarks for testing theories of association\cite{Wer84a,Wer84b,Gho93a,Sea96a,Dud98a,Pee03a,Kal03a}. In particular, the theory of Wertheim\cite{Wer84a,Wer84b} has been carefully compared to early numerical studies, suggesting a good agreement between theoretical predictions and numerical data in the temperature and packing fraction regions where it was possible to achieve numerical equilibration\cite{Veg98a}.
Recently, increased computational resources, extending the range of studied state points, have clarified that deviations from the theoretical predictions start to take place as soon as the number of bonds (between different patches) per molecule increases and a network of bonded particles appears\cite{Vlc03a,Dud98a}. Geometric correlations between different bonds, not included in the theory, are responsible for the breakdown of the agreement between theory and simulation. Attempts to extend the perturbation theory beyond first order do not appear to be able to cure the problem\cite{Vlc03a}. The PMW is a good candidate for testing new theories of association and, for this reason, it is important to clearly establish numerically the low-$T$ behavior of the supercooled liquid state. The equilibrium PMW phase diagram, recently calculated\cite{Veg98a}, includes two crystal regions and a metastable fluid-gas coexistence. All previous studies of primitive models for sticky directional interactions have focused on thermodynamic and static properties of the model. However, the ability to fully exploit the fast developments taking place in colloidal physics \cite{glotzer0,glotzer1} requires understanding not only the equilibrium phases of systems of patchy particles and their modifications with external fields, but also the kinetic phase diagram\cite{Sci02a}, i.e. the regions in phase space where disordered arrested states can be expected, and when and how these states are kinetically stabilized with respect to the ordered lowest free energy phases. In this respect, it is worth starting to establish the dynamic properties of simple models of patchy interactions, since the simplicity of these models (based on hard-sphere and square-well interactions) has the potential to provide an important reference frame and may play a relevant role in deepening our understanding of dynamic arrest in network-forming liquids: in connecting arrest phenomena associated with gel formation\cite{Zac05a,delgado} (the establishment of a percolating network of long-lived bonds) with arrest related to excluded-volume effects, and in clarifying the dependence of the general dynamic and thermodynamic features on the number and spatial location of the patchy interactions. The PMW reported here is a good starting case. In this article we report thermodynamic data, extending the previously available information to lower temperatures and, for the first time, dynamic information obtained by solving the Newton equations using a new algorithm based on event-driven propagation. \section{The Model and Numerical Details} In the PMW, each particle is composed of a hard sphere of diameter $\sigma$ (defining the length scale) and of four additional sites arranged according to a tetrahedral geometry. Two of the sites (the proton sites H) are located on the surface of the hard sphere, i.e. at distance $0.5 \sigma$ from the center. The two remaining sites (the lone-pair sites LP) are located at distance $0.45 \sigma$. Besides the hard-sphere interaction, preventing different particles from sampling distances smaller than $\sigma$, only the H and LP sites of distinct particles interact, via a square-well (SW) potential $u_{SW}$ of width $\delta=0.15 \sigma$ and depth $u_0$, i.e. \begin{eqnarray} u_{SW}&=&-u_0~~~~~r<\delta \nonumber \\ &=&0~~~~~~~~\,r>\delta, \end{eqnarray} where $r$ here is the distance between the H and LP sites.
The choice of $\delta=0.15 \sigma$ guarantees that multiple bonding cannot take place at the same site. In the linear geometry, the maximum center-to-center distance at which bonding is possible is $1.1\sigma$, since the LP site is buried $0.05\sigma$ within the hard core, a value typical of short-range colloid-colloid interactions. We have studied a system of $N=350$ particles with periodic boundary conditions in a wide range of packing fractions $\phi \equiv (\pi/6) n \sigma^3$ (where $n$ is the number density) and temperatures $T$, where $T$ is measured in units of $u_0$ ($k_B=1$). We perform both Monte Carlo (MC) and event-driven molecular dynamics simulations. In one MC step, an attempt to move each particle is performed. A move is defined as a displacement in each direction by a random quantity distributed uniformly between $\pm 0.05~\sigma$ and a rotation around a random axis by a random angle distributed uniformly between $\pm 0.5$ radians. Equilibration was performed with MC, and monitored via the evolution of the potential energy (a direct measure of the number of bonds in the system). The mean square displacement (MSD) was also calculated, to guarantee that each particle has diffused on average more than its diameter. In evaluating the MSD we have taken care to subtract the center of mass displacement, an important correction in the low-$T$, long MC calculations. At low $T$, simulations required more than $10^9$ MC steps, corresponding to several months of CPU time. We have also performed event-driven (ED) molecular dynamics simulations of the same system, modeling particles as constant-density spheres of diameter $\sigma$ and mass $m$. The moment of inertia tensor is diagonal, with elements equal to $m \sigma^2/10$. The algorithm implemented to propagate the Newtonian trajectory in the presence of the patchy square-well interaction is described in detail in Appendix~\ref{appendicecris}. In ED dynamics, time is measured in units of $\sigma \sqrt{m/u_0}$. Assuming as $m$ the mass of the water molecule, as $u_0$ a typical hydrogen-bond energy ($\approx 20$ kJ/mol) and as $\sigma$ the nearest-neighbor distance in water ($0.28$ nm), the unit of time corresponds to $\approx 0.3$ ps. All static quantities have been evaluated with both MC and MD configurations, finding no differences. The pressure, measured in units of $u_0/\sigma^3$, has been calculated as the sum of three contributions: a trivial kinetic contribution, equal to $nk_BT$; a positive HS contribution; and a negative contribution arising from the SW interaction. Details of the calculation of $P$ in both MC and ED simulations are provided in Appendix \ref{pressure}. \section{Results: Static} \subsection{Potential Energy $E$} Since in the PMW each site can take part in only one bond, due to the geometric constraints fixed by the small value of $\delta$, the lowest energy configuration is defined by four bonds per particle, corresponding to a ground state energy per particle $E_{gs}=-2 $ (in units of $u_0$). Of course, this absolute ground state value may not be accessible at all $\phi$, due to the strong constraints introduced by the bonding geometry.
According to Wertheim's first order thermodynamic perturbation theory, the $T$ and $\phi$ dependence of the potential energy per particle $E$ is given by\cite{Kol87a,Nez89a,Veg98a} \begin{equation} E-E_{gs}= \frac{2}{1+c} \label{eq:u} \end{equation} where \begin{equation} c=0.5 \bigg\{ \left[ 1+192 ( e^{\frac{1}{T}} -1) \phi J \right]^{0.5} -1 \bigg\} \end{equation} \begin{equation} J=\frac{c_1 (1-\phi/2)-c_2 \phi (1+\phi)}{(1-\phi)^3} \end{equation} with $c_1=2.375 \times 10^{-5}$ and $c_2=2.820 \times 10^{-6}$\cite{Veg98a}. The Wertheim theory, which assumes uncorrelated independent bonds, predicts, as the low-$T$ limit of Eq.~\ref{eq:u}, an Arrhenius $T$-dependence, \begin{equation} \lim_{T \rightarrow 0} E-E_{gs}= \frac{4}{\sqrt{192 \phi J}} e^{-0.5/T} \label{eq:ulowT} \end{equation} i.e., with an activation energy of half the bond energy. It is worth observing that such an Arrhenius law, with an activation energy equal to $0.5$, characterizes the low-$T$ dependence of the energy in the $n_{max}$ model\cite{Mor05a,Zac05a} [a model of particles interacting via a SW potential with an additional constraint on the maximum number of bonds], where no geometric correlation between bonds is imposed. \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{epot.eps} \includegraphics[width=0.45\textwidth]{epot2.eps} \caption{Potential energy for the PMW. The top panel shows data for all studied isochores as a function of $T$. The lower panel shows an enlargement of the low $T$ region, where the network is fully developed. Note that for this model, the lowest possible energy is $E_{gs}=-2$.} \label{fig:ene} \end{figure} Fig.~\ref{fig:ene} shows the $T$ dependence of the potential energy for different isochores. As discussed in detail in the following, for $\phi \lesssim 0.24$ a phase separation is encountered on cooling, preventing the possibility of equilibrating one-phase states below $T \approx 0.11$. For $\phi > 0.24$ the system remains homogeneous down to the lowest investigated $T$. The low-$T$ behavior is expanded in Fig.~\ref{fig:ene}-(bottom). With the extremely long equilibration runs performed, proper equilibration is reached only for $T \gtrsim 0.09$. The enlargement of the low-$T$ region shows that the absolute ground state value $-2 u_0$ is closely approached at $\phi \approx 0.3$. At higher or smaller $\phi$, the potential energy appears to approach a constant value larger than $-2 u_0$. Consistent with previous claims\cite{Veg98a}, high-$T$ data are very well represented by first order thermodynamic perturbation theory. Systematic deviations between theory and simulation data appear as soon as the number of bonds per particle becomes bigger than one. Comparing the simulation data with the Wertheim theory, it is confirmed that the physics of the network formation is completely missing in the first-order perturbation theory.
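For reference in these comparisons, Eq.~\ref{eq:u} and its low-$T$ limit Eq.~\ref{eq:ulowT} translate directly into a short script (a convenience sketch, with $k_B=1$ and energies in units of $u_0$):
\begin{verbatim}
import numpy as np

C1, C2 = 2.375e-5, 2.820e-6   # constants entering J(phi)

def J(phi):
    return (C1 * (1 - phi / 2) - C2 * phi * (1 + phi)) / (1 - phi) ** 3

def wertheim_energy(T, phi):
    """First-order Wertheim prediction for E - E_gs."""
    c = 0.5 * (np.sqrt(1 + 192 * (np.exp(1 / T) - 1) * phi * J(phi)) - 1)
    return 2.0 / (1.0 + c)

def wertheim_energy_lowT(T, phi):
    """Low-T Arrhenius limit: activation energy of half a bond."""
    return 4.0 / np.sqrt(192 * phi * J(phi)) * np.exp(-0.5 / T)
\end{verbatim}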
\begin{figure}[tbh] \centering \includegraphics[width=0.45\textwidth]{evsrhowert.eps} \caption{Potential energy vs. $\phi$ along isotherms. Symbols: simulation data. Lines: Wertheim's theory.} \label{fig:enevsrho} \end{figure} Fig.~\ref{fig:enevsrho} shows the $\phi$ dependence of $E$ along isotherms. At high $T$ ($T>0.13$), a monotonic decrease of $E$ is observed, caused by the increased bonding probability induced by packing. In this $T$ region, the number of bonds is at most of the order of two per particle. Completely different is the situation at lower $T$, where the $\phi$ dependence becomes non-monotonic. There is a specific value of the packing fraction ($\phi \approx 0.3$) at which the lowest energy states are sampled. In the following we define the optimal network packing fractions as the range of packing fractions for which it is possible to fully satisfy the bonds in a disordered homogeneous structure. At $\phi \approx 0.3$, the number of bonds at the lowest investigated $T$ (the lowest $T$ at which equilibration is feasible within several months of computation time) is about $3.8$ per particle, i.e. about 95\% of the bonds are satisfied. The range of optimal $\phi$s appears to be rather small. Indeed, for packing fractions lower or higher than this optimal $\phi \approx 0.314$, the formation of a fully connected network is hampered by geometric constraints: at lower $\phi$, the large inter-particle distance acts against the possibility of forming a fully connected network, while at large $\phi$, packing constraints, which promote close-packed configurations, are inconsistent with the tetrahedral bonding geometry. Not surprisingly, $\phi=0.314$ is within the range of $\phi$ values which allow for a stable open diamond crystal phase ($0.255<\phi<0.34$)\cite{Veg98a}. A reduction of the geometric constraints (as in the $n_{max}$ model\cite{Zac05a,newzacca}) increases the range of optimal $\phi$. It is also worth noting that the liquid side of the spinodal curve is close to the region of optimal network $\phi$. The existence of a convex form of the potential energy (here for $\phi \gtrsim 0.3$) has been observed in several other models for tetrahedral networks, including models for water (and water itself\cite{Sci97c}). It has been pointed out that a negatively convex $\phi$ dependence is indicative of a destabilization of the free energy\cite{Sci97c} and a precursor of a possible liquid-liquid critical point (in addition to the lower-$\phi$ gas-liquid one). Liquid-liquid critical points have been observed in several models for water\cite{Poo92a,Poo93c,Yam02a,Poo05JPCM,Pas05aPRL,Bro05JCP}. Indeed, the Helmholtz free energy $A$ is related to $U$ (the sum of the kinetic and potential energy) via $A=U-TS$, where $S$ is the entropy. The curvature of an isotherm of $A$ must be positive for a homogeneous phase of a specified volume $V$ to be thermodynamically stable. The curvature of $A$ can be expressed as \begin{equation} \left( \frac{\partial^2 A}{\partial V^2}\right)_T= \left( \frac{\partial^2 U}{\partial V^2}\right)_T-T \left( \frac{\partial^2 S}{\partial V^2}\right)_T \label{eq:d2a} \end{equation} Since $P=-\left( \frac{\partial A}{\partial V}\right)_T$, the isothermal compressibility $K_T= -\frac{1}{V} (\partial V/\partial P)_T$ is related to the curvature of $A$ by \begin{equation} \frac{1}{ K_T}= V \left[ \left( \frac{\partial^2 U}{\partial V^2}\right)_T-T \left( \frac{\partial^2 S}{\partial V^2}\right)_T \right] \end{equation} The curvature of $A$ is thus proportional to $1/K_T$ at fixed $V$. Since $1/K_T$ must be positive for a thermodynamically stable state, in the range of $V$ in which $\left( \frac{\partial^2 U}{\partial V^2}\right)_T<0$ the contribution of the internal energy reduces the thermodynamic stability of the liquid phase. The liquid remains stable where $U$ has negative curvature only because the contribution of the entropic term in Eq.~\ref{eq:d2a} is large enough to dominate.
Yet entropic contributions to these thermodynamic quantities are suppressed as $T$ decreases, due to the occurrence of the factor of $T$ in the second term on the right-hand side of Eq.~\ref{eq:d2a}. Hence the $U-V$ data suggest that at lower $T$ a single homogeneous phase of the liquid will not be stable for certain values of $V$, leading to a separation into two distinct liquid phases of higher and lower volume. Due to the predominant role of $E$ in the free energy at low $T$, the possibility that the PMW liquid phase-separates into two liquid phases of different $\phi$, for $\phi>0.3$ and $T$ lower than those we are currently able to equilibrate, should be considered. \begin{figure}[tbh] \centering \includegraphics[width=0.45\textwidth]{epotusut.eps} \caption{Arrhenius representation ($E-E_{gs}$ vs $1/T$) of the potential energy around the optimal network density. The dashed line, shown as a reference, has an activation energy of $-u_0$.} \label{fig:eneusut} \end{figure} Fig.~\ref{fig:eneusut} shows $\ln(E-E_{gs})$ vs $1/T$. At the optimal $\phi$, the energy of the fully connected state is approached with an Arrhenius law, characterized by an activation energy of $\approx 1 u_0$, clearly different from the $0.5$ value predicted by the Wertheim theory. For larger $\phi$ values, the data suggest that the lowest reachable state has an energy different from $-2 u_0$, consistent with the expectation that on increasing $\phi$, geometric constraints forbid the development of a fully connected network even at the lowest $T$. \subsection{$P$} The Wertheim prediction for the $T$ and $\phi$ dependence of the PMW pressure (the equation of state) is \begin{eqnarray} P&=&P_{HS} \nonumber \\ && - n k_BT \frac{96 (e^{1/T} -1)}{(1+c)^2} \frac{c_1 \phi ( 1+ \phi - 0.5 \phi^2 ) - 2 c_2 \phi^2 (1+2 \phi)}{(1-\phi)^4} \end{eqnarray} where $P_{HS}$ is the pressure of the HS fluid at the same packing fraction. $P_{HS}$ is very well represented by the Carnahan-Starling EOS\cite{Han86a} \begin{equation} P_{HS}= n k_B T \frac{( 1 + \phi + \phi^2 - \phi^3)}{(1 - \phi)^3} \end{equation} The Wertheim EOS predicts a vapor-liquid critical point at $T_c=0.1031$ and $\phi_c=0.085 $\cite{Veg98a}. The vapor-liquid spinodals calculated according to the Wertheim theory and from the simulation data are reported in Fig.~\ref{fig:phase}. The numerical estimate is provided by locating, along isochores, the highest-$T$ state point at which phase separation is observed and the $T$ at which the small-$q$ limit of the structure factor is smaller than five. These two state points bracket the spinodal locus. It is interesting to compare the liquid-gas spinodal of the PMW with the corresponding spinodal of the symmetric spherical square-well potential with the same depth and well width $\delta=0.15\sigma$. In that case, the critical point is located at $T_c \approx 0.56$ and $\phi_c \approx 0.212$\cite{Pag05aJCP} and the high packing fraction (the liquid) side of the spinodal extends beyond $\phi=0.6$. The net result of decreasing the surface available to bonding and of limiting to four the maximum number of nearest neighbors which can form bonds is the opening of a wide region of $\phi$ values where (in the absence of crystallization) a homogeneous fluid phase is stable (or metastable). This finding is in full agreement with the recent work of \cite{Zac05a}, where a saturated square-well model was studied for different values of the maximum valency. Indeed, it was found that when the number of bonds becomes less than six, the unstable region (the surface in the $(\phi-T)$ plane encompassed by the spinodal line) significantly shrinks, making it possible to access low-$T$ states under single-phase conditions.
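For reference, the Wertheim EOS and the Carnahan-Starling term quoted above translate directly into the following sketch (with $k_B=1$ and $\sigma=1$, so that $n=6\phi/\pi$):
\begin{verbatim}
import numpy as np

C1, C2 = 2.375e-5, 2.820e-6

def p_hs(T, phi):
    """Carnahan-Starling HS pressure (units of u0/sigma^3)."""
    n = 6 * phi / np.pi
    return n * T * (1 + phi + phi**2 - phi**3) / (1 - phi) ** 3

def p_wertheim(T, phi):
    """Wertheim EOS for the PMW (units of u0/sigma^3)."""
    n = 6 * phi / np.pi
    J = (C1 * (1 - phi / 2) - C2 * phi * (1 + phi)) / (1 - phi) ** 3
    c = 0.5 * (np.sqrt(1 + 192 * (np.exp(1 / T) - 1) * phi * J) - 1)
    bond = (96 * (np.exp(1 / T) - 1) / (1 + c) ** 2
            * (C1 * phi * (1 + phi - 0.5 * phi**2)
               - 2 * C2 * phi**2 * (1 + 2 * phi)) / (1 - phi) ** 4)
    return p_hs(T, phi) - n * T * bond
\end{verbatim}
Scanning $P$ vs. $\phi$ at fixed $T$ with this function, the locus where $\partial P/\partial \phi$ vanishes gives the theoretical spinodal shown in Fig.~\ref{fig:phase}.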
\begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.5\textwidth]{phase.eps} \vspace{0.10cm} \caption{Thermodynamic phase diagram for the PMW. The theoretical Wertheim prediction for the locus at which $(\partial P/\partial V)_T=0$ is compared with numerical estimates of the spinodal, calculated by bracketing it via the locus of points at which $S(0) \approx 5$ and the locus of points where a clear phase separation is detected. The location of the bond percolation line is also reported.} \label{fig:phase} \end{figure} \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{pwert.eps} \includegraphics[width=0.45\textwidth]{plog.eps} \caption{Isochores of $P$ according to the Wertheim theory (top) and as calculated from the simulation data (bottom). Symbols refer to simulation data. The same sequence of $\phi$ values is shown in both panels.} \label{fig:pvsT} \end{figure} \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{pcomp.eps} \caption{Components of the pressure at $\phi=0.314$. The total $P$ is decomposed into ideal gas, HS and bonding components. Note the isochoric minimum in $P$ around $T=0.105$, a signature of an isobaric density maximum.} \label{fig:pcomp} \end{figure} Fig.~\ref{fig:pvsT} shows $P(T)$ for different isochores. In agreement with previous analysis, $P$ is well represented by the Wertheim theory only at high temperature. At low $T$ several interesting features are observed: (i) for $\phi<0.25$, the isochores end at the spinodal line. (ii) In the simulation data, a clear difference in the low-$T$ behavior is observed between the two studied isochores $\phi =0.288$ and $\phi=0.314$. While in the $\phi =0.288$ case $P(T)$ decreases continuously on cooling, in the $\phi=0.314$ case the low-$T$ behavior of $P$ is reversed and $P$ approaches a positive finite value on cooling. These different low-$T$ trends indicate that for $ \phi \lesssim 0.3$ the network becomes stretched on cooling (negative pressures), in an attempt to preserve the connected bonded state. This implies that at low $T$ there is a driving force for phase separating into a fully connected unstressed network and a gas phase. This also suggests that the spinodal curve ends at $T=0$ around $\phi =0.3$. At $\phi \approx 0.3$, the packing fraction is optimal for the formation of an unstressed fully connected network at low $T$. The bond formation on cooling does not require any stretching, and it reverses the $T$-dependence of $P$. (iii) For $0.3 \lesssim \phi \lesssim 0.38$ a minimum of $P$ appears. The existence of a minimum in $P(T)$ along isochores evidences the presence of density anomalies (i.e. expansion on cooling along isobars), since points at which $(\partial P/\partial T)_V=0$, by a Maxwell relation, coincide with points at which $\alpha\equiv (\partial V/\partial T)_{P}=0$, i.e. with points where density anomalies are present. The simplicity of the model allows us to access the different contributions to $P$ and investigate the origin of the increase of $P$ on cooling. In the PMW, apart from the trivial kinetic contribution, the only positive component of $P$ arises from the HS interaction.
Interestingly enough, the HS component increases on cooling. Such an increase in the HS repulsion, indirectly induced by the formation of the bonding pattern, appears able, in the range $0.30 \lesssim\phi \lesssim 0.36$, to compensate for the decrease in the bonding component of $P$. To confirm the presence of density anomalies it is instructive to look at the $V$ dependence of $P$ along isotherms, shown in Fig.~\ref{fig:pvsrho}. Again, the simulation data are consistent with the Wertheim theory predictions only at large $T$, and indeed it was already noted that no density anomalies are found within the theory\cite{Kol87a}. The simulation data also show a clear crossing of the isotherms at volumes per particle $v=1.4$ and $1.7$, corresponding to $\phi=0.38$ and $\phi=0.314$, respectively. Again, the crossing is indicative of the presence of density anomalies. The increase of $P$ on cooling between $\phi=0.314$ and $\phi=0.38$ also suggests the possible emergence of a second van der Waals-type loop (in addition to the gas-liquid one) at $T$ lower than those we are currently able to equilibrate. The possibility of a second critical point between two liquid phases of different densities has been discussed at length in the past\cite{Sci05a}, following its discovery\cite{Poo92a} in one of the first models for water\cite{Sti74a}. \begin{figure}[tbh] \centering \includegraphics[width=0.45\textwidth]{pvsvwert.eps} \includegraphics[width=0.45\textwidth]{pvsvext.eps} \vspace{-0.50cm} \caption{Isotherms of $P$ according to the Wertheim theory (top) and as calculated from the simulation data (bottom) as a function of the volume per particle $v \equiv n^{-1}$. Symbols refer to simulation data. Note the crossing of the different isotherms at $v=1.4$ and $1.7$. } \label{fig:pvsrho} \end{figure} \subsection{$g(r)$} The PMW radial distribution functions for $T>0.15$ have been reported previously\cite{Kol87a}. Here we focus on the interesting structural changes observed in $g_{OO}$ and $g_{H-LP}$ during the development of the bond network, in a $T$ region that was not accessible in the previous simulations. The $g_{OO}$ provides information on the center-to-center particle correlation, while $g_{H-LP}(r)$ contains information on the bonding and on the attractive component of the pressure. \begin{figure}[tbh] \centering \includegraphics[width=0.45\textwidth]{grn020.eps} \includegraphics[width=0.45\textwidth]{grn055.eps} \includegraphics[width=0.45\textwidth]{grn07258.eps} \caption{Particle-particle radial distribution function at $\phi=0.105$, $\phi=0.288$, $\phi=0.380$.} \label{fig:groo} \end{figure} Fig.~\ref{fig:groo} shows $g_{OO}(r)$ at three different packing fractions. In the interval $1<r<1.1$ the function is highly peaked, a consequence of the distance imposed by bonding. Outside the bonding distance ($r>1.1$), $g_{OO}(r)$ shows significant oscillations only at low $T$. A peak, besides the bonding one, is observed at $r\approx 1.7$, corresponding to the characteristic distance between two particles bonded to the same central particle in a tetrahedral geometry. The absence of information about the geometry of the bonding sites in the theory of Wertheim is responsible for the absence of the peak at $1.7 \sigma$ and for the breakdown of the predictive ability of the Wertheim theory as soon as a particle is engaged in more than two bonds.
A few observations are in order when comparing the $\phi$ dependence of $g_{OO}(r)$: at low $\phi$, the tetrahedral peak at $r \approx 1.7$ is the only peak in $g_{OO}(r)$. When $\phi$ approaches the optimal network density, a clear tetrahedral pattern develops and $g_{OO}(r=1.7)$ becomes larger than two. The tetrahedral peak at $\approx 1.7\sigma$ is followed by oscillations extending up to $4\sigma$. At even larger $\phi$, there is still a residual signature of tetrahedral bonding at $1.7 \sigma$, but the depletion region for $r>1.1 \sigma$ is no longer developed, signaling a competition between HS packing (which favors peaks at positions that are multiples of $\sigma$) and the local low density required by bonding. Fig.~\ref{fig:grn60} compares, at $\phi=0.314$, the OO, HH and H-LP radial distribution functions on a linear scale. In all three functions, the progressive structuring induced by the bonding is clearly evident. Even $g_{HH}(r)$ shows very clear signs of spatial correlations, which are induced by the tetrahedral geometry of the bonding and by the geometry by which the bonding between $H$ and $LP$ propagates. Indeed, in the PMW model the interaction between different $H$ sites is zero. \begin{figure}[tbh] \centering \includegraphics[width=0.45\textwidth]{grOOn060.eps} \includegraphics[width=0.45\textwidth]{grHHn060.eps} \includegraphics[width=0.45\textwidth]{grHLPn060.eps} \caption{Radial distribution functions for $OO$, $HH$ and $H-LP$ pairs at the optimal network density $\phi=0.314$. Insets in $g_{OO}$ and $g_{H-LP}$ provide enlargements of the contact region. On cooling a significant structure appears, associated with the intense bonding.} \label{fig:grn60} \end{figure} \subsection{$S(q)$} The structure factor of the system, defined in terms of the particle center coordinates $\vec r_i$ as \begin{equation} S(\vec q)=\left< \frac{1}{N}\sum_{i,j=1}^N e^{i \vec q \cdot (\vec r_i - \vec r_j)}\right> \end{equation} provides information on the wave vector dependence of the density fluctuations. In isotropic systems, $S(q)$ is a function of the modulus $q$ (see the sketch below for the corresponding calculation). The behavior of $S(q)$ at small $q$ provides an indication of the phase behavior, since an increase of $S(q)$ at small $q$ indicates the development of inhomogeneities with a length scale comparable to the system size studied. As an indicator of the location of the phase boundaries (of the liquid-gas spinodal line), we estimate the locus of points in $(T,\phi)$ where $S(q)$ for the particle centers becomes larger than 5 at small $q$. This locus is reported in Fig.~\ref{fig:phase}. For $\phi \gtrsim 0.28$, $S(q)$ does not show any sign of growth at small $q$ in the region of $T$ where equilibration is feasible, being characterized by values of $S(q)$ at small $q$ of the order of $0.1$. This confirms that, at this packing fraction, there is no driving force for phase separation, since the average density has reached a value such that the formation of a fully connected network of bonds does not require a local increase of the packing fraction. It is also important to stress that at $\phi =0.288$, at the lowest studied $T$, the average number of bonds per particle is $3.8$, and hence the system is rather close to its ground state and no more significant structural changes are expected on further cooling.
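A minimal sketch of this calculation for a single configuration follows (averaging over configurations and binning in $|\vec q|$ are omitted; the wave vectors must be commensurate with the periodic box of side $L$, i.e. $\vec q = \frac{2\pi}{L}(n_x,n_y,n_z)$):
\begin{verbatim}
import numpy as np

def structure_factor(r, q_vecs):
    """S(q) = |sum_i exp(i q . r_i)|^2 / N for one configuration.
    r: (N, 3) particle centers; q_vecs: (M, 3) wave vectors."""
    rho_q = np.exp(1j * (q_vecs @ r.T)).sum(axis=1)  # density modes
    return (np.abs(rho_q) ** 2) / len(r)
\end{verbatim}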
\begin{figure}[tbh] \centering \includegraphics[width=0.45\textwidth]{sqn020.eps} \vskip 0.8cm \includegraphics[width=0.45\textwidth]{sqn055.eps} \vskip 0.25cm \includegraphics[width=0.45\textwidth]{sqn072.eps} \caption{Particle-particle structure factor at $\phi=0.105$, $\phi=0.288$, $\phi=0.385$. Note that at $\phi=0.105$, an intense signal develops at small $q$, related to the approach to the spinodal instability. The small $q$ intensity is completely missing at the higher $\phi$ shown. } \label{fig:sqoo} \end{figure} Fig.~\ref{fig:sqoo} shows $S(q)$ at $\phi =0.105$, $\phi =0.288$ and $\phi=0.385$. The $\phi =0.105$ case has been chosen to show the significant increase in $S(q)$ associated with the approach to the spinodal curve. The case $\phi=0.288$ shows both the absence of a small $q$-vector divergence and the clear development of the typical $q$-pattern of tetrahedral networks. On cooling, the peak at $q\sigma=2\pi$, characteristic of excluded volume interactions, splits into two parts: a pre-peak around $q\sigma \approx 5$ and an intense peak around $q\sigma \approx 8$. The case $\phi=0.385$ confirms that the packing fraction is now so high that a full tetrahedral network cannot develop, and the splitting of the main peak into two distinct components is very weak and visible only at the lowest investigated $T$. \subsection{Percolation} The PMW, like all other models based on HS and SW interactions, is particularly suited for the calculation of bond properties, since a bond between particles $i$ and $j$ can be unambiguously defined whenever the pair interaction energy between $i$ and $j$ is $-u_0$. In the case of continuous potentials such a clear-cut bond definition is not possible and several alternative propositions have been put forward\cite{hill,Con04aJPCM}. We focus here on the connectivity properties of the equilibrium configurations. We use standard algorithms to partition particles into clusters. Configurations are considered percolating when, accounting for periodic boundary conditions, an infinite cluster is present. More explicitly, to test for percolation, the simulation box is duplicated in all directions and the ability of the largest cluster to span the replicated system is checked. If the cluster in the simulation box does not connect with its copy in the duplicated system, the configuration is assumed to be non-percolating. The boundary between percolating and non-percolating state points has been defined as the locus where an infinite cluster is observed in 50\% of the configurations. The resulting percolation line is reported in Fig.~\ref{fig:phase}. State points on the right side of the line are characterized by the presence of an infinite cluster. Still, at this level of definition, percolation is a geometric measure and it does not provide any information on the lifetime of the percolating cluster. The percolation line, as in simple SW potentials, crosses the spinodal curve very close to the critical point. In contrast to the SW case, the percolation locus does not extend to infinite $T$, since at high $T$, even at large $\phi$, the reduced particle surface available for bonding prevents the formation of a spanning network with a random distribution of particle orientations. Along the percolation line, about 1.5 bonds per particle are observed, with a small trend towards an increase of this number on decreasing $\phi$.
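A minimal sketch of this duplication test is reported below, assuming a precomputed list of bonded pairs (with minimum-image geometry) and using a standard union-find structure; a configuration percolates when a particle ends up in the same cluster as one of its own periodic images, i.e. when a chain of bonds wraps around the box.
\begin{verbatim}
import numpy as np

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]      # path compression
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def percolates(pos, bonds, box):
    """Percolation test by box duplication (2 x 2 x 2 replicas).

    pos : (N, 3) coordinates; bonds : iterable of bonded pairs (i, j).
    """
    n = len(pos)
    parent = list(range(8 * n))
    reps = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    rep_id = {r: k for k, r in enumerate(reps)}
    for i, j in bonds:
        # which periodic image of j does the bond reach from i?
        s = np.rint((pos[i] - pos[j]) / box).astype(int)
        for a, b, c in reps:
            r2 = ((a + s[0]) % 2, (b + s[1]) % 2, (c + s[2]) % 2)
            union(parent, rep_id[(a, b, c)] * n + i, rep_id[r2] * n + j)
    for i in range(n):
        root0 = find(parent, i)
        if any(find(parent, k * n + i) == root0 for k in range(1, 8)):
            return True                    # particle joins its own image
    return False
\end{verbatim}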
In terms of bond probability $p_b$, the observed 1.5 bonds per particle along the percolation line correspond to $p_b \approx 0.375$, not too different from the bond percolation threshold of the diamond lattice, known to be $0.388$\cite{Sta92book}. \section{Dynamics} The thermodynamic and static properties of the PMW presented in the previous section clarify the location of the region in which the bond network forms, the region where the liquid-gas phase separation takes place, and the region at high $\phi$ where packing phenomena start to be dominant. In the following we present a study of the diffusion properties of the model in the phase diagram, with the aim of locating the limit of stability of the liquid state imposed by kinetic (as opposed to thermodynamic) constraints. \subsection{$MSD$} We focus on the mean square displacement $<r^2(t)>$ of the particle centers, as a function of $T$ and $\phi$, calculated from the Newtonian dynamics trajectories. Fig.~\ref{fig:msd} shows $<r^2(t)>$ for a few selected isochores. At short times $<r^2(t)> = <v_T^2> t^2$, where $<v_T^2>=3 k_BT/m$ is the mean squared thermal velocity ($m$ being the particle mass). At high $T$, the short-time ballistic behavior crosses over directly to a diffusion process ($<r^2> \sim t$). At low $T$, the ballistic short-time and the diffusive long-time laws are separated by an intermediate time window in which $<r^2(t)>$ is approximately constant, an indication of particle caging. \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{r2RHO025.eps} \includegraphics[width=0.45\textwidth]{r2RHO060.eps} \includegraphics[width=0.45\textwidth]{r2RHO0859.eps} \caption{Mean square displacement for different $T$s, at three different $\phi$ values. Top: $\phi=0.131$, Center: $\phi=0.314$, Bottom: $\phi=0.450$.} \label{fig:msd} \end{figure} Several features of $<r^2(t)>$ are worth pointing out: (i) For $\phi \lesssim 0.209$, the spinodal is encountered on cooling before any caging is visible: the phase separation process sets in well before particles start to feel the cage. (ii) The static percolation curve reported in Fig.~\ref{fig:phase} has no effect on the dynamics: there is no dynamic arrest at the static percolation transition. (iii) For $\phi$ such that a well developed tetrahedral network can form, it is possible to cool the system down to temperatures at which, on the time scale of the simulation, arrest is observed in the absence of any phase separation. $<r^2(t)>$ develops a clear intermediate region where only the dynamics inside the cage remains. At these $\phi$ values, the caging is not associated with excluded volume interactions, but with the formation of energetic bonds\cite{Sci96b}. (iv) The plateau value of $<r^2(t)>$ is a measure of the localization length induced by the cage. To visualize the $\phi$ dependence of the localization length, we show in Fig.~\ref{fig:isoD} $<r^2(t)>$ for three different state points ($\phi$-$T$) with the same long-time diffusivity. The cage length is always significantly larger than the typical $HS$ value ($<r^2(t)> \sim 0.01$) and grows on decreasing $\phi$. \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{isoD.eps} \caption{Mean square displacement along a constant $D$ path. Note the $\phi$ dependence of the plateau at intermediate times, which provides an estimate of the caging length. } \label{fig:isoD} \end{figure} \subsection{Diffusion Coefficient} The long-time limit of $<r^2(t)>$ is, by definition, $6 D t$, where $D$ is the diffusion coefficient. The $\phi$ and $T$ dependence of $D$ is shown in Fig.~\ref{fig:DMD}.
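A minimal sketch of the $<r^2(t)>$ and $D$ estimates used for figures of this type is reported below. It assumes unwrapped trajectories (coordinates not folded back into the box, so that boundary crossings do not corrupt the displacements) and averages over time origins and particles; names are illustrative.
\begin{verbatim}
import numpy as np

def msd(traj):
    """<r^2(t)> from an (n_frames, N, 3) array of unwrapped
    coordinates, averaged over all time origins and particles."""
    n_frames = traj.shape[0]
    out = np.empty(n_frames - 1)
    for lag in range(1, n_frames):
        d = traj[lag:] - traj[:-lag]
        out[lag - 1] = (d * d).sum(axis=2).mean()
    return out

def diffusion_coefficient(t, r2, fit_from):
    """D from the long-time slope of <r^2(t)> = 6 D t."""
    slope = np.polyfit(t[fit_from:], r2[fit_from:], 1)[0]
    return slope / 6.0
\end{verbatim}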
In Fig.~\ref{fig:DMD}, $\log(D)$ is shown both vs. $T$ and vs. $1/T$. Again, a few considerations are in order: (i) The range of $D$ data covers about five orders of magnitude. The data for $\phi<0.24$ are limited in $T$ by the phase separation process, while the data for $\phi>0.26$ are limited by computational resources, since equilibration cannot be reached within several months of calculation. (ii) The data for $\phi>0.26$ cross around $T\approx 0.105$, suggesting a non-monotonic $\phi$ dependence of the dynamics. (iii) The early decay of $D$ with $T$ can be described with a power law $|T-T_{MCT}|^{\gamma}$. Power-law fits, limited to the region of $T$ between $T=0.11$ and $T=0.15$, cover the first two to three orders of magnitude in $D$, in agreement with previous studies of more detailed models for water\cite{Gal96b,Sci96b,Sta99a} and with the previously proposed MCT interpretation of them\cite{Gal96b,Fab99b,Sci97b,Fab98a}. (iv) A cross-over to an Arrhenius activated dynamics is observed at low $T$, where activated processes become dominant in controlling the slowing down of the dynamics. The activation energy is $\approx 4$ (in units of $u_0$) close to the optimal network $\phi$, suggesting that at low $T$ diffusion requires the breaking of four bonds. The cross-over from an apparent power-law dependence to an Arrhenius dependence has also been observed in simulations of other network forming liquids, including silica\cite{Hor99a,Voi01a} and more recently water\cite{Xul05aPNAS}. The low $T$ Arrhenius dependence also suggests that, in the region where bonding is responsible for caging, the vanishing-$D$ locus coincides with the $T=0$ line. \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{dvsT.eps} \includegraphics[width=0.45\textwidth]{dvsunosuT.eps} \caption{Temperature dependence of the diffusion coefficient along isochores. The dashed line is an Arrhenius dependence with activation energy equal to $4$.} \label{fig:DMD} \end{figure} Particularly interesting is the behavior of $D(\phi)$ along isotherms. An almost linear dependence at small $\phi$ (up to $\phi=0.235$) is followed by a non-monotonic behavior. Below $T=0.11$, a diffusion anomaly is observed in the $T$ and $\phi$ region where the tetrahedral network develops. Around $\phi=0.34$, an isothermal compression of the system generates a speed-up of the dynamics. Above $\phi \approx 0.35$, $D$ starts to decrease again on increasing packing. Diffusivity anomalies of the type observed in the PMW are found in several tetrahedral network forming liquids, including water\cite{Sca00a}. The explanation for this counterintuitive $\phi$ dependence of the dynamics is to be found in the geometric constraints imposed by tetrahedral bonding, which requires an open local structure. Increasing $\phi$ destroys the local bonding order, with a resulting speed-up of the dynamics. \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{DvsrhoMD.eps} \caption{Diffusion coefficient along isotherms. Note the non-monotonic behavior which develops for $T<0.11$.} \label{fig:diff} \end{figure} \subsection{Isodiffusivity (and arrest) lines} A global view of the dynamics in the $(T-\phi)$ plane is offered by the isochronic lines, i.e. the loci of state points with the same characteristic time\cite{Tol01a}. In the present case we focus on the isodiffusivity lines. The shape of the isodiffusivity lines, extrapolated to $D \rightarrow 0$, provides a useful indication of the shape of the glass transition line\cite{Fof02a,Zac02a,Sci04a}.
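Operationally, an isodiffusivity line can be extracted from $D$ evaluated on a grid of state points by interpolating along each isochore, as in the following minimal sketch (illustrative names; it assumes temperatures sorted in increasing order, so that $\log D$ increases with $T$):
\begin{verbatim}
import numpy as np

def isodiffusivity_line(phis, temps, logD, target):
    """(phi, T) points where log10(D) crosses `target`.

    logD : (len(phis), len(temps)) array along isochores, with np.nan
    at state points that could not be equilibrated.
    """
    line = []
    for k, phi in enumerate(phis):
        y, t = logD[k], np.asarray(temps)
        ok = ~np.isnan(y)
        if ok.sum() < 2 or not (y[ok].min() <= target <= y[ok].max()):
            continue                 # target not bracketed on this isochore
        # D grows with T, so y[ok] is increasing and np.interp applies
        line.append((phi, np.interp(target, y[ok], t[ok])))
    return line
\end{verbatim}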
Fig.~\ref{fig:isod} shows the isodiffusivity lines for several different values of $D$, each separated by one order of magnitude. The slowest isodiffusivity lines are only weakly $T$ dependent at low $\phi$. For small values of $D$, the isodiffusivity lines start from the right side of the spinodal, confirming that slow dynamics is only possible for states with $\phi>\phi_c$. At large $\phi$ the isodiffusivity lines bend and become parallel to the $T$ axis, signaling the cross-over to the hard-sphere case. Extrapolating the $T$ (or $\phi$) dependence of $D$ to zero, it is possible to provide estimates of the dynamic arrest line. In the present model, the low-$T$ dependence of $D$ along isochores is well modeled by the Arrhenius law, and hence technically arrest is expected at $T=0$. The shape of the isodiffusivity lines suggests that the vertical repulsive glass line (controlled by excluded volume effects), starting at high $T$ from the HS glass packing fraction, meets the $T=0$ bond glass line at a well-defined $\phi$. The shape of the PMW isodiffusivity lines is very similar to that of the short-range square well case, for which a flat, $T$-independent ``attractive'' glass line crosses (discontinuously) into a perpendicular, $\phi$-independent ``repulsive'' glass line\cite{Daw00a,Zac02a}. In contrast to the SW case, in the PMW the equivalent of the attractive glass line extends to much smaller $\phi$ values, since the reduced valency has effectively reduced the region in which phase separation is observed\cite{Zac05a}. It is also worth pointing out that the shape of the isodiffusivity lines at low $\phi$ is similar to the shape of the percolation line. As in all previously studied models\cite{Zac02a,Zac05a}, crossing the percolation line does not coincide with dynamic arrest, since the bond lifetime is sufficiently short that each particle is able to break and reform its bonds. \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{isoddiff.eps} \caption{Isodiffusivity lines in the $(T-\phi)$ plane. An excursion of five orders of magnitude in $D$ values is explored. All lines start from the spinodal and end at infinite $T$ at the corresponding HS location. At small $D$, lines cannot be continued above $\phi=0.5$ since there the HS interaction is dominant and the system crystallizes. Extrapolation along isochores of the observed Arrhenius functional form suggests an ideal $D=0$ arrest line at $T=0$.} \label{fig:isod} \end{figure} \subsection{$D$ vs. $E-E_{gs}$} At the optimal network density, the low $T$ behavior of both $D$ and $E-E_{gs}$ (which, as discussed above, is also a measure of the number of broken bonds) is Arrhenius. This suggests looking more carefully at the relation between the activation energies of the two processes. One possibility is offered by a parametric plot of $D$ vs $E-E_{gs}$ on a log-log scale, so that the slope of the resulting straight line provides the ratio of the two activation energies. Such a plot is shown in Fig.~\ref{fig:DvsE}. We find the remarkable result that, close to the optimal network $\phi$, the curve is a power law with exponent four, i.e. $D \sim (1-p_b)^4$, where $p_b$ is the probability that one of the four bonds is formed (and hence $1-p_b$ is the probability that one of the four possible bonds is broken), suggesting that the elementary diffusive process requires the breaking of four bonds.
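The exponent can be extracted as the slope of a linear fit in log-log space, e.g. (a sketch, assuming arrays of equilibrium averages for the two quantities):
\begin{verbatim}
import numpy as np

def power_law_exponent(e_excess, d):
    """Exponent alpha in D ~ (E - E_gs)^alpha from a log-log fit;
    for the PMW close to the optimal network phi, alpha is ~4."""
    alpha, log_amp = np.polyfit(np.log(e_excess), np.log(d), 1)
    return alpha, np.exp(log_amp)
\end{verbatim}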
A functional law of this type for diffusion in a tetrahedral model was proposed by Teixeira\cite{Tei90a} to interpret the $T$ dependence of $D$ in water in the context of the percolation model developed in Ref.~\cite{Sta80b}. A similar dependence has been recently reported for a model of gel-forming four-armed DNA dendrimers\cite{Sta05a}. \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{dvsemd.eps} \caption{Diffusion coefficient vs $E-E_{gs}$ for different $\phi$ values. The dashed line is a power law with exponent four.} \label{fig:DvsE} \end{figure} \subsection{$D$ - MD vs. MC} All dynamic data presented above refer to event-driven Newtonian dynamics. Monte Carlo simulations intrinsically miss dynamic information, being based, in their simplest formulations, on random displacements of the individual particles. Still, if the random displacement in the trial move is small compared to the particle size, the sequence of MC steps can be considered a possible trajectory in configuration space. When this is the case, the number of MC steps (each step being defined as one attempted move per particle) plays the role of time in the evolution of the configurations in configuration space. In the absence of interactions, a particle evolved according to the MC scheme diffuses with a bare diffusion coefficient $D_{MC}^0$ fixed by the variance $\delta_{MC}^2$ of the chosen random displacement along each direction (in our calculations we have used a uniform distribution of displacements with a variance of $\delta_{MC}^2=(0.1)^2/12$, corresponding to $D_{MC}^0= 3 \delta_{MC}^2/6$ in units of $\sigma^2$/MC-step). If needed, $D_{MC}^0$ provides a means of associating a physical time with the MC step. At low $T$, when slow dynamic processes set in (favored by bonding or by packing), the microscopic dynamics is expected to become irrelevant (except for a trivial rescaling of time). The escape from the cage created by neighboring particles is indeed a much rarer event than the rattling of the particles within the cage. Under these conditions, the slow dynamic processes become independent of the microscopic dynamics, and hence Newtonian, Brownian and MC dynamics show the same trends. Fig.~\ref{fig:DMCMD} shows that this is the case for three $\phi$ values. In all cases, at low $T$, the $T$ dependence of $D^{MC}$ and $D$ is identical. Moreover, the scaling factor between $MC$ and $MD$ dynamics is independent of $\phi$, suggesting that at low $T$, with the chosen units, the relation $D^{MC}/D_{MC}^0= \xi D$ holds. From a comparison of the MC and MD data we find that the proportionality constant is $\xi \approx 10$, with no state-point dependence. To confirm that caging is fundamental for the independence of the slow dynamics from the microscopic one, we look at the shape of $<r^2(t)>$ (Fig.~\ref{fig:msd}), finding that at the $T$ at which MC and MD dynamics start to coincide a significant caging is present. \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{D060.eps} \includegraphics[width=0.45\textwidth]{D072.eps} \includegraphics[width=0.45\textwidth]{D0859.eps} \caption{Comparison between the $MD$ and $MC$ diffusion coefficients at three different $\phi$ values.
The MC data are also shown multiplied (by a common factor $0.1$) to better visualize the low $T$ overlap.} \label{fig:DMCMD} \end{figure} Since the microscopic time of the MC dynamics is not affected by temperature (being always fixed by the variance of the random displacements), it is interesting to consider the relation between $D$ and $E-E_{gs}$ also for $D^{MC}$, shown in Fig.~\ref{fig:mcdvse} at the optimal network density $\phi=0.314$. Again, the curve is a power law with exponent four but, compared to the MD case, the region of validity of the power law covers the entire range of $T$ studied, from very high $T$ (where the number of bonds is negligible) down to the lowest equilibrated temperature, covering more than four orders of magnitude. The validity of the relation $D \sim (1-p_b)^4$ thus extends up to high $T$, where the system is well above percolation and there is no evidence of a tetrahedral network (as shown by the structural data reported in Figs.~\ref{fig:sqoo} and \ref{fig:groo}). The extended validity of the power law, with an exponent exactly equal to the valence of the model, is highly suggestive and, in principle, very important for theoretical considerations, since it appears to cover both the temperature region where liquid dynamics is observed and the low $T$ states where signatures of slow dynamics (see Fig.~\ref{fig:msd}) are very well developed. The limits of validity of this finding need to be carefully checked in other primitive models with different valence and in more realistic models of network forming liquids. \begin{figure}[tbh] \centering \vspace{0.10cm} \includegraphics[width=0.45\textwidth]{dvsemc.eps} \caption{Relation between $D^{MC}$, normalized by the bare MC diffusion constant $D^0_{MC}$, and $E-E_{gs}$ for MC dynamics. Note that the MC data follow a simple fourth-power law (full red line) over more than five orders of magnitude. } \label{fig:mcdvse} \end{figure} \section{Conclusions} The results presented in this manuscript touch on several apparently distinct fields. To start with, they can be discussed in relation to the dynamic and thermodynamic properties of water. We have shown that the thermodynamics of the PMW includes, besides the compressibility anomalies reported before\cite{Kol87a}, also density anomalies (at much lower $T$). The source of the density anomalies is shown to be associated with the establishment of the bond network in the tetrahedral geometry. On cooling (along isochores), the energetic driving force that favors the formation of bonds, owing to the geometric constraints associated with the formation of the open tetrahedral structure, forces the pressure to increase, hence generating a density maximum state point. The simplicity of the PMW also allows us to clearly detect an optimal network density, at which the ground state of the system (i.e. the state in which each particle is involved in four bonds) can be closely approached. At this $\phi$ the $T$ dependence of the potential energy is the most pronounced, generating a minimum in the isothermal $\phi$ dependence. The presence of a minimum in $E(\phi)|_T$ is highly suggestive since it indicates\cite{Sci97a} the possibility of a liquid-liquid phase separation at $T$ lower than the ones we have been able to equilibrate. We have also shown that at this optimal $\phi$, low $T$ dynamics slows down with the fourth power of the probability of broken bonds, i.e.
the dominant contribution to the dynamics arises from single-particle motions, specifically of the particles which happen to have all four bonds broken at the same time. We have also shown that, as in real water, diffusion anomalies are observed. At low $T$, the decrease of the diffusivity on increasing $\phi$ is reversed once the optimal network density is reached. For higher $\phi$, the progressive destruction of the bond network due to the increased packing speeds up the dynamics. For even higher $\phi$, $D(\phi)$ resumes its standard decreasing behavior associated with the approach to the excluded-volume glass transition. Diffusion and density anomalies in the PMW model are thus strongly related, similarly to what has been observed in more realistic models for water\cite{Err01aNature}. The simplicity of the model is crucial in clarifying these aspects, since the hard-core and square-well interactions guarantee the absence of volumetric effects related to the $T$ dependence of the vibrational amplitudes. A second interesting aspect of the presented results concerns the dynamics in network forming systems. The present study provides a complete characterization of the dynamics in the entire $(\phi-T)$ plane, from the smallest possible $\phi$ liquid state points up to the close-packed state. From the reported data, the relative roles of the energy and of the packing in controlling the dynamics stand out clearly. The isodiffusivity lines are essentially parallel to the $\phi$ axis (i.e. $T$ controlled) in the low-$\phi$ network region and essentially parallel to the $T$ axis (i.e. $\phi$ controlled) at larger $\phi$. Interestingly enough, along isochores, low $T$ dynamics follows an Arrhenius law, the landmark of strong glass-forming behavior\cite{fragile,Deb01aNature}. The Arrhenius law is preceded by a $T$ region where the dynamics has a stronger $T$ dependence, compatible with a power law. In this power-law region the first signatures of caging in the mean square displacement are observed. Similar changes in the dynamics have been observed in previous studies of silica\cite{Hor99a,Hor01aPRE,Voi01a}, water\cite{Xul05aPNAS} and silicon\cite{Sas03aNatMat}. In particular, for the cases of silica and water, it has been suggested that the region where the dynamics starts to feel the presence of energetic cages can be interpreted in terms of mode coupling theory\cite{Sci96b,Sci01aPRL,Sta99aPRL,Fab99a,Kob01aJNCS,Sci00a,Hor99a,Hor01aPRE}. Dynamics at the optimal network $\phi$ is particularly suggestive. Although in the present model the slowing down of the dynamics prevents equilibration of the supercooled liquid to very low $T$, at the lowest simulated $T$ the average number of bonds has reached 3.8 per particle. In this respect, further structural and dynamic changes are hard to foresee. This suggests that the Arrhenius behavior is retained down to $T=0$. Such speculation is reinforced by the numerical value of the activation energy of $D$, which is found to be $\approx 4 u_0$, corresponding to the breaking of four bonds. This suggests that in network liquids the limited valency imposed by the directional forces fixes a well-defined energy of the local configuration, and discrete changes of this energy are reflected in the Arrhenius behavior. The presence of a limited valency and of a well-defined bond energy scale appears to be the key ingredient of strong-liquid behavior\cite{Mor05a}.
It is also worth exploring in future work the possibility that the optimal network density plays, in studies of one-component systems, the same role as the reversibility window\cite{reversibility} in bulk alloy glasses. Connections with the concept of self-organization in network glasses\cite{naumis} should also be pursued. A further aspect of this work concerns the relative location of the liquid-gas spinodal and the kinetic arrest lines, whose shape is inferred from the study of the isodiffusivity lines. As in the short-range SW model\cite{Zaccapri,Fof05aPRL}, the kinetic arrest line ends on the right side of the spinodal, i.e. in the liquid phase. But in contrast to the SW case, the limited valency has shifted the right side of the spinodal to very small $\phi$ values, $\phi \lesssim 0.25$. Indeed, the limited valency effectively disfavors condensation of the liquid phase, reducing the driving force for phase separation and making it possible to generate low packing fraction arrested states in the absence of phase separation, i.e., a homogeneous single phase, stable in equilibrium at low $T$\cite{Sci04bCPC}. The possibility of accessing low $T$ homogeneous supercooled states for $\phi>0.25$ characterized by a glassy dynamics driven by the bonding energy, as opposed to packing, confirms the findings of the zeroth-order model with limited valency reported in Ref.~\cite{Zac05a}. The absence of geometric correlation between the bonding sites, the key ingredient of the maximum valency model\cite{Zac05a}, is thus not crucial for the stabilization of the network. The role of the geometric constraint appears to be the reduction of the range of $\phi$ values where the fully bonded disordered state can be reached. Two different arrest mechanisms characterize the dynamics of network systems: arrest due to the formation of energetic cages, with an arrest line which runs almost parallel to the $\phi$ axis, and arrest due to excluded volume effects, with an arrest line parallel to the $T$ axis. These two lines are reminiscent of the attractive and repulsive glass lines observed in short-range attractive colloids\cite{Fab99a,Ber99a,Daw00a,Zac02a,Sci02a}. Connecting the results presented in this article with previous studies of network forming liquids\cite{Hor99a, Sci96b}, it is tempting to speculate that mode-coupling theory predicts satisfactorily the shape of the dynamic arrest lines in the $(\phi-T)$ plane. Still, while in the region where excluded volume controls caging the relative error in the location of the glass line is limited, in the case in which the bonding mechanism is dominant in generating arrest the location of the MCT line can be significantly distant from the actual dynamic arrest line (technically located at $T=0$, since the dynamics is Arrhenius), due to the role of activated bond-breaking processes, which offer a faster channel for the decay of the correlations. The evaluation of the MCT lines for the present model, in principle feasible within the site-site approach developed by Chong and Goetze\cite{Cho98a,Cho02b} or within the molecular approach developed by Schilling\cite{Sch97a,Fab99b}, can help clarify this issue. The possibility of an intersection between the excluded-volume arrest line (starting at high $T$ from the HS glass packing fraction) and the bond-controlled $T=0$ arrest line is particularly suggestive.
The shape of the isodiffusivity lines supports the possibility that the vertical repulsive glass line meets the $T=0$ bond-controlled glass line at a well-defined $\phi$. If this scenario is correct and general, one would conclude that fragile and strong kinetic behavior is intimately connected to the dominant mechanism of arrest (fragile for excluded volume and strong for bonding) and, more interestingly, that strong behavior can only be observed when the interaction potential is such that fewer than six neighbors are present (i.e. in network forming systems). Indeed, only under these circumstances does the suppression of the liquid-gas phase separation make it possible to approach the $T=0$ bond-controlled glass line. An additional comment concerns the relation between gel and glass arrest states. The results reported in this article confirm, once more, that in this class of models the geometric percolation line does not have any impact on the dynamic arrest, since at percolation the lifetime of the bonds is still rather short. Only when the system is well inside the percolation region has the bond lifetime grown sufficiently to affect measurements of global connectivity with an intrinsic time scale shorter than the bond lifetime (as, for example, the finite-frequency shear viscosity). Indeed, it was noted long ago for the case of water\cite{Sta80b} that bond percolation is irrelevant to any thermodynamic or dynamic anomaly. More sophisticated models, incorporating bond cooperativity or significant entropy contributions to bonding (as in the case of polymeric gels), may reduce the differences between dynamic arrest states and percolation\cite{Sta05a}. Despite the difference between percolation and arrest lines, if one considers the present model as a system of colloidal particles with sticky interactions, one would be led to call the arrested state at $0.3 \lesssim \phi \lesssim 0.5$ a gel, guided by the fact that the arrested state has a low-$\phi$ open connected structure. Similarly, if one considers the PMW as a model for a network liquid, one would be led to name the same arrested state a network glass. While we cannot offer any resolution of this paradox with the present set of data, future work focusing on the wavevector dependence of the correlation functions and the resulting non-ergodicity parameters can help clarify this issue and confirm or dispute the recently proposed hypotheses on the differences between gels and glasses\cite{Ber99a,Zac05a,Fof05bJCP}. At the present time, we can only call attention to the fact that a continuous change from energetic cages to excluded volume cages takes place on increasing $\phi$. A final comment refers to the propensity of the system to form disordered arrested states. Despite the significant amount of supercooling\cite{Veg98a}, in all studied state points where a network structure is present we have not observed any sign of crystallization. The kinetic suppression of the crystallization phenomenon can be traced to the similar energies characterizing the crystal state and the fully bonded disordered state, which eliminates the energetic driving force toward crystallization. The observed propensity to form gel states as opposed to crystalline states manifested by the studied model (which can also be seen as a model for short-range sticky colloidal particles as well as for globular proteins with aeolotopic interactions\cite{Lom99a}) may well explain the difficulty of crystallizing some classes of proteins.
It also warns us about the relevance of the dynamic arrest phenomenon for future attempts to build a colloidal diamond photonic crystal made of particles with short-ranged patchy interactions. \section{Acknowledgements} We thank E. Zaccarelli. We acknowledge support from MIUR-FIRB and CRTN-CT-2003-504712. \section{Appendix: An event-driven algorithm for hard spheres with patches.} \label{appendicecris} In an event-driven (ED) algorithm, events such as the times of collisions between particles and of cell crossings have to be taken into account. All these events have to be time-ordered, and the code must be written in such a way that locating the next event and inserting/deleting events can be performed efficiently. In the literature, several ED algorithms for simulating hard-sphere systems exist, and several propositions on how to handle such events efficiently have been reported. One elegant approach, proposed twenty years ago by Rapaport \cite{Rapaport}, arranges the events into an ordered binary tree (a calendar of events), so that insertion, deletion and retrieval of events can be done with efficiencies $O(\log N)$, $O(1)$ and $O(\log N)$ respectively, where $N$ is the number of events in the calendar. We adopted this solution to handle the event calendar in our simulation, adding only a redefinition of the event time in order to avoid the round-off problems which are found when extremely long simulation runs are performed. \subsection{Motion of rigid bodies} The orientation of a rigid body can be conveniently represented by the $3$ column eigenvectors ${\bf u}_i$ (with $i=1,2,3$) of the inertia tensor expressed in the laboratory reference system. These vectors form an orthogonal set and can be arranged in a matrix ${\bf R}$, i.e. \begin{equation} {\bf R} = {}^t ( {\bf u}_1\ {\bf u}_2\ {\bf u}_3 ) \end{equation} where ${}^t A$ indicates the transpose of the matrix $A$. This matrix is such that, if ${\bf x}$ are the coordinates in the laboratory reference system and ${\bf x'}$ are the coordinates in the rigid body reference system, then: \begin{equation} {\bf x}' = {\bf R} {\bf x} \end{equation} In what follows, we assume that the three eigenvalues of the inertia tensor are all equal to $I$. Naming ${\bf w}=(w_x,w_y,w_z)$ the angular velocity of a free rigid body, the matrix ${\bf \Omega}$ is defined as \begin{equation} {\bf \Omega} = \begin{pmatrix}0 & -w_z & w_y\cr w_z & 0 & -w_x\cr -w_y & w_x & 0\end{pmatrix} \label{Eq:omegmat} \end{equation} Knowing the orientation at time $t=0$, the orientation ${\bf R}(t)$ at time $t$ is\cite{LandauMec,Goldstein}: \begin{equation} {\bf R}(t) = {\bf R}(0) ({\bf I} + {\bf M}) \label{Eq:Rt} \end{equation} where ${\bf M}$ is the following matrix: \begin{equation} {\bf M} = - \frac{\sin(wt)}{w} {\bf \Omega} + \frac{1-\cos(wt)}{w^2} {\bf \Omega}^2 \label{Eq:Mmat} \end{equation} and $w = \|{\bf w}\|$. Note that if $w=0$ then ${\bf R}(t) = {\bf R}(0)$. To derive Eq. \ref{Eq:Rt}, consider that: \begin{eqnarray} {}^t R(t) &=& ({\bf u}_1(t)\ {\bf u}_2(t)\ {\bf u}_3(t) ) \\ \nonumber &=& ({}^t ({\bf I} + {\bf M}) {\bf u}_1\ {}^t ({\bf I} + {\bf M}) {\bf u}_2\ {}^t ({\bf I} + {\bf M}) {\bf u}_3\ ) \end{eqnarray} where we remember that the ${\bf u}_i$ are column vectors.
Hence, if ${\bf w} = w {\bf\hat n}$, we have after some algebra: \begin{equation} {\bf u}_i (t) = {\bf u}_i \cdot {\bf\hat n}\; {\bf\hat n} + \cos(wt) ( {\bf u}_i - {\bf\hat n}\cdot {\bf u}_i\> {\bf\hat n} ) + \sin(wt) \; {\bf\hat n}\times{\bf u}_i \end{equation} that is, the so-called {\it Rodrigues' rotation formula}, i.e. a rotation by an angle $wt$ around the axis ${\bf \hat n}$. To conclude, the position and orientation of a freely moving rigid body can be updated as follows: \begin{subequations} \label{Eq:surf} \begin{equation} {\bf x}(t) = {\bf x}(0) + {\bf v} t \label{Eq:surfa} \end{equation} \begin{equation} {\bf R}(t) = {\bf R}(0) ({\bf I} + {\bf M}) \label{Eq:surfb} \end{equation} \end{subequations} where ${\bf x}(t)$ is the position of the center of mass of the rigid body at time $t$ and $\bf v$ is its velocity. \subsection{Hard spheres with interacting patches} In the present model, each particle is modeled as a hard sphere with $n$ spherical patches arranged at fixed site locations. In the present case, the site-site interaction is a SW potential, \begin{equation} u_{SW}= \begin{cases} -u_0 & \hbox{if}\ \ r < \delta \cr 0 & \hbox{otherwise}\cr \end{cases} \end{equation} where $\delta$ and $u_0$ are the width and the depth of the SW. For the following discussion, the SW interaction can be visualized as a sphere of diameter $\delta$ centered on the site location. Similarly, one can visualize the particle as a rigid body composed of the hard sphere joined to the spheres located on the sites. In what follows, we identify a particle with the resulting surface. We define the distance $d_{AB}$ between two particles $A$ and $B$ as the length of the shortest line connecting two points on the distinct particles, i.e. \begin{equation} d_{AB} = \min_{i_A,i_B}{d_{i_A i_B}} \label{Eq:distance} \end{equation} where $i_{A},i_{B}\in\{0,\dots n\}$, $0$ labels the hard sphere, $1\dots n$ label the $n$ spherical patches, and $d_{i_A i_B}$ is the distance between the two spherical objects $i_A$ and $i_B$. \subsection{Prediction of the time of collision} \subsubsection{Finding the contact time} We separate the collisions between two particles into the hard-sphere part of the potential and the site-site interaction part. The time of collision $t_{hs}$ between the hard-sphere cores can be evaluated as usual\cite{Rapaport}. The smallest time of collision among all $n^2$ spherical patch pairs is $t_{st}$. The time of collision of the two particles is then \begin{equation} t_c = \min\{t_{hs},t_{st}\} \end{equation} To find the time of collision of two interacting patches, we assume that it is possible to bracket it, i.e., we assume (see the following subsections) that the time of collision $t_{st}$ satisfies $ t_1 < t_{st} < t_2 $ with $d(t_1)\, d(t_2) < 0$. The ``exact'' time of collision is then provided by the root of the following equation: \begin{equation} \| r_{i_A}(t) - r_{i_B}(t) \| = \delta \label{Eq:tc} \end{equation} where $r_{i_A}$ and $r_{i_B}$ are the two site locations. \subsubsection{Linked lists and Centroids} As described in \cite{Rapaport}, linked lists can be used to speed up an ED molecular dynamics of hard spheres. For a system of $N$ identical particles inside a cubic box of edge $L$, we define the ``centroid'' \cite{torquato1,torquato2} as the smallest sphere that contains the particle (the HS and the spherical patches).
Linked lists of centroids may be quite useful to reduce the number of objects to check for possible collisions, and in addition they can be used to restrict the time interval within which to search for the collision. We divide the box into $M^3$ cells, so that each cell contains at most one centroid. After that, we build the linked lists of these centroids and handle these lists as done in a usual ED molecular dynamics of hard spheres \cite{Rapaport}. This means that whenever an object crosses a cell boundary one has to remove it from the cell it is leaving and add it to the cell it is entering. Now consider that one has to predict all the possible collisions of a given particle, which is inside a certain cell $m$. As in the hard-sphere case, we take into account only the particles inside the adjacent cells (see \cite{Rapaport} for more details) and we predict the times of collision with these objects. Consider now two particles $A$ and $B$ at time $t=0$ and their centroids $C_A$ and $C_B$. Three possible cases arise: \begin{enumerate} \item $C_A$ and $C_B$ do not overlap and, evaluating their trajectories, no collision between the two centroids is predicted. In this case $A$ and $B$ will not collide either. \item $C_A$ and $C_B$ do not overlap but they will collide: in this case, we calculate two times $t_1$ and $t_2$ bracketing the possible collision between $A$ and $B$: $t_1$ is defined as the time when the two centroids collide and start overlapping, and $t_2$ is the time when the two centroids have completely crossed each other and do not overlap any longer. \item $C_A$ and $C_B$ overlap: in this case $t_1 \equiv 0$ and $t_2$ is defined as the time at which the two centroids stop overlapping. \end{enumerate} \subsubsection{Fine temporal bracketing of the contact time} Here we show how a refined bracketing of the solution of Eq. (\ref{Eq:tc}) can be accomplished. First of all, we derive an overestimate of the rate of variation of the distance between two patches $i_A$ and $i_B$: \begin{eqnarray} \dot d_{i_A i_B}(t) &=& \frac{d}{dt} \left ( \| \BS r_{i_A}-\BS r_{i_B} \| - \delta \right ) = \frac{\BS {\dot r}_{i_Ai_B} \cdot \BS r_{i_A i_B}}{\| \BS r_{i_A i_B}\|} \le \| \BS v_{i_A i_B} \| \cr & = & \| \BS V_{AB} + {\bf\omega}_A \times (\BS r_{i_A} - \BS R_A) - {\bf\omega}_B \times (\BS r_{i_B} - \BS R_B) \|\cr & \le & \| {\bf V}_{AB} \| + \| {\bf\omega }_A \| L_A + \| {\bf\omega}_B \| L_B = \dot d^{max}_{i_A i_B} \label{Eq:distOver} \end{eqnarray} where the dot indicates the derivative with respect to time, ${ \bf r}_{i_A}$, ${\bf r}_{i_B}$ are the positions of the two sites with respect to the laboratory reference system, ${\bf v}_{i_Ai_B}$ is the relative velocity of the two sites, ${\bf V}_{AB}$ is the relative velocity of the centers of mass of the two particles, $\BS R_A$ and $\BS R_B$ are the positions of their centers of mass, and \begin{subequations} \label{Eq:lalb} \begin{equation} L_A \ge \max_{{\bf r}'\in A} \{\|{\bf r}'-{\bf R}_A\|\} \end{equation} \begin{equation} L_B \ge \max_{{\bf r}'\in B} \{ \|{\bf r}'-{\bf R}_B\|\} \end{equation} \end{subequations} Having calculated an overestimate of $\dot d_{i_A i_B}(t)$, we can evaluate an overestimate of $\dot d_{AB}$, which we call $\dot d_{max}$: \begin{equation} \dot d_{max} = \max_{i_A i_B} \{ \dot d_{i_A i_B}^{max} \} \label{Eq:dmax} \end{equation} Using Eq. (\ref{Eq:dmax}) we can easily find an efficient strategy to bracket the solution, sketched in code below.
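As an illustration, a minimal sketch of this bracketing loop is reported here (in Python, with illustrative names; the grazing-collision refinement by quadratic interpolation, described in the step-by-step algorithm that follows, is omitted for brevity):
\begin{verbatim}
def bracket_patch_collision(t1, t2, pair_dist, d_max_rate,
                            eps_d, eps_f):
    """Advance time between t1 and t2 looking for a sign change of any
    site-site distance d(t) = |r_iA - r_iB| - delta.

    pair_dist(t) : returns a dict {(iA, iB): d(t)} for all pairs;
    d_max_rate   : overestimate of |d dot| from Eq. (dmax).
    Returns a bracketing interval (t, t + dt), or None if no
    collision occurs before t2.
    """
    t = t1
    d_old = pair_dist(t)
    while t < t2:
        d_ab = min(d_old.values())
        dt = (d_ab if d_ab > eps_f else eps_d) / d_max_rate
        d_new = pair_dist(t + dt)
        for pair in d_old:
            if d_old[pair] * d_new[pair] < 0.0:
                return t, t + dt           # sign change: root bracketed
        t, d_old = t + dt, d_new
    return None
\end{verbatim}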
In full detail, the algorithm proceeds as follows: \begin{enumerate} \item Evaluate the distances between all sites that may interact, $\{d_{i_Ai_B}(t)\}_{i_Ai_B}$, at time $t$ (starting the first time from $t_1$). \item Choose a time increment $\Delta t$ as follows: \begin{equation} \Delta t = \begin{cases} \frac{d_{AB}(t)}{\dot d_{max}}, & \hbox{if}\> d_{AB}(t) > \epsilon_f \hbox{;} \cr \frac{\epsilon_d}{\dot d_{max}}, & \hbox{otherwise.} \cr \end{cases} \end{equation} where the two arbitrary parameters $\epsilon_d $ and $\epsilon_f$ satisfy $\epsilon_d < \epsilon_f \ll \min\{L_A,L_B\}$. \item Evaluate the distances at time $t+\Delta t$. \item If for at least one pair of patches $(i_A,i_B)$ we find that the product $d_{i_Ai_B}(t+\Delta t)\, d_{i_A i_B}(t) < 0$, we have bracketed a solution. We then find the collision times and the collision points by solving Eq. (\ref{Eq:tc}) for all such pairs, choose the smallest collision time, and terminate. \item If pairs of patches are such that $0 < |d_{i_Ai_B}(t+\Delta t)| < \epsilon_d$ and $0 < |d_{i_Ai_B}(t)| < \epsilon_d$, for each of these pairs evaluate the distance $d_{i_Ai_B}(t+\Delta t /2)$, perform a quadratic interpolation of the $3$ points $(t,d_{i_Ai_B}(t))$, $(t+\Delta t/2, d_{i_Ai_B}(t+\Delta t/2))$, $(t+\Delta t, d_{i_Ai_B}(t+\Delta t))$, and check whether the resulting parabolas have zeros. If so, refine the smallest zero by solving again Eq. (\ref{Eq:tc}) for all these pairs. \item Increment time by $\Delta t$, i.e. \begin{equation} t\rightarrow t+\Delta t \end{equation} \item Go to step 1, if $t<t_2$. \end{enumerate} If two particles undergo a ``grazing'' collision, i.e. a collision during which the modulus of the distance stays smaller than $\epsilon_d$, the collision might not be located by the previous algorithm, owing to a failure of the quadratic interpolation. We have chosen to work with $\epsilon_d \approx 10^{-6}$. For such a choice, we have not observed grazing collisions during the simulation runs (they would show up as violations of energy conservation). The basic algorithm can be improved with simple optimizations. For example, one can calculate $\dot d_{i_A i_B}^{max}$ as follows: \begin{equation} \dot d_{i_A i_B}^{max} = \| {\bf V}_{AB} \| + \| {\bf\omega }_A \| L_{i_A} + \| {\bf\omega}_B \| L_{i_B} \end{equation} where \begin{subequations} \label{Eq:lalb2} \begin{equation} L_{i_A} = \|{\bf r}_{i_A}-{\bf R}_A\| \end{equation} \begin{equation} L_{i_B} = \|{\bf r}_{i_B}-{\bf R}_B\| \end{equation} \end{subequations} and, if $d_{AB}(t) > \epsilon_f$, the time increment can be evaluated in the following optimized way: \begin{equation} \Delta t = \min_{i_A i_B}\{d_{i_Ai_B}(t)/\dot d_{i_A i_B}^{max}\} \end{equation} \subsection{Collision of two particles} At the collision time, one has to evaluate the new velocities of the centers of mass and the new angular velocities.
If $\BS x_C$ is the contact point, the velocities after the collision can be evaluated as follows: \begin{subequations} \label{Eq:elcoll} \begin{equation} \BS v_A \rightarrow \BS v_A + m_A^{-1} \Delta p_{AB} \BS{\hat n} \label{Eq:elcolla} \end{equation} \begin{equation} \BS v_B \rightarrow \BS v_B - m_B^{-1} \Delta p_{AB} \BS{\hat n} \label{Eq:elcollb} \end{equation} \begin{equation} \BS w_A \rightarrow \BS w_A + \Delta p_{AB} I_A^{-1} (\BS r_A -\BS x_C)\times \BS{\hat n} \label{Eq:elcollc} \end{equation} \begin{equation} \BS w_B \rightarrow \BS w_B - \Delta p_{AB} I_B^{-1} (\BS r_B -\BS x_C)\times \BS{\hat n} \label{Eq:elcolld} \end{equation} \end{subequations} where $\BS{\hat n}$ is a unit vector perpendicular to both surfaces at the contact point $\BS x_C$, $I_A$, $I_B$ are the moments of inertia of the two colliding sticky particles, $m_A$, $m_B$ their masses, and the impulse $\Delta p_{AB}$ depends on the type of the collision. Defining \begin{equation} v_c = ( \BS v_A + \BS w_A\times (\BS x_C - \BS r_A ) - \BS v_B - \BS w_B\times (\BS x_C - \BS r_B ) ) \cdot \BS {\hat n} \label{Eq:vc} \end{equation} then, if the collision occurring between the particles is a hard-core collision, one has: \begin{equation} \Delta p_{AB} = -2 M_{red} v_c \label{Eq:factorHS} \end{equation} If the collision occurs between two spherical patches that are already bonded (i.e., prior to the collision the distance between the two sites is $<\delta$), one has: \begin{equation} \Delta p_{AB}= \begin{cases} -2 M_{red} v_c & \hbox{if}\ \ v_c^2 < 2 u_0 /M_{red} \cr M_{red} \left( - v_c + \sqrt{v_c^2 - 2 u_0/M_{red}} \right) & \hbox{otherwise} \label{Eq:factorHSvc} \end{cases} \end{equation} where the reduced mass $M_{red}$ is defined by \begin{equation} M_{red}^{-1} = m_A^{-1} + m_B^{-1} + I_A^{-1}\|(\BS r_A -\BS x_C)\times \BS{\hat n}\|^2 + I_B^{-1}\|(\BS r_B -\BS x_C)\times \BS{\hat n}\|^2 \label{Eq:mred} \end{equation} Finally, if the collision occurs between two patches that are not bonded (i.e., the distance between the two sites is $>\delta$ prior to the collision), we have: \begin{equation} \Delta p_{AB} = M_{red} \left( - v_c + \sqrt{v_c^2 - 2 u_0/M_{red}} \right) \end{equation} \section{Appendix: Evaluating the pressure} \label{pressure} \subsection{Evaluating the pressure in the ED code} We define the quantity: \begin{equation} \Delta A_{\alpha\beta}(t) = V \int_0^t {\cal P}_{\alpha\beta}(t') dt' \label{Eq:intstresspress} \end{equation} where ${\cal P}_{\alpha\beta}$ is the molecular pressure tensor, \begin{equation} {\cal P}_{\alpha\beta} V = \sum_{i=1}^{N} M_i V_{i\alpha} V_{i\beta} + \sum_{i=1}^{N}\sum_{j>i}^{N} F_{ij\alpha} (R_{i\beta}-R_{j\beta}) \label{Eq:stresstens} \end{equation} The sums in the previous expression involve the components (denoted by Greek indices) of ${\vec V}_i$, ${\vec R}_i$ and ${\vec F}_{ij}$, which are the velocity and center-of-mass position of the $i$-th particle (of mass $M_i$) and the total force acting between particles $i$ and $j$, respectively. In the presence of impulsive forces, the stress tensor defined in Eq. (\ref{Eq:stresstens}) is not well defined, while the integral in Eq. (\ref{Eq:intstresspress}) is. Consider the time interval $(t,t+\Delta t)$. During this interval the quantity $\Delta A_{\alpha\beta}(t)$ will vary due to the collisions occurring between particles.
The variation $\delta A_{\alpha\beta}(t)$ of $\Delta A_{\alpha\beta}(t)$ over this interval is: \begin{displaymath} \delta A_{\alpha\beta}(t) = \sum_{i=1}^{N} \left( M_i V_{i\alpha} V_{i\beta}\, \delta t + R_{i\alpha}\, \delta P_{i\beta} \right) \end{displaymath} where $\delta t$ is the time elapsed since the last collision occurred in the system and $\delta \BS P_{i}$ is the variation of the momentum of particle $i$ in the collision (non-zero only for the two colliding particles), i.e. \begin{equation} \delta P_{i\alpha} = \Delta p_{AB} {\hat n}_{\alpha} \end{equation} where $\Delta p_{AB}$ is the impulse defined in Eqs. (\ref{Eq:factorHS})--(\ref{Eq:mred}). From $\Delta A_{\alpha\beta}(t)$ and $\Delta A_{\alpha\beta}(t+\Delta t)$, the average pressure over the interval $\Delta t$ can be evaluated as follows: \begin{equation} P = \frac{1}{3 V} \sum_\alpha \frac{\Delta A_{\alpha\alpha}(t+\Delta t) - \Delta A_{\alpha\alpha}(t)}{\Delta t} \label{Eq:press} \end{equation} \subsection{Evaluating $P$ in MC} In the analysis of the MC configurations, the pressure has been calculated as the sum of three contributions: a trivial kinetic contribution, equal to $nk_BT$; a positive HS contribution, which requires the evaluation of the hard-sphere radial distribution function $g_{_{HS}}(r)$ at contact distance $\sigma$; and a negative contribution arising from the SW interaction, which requires the evaluation of both the $H-LP$ radial distribution function $g_{_{H-LP}}(r)$ at distance $\delta$ and of $<R_{H-LP}(r)>$. For a pair of $H$ and $LP$ sites whose distance is $r$, the quantity $R_{H-LP}$ is defined as the projection of the vector joining the centers of the two particles associated with the two sites onto the direction of the unit vector joining the two sites. The ensemble average $<...>$ is performed over all pairs of $H$ and $LP$ sites at relative distance $r$ \cite{Kol87a}. The resulting expression for $P$ is \begin{eqnarray} P = n k_B T \Bigl( 1 + 4 \phi \Bigl[ g_{_{HS}}(\sigma) - 8 \frac{\delta^2}{\sigma^3} (1-e^{-1/T}) <R_{H-LP}(\delta)>\, g_{_{H-LP}}(\delta) \Bigr] \Bigr) \end{eqnarray} \bibliographystyle{./apsrev}
\section{Introduction} Gaseous envelopes surrounding galaxies out to radii of 100-200 kpc are a common prediction of galaxy formation models (e.g., White \& Rees 1978; Maller \& Bullock 2004). Generally the dark matter and gas in galaxy halos are assumed to be coupled. As the gas falls onto the galaxy it comes into equilibrium with the dark halo and is subsequently able to cool and rain down onto the galaxy's disk. Absorption line experiments are one of the only methods of tracing the low density gas as it cools and falls onto the galaxy. Therefore, comparing the kinematics of a galaxy's dark matter halo to absorption line systems in the halo is a direct test of the radius at which the gas comes into equilibrium with the dark matter halo, and of the dark matter$-$baryon connection. Several studies have suggested that a large fraction of the Ly$\alpha$ absorbers with column densities of N$_{HI} \sim 10^{13-17.3}$ cm$^{-2}$ reside in galaxy halos (e.g., Chen et al. 2001; Bowen, Pettini, \& Blades 2002). This has generally been derived from a correspondence between the velocity of the absorber and the systemic velocity of the galaxy. Chen et al. examined 34 galaxy$-$absorber pairs over a wide range of redshifts ($z$ = 0.075 - 0.892), impact parameters (18 - 250 $h^{-1}_{70}$ kpc), and galaxy luminosities (low surface brightness to $\sim3$L$_*$) to determine how the extent of a galaxy's diffuse gaseous halo scales with galaxy properties. They concluded that the typical L$_*$ galaxy is surrounded by an extended gaseous halo of radius $\sim 260~h^{-1}_{70}$ kpc with a covering fraction of 94\% for gas with N$_{\rm HI} > 10^{14}$ cm$^{-2}$. Bowen et al. (2002) studied a sample of 8 galaxies at small projected distances from a QSO and found that within 285 $h^{-1}_{70}$ kpc of a galaxy their results implied a covering factor of $\sim 100$\% for gas with N$_{\rm HI} > 10^{13}$ cm$^{-2}$. These results suggest the galaxy formation scenario outlined above is on the right track for reproducing the observed baryon content of extended galaxy halos. It has also been suggested that these low column density Ly$\alpha$ absorbers ($\sim 10^{14}$ cm$^{-2}$) are not related to individual galaxies, but are part of the extended cosmic web (e.g., Dav\'e et al. 1999; Penton, Stocke \& Shull 2002; Stocke et al. 2005). In various CDM simulations the Ly$\alpha$ absorbers trace fluctuations in the baryon density originating from the epoch of structure formation (e.g., Dav\'e et al. 1999; Zhang et al. 1998). Dav\'e et al. find that at every redshift these low column density Ly$\alpha$ absorbers originate from diffuse gas in the unshocked intergalactic medium (IGM), while stronger absorbers are generally found close to galaxies. This is observationally supported by the work of Penton et al. (2002), who find the median distance between a low column density Ly$\alpha$ absorber and a galaxy is $\sim 500~h^{-1}_{70}$ kpc, twice the median distance between bright galaxies in their sample. They therefore suggest that the majority of the low redshift Ly$\alpha$ absorbers are associated with large-scale structures, rather than individual galaxies. To further investigate the relationship between galaxies and Ly$\alpha$ absorbers, studies correlating absorption line velocities with the detailed kinematics of the associated galaxy have recently been completed (Steidel et al. 2002; Keeney et al. 2005; Cot\'e et al. 2005, hereafter C05). Steidel et al.
examined Mg II absorption line systems, or gas with column densities N$_{HI} > 10^{17}$ cm$^{-2}$, at redshifts of 0.44 $\le z \le 0.66$. In the simulations discussed above, these strong absorbers are generally associated with the gaseous halos surrounding L$_*$ spiral galaxies. They found that in all 5 of their cases the velocity offset of the absorption line was in the right sense to be qualitatively explained by an extension of the disk rotation to the line of sight. (The lines of sight are well beyond where the optical rotation curves were measured, by a factor of 2 to as much as a factor of 6.) The extension of the disk rotation was not a simple rotation model, however; the absorption line velocities had to be explained with thick rotating halo gas layers (Charlton \& Churchill 1998). In contrast to the expectation that gas at large radii follows the overall kinematics of the galaxy, the work of C05 finds several examples of gas which is not in rotation with the galaxy. C05 examined low column density absorption line systems (N$_{\rm HI} < 10^{14}$ cm$^{-2}$) between $59 - 181~h_{70}^{-1}$ kpc from the center of a spiral galaxy. In their sample of 5 low column density absorption line systems near galaxies, the velocities of two absorbers can be fit by an extended warp or thick rotating gas layers. The other three systems cannot be fit by any sort of extended rotating disk. In addition, Ly$\alpha$ absorption was not detected along 3 lines of sight that were more than $265~h^{-1}_{70}$ kpc from the neighboring spiral galaxy. Historically, the link between Ly$\alpha$ absorbers and galaxies has been investigated using optical searches for galaxies in the vicinity of the absorbers (e.g., McLin et al. 2002; Penton et al. 2002; Impey, Petry, \& Flint 1999). Since the absorbers trace the warm-hot {\it gaseous} intergalactic medium (IGM), an obvious comparison is to study the {\it gaseous} components of galaxies. In addition, comparisons of the detailed kinematics of the galaxies to the absorption line systems have been limited to the studies discussed above, with inconclusive results as to the actual relationship. Here we present Arecibo\footnote{The Arecibo Observatory is part of the National Astronomy and Ionosphere Center, which is operated by Cornell University under a cooperative agreement with the National Science Foundation.} observations of two edge-on spiral galaxies with Ly$\alpha$ absorbers $< 125~h^{-1}_{70}$ kpc along the major axis of the galaxies. Several questions with regard to the relationship between Ly$\alpha$ absorbers and galaxies are addressed, including: Do the Ly$\alpha$ absorbers near galaxies trace gas in the galaxies' extended dark matter halos, and if so, is the gas rotating with the dark matter halos? Do the Ly$\alpha$ absorbers near galaxies represent galactic waste, galactic fuel, or are they simply part of large-scale cosmic filaments? Do the Ly$\alpha$ absorbers represent the presence of a low surface brightness galaxy, rich in gas but not easily detected in the optical? This paper proceeds by describing the Arecibo observations and data reduction, and subsequently presents the results of the observations. Finally, the implications of the results are discussed in the context of the questions presented above.
\section{Observations and Data Reduction} This paper presents neutral hydrogen data of the regions around 2 Ly$\alpha$ absorbers in the sightlines towards PG 1211+143 and Ton 1542 that are at similar velocities to the nearby spiral galaxies IC 3061 and UGC 7697, respectively. These HI data are from the Arecibo Radio Telescope and were observed as part of a large project to map the gaseous environment of 17 low-redshift Ly$\alpha$ absorbers along 4 sightlines. Driftscan maps 0.5$^\circ$ $\times$ 0.5$^\circ$~in size were made in November and December 2001 and April to May of 2002 around the 4 sightlines. Each individual drift scan was spaced by 50\arcsec, and the integration time per beam was 14.4 seconds. Twenty-one driftscan maps were observed for each source, resulting in a total integration time per beam of 302.4 seconds. A calibration diode (cal) was fired at the beginning and end of each scan. Cals that were fired while there was a continuum source in the beam were excluded from the calibration, as they do not give an accurate measurement of the system response. The beam size in the final gridded cubes is 215\arcsec, or $\sim3.6$\arcmin, and the velocity coverage extends from approximately $-$1035 \ km s$^{-1}$~ to 12,750 \ km s$^{-1}$. For this paper we concentrate on the cubes centered on the background sources PG 1211+143 and Ton 1542, as these were the only cubes which had absorbers at a similar velocity to a gas-rich spiral galaxy. The other two sightlines and their low redshift Ly$\alpha$ absorbers will be discussed in a future paper. The sensitivity achieved in each 5 \ km s$^{-1}$~channel is $\sim2$ mJy/bm or 26 mK (1$\sigma$) for the cube centered on PG 1211+143 and $\sim1.7$ mJy/bm or 22 mK (1$\sigma$) for the cube centered on Ton 1542. These values were calculated with the MIRIAD imstat routine on line-free channels in the vicinity of the galaxies of interest (IC 3061 and UGC 7697, respectively). These limits correspond to 5$\sigma$ HI mass limits of 5200 D$^{2}$(Mpc) ${\rm M}_\odot$~and 4420 D$^{2}$(Mpc) ${\rm M}_\odot$~($\Delta$v = 50 \ km s$^{-1}$), respectively. Using the distances listed in Table 1 for the galaxies of interest here, this corresponds to 7.5 $\times 10^{6}$ ${\rm M}_\odot$~at 38 Mpc for the cube centered on PG 1211+143 and 3.6 $\times 10^{6}$ ${\rm M}_\odot$~at 28.7 Mpc for the cube centered on Ton 1542. The data reduction was completed with IDL routines developed at the observatory. A robust median fit was originally used for the bandpass correction, but was found not to work in the strips that contained both a large galaxy and continuum sources, because the median value contained emission. The solution that resulted in the maps presented here is similar to the method presented for recovering extended sources in the HIPASS data (Putman et al. 2003a). Each strip of 120 samples was broken up into 3 sections, the median was computed for each section, and the minimum of these medians was used for the final bandpass correction. The system temperature was subsequently removed using a robust mean over the channels of the spectra. Twenty-one 0.5$^\circ$ $\times$ 0.5$^\circ$~ maps (composed of 36 scans each) were subsequently averaged together to obtain the HI data presented here. In the final calculation of the galaxies' HI masses (Table 1) and presentation of the spectra (Figures 4 and 8), a second-order polynomial was fit to a $\sim 500$ \ km s$^{-1}$~ range around each spectrum.
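For clarity, the min-of-medians bandpass estimate described above can be sketched as follows (a schematic Python illustration; the array shapes and the final normalization convention are assumptions, and the actual reduction used the observatory's IDL routines):
\begin{verbatim}
import numpy as np

def bandpass_estimate(strip):
    """Min-of-medians bandpass for one drift-scan strip.

    strip : (n_samples, n_channels) array (120 samples per strip).
    The strip is split into three sections; the per-channel median of
    each section is computed, and the minimum of the three medians is
    adopted, so that sections containing galaxy emission or continuum
    sources do not bias the bandpass estimate.
    """
    sections = np.array_split(strip, 3, axis=0)
    medians = np.array([np.median(s, axis=0) for s in sections])
    return medians.min(axis=0)
\end{verbatim}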
The absorption line data for the absorber closest in velocity to the large spiral galaxy for PG 1211+143 and Ton 1542 are summarized in Table 2. These absorbers are taken from the sample of Penton et al. (2004), where the STIS spectra and details of the observation and reduction are presented. Both absorption line profiles appear to be single components. For PG 1211+143 we have also looked at the archived STIS E140M spectrum to confirm that the line appears to be a single component at higher resolution. \section{Results} \subsection{IC 3061 and the PG 1211+143 Sightline} Figure 1 shows the HI distribution of IC 3061 as observed by Arecibo over a $25^{\prime} \times 25^{\prime}$ Digital Sky Survey image centered on the galaxy. The entire field of our Arecibo observations is shown in the velocity map of Figure 2, as well as the beam in the lower right corner. The position of the background source (PG 1211+143) used to detect the absorption line is shown in both plots. Unfortunately, IC 3061 is at the edge of the field so a small fraction of the galaxy has been cut off. The data presented have a uniform number of drift scans across the map. IC 3061 is the only galaxy detected in the data cube from $-1200$ to $5000$ \ km s$^{-1}$. There are also no other galaxies present in this spatial and velocity region of the sky as catalogued by NED (considering only those galaxies with measured redshifts). The first galaxy to appear in the cube after IC 3061 is at $\sim 6600$ \ km s$^{-1}$, nicely corresponding to the velocity of an absorber (6623 \ km s$^{-1}$; Penton et al. 2002). This and other objects detected in this cube will be discussed in Rosenberg et al. (2005). The galaxy at the lower left of Fig. 1 is NGC 4208 at $-81$ \ km s$^{-1}$~ (de Vaucouleurs et al. 1991), and was off the edge of our HI survey. Table 1 lists the properties of IC 3061 from this survey and NED. The HI mass of this galaxy is $5.3 \times 10^{9}$ ${\rm M}_\odot$~from our Arecibo data, using a distance of 38.0 Mpc from the Tully-Fisher relation (Solanes et al. 2002). A previous calculation of this galaxy's HI mass with Arecibo, using integrated 21-cm line profiles and applying a correction for the galaxy's extended nature, is $4.3 \times 10^{9}$ ${\rm M}_\odot$~(Haynes \& Giovanelli 1986). The dynamical mass of IC 3061 is $2.8 \times 10^{11}$ ${\rm M}_\odot$, and was calculated using M$_{\rm dyn}$ $=$ r$_{\rm HI}$V$^{2}/$G and V$ = \Delta$v$/(2 \sin i)$, where r$_{\rm HI} = 52~h^{-1}_{70}$ kpc (these data), $\Delta$v$ =310$\ km s$^{-1}$~(these data) and $i= $82$^\circ$~(Huchtmeier \& Richter 1989). The sightline of the absorber is 11\arcmin~from the optical center of IC 3061 and 7\arcmin~ from the edge of the observed rotating HI disk. In projection, at a distance of 38 Mpc, this is $122~h^{-1}_{70}$ kpc from the optical center and $77~h^{-1}_{70}$ kpc from the 1.4 $\times 10^{19}$ cm$^{-2}$ column density contour at the HI edge. Note that our estimate for the size of the HI disk is severely limited by our beamsize, which is $40~h^{-1}_{70}$ kpc at the distance of the galaxy. Figure 2 depicts the velocity distribution of IC 3061 with the contours of the integrated intensity map overlaid and the position of the absorber sightline labeled. This figure represents the entire field of view observed by our Arecibo map. The velocity of the galaxy extends from $\sim$2150 \ km s$^{-1}$~to $\sim$2490 \ km s$^{-1}$.
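The dynamical mass estimate can be cross-checked numerically; a minimal sketch (the inputs are the values quoted above, and the small offset from the quoted $2.8 \times 10^{11}$ ${\rm M}_\odot$~reflects rounding of the inputs):
\begin{verbatim}
import math

# Sketch: M_dyn = r_HI * V^2 / G with V = Delta_v / (2 sin i),
# using the IC 3061 values quoted in the text.
G    = 4.301e-6            # G in kpc (km/s)^2 / Msun
r_HI = 52.0                # HI radius [kpc] (these data)
dv   = 310.0               # velocity width [km/s] (these data)
inc  = math.radians(82.0)  # inclination (Huchtmeier & Richter 1989)

V = dv / (2.0 * math.sin(inc))   # ~157 km/s
M_dyn = r_HI * V**2 / G          # ~3e11 Msun
print(f"V = {V:.0f} km/s, M_dyn = {M_dyn:.1e} Msun")
\end{verbatim}
The same three lines with r$_{\rm HI}=35.4$ kpc, $\Delta$v$=234$\ km s$^{-1}$~and $i=83^\circ$~reproduce the $1.1 \times 10^{11}$ ${\rm M}_\odot$~quoted below for UGC 7697.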
The absorber along the PG 1211+143 sightline with the closest velocity to this galaxy is at 2130 \ km s$^{-1}$, lower than the central velocity of the galaxy despite being on the receding side of the disk. The properties of this absorption line are listed in Table 2. The absorber is indistinguishable from a single line profile and has a column density of 10$^{13.76}$ cm$^{-2}$, while the column density limit of our HI observations per channel (5 \ km s$^{-1}$) is $1.2 \times 10^{18}$ cm$^{-2}$ (5$\sigma$). The closest absorber in velocity to the 2130 \ km s$^{-1}$~absorber is at 4944 \ km s$^{-1}$ (Penton et al. 2004). The projected position of the absorber is 20$^\circ$~from the major axis of the galaxy as shown in Figures 1 and 2. A rotation curve was not fit to these data due to the limited number of independent beams across the galaxy. Figure 3 depicts a right ascension-velocity map through IC 3061 at the declination of the absorber. This figure also shows that the velocity of the absorber is closer to the velocities on the far side of the galaxy disk rather than on the near side. Table 3 outlines the spatial and kinematic differences between IC 3061 and the absorber. Figure 4 shows the integrated HI spectrum of IC 3061. It is a beautiful double-horned profile, representative of its galaxy type (SBc) and edge-on nature. IC 3061 is included in the Flat Galaxy Catalog of Karachentsev et al. (1993) and is considered a member of the Virgo Cluster (Binggeli, Popescu \& Tammann 1993). It is 1.8$^\circ$~or 1.2 Mpc in projection from the center of the cluster. Based on figure 1 of Solanes et al. (2002), IC 3061 is in a region of relatively low HI deficiency compared to other parts of the Virgo cluster. This makes sense considering IC 3061's regular HI structure. \subsection{UGC 7697 and the Ton 1542 Sightline} Figures 5 and 6 show the HI distribution of UGC 7697. As in Figures 1 and 2 for IC 3061, Figure 5 shows the HI data for UGC 7697 over a $25^{\prime} \times 25^{\prime}$ Digital Sky Survey image centered on the galaxy, and Figure 6 is the velocity map showing the entire field of our Arecibo observations centered on the background source Ton 1542. Like IC 3061, UGC 7697 is at the spatial edge of the cube. The closest galaxies to UGC 7697 detected in the Arecibo data cube are two galaxies at approximately 1300 \ km s$^{-1}$~ in the extreme north of the cube. These galaxies do not have previously published redshifts and will be discussed in Rosenberg et al. (2005). The properties of UGC 7697 are shown in Table 1. The HI mass of this galaxy is 2.2 $\times 10^{9}$ ${\rm M}_\odot$~from our Arecibo data, using a distance of 28.7 Mpc from Tully-Fisher measurements (Solanes et al. 2002). Haynes et al. (1999) found a total HI mass of $1.7 \times 10^{9}$ ${\rm M}_\odot$~at this distance. Using the Arecibo data and the same method outlined for IC 3061, the dynamical mass of this galaxy is $1.1 \times 10^{11}$ ${\rm M}_\odot$, where r$_{\rm HI}=35.4~h^{-1}_{70}$ kpc, $\Delta$v$=234$\ km s$^{-1}$, and $i=$ 83$^\circ$~ (Huchtmeier \& Richter 1989). The sightline of the absorber is 12\arcmin~ from the optical center of UGC 7697 and 7\arcmin~ from the edge of the observed rotating HI disk. In projection, at a distance of 28.7 Mpc, this is $100~h^{-1}_{70}$ kpc from the optical center and $58~h^{-1}_{70}$ kpc from the HI edge measured at 1.2 $\times 10^{19}$ cm$^{-2}$.
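The column density limits quoted here and below follow from the optically thin relation N$_{\rm HI} = 1.823 \times 10^{18} \int T_B\, dv$ cm$^{-2}$; a minimal sketch, assuming the limits use the 5$\sigma$ single-channel brightness over one 5 \ km s$^{-1}$~channel:
\begin{verbatim}
# Sketch: optically thin HI column density limits,
# N_HI = 1.823e18 * T_B[K] * dv[km/s]  (cm^-2), assuming the
# 5-sigma single-channel brightness noise over one 5 km/s channel.
C = 1.823e18
for cube, sigma_mK in [("PG 1211+143", 26.0), ("Ton 1542", 22.0)]:
    T5 = 5.0 * sigma_mK * 1e-3   # 5-sigma brightness [K]
    # prints 1.2e+18 and 1.0e+18 cm^-2, as quoted for the two cubes
    print(cube, f"{C * T5 * 5.0:.1e} cm^-2 per channel")
\end{verbatim}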
As previously stated, we are limited by our beamsize when estimating the size of the HI disk because the Arecibo beam is $30~h^{-1}_{70}$ kpc at 28.7 Mpc. Figures 6-8 show that the velocity of UGC 7697 extends from $\sim$2410 \ km s$^{-1}$~to $\sim$2660 \ km s$^{-1}$. The absorber with the closest velocity to this galaxy is at 2563 \ km s$^{-1}$~and, given its position on the side of the galaxy rotating at 2410 \ km s$^{-1}$, is not following the gradient of rotation of the galaxy if the disk is extended beyond the HI emission detectable here. The properties of the absorption line are shown in Table 2. The column density of the single component absorber is $10^{14.3}$ cm$^{-2}$, while the column density limit of the HI observations is $10^{18}$ cm$^{-2}$ (5$\sigma$ for a 5 \ km s$^{-1}$~channel). The closest absorber in velocity to the absorber at 2563 \ km s$^{-1}$~is at 1895 \ km s$^{-1}$, a difference of almost 700 \ km s$^{-1}$ (Penton et al. 2004). The projected position of the absorber is 9$^\circ$~from an extension of the major axis of the galaxy as shown in Figures 5 and 6. Figure 7 is a right ascension-velocity map through UGC 7697 at the declination of the absorber and also shows that the velocity of the absorber is comparable to velocities on the far side of the disk rather than on the near side. Table 3 presents the spatial and kinematic differences between UGC 7697 and the nearby absorber. Figure 8 shows that the integrated HI spectrum of UGC 7697 is another double-horned profile indicative of its edge-on nature and galaxy type (Scd). UGC 7697 is not considered part of the Virgo Cluster, although it is not far from the extreme northern part of the cluster (7.9$^\circ$~or 3.9 Mpc projected distance from the center). This galaxy and absorber sightline were also examined by C05, and VLA maps were presented there which confirm the results shown here of a smoothly rotating disk. The mass we obtain from the Arecibo data is 1.84 times the mass they obtained from the VLA, most likely representing the extended flux missed in the VLA observations. \section{Discussion} In both of the cases presented here, the gas detected in absorption within $125~h^{-1}_{70}$ kpc of the center of the galaxy is not rotating with the galaxy. This result indicates that although dark matter halos are presumed to extend out to $\sim 200$ kpc in these types of spiral galaxies (e.g., using satellites and halo stars, Kochanek 1996; Prada et al. 2003; or lensing results, Koopmans, de Bruyn, \& Jackson 1998), the gaseous baryons and dark matter are not directly linked at large radii. Also, though we commonly use the gas to trace the dark matter content of galaxies (e.g., de Blok \& Bosma 2002; Simon et al. 2005), at some radius this no longer holds. What is that radius, and what dictates when the gas and dark matter become tightly coupled? Figure 9 represents three possible scenarios for the relation of the gas detected in absorption to the galaxies. The gas could be in the halo and counter-rotating, in the halo and infalling, or in the halo and outflowing. The final possibility is that the gas detected in absorption is not related to the galaxies' halos and simply traces a background/foreground cosmic filament. These possible scenarios for the relationship between the galaxies and absorbers are outlined below, and the questions with regard to a dark matter$-$baryon link at large radii are also discussed.
\subsection{Galaxy Fuel?} The idea of the IGM feeding galaxies with star formation fuel has been developed in several models since the initial paper on the relationship between cooling gas and galaxy formation by White \& Rees (1978) (e.g., Murali et al. 2002; Yoshida et al. 2002; Maller \& Bullock 2004 and references within these papers). The models typically have cool clouds at distances up to $\sim 150$ kpc from the center of the galaxy, beyond the projected radii of the gas traced by the absorbers discussed here. The shock-heated gas is decoupled from the dark matter as it comes into the galaxy, but must reach equilibrium with the rotating halo during the accretion process. It is possible that the gas detected here has been recently accreted and is still coming into equilibrium with the rotating dark matter halo, but how does the gas begin to condense if it is counter-rotating with respect to the galaxy? In the case of our own Galaxy, the high-velocity clouds (HVCs) represent clouds of gas that do not follow a simple model of Galactic rotation and have been proposed to represent condensing, infalling gas (e.g., Maller \& Bullock 2004). The HVCs have higher column densities (typically $10^{18-20}$ cm$^{-2}$) than the gas traced here, but a comparison of the relative kinematics is useful if one takes into account that the velocity component measured for the absorbers is parallel to the galaxies' disks, while the velocity component measured for HVCs is towards the Galaxy at the position of the Sun. The two systems studied here have the absorber roughly along the major axis of the galaxy, and the infall scenario of Fig. 9 can be examined to understand the potential relationship between the absorber and the galaxy. For IC 3061, assuming the rotation curve remains flat out to $122~h^{-1}_{70}$ kpc, the gas traced by the absorber deviates from galactic rotation by $-360$ \ km s$^{-1}$. In the case of UGC 7697 the gas traced by the absorber deviates by $+153$ \ km s$^{-1}$~from an extension of flat rotation out to $100~h^{-1}_{70}$ kpc. However, one should also consider that the absorbers are not exactly along the planes of the galaxies, and this may affect the expected rotation. Recent work on the rotation of gas at high latitudes in spiral galaxies has shown that as one moves into the halo, the gas rotates more slowly, typically on the order of 25-50 \ km s$^{-1}$~slower at heights several kpc above the plane (e.g., Swaters et al. 1997; Barbieri et al. 2005; Chaves \& Irwin 2001). Taking this decrease into account, the maximum effect possible is for the gas to stop rotating such that it has the systemic velocity of the galaxy at higher latitudes. Therefore the minimum deviation between the velocities of the absorbers and the expected rotation of the gas is 32 \ km s$^{-1}$~for UGC 7697 and 202 \ km s$^{-1}$~for IC 3061 (see Table 3). The velocity component measured for the absorption line system (V$_{\rm los}$), the unknown velocity component tangential to our line of sight (V$_{\rm tan}$), and the potential infall velocity into the galaxy (V$_{\rm infall}$) for the case of IC 3061 are shown in Figure 9. For UGC 7697, the galaxy is rotating in the opposite direction and the absorber velocity measured (V$_{\rm los}$) is still counter to the galaxy's rotation, so for the infall scenario the absorber would be in front of the galaxy, rather than behind.
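The projection geometry of this infall scenario can be made explicit with a short sketch (a minimal numerical version of the construction behind Figures 10 and 11, assuming purely radial infall; the figures themselves may differ in detail): for gas falling radially toward the galaxy center from a three-dimensional separation $D$, with projected separation $R$, the line-of-sight offset is $z = \sqrt{D^2 - R^2}$, so that V$_{\rm los}$ $=$ V$_{\rm infall}$ $\times\, z/D$ and V$_{\rm tan}$ $=$ V$_{\rm infall}$ $\times\, R/D$:
\begin{verbatim}
import math

# Sketch of radial-infall projection (assumed geometry): a cloud at
# 3D separation D and projected separation R, falling radially inward.
# z = sqrt(D^2 - R^2) is the line-of-sight offset, so
# V_los = V_infall * z/D  and  V_tan = V_infall * R/D.
# |V_los| = minimum deviation from expected rotation (see text).
def infall(V_los, R, D):
    z = math.sqrt(D**2 - R**2)
    V_inf = V_los * D / z
    return V_inf, V_inf * R / D      # (V_infall, V_tan)

# IC 3061: |V_los| = 202 km/s, R = 122 kpc (Table 3)
for D in (150.0, 200.0):             # ~347 and ~255 km/s infall
    V_inf, V_tan = infall(202.0, 122.0, D)
    print(f"IC 3061, D={D:.0f} kpc: V_infall={V_inf:.0f} km/s")

# UGC 7697: |V_los| = 32 km/s, R = 100 kpc -> ~43 and ~29 km/s
V_inf, V_tan = infall(32.0, 100.0, 150.0)
print(f"UGC 7697, D=150 kpc: V_infall={V_inf:.0f}, V_tan={V_tan:.0f} km/s")
\end{verbatim}
These values correspond to the infall and tangential velocities discussed in the following paragraphs.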
Figures 10 and 11 show the magnitude of the tangential velocities and infall velocities implied by this infall scenario for the absorbers near IC 3061 and UGC 7697, respectively. If the gas traced by the absorbers is behind IC 3061 or in front of UGC 7697, the tangential velocity component could potentially change the counter-rotating nature of the absorbers into infalling gas clouds. This infall velocity can also be compared more directly to the velocity measured for HVCs towards part of our Galaxy. Figure 10 shows the relationship between the tangential velocity (V$_{\rm tan}$), the infall velocity (V$_{\rm infall}$), and the distance (from the galaxy) to the absorber that is required for the absorbing gas near IC 3061 to no longer be counter-rotating. This plot uses the magnitude of the absorber velocity along the line of sight relative to the galaxy. Since, as discussed above, the gas may be rotating more slowly at higher latitude than in the main disk of the galaxy, we use the minimum possible difference between the expected velocity of gas at those radii and the velocity of the absorber, i.e., the difference between the velocity of the absorber and the systemic velocity of the galaxy. Fig. 10 shows that with the magnitude of the absorber's velocity component along the line of sight relative to the galaxy at $|$V$_{\rm los}$$|$ = 202 \ km s$^{-1}$, the infall velocity would be greater than 250 \ km s$^{-1}$~for all distances less than $200~h^{-1}_{70}$ kpc from the galaxy center. This is comparable to some of the very high velocity HVCs relative to the Galaxy (the maximum is approximately $|$V$_{\rm GSR}$$|$ = 300 \ km s$^{-1}$; Putman et al. 2002), and suggests infall is a possibility for the gas traced by the absorber. However, we note that many HVCs are expected to be closer than 150 kpc (e.g., Maloney \& Putman 2003; Putman et al. 2003b; Wakker 2001; Maller \& Bullock 2004), and placing the gas traced by the absorber at distances below $150~h^{-1}_{70}$ kpc from IC 3061 requires very high infall velocities ($> 350$ \ km s$^{-1}$). Figure 11 shows the same relationship between tangential velocity (V$_{\rm tan}$), infall velocity (V$_{\rm infall}$), and distance to the gas cloud for UGC 7697. This gas cloud could more easily be infalling than in the case of IC 3061. The difference between the velocity of the absorber and the systemic velocity of the galaxy is only 32 \ km s$^{-1}$, so with this $|$V$_{\rm los}$$|$, if the cloud is at $150~h^{-1}_{70}$ kpc, the tangential velocity of the cloud would only have to be greater than 29\ km s$^{-1}$~to have infall and no counter-rotation. The infall component to the galaxy would then be 43\ km s$^{-1}$. In fact, all of the possible distances from the absorber to UGC 7697 are consistent with the possibility of infall and comparable to the velocities of HVCs relative to the Galaxy. However, this gas is also much further away (at least 100 kpc) than the distances to many HVC complexes (e.g., Wakker 2001; Putman et al. 2003b). Both of these absorbers may potentially represent the cooling, infalling halo gas before it has condensed to the densities of HVCs. \subsection{Galaxy Waste?} The case of the gas traced by the absorbers being galactic waste from IC 3061 and UGC 7697 is difficult to reconcile given their positions roughly along the planes of both galaxies. Galactic worms, mushrooms, and chimneys in spiral galaxies represent the possible blow out of enriched gas into the halo of a galaxy (e.g., English et al. 2000; McClure-Griffiths et al.
2003), but are almost always seen extending up in galactic latitude. These features may be due to supernovae at high galactic latitude, where the nature of the diffuse material at high latitude allows the explosion to push through the main disk of the galaxy. In addition, models predict that outflowing gas due to multiple supernovae would subsequently fall back onto the galaxy closer to the galactic center (e.g., Bregman 1980), and the outflowing gas itself is not likely to reach distances greater than 10 kpc from the galaxy's disk (de Avillez 2000). There is no known scenario where blown-out material would start counter-rotating in the plane of the galaxy. For both of the galaxies studied here, the absorbing gas is at distances greater than $50~h^{-1}_{70}$ kpc from the edge of the galaxy's HI disk, an unlikely location to find outflowing material from a spiral galaxy. The last panel of Figure 9 shows the outflow scenario for IC 3061. For UGC 7697 the gas would be beyond the galaxy in the outflow scenario. In Figures 10 and 11 we can turn the infall velocity into an outflow velocity to examine the magnitude of the velocity component needed to have the gas outflowing and not counter-rotating. For IC 3061 the outflow velocity (within $200~h^{-1}_{70}$ kpc) would have to be greater than at least 250 \ km s$^{-1}$~for the gas to be outflowing and not counter-rotating, and for UGC 7697, the outflow velocity would have to be greater than at least 37 \ km s$^{-1}$. Given the absorbers' distances and positions along the major axes of the galaxies, away from active star formation in the galaxy disks and off axis from any outflow from the galaxy centers, these outflow velocities seem unlikely and the galactic waste model does not seem to be a plausible explanation for the origin of the gas. The only remaining galactic waste scenario is a low surface brightness satellite galaxy which has blown out most of its gas due to multiple supernovae (Stocke et al. 2004), or the presence of a structure representing the tidal destruction of a satellite, similar to the Magellanic Stream around our own Galaxy (e.g., Putman et al. 2003a), but more diffuse, as the Magellanic Stream itself would have been detected by our deep HI observations. The next section discusses this scenario further. \subsection{Galaxies with Counter-Rotating Halo Gas and Galactic Satellites} It is possible that these two systems are anomalies, like the few examples of galaxies with counter-rotating gas in emission (e.g., Hunter et al. 1998; Pizzella et al. 2004). This is normally attributed to the recent accretion of a satellite, or a merger. There is no evidence for a recent interaction in the galaxies' HI distributions to the deep levels of these observations (Figs. 1-8). A typical cloud in the Magellanic Stream has a linewidth of $\sim 25$ \ km s$^{-1}$~ and a column density above $10^{19}$ cm$^{-2}$ (Putman et al. 2003a). Our 5$\sigma$ column density sensitivity to this type of cloud is $6 \times 10^{18}$ cm$^{-2}$ (IC 3061) and $5 \times 10^{18}$ cm$^{-2}$ (UGC 7697). Our beamsize at the distance of IC 3061 is $39.6~h^{-1}_{70}$ kpc and $29.8~h^{-1}_{70}$ kpc for UGC 7697, while the Magellanic Stream subtends over 100$^\circ$~on the sky, or $\sim$100 kpc if the entire feature is placed at 55 kpc from the Galaxy. Therefore a feature such as the Magellanic Stream, or a feature $1/3$ the size and twice as diffuse, would be detected by these observations. There is also no evidence for a recent interaction in the optical DSS images of the galaxies (Figs. 1 \& 5).
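The quoted sensitivities to Stream-like clouds follow from the same optically thin relation used above, now over the 25 \ km s$^{-1}$~linewidth (a sketch, again assuming the 5$\sigma$ single-channel brightness applies across the full linewidth):
\begin{verbatim}
# Sketch: 5-sigma column density sensitivity to a Magellanic
# Stream-like cloud (linewidth 25 km/s), using the per-channel
# brightness noise from the Observations section.
C = 1.823e18
for cube, sigma_mK in [("IC 3061 cube", 26.0), ("UGC 7697 cube", 22.0)]:
    T5 = 5.0 * sigma_mK * 1e-3   # 5-sigma brightness [K]
    # prints 5.9e+18 (~6e18) and 5.0e+18 cm^-2, as quoted
    print(cube, f"{C * T5 * 25.0:.1e} cm^-2")
\end{verbatim}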
The closest galaxy to IC 3061 with a velocity within 750 \ km s$^{-1}$~of IC 3061 is $390~h^{-1}_{70}$ kpc away. For UGC 7697, the closest galaxy within 750 \ km s$^{-1}$~is $344~h^{-1}_{70}$ kpc away. A counter-rotating low-mass satellite in the plane of the galaxy is the only remaining possibility, and an unlikely one for several reasons, including the following. 1. The mass limit of our HI observations is 7.5 $\times$ 10$^{6}$ ${\rm M}_\odot$~for IC 3061 and 3.6 $\times 10^{6}$ ${\rm M}_\odot$~for UGC 7697 (5$\sigma$; $\Delta$v = 50\ km s$^{-1}$), and thus any gas-rich satellite galaxy would have been detected. For UGC 7697 this is further supported by the higher resolution VLA observations of C05. 2. Satellites are very rarely found orbiting a galaxy in the plane of the galaxy, and are most commonly found on polar orbits (e.g., Holmberg 1969; Zaritsky et al. 1993; Knebe et al. 2004). \subsection{A Gaseous Local Filament} Both of the absorbers discussed here have velocities similar to a nearby galaxy (within 210 \ km s$^{-1}$~of the systemic velocity), but their relationship to the actual kinematics of the galaxy is somewhat dubious. A remaining explanation for the gas traced by the absorbers is that it does not have a direct connection to the gas in the galaxy. Instead, the association between the galaxy and the absorber could be a general connection with the large scale structure of the region. Simulations show that a substantial fraction of the low column density gas in the vicinity of galaxies connects with large scale structure, and not necessarily with individual galaxy halos (Dav\'e et al. 2001). Observationally, Bowen et al. (2002) found that within $37 - 290~h^{-1}_{70}$ kpc of a galaxy the covering factor is $\sim 100$\% for low column density gas (N$_{\rm HI} > 10^{13}$ cm$^{-2}$). Nevertheless, they were unable to definitively associate the gas seen in absorption directly with a single galaxy halo because of the complexity of the galaxy distribution in the region. A connection between absorbers and large-scale structure was also concluded by Penton et al. (2002), who performed a correlation function analysis on a sample of galaxies and absorbers. They found that absorbers with column densities in the range of those studied here are more weakly clustered with galaxies than galaxies are clustered with each other. They also found statistical evidence that absorbers are associated with large scale filaments, as also found by Rosenberg et al. (2003). The kinematic and spatial relationships between the galaxies and absorbers studied here and in the sample of C05 provide additional evidence for low column density absorbers probing gas associated with large scale structures rather than individual galaxy components. The gaseous material traced by the absorbers may ultimately fall into the galaxy halos and fuel future star formation, but currently the gas does not follow the galaxies' potential. Both absorption line-galaxy pairs in this sample reside in gas-rich and galaxy-rich regions of space, providing further support for the idea that the absorbers are tracing out a cosmic filament. IC 3061 is on the outskirts of the Virgo Cluster, where gas-rich galaxies and absorbers are very common (Solanes et al. 2001; Rosenberg et al. 2003). While UGC 7697 does not fall within the bounds of the Virgo Cluster, it is not far from the boundary and is probably still in a significantly over-dense region.
Figure 12 shows the sightlines towards PG 1211+143 and Ton 1542 and galaxies from the Updated Zwicky Catalog (Falco et al. 1999) over a 3500 \ km s$^{-1}$~velocity range including the absorbers and galaxies of interest here. The absorber near IC 3061 along the sightline to PG 1211+143 is clearly in a denser region than the absorber near UGC 7697 along the sightline to Ton 1542, but both are near a filament of galaxies. In the sample of absorbers considered here and in C05, the UGC 7697 $-$ Ton 1542 pair is actually the most isolated when a $\sim$2 Mpc radius around the absorber is examined. In other words, all of the galaxy-absorber pairs lie along cosmic filaments rich in galaxies and intergalactic gas. It may be that the HI gas in galaxies is a better tracer of the location of gaseous cosmic filaments than the stellar component of galaxies. This would be true if much of the HI gas represents recently accreted star formation fuel from the cosmic filament. The link between a large sample of absorbers and HI galaxies will be addressed in future papers (Rosenberg et al. 2005; Ryan-Weber 2005). \subsection{Link between Gas and Dark Matter} At what radii would low column density gas traced in absorption directly trace the potential of the nearby galaxy and dark matter distribution? We commonly attempt to detect the gas of a spiral galaxy out to larger and larger radii with deep HI or H$\alpha$ observations to probe its dark matter content (e.g., Corbelli, Schneider, \& Salpeter 1989; Bland-Hawthorn, Freeman \& Quinn 1997), but our results indicate there is a maximum radius at which this method is effective. The idea of a link between the surface density of gas and dark matter was discussed by Hoekstra, van Albada, \& Sancisi (2001; see also Bosma 1981). They derived a mass model and applied it successfully to a sample of 24 spiral galaxies, but were unable to find real evidence for a coupling between HI and dark matter in spiral galaxies. Our results indicate that if the gas and dark matter are coupled, or even if the gas simply follows the mass distribution of the galaxy, there is a maximum distance or minimum density for this relationship. The absorbers studied here probe beyond this limit. Both IC 3061 and UGC 7697 are spiral galaxies similar to the Milky Way in luminosity and HI mass, and thus a reasonable total mass (consisting primarily of cold dark matter) is $\sim10^{12}$ ${\rm M}_\odot$~(e.g., Battaglia et al. 2005; Moore et al. 2001). By examining the cold dark matter models of Navarro, Frenk, \& White (1996), one can estimate the dark matter density at the radii of the absorbers. At 100 or 122 kpc from the center of a $\sim10^{12}$ ${\rm M}_\odot$~ dark matter halo, the dark matter density would be $\sim 60,000$ ${\rm M}_\odot$~kpc$^{-3}$ and the material would be on the flat part of the rotation curve. Clearly the dark matter would dominate over the gas in mass at these radii, even for an absorber that is 100 kpc in diameter. One option to explain the lack of rotation in the gas detected here is a truncation of the dark matter halo due to tidal stripping (Nagai \& Kravtsov 2005). This tidal stripping would result in a lack of both dark matter and gas rotating at large radii. In the cluster simulations of Nagai \& Kravtsov, they find the average mass loss of sub-halos is $\sim$30\% near the virial radius of the cluster. IC 3061 is considered part of the Virgo Cluster and in projection is 1.2 Mpc from the center of the cluster.
UGC 7697 is not considered part of the cluster and is 3.9 Mpc in projection from its center. Both galaxies are $\sim1000$ \ km s$^{-1}$~ from the velocity centroid of the Virgo Cluster, but could be considered as being on the upper edge of the velocity range of the cluster (e.g., Binggeli et al. 1987). The virial radius of the Virgo Cluster has been estimated to be 1.61 $h^{-2/3}_{70}$ Mpc (Mamon et al. 2004), so IC 3061 is a candidate for the tidal truncation scenario, but UGC 7697 does not fall naturally into this model. In support of the truncation model is the recent work on the dark matter halo of the Milky Way by Battaglia et al. (2005), which indicates a truncated model may be necessary to explain the velocity distribution of objects out to 120 kpc from the Milky Way. Lack of support for this scenario comes from the recent results of Prada et al. (2003), who examined 3000 satellite velocities from the SDSS and found that galaxies are embedded in large halos extending out to 350 kpc. The regular structure of both galaxies' HI and stellar disks also defies the scenario of a severe tidal disruption. Finally, another possible model is that the dark matter and baryonic halos of the galaxies have slightly different shapes, i.e., the dark matter halo is oblate, while the baryonic halo is spherical or otherwise (e.g., Bailin et al. 2005). The result of low column density gas not smoothly rotating with a dark matter halo may represent a limit on the density of gas that is able to come into equilibrium with a rotating galaxy. This could be due to other forces, besides gravity, that act on diffuse gas more readily than on gas above $\sim10^{19}$ cm$^{-2}$ in HI column density. Ram pressure forces are the obvious example (e.g., Kenney, van Gorkom \& Vollmer 2004), and since these galaxies are moving, this diffuse component may not be able to hang on. Ram pressure stripping for the origin of the gas for these galaxies seems unlikely, however, as the counter-rotating nature of this gas indicates it was not recently stripped from the galaxy and both galaxies have a regular HI structure.\footnote{In a sample of Virgo galaxies imaged in HI by the VLA, the closest galaxy to IC 3061, NGC 4189, also does not show signs of ram pressure stripping (Chung et al. 2005).} In the case of the Milky Way, there may be a similar effect of low column density gas not being able to follow the dark matter halo as closely. The HVCs have typical peak column densities of just below $10^{19}$ cm$^{-2}$ (Putman et al. 2002), and most of the HVCs with peak columns between $5 \times 10^{19}$ and $10^{20}$ cm$^{-2}$ (the maximum column density observed for an HVC in the southern sky) can be attributed to the Magellanic Stream, an HVC which is known to originate from the interaction of the Magellanic Clouds with the Milky Way. If some of the lower column density HVCs represent the condensing IGM, the reason they do not appear to be rotating with the Galaxy may be related to their relatively low column densities. Perhaps diffuse IGM gas must reach a certain column density before it is able to come into equilibrium with the dark matter halo of a galaxy. This comparison cannot be taken too far, as the gas we are probing is at much lower column densities than HVCs and the velocity components being measured are different (see the ``Galaxy Fuel?'' section). The lack of kinematic correlation between the gas and dark matter at large radii can be combined with the work of C05 to strengthen the scenarios discussed here.
The strongest case for this lack of correlation lies in the two systems presented here, given the edge-on nature of the galaxies and the absorbers lying close to the plane of the galaxy at distances of $\sim100~h^{-1}_{70}$ kpc. In the C05 paper, 5 additional spiral galaxies with kinematic information were found to have low column density (10$^{13.2-13.9}$ cm$^{-2}$) Ly$\alpha$ absorbers within $182~h^{-1}_{70}$ kpc. These galaxies have various inclinations and absorber positions. C05 fit these galaxies with an extended rotating gaseous disk to determine if the absorption line velocity agreed with that expected from a Navarro, Frenk \& White (1996) cold dark matter halo model. For 3 of these, they could explain the rotation if one invoked warps or thick rotating gas layers, and in the other 2 they found the magnitude of the velocity was too high to be fit by any reasonable galactic features, or the gas would have to be counter-rotating. Since determining the main velocity components of the absorbers in relation to the galaxies is not straightforward for these systems, due to the galaxies' inclinations and the locations of the absorbers, comments cannot easily be made on the issues of gaseous inflow and/or outflow. In summary, the C05 results support the results we present here that the gas and dark matter do not appear to be linked in the extended dark matter halos of spiral galaxies. \section{Summary} The deep HI observations around low redshift Ly$\alpha$ absorbers presented here have revealed two nearby, edge-on spiral galaxies within $125~h^{-1}_{70}$ kpc (in projection from the galaxies' centers) of absorption line systems at similar velocities. Both of the absorbers are located roughly along the plane of the nearby galaxy and, based on the kinematics of the galaxies revealed by the HI observations, are not rotating with the galaxy. In the case of IC 3061 the gas traced by the absorber (along the sightline to PG 1211+143) is either counter-rotating or infalling at a very high relative velocity. For UGC 7697 the gas traced by the absorber (along the sightline to Ton 1542) may be infalling to the galaxy, but cannot be rotating with the galaxy, as also found by C05. These results indicate that despite dark matter halos extending out to hundreds of kpc, there is not an associated diffuse baryonic component bound to this dark matter at large radii. The results agree with previous findings that indicate low column density absorbers, even in the vicinity of galaxies, largely trace the cosmic web. In terms of the questions posed in the Introduction, the following ``answers'' hold based on this paper. \begin{itemize} \item Do the Ly$\alpha$ absorbers near galaxies trace gas in extended dark matter halos? No, unless for some reason the gas has not yet reached equilibrium with the dark matter. \item Is there gas rotating with the dark matter halos of galaxies at large radii? Clearly not, unless our understanding of the kinematics of dark matter halos at large radii is incorrect. Another possibility is that the dark matter halo has been truncated and neither gas nor dark matter is rotating at large radii. \item Do the Ly$\alpha$ absorbers near galaxies represent galactic waste? No, the gas is unlikely to have originated from the galaxies given the discrepant velocities and positions along the major axes of the galaxies. \item Do the Ly$\alpha$ absorbers near galaxies represent galactic fuel?
Possibly, but in one case the infall velocities are quite high given the distance from the galaxy and a comparison to the velocities of HVCs. The absorbers may represent infalling gas that will eventually condense into HVCs. \item Are the Ly$\alpha$ absorbers near galaxies simply part of large scale cosmic filaments? Yes, this is the most likely scenario given the geometry and velocities of the absorber-galaxy pairs discussed here. This is consistent with the observations of C05, Bowen et al. (2002), and Penton et al. (2002), as well as the simulations of Dav\'e et al. (2001). \item Do the Ly$\alpha$ absorbers represent the presence of a low surface brightness galaxy, rich in gas, but not easily detected in the optical? No, our observations reach HI masses on the order of $3.6 - 7.5 \times 10^{6}$ ${\rm M}_\odot$~ and column densities of $5-6 \times 10^{18}$ cm$^{-2}$, so an HI-rich dwarf galaxy or stripped gaseous feature such as the Magellanic Stream would have been detected. \end{itemize} \medskip \noindent \acknowledgements{{\bf Acknowledgements: }We would like to thank E. Ryan-Weber, M. Shull, J. van Gorkom and J. Bullock for useful discussions and the staff at Arecibo for all of their assistance, in particular Phil Perillat. We also thank the referee for useful comments. MEP acknowledges partial support by NASA through grant HST-HF-01132.01 awarded by STScI. JLR acknowledges NSF Grant AST-0302049. JTS and JLR acknowledge support from HST project awards AR-09221.01-A and GO-06593.01-A. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The Digitized Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. } \clearpage
\section{Introduction} \label{sec:intro} Ultracold fermionic atoms can exhibit both the phenomena of a Bose-Einstein condensate (BEC) \cite{Einstein24,Einstein25} of molecules and the condensation of correlated atom pairs similar to BCS-superconductivity \cite{ACooper56,BBCS57}. Recent experimental progress \cite{Jin04,Ketterle04,ZGrimm04,Partridge05} in the crossover region reveals the universality of the condensation phenomenon, as anticipated theoretically \cite{ALeggett80,BNozieres85,CMelo93,DStoof96,ECombescot99,FPethick00}. In this paper we develop a systematic functional integral formulation for the treatment of the equilibrium state \footnote{In this work we do not touch on the highly interesting non-equilibrium physics of the atom gas.} of ultracold fermionic atoms. We discuss in detail how to arrive at a formulation that treats the fermionic fluctuations of unbound atoms and the bosonic fluctuations of the molecule or di-atom field on equal footing. An approach based on a Hubbard-Stratonovich transformation is an ideal starting point for a unified inclusion of fluctuations of molecules or ``Cooper pairs''. We show how this formalism can be implemented in practice in a self-consistent approximation scheme. We carefully discuss the renormalization procedure that is needed to absorb the ultraviolet divergences in this nonrelativistic quantum field theory. The result is an effective low energy formulation which is insensitive to the microphysical cutoff scale $\Lambda$. We concentrate on dimensionless quantities by measuring all quantities in units of the Fermi momentum $k_F$ or the Fermi energy $\epsilon_F = k_F^2/2M$. The inverse Fermi momentum is the most important length scale in the problem, related to the total number density of atoms by $k_F = (3\pi^2 n)^{1/3}$. It measures the typical interparticle spacing. We show that the atom density does not appear as an independent parameter in the computations, which can be performed in terms of dimensionless ratios. This renders our formalism highly universal, since the results of experiments with different atoms and densities can be related by simple scaling laws. For this purpose, we introduce three dimensionless parameters that characterize the crossover problem efficiently: First, the ``concentration'' $c$ describes the ratio between the in-medium scattering length and the average distance between two unbound atoms or molecules. Its inverse, $c^{-1}$, smoothly connects the weak-coupling BCS regime ($c<0, |c| \ll 1$) with the BEC regime ($c>0, |c| \ll 1$). Perhaps the most interesting region is the ``crossover regime'' in between, $|c^{-1}|\lesssim 1$. Second, a Yukawa or Feshbach coupling $\tilde{h}_\phi$ characterizes the interaction between atoms and molecules. The third parameter is the temperature in units of the Fermi energy, $\tilde{T} = T/\epsilon_F$. No further details of the ``microphysics'' are needed for the macroscopic quantities. In this sense the description becomes universal. In the language of quantum field theory or critical phenomena the parameters $c$ and $\tilde{h}_\phi$ describe relevant couplings for the long distance physics. Our formalism is well suited to describe all regimes of coupling and temperature, including the superfluid phase with broken symmetry. As a first application we compute the phase diagram for the crossover problem as described by the dependence of the critical temperature on $c$.
The function $\tilde{T}(c^{-1})$ only depends on the value of $\tilde{h}_\phi$ and shows universal limits for large $|c^{-1}|$, i.e. in the BCS or BEC regime, respectively. In the crossover region the dependence on the Yukawa coupling $\tilde{h}_\phi$ is strongest. Nevertheless, we find that for the two limits $\tilde{h}_\phi \to 0$ (narrow resonance) and $\tilde{h}_\phi\to \infty$ (broad resonance) the crossover becomes independent of $\tilde{h}_\phi$. For broad Feshbach resonances the Yukawa coupling becomes an ``irrelevant'' parameter. In the narrow resonance limit $\tilde{h}_\phi\to 0$ the bosonic degrees of freedom can be associated with microscopic molecules. In this limit the molecule fluctuations can be neglected and mean field theory becomes valid. In contrast, for $\tilde{h}_\phi \to \infty$ the microscopic molecule degrees of freedom play no role and the model is equivalent to a purely fermionic model with pointlike interactions. Nevertheless, effective molecular bound states (``dressed molecules'') become a crucial feature of the crossover physics. For an actual comparison with experimental results one needs to relate the ``universal parameters'' to experimental observables, in particular the strength of the magnetic field. This is done in \cite{Diehl:2005an}, where we connect the scattering properties of atoms with the values of our universal parameters. The present paper may therefore be viewed as the theoretical basis for the more detailed comparison with experiment in \cite{Diehl:2005an}. Here we concentrate on the conceptual and formal developments. A further aspect of universality concerns the geometry of the trap. We will present a systematic ``derivative expansion'' which computes the effects of fluctuations and interactions independently of the shape of the trap. The details of the trap only enter at the end through the solution of effective field equations in the presence of a trap potential. The systematic character of the derivative expansion allows for a quantitative estimate of the reliability of simple approximations, like the ``local density'' or ``Thomas-Fermi'' approximation and the ``local condensate approximation'' frequently used in the literature \cite{DDKokkelmans,FFGriffin,BBStrinati,CCStrinati,ZStrinati}. The effective field equation for the condensate has the same status as the (time-independent) Gross-Pitaevskii equation \cite{Pitaevskii61,Gross61,Gross63} for a Bose-Einstein condensate. Our approach basically relies on two ingredients: the functional integral and the Hubbard-Stratonovich transformation or partial bosonization. The partial bosonization permits us to formulate the problem microscopically as a Yukawa theory, thereby allowing us to deal with nonlocal interactions. This route is also taken in \cite{BBKokkelmans,CCKokkelmans,DDKokkelmans,EEGriffin,FFGriffin,AATimmermans,GGChen,HHChenReview,WWStoofBos,XXStoofBos,YYStoof,ZZStoof}. The power of the functional integral techniques, however, has so far been only marginally used. A first attempt in this direction was made by Randeria \cite{CMelo93,CMelo97}. Later, other approaches employed this concept more as an argumentative tool than as a method for concrete calculations \cite{DDStrinati,CCKokkelmans,WWStoofBos,XXStoofBos}\footnote{The present paper covers part of a longer first version of \cite{Diehl:2005an}. A publication using functional integral computations has appeared more recently \cite{ZZVivas05}.}.
Beyond the systematic functional integral formulation and the emphasis on the universal aspects of the phase transition, our work extends previous results by the systematic inclusion of the molecule fluctuations. These fluctuations are important for the quantitative understanding of the phase transition for a broad Feshbach resonance with a large dimensionless Yukawa coupling $\tilde{h}_\phi$, as relevant for the present experiments in $^6\mathrm{Li}$ and $^{40}\mathrm{K}$. For zero temperature our calculations agree well with Quantum Monte Carlo simulations \cite{Carlson03} at the resonance. This paper is organized as follows: In sect. \ref{sec:metastabledilutegas} we investigate the Feshbach resonance and introduce the important molecule degrees of freedom in terms of a di-atom or molecule field $\hat{\phi} (x)$. This allows us to cover the whole range of temperature and the crossover. The Bose-Einstein condensate or the superfluid order parameter corresponds to a nonvanishing expectation value $\langle \hat{\phi}\rangle$. We establish the equivalence of our formulation with a purely fermionic formulation for which the effective interaction between the atoms contains a nonlocal piece reflecting the molecule exchange. Only in the broad resonance limit $\tilde{h}_\phi\to \infty$ does this interaction become pointlike. Our functional integral formulation in terms of an independent field $\hat{\phi}(x)$ is particularly well adapted to the crossover from a BEC to a BCS condensate: Molecules and Cooper pairs are described by the same field. Of course, the dynamical properties depend strongly on the BEC or BCS regime. In particular, we compute in sect. \ref{sec:EffActMFT} the gradient coefficient $\bar{A}_\phi$ which determines the gradient contribution to the free energy for a spatially varying molecule field. For the BEC regime, $\bar{A}_\phi$ is dominated by the ``classical value'', corresponding to the dominance of the tightly bound molecules. In contrast, for the BCS regime the fluctuation effects dominate $\bar{A}_\phi$. In this case $\hat{\phi}(x)$ can be associated with a collective degree of freedom (Cooper pair). One could omit the classical contribution to $\bar{A}_\phi$ such that $\hat{\phi}$ becomes on the microscopic level an auxiliary field. Indeed, the presence of a molecular bound state is no longer crucial in the BCS regime. One may work in a purely fermionic setting with a local interaction which depends on a magnetic field $B$. For a broad Feshbach resonance this feature holds for the entire crossover region. We discuss the derivative expansion of the effective action in sect. \ref{sec:EffActMFT}. In particular, sect. \ref{sec:renormalization} addresses the issue of additive renormalization of the detuning and sect. \ref{sec:WFR} computes the wave function renormalization $Z_\phi$ which distinguishes the ``renormalized field'' for the dressed molecules from the field for the microscopic or ``bare'' molecules. We turn in sect. \ref{sec:Relevant} to the discussion of the relevant parameters that describe the universal aspects of ultracold atoms. In terms of the dimensionless concentration $c$ and Yukawa coupling $\tilde{h}_\phi$ the system becomes independent of the detailed short distance properties. Sect. \ref{sec:MolFrac} discusses the fraction of atoms bound in molecules. It is crucial to distinguish between the ``bare'' or microscopic molecules and the dressed molecules \cite{Stoof05,IIChen05}. Their numbers are related by the multiplicative wave function renormalization $Z_\phi$.
In order to achieve a complete symmetry between the fermionic fluctuations of unbound atoms and the bosonic fluctuations of molecules we adapt our functional integral setting in sect. \ref{EffAtDens}. In sect. \ref{sec:renconstZR} we turn to the BEC limit. For a broad Feshbach resonance (large $\tilde{h}_\phi$) one finds very large $Z_\phi$, such that the condensate fraction (condensed dressed molecules) exceeds by far the number of microscopic molecules. The latter becomes completely negligible for $\tilde{h}_\phi\to \infty$. Nevertheless, we find a Bogoliubov theory for bosons in the low temperature BEC regime for all values of $\tilde{h}_\phi$. Finally, we include the molecule fluctuations (or collective fluctuations of di-atom states) in sect. \ref{sec:beyond} in the form of new bosonic gap equations. We draw conclusions in sect. \ref{sec:conclusions}. While the main part of this paper deals with a homogeneous situation, our formalism can be extended to cover the inhomogeneous situation in a trap of atoms if the inhomogeneity is not too large. Since the main part is independent of the discussion of inhomogeneities, we display the formalism for inhomogeneous situations in appendix \ref{sec:partial}. We introduce a general formalism for a functional integral which applies to arbitrary fermionic systems and is easily generalized to systems with bosons, far beyond the particular case of a Feshbach resonance (where di-atom states play a role). In addition to the (Grassmann) field variables $\hat\psi (x)$ for the fermionic atoms we employ a bosonic field variable $\sigma (x)$. It corresponds to a varying effective chemical potential which is associated with the density field $n(x)$. This procedure allows computations for the inhomogeneous setting of atoms in a trap beyond the small density approximation or beyond the Thomas-Fermi approximation. The bosonic field variable $\sigma$ is introduced by partial bosonization. We formulate the effective action $\Gamma [\sigma]$ and establish the exact formal relations between $\sigma (x)$, $n(x)$, the chemical potential $\mu$ and the local trap potential $V_l(x)$. \section{dilute gas of ultracold atoms} \label{sec:metastabledilutegas} The ultracold gas of fermionic atoms in the vicinity of a Feshbach resonance can be treated in the idealization of two stable atomic states denoted by a two-component spinor $\hat\psi$. (For the example of $^6\mathrm{Li}$ these states may be associated with the two lowest hyperfine states $|1\rangle$, $|2\rangle$.) The molecular state responsible for the Feshbach resonance can be treated as a bosonic particle. In our idealization it is stable for negative binding energy and can decay into a pair of fermionic atoms for positive binding energy. For a realistic description our formalism has to be capable of describing the following phenomena: (i) Condensates of atom pairs may form at low temperature, similar to the BCS description of superconductivity. (ii) Molecules of two atoms can be exchanged between the single atoms, thus contributing to the interaction. These molecules may also form a Bose-Einstein condensate at low temperature. In our formalism both effects find a \emph{unified} description, as will be discussed in detail in this paper.
We work with a microscopic action which explicitly includes a bosonic field $\hat\phi$ with atom number two \cite{CCKokkelmans}, \begin{eqnarray}\label{PhiAction} S_B&=& \int dx\Big\{\hat\psi^\dagger(\partial_\tau - \frac{1}{2M}\triangle -\sigma) \hat\psi\nonumber\\ && \qquad+\hat{\phi}^*(\partial_\tau - \frac{\triangle}{4M} + \bar{\nu}_\Lambda-2\mu) \hat{\phi} \nonumber\\ && \qquad - \frac{1}{2}\bar{h}_\phi \Big(\hat{\phi}^*\hat\psi^T \epsilon\hat\psi - \hat{\phi}\hat\psi^\dagger\epsilon\hat\psi^*\Big) \Big\}. \end{eqnarray} Here we employ the Matsubara formalism where the Euclidean time $\tau$ is wrapped on a torus with circumference $\beta = 1/T$, with conventions \begin{eqnarray} x = (\tau,\textbf{x}), \quad \int dx = \int_0^{\beta} d\tau \int d^3x. \end{eqnarray} (Our units are $\hbar = c = k_B = 1$.) We will shortly see how this microscopic model relates to a purely fermionic setting by means of a partial bosonization or Hubbard-Stratonovich transformation. The complex two-component spinors $\hat\psi(x)$ are anticommuting Grassmann variables. We assume an equal mixture of the two atomic states. In this case the chemical potential associated with the difference in the number of atoms in the ``up'' and ``down'' states precisely cancels the energy difference between the two states, such that both can be omitted. (For unequal mixtures the action contains an additional term $\propto \hat\psi^\dagger \tau_3 \hat\psi$.) The bosonic molecules are described by a complex bosonic field $\hat{\phi}$. The propagator for these ``bare molecules'' is obtained from simple symmetry considerations, i.e. we assume a mass $2M$, leading to a nonrelativistic kinetic energy $q^2/4M$. The quadratic term $\sim\hat{\phi}^* \hat{\phi}$ involves the ``bare'' binding energy or detuning $\bar{\nu}_\Lambda$, which typically depends on the magnetic field. In order to make contact with physical observables, $\bar\nu_\Lambda$ has to be additively renormalized, which is implemented in sect. \ref{sec:renormalization}. We use here two different chemical potentials $\sigma$ and $\mu$ for the fermionic atoms and bare molecules. This is useful if we want to obtain separately the densities of fermionic atoms or bare molecules by differentiation of the free energy with respect to $\sigma$ or $\mu$. Since only the total number of atoms is conserved, one has to set $\sigma =\mu$ at the end of the computations. The distinction between $\sigma$ and $\mu$ is appropriate if one wants to understand explicitly the role of the microscopic (or bare) molecules. In sect. \ref{EffAtDens} we will drop this distinction in favor of a more unified approach. There we will set $\mu =\sigma$ from the outset, such that $\sigma$ will be conjugate to the total density of atoms irrespective of a microscopic distinction between unbound atoms and molecules. In the main part of this paper we will treat $\sigma$ and $\mu$ as constant classical source terms. However, we stress that this source term can be straightforwardly promoted to a fluctuating field, $\sigma \to \hat\sigma(x)$. This issue, and its use for the description of inhomogeneities beyond the usual Local Density Approximation, will be discussed in appendix \ref{sec:partial}. The Yukawa or Feshbach coupling $\bar{h}_\phi$ describes the coupling between the single atoms and molecules, with $\epsilon_{\alpha\beta}= -\epsilon_{\beta\alpha}$, $\epsilon_{12}=1$. For $\bar{h}_\phi\to 0$ the molecular states decouple.
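Before proceeding we record, for later reference, the classical inverse propagator of the bare molecules that follows from the quadratic part of eq. (\ref{PhiAction}) in momentum space (we write $Q = (\omega_n, \vec{q}\,)$ with bosonic Matsubara frequencies $\omega_n = 2\pi n T$ and denote this object $\bar{P}_\phi$),
\begin{eqnarray}
\bar{P}_\phi(Q) = \mathrm{i}\omega_n + \frac{\vec{q}^{\,2}}{4M} + \bar{\nu}_\Lambda - 2\mu.
\end{eqnarray}
This combination will reappear as the denominator of the molecule exchange interaction in eq. (\ref{Mom4Fermion}) below.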
Even for $\bar{h}_\phi\to 0$, however, in an appropriately performed ``narrow resonance limit'' which keeps the scattering length fixed, an exact solution of the many-body problem becomes feasible above the critical temperature. A detailed analysis of this limit is given in \cite{Diehl:2005an}. Broad Feshbach resonances correspond to large $\bar h_\phi$, and we will see that the limit $\bar h_\phi\to \infty$ describes a purely fermionic theory with pointlike interactions where microscopic molecules can be neglected. The thermodynamic equilibrium situation is described by the partition function. The basic ingredient for our formalism is the representation of this object in terms of a functional integral with weight factor $e^{-S_B}$, with $S_B$ the Euclidean action (\ref{PhiAction}), \begin{eqnarray}\label{1} Z[\eta,j]&=&\int {\cal D}\hat\psi{\cal D}\hat\phi \exp \Big\{-S_B[\hat\psi,\hat\phi]\nonumber\\ &&+\int\hspace{-0.12cm} dx\,\, \eta(x)\hat\psi^\dagger(x) + \eta^\dagger(x)\hat\psi(x) \nonumber\\ && + j(x) \hat\phi^*(x) + j^*(x) \hat\phi(x)\Big\}. \end{eqnarray} This formulates the full quantum theory in terms of the sources $\eta$ and $j$ for the fermion and the boson fields. All one-particle irreducible $n$-point functions, including the order parameter and the correlation functions, can be directly extracted from the effective action $\Gamma$, which obtains by a Legendre transform, \begin{eqnarray}\label{GammaLeg} \Gamma [\psi, \bar\phi] &=& - \ln Z + \int\hspace{-0.12cm} dx\,\, \eta(x)\psi^\dagger(x) + \eta^\dagger(x)\psi(x)\nonumber\\ && \qquad \qquad + j(x) \bar\phi^*(x) + j^*(x) \bar\phi(x). \end{eqnarray} The effective action is a functional of the ``classical'' fields (or field expectation values) $\psi = \langle \hat\psi\rangle , \bar\phi =\langle \hat\phi\rangle$ in the presence of sources. They are defined as \begin{eqnarray} \bar\phi (x) = \langle \hat\phi\rangle(x) = \frac{\delta \ln Z}{\delta j^*(x)} \end{eqnarray} and analogously for the fermion fields. Of course, due to Pauli's principle, the fermion field cannot acquire a nonvanishing expectation value for $\eta = \eta^\dagger =0$, such that the physical value is simply $\psi =0$. It is often convenient to write $\Gamma$ as an implicit functional integral over fluctuations $\delta \phi, \delta\psi$ around ``background fields'' $\bar\phi, \psi$, \begin{eqnarray}\label{GammaFuncInt} \Gamma[\psi ,\bar\phi] &=& -\ln \int \mathcal{D}\delta\psi\, \mathcal{D}\delta\phi\, \exp\Big( - S [\psi + \delta \psi, \bar\phi + \delta \phi ]\nonumber\\ && \qquad +\int \big( j^*\delta\phi+ \eta^\dagger\delta\psi + \mathrm{h.c.}\big)\Big), \end{eqnarray} with $j^*(x) = \delta \Gamma /\delta \bar\phi(x)$. This form is particularly useful for the construction of the equation of state, i.e. the explicit expression for the total atom number density. The total number density of atoms $n$ includes those from unbound or ``open channel'' atoms and the ones arising from the bare molecules or ``closed channel'' atoms. It obeys \begin{eqnarray}\label{TotDens} n (x)= \bar{n}_F(x) + \bar{n}_B(x) = \langle\hat\psi^\dagger(x) \hat\psi(x)\rangle + 2\langle \hat{\phi}^*(x)\hat{\phi}(x)\rangle. \end{eqnarray} Indeed, the action is invariant under $U(1)$ phase transformations of the fermions and bosons, \begin{eqnarray} \hat\psi \to e^{\mathrm{i} \theta }\hat \psi, \quad \hat\phi \to e^{2\mathrm{i} \theta} \hat \phi, \end{eqnarray} and the corresponding Noether charge is the total atom number $N= \int d^3 x\, n(x)$. We emphasize that eq.
(\ref{TotDens}) is no ``ad hoc'' assumption -- it is an exact expression for the particle number and follows directly from the microscopic formulation. More technically speaking, $\langle\hat\psi^\dagger \hat\psi\rangle$ and $2\langle \hat{\phi}^* \hat{\phi}\rangle$ represent the full two-point correlation functions of the ``bare fields'' which appear in the microscopic action (\ref{PhiAction}) and are quantized by means of the functional integral. In a homogeneous situation, the conserved particle number can be replaced by a fixed constant particle density, $n =N/V$. The bosonic part $\bar{n}_B$ counts the total number of atoms contained in the microscopic or ``bare'' molecules. This number receives a contribution from free molecules and from the condensate, as discussed in more detail in sect. \ref{sec:MolFrac}. In the language often used for a Feshbach resonance, $\bar{n}_B$ measures the ``closed channel'' microscopic atoms. Using the formalism of the present paper we have computed $\bar n_B$ as a function of the magnetic field $B$ in \cite{Diehl:2005an}. We find very good agreement with observation \cite{Partridge05} over several orders of magnitude in $\bar n_B$. Since the action (\ref{PhiAction}) contains only terms quadratic or linear in $\hat{\phi}$, it is straightforward to express the expectation value $\bar{\phi}_0$ in terms of a local fermion bilinear. It obeys (for vanishing source for $\hat{\phi}$) \begin{eqnarray} \big( \bar{\nu}_\Lambda- 2\mu -\frac{\triangle}{4M} + \partial_\tau \big)\bar{\phi}_0 = \frac{\bar{h}_\phi}{2}\langle \hat\psi^T\epsilon\hat\psi\rangle. \end{eqnarray} In particular, for constant $\bar{\phi}_0$ we find \begin{eqnarray} \bar{\phi}_0 = \frac{\bar{h}_\phi}{2\bar{\nu}_\Lambda - 4\mu}\langle \hat\psi^T\epsilon\hat\psi\rangle. \end{eqnarray} This demonstrates directly that our formalism makes no difference between a ``condensate of molecules'' $\bar{\phi}$ and a ``condensate of atom pairs'' $\langle\hat\psi^T\epsilon\hat\psi\rangle$ -- they are simply related by a multiplicative constant. We finally show the equivalence of our formalism with a model containing only fermionic atoms and no microscopic molecules. The interaction in this fermionic description is, in general, not local. It becomes local, however, in the limit of a broad Feshbach resonance, $\bar h_\phi\to \infty$. For this purpose we use again the quadratic form of the bosonic part of the microscopic action (\ref{PhiAction}), which allows us to integrate out the $\hat{\phi}$ field. Expressed only in terms of fermions, our model now contains a momentum dependent four-fermion interaction ($Q_4=Q_1+Q_2-Q_3$; $Q = (\omega_n, \vec q)$ with discrete bosonic Matsubara frequencies $\omega_n = 2\pi n T$ at finite temperature) \begin{eqnarray}\label{Mom4Fermion} S_{int} &=& - \frac{1}{2}\int\limits_{Q_1,Q_2,Q_3}\big(\hat\psi^\dagger(-Q_1)\hat\psi(Q_2)\big)\big(\hat\psi^\dagger(Q_4) \hat\psi(-Q_3)\big)\nonumber\\ &&\Big\{\frac{\bar{h}_\phi^2}{ \bar{\nu}_\Lambda - 2\mu + 2\pi\mathrm{i}(n_1-n_4)T+(\vec{q}_1-\vec{q}_4)^2/4M } \Big\}.\nonumber\\ \end{eqnarray} We emphasize that there is no difference between the Yukawa type model described by the action (\ref{PhiAction}) and a purely fermionic model with the interaction (\ref{Mom4Fermion}). All physical observables can be computed in either one or the other of the two formulations. However, eq.
(\ref{Mom4Fermion}) reveals that our model describes a setting beyond pointlike interactions via the classical frequency and momentum dependence of the four-fermion interaction. The momentum structure of (\ref{Mom4Fermion}) is compatible with interactions in the $\hat\psi\hat\psi$ channel. The action (\ref{PhiAction}) hence models a nonlocal coupling between the fermionic constituents. Reversing the logic, eq. (\ref{PhiAction}) could also be obtained by starting from eq. (\ref{Mom4Fermion}) and performing a Hubbard-Stratonovich transformation or partial bosonization \cite{Hubbard59,Stratonovich}. Finally, we note that we could also choose classical ``gradient coefficients'' $\bar A_\phi^{(cl)}$ (see below) different from $1/(4M)$ in order to model an experimentally determined effective range. In the pointlike limit the momentum dependence and the dependence on $\mu$ can be neglected and the coupling term in eq. (\ref{Mom4Fermion}) is replaced by the ``local interaction approximation'' \begin{eqnarray}\label{BosonCond2} \bar{\lambda}_\Lambda=-\frac{\bar{h}_\phi^2}{\bar{\nu}_\Lambda}. \end{eqnarray} This limit obtains formally for $\bar h_\phi^2 \to \infty$, $\bar\nu_\Lambda \to \infty$, while keeping $\bar \lambda_\Lambda$ fixed. It is relevant for broad Feshbach resonances, as discussed in detail in \cite{Diehl:2005an}. \section{Derivative Expansion for the Effective Action} \label{sec:EffActMFT} The condensation of atom pairs or molecules is signalled by a non-vanishing expectation value $\langle\hat{\phi}\rangle =\bar{\phi}_0$. The associated symmetry breaking of the global continuous symmetry of phase rotations of $\hat\psi$ and $\hat{\phi}$ (related to the conservation of the number of atoms) induces a massless Goldstone boson. This is the origin of superfluidity. In this section we show how to describe this phenomenon in the effective action formalism. For this conceptual issue it is sufficient to work in the mean field approximation or a simple extension thereof (extended MFT). The more sophisticated approximation schemes beyond mean field are presented in sects. \ref{EffAtDens}, \ref{sec:beyond}. In this work we will treat the effective action in a derivative expansion, i.e. we write \begin{eqnarray}\label{GammaPosSpace} \Gamma[\bar \phi] = \int dx\Big\{ U(\bar\phi) + Z_\phi \bar{\phi}^*\partial_\tau\bar{\phi} + \bar{A}_\phi\vec{\nabla}\bar{\phi}^*\vec{\nabla}\bar{\phi} + ...\Big\}, \end{eqnarray} and compute the ``wave function renormalization'' $Z_\phi$, the ``gradient coefficient'' $\bar A_\phi$ and the effective potential $U(\bar\phi)$. In the present paper we do not compute the corrections to the part of the effective action involving fermions -- for this part we simply take the classical action. (See \cite{Diehl:2005an} for the renormalization of the Yukawa coupling in the presence of a ``background'' four-fermion interaction.) We therefore omit the fermionic part of the effective action from now on. For the concrete calculation we work in momentum space. We emphasize, however, that the above expression can be used for the investigation of weak inhomogeneities as encountered in atom traps. The fluctuation problem, i.e. the computation of $Z_\phi, A_\phi, U$ in the above truncation, can then be solved in momentum space, while the effects of weak inhomogeneities can be investigated by solving the classical field equations derived from (\ref{GammaPosSpace}). This reaches substantially beyond the usual local density approximation, which ignores the kinetic terms in eq.
(\ref{GammaPosSpace}). We discuss the implementation of an external trapping potential in our functional integral formalism in app. \ref{sec:partial}. Here, however, we focus on the homogeneous situation. In its most general form, the effective action depends on the parameters $T, \sigma$ and $\mu$ and on the classical field $\bar\phi$. The dependence on the chemical potentials $\sigma$ and $\mu$ can already be inferred from the partition function -- they are only spectators w.r.t. the Legendre transform. Following the thermodynamic construction, the total particle density in a homogeneous setting is obtained as \begin{eqnarray}\label{TheDensEq} n = - \frac{\partial U}{\partial \sigma}\Big|_{\mu} \,\, - \,\, \frac{\partial U}{\partial \mu}\Big|_{\sigma}. \end{eqnarray} This prescription precisely reproduces eq. (\ref{TotDens}). Here $U$ has to be taken at its minimum. In the absence of sources the field equation for $\bar\phi (x)$ reads \begin{eqnarray} \frac{\delta \Gamma }{\delta \bar\phi(x)} \stackrel{!}{=} 0. \end{eqnarray} For a homogeneous situation the stable solution corresponds to a minimum of the effective potential, \begin{eqnarray} \frac{\partial U }{\partial \bar\phi} &=& U' \cdot \bar\phi^* = 0, \end{eqnarray} with \begin{eqnarray} U' &=& \frac{\partial U }{\partial \bar\rho}, \quad \bar\rho = \bar\phi^*\bar\phi. \end{eqnarray} Here we have introduced the $U(1)$ invariant $\bar\rho = \bar\phi^*\bar\phi$ -- the effective potential depends only on this combination. This simple equation can be used to classify the thermodynamic phases of the system, \begin{eqnarray}\label{CharPhases} \mathrm{Symmetric\,\,phase\,\,(SYM):} && \bar\rho_0 =0, \quad U'(0) > 0,\nonumber\\ \mathrm{Symmetry\,\,broken\,\,phase\,\,(SSB):} &&\bar\rho_0 > 0, \quad U'(\bar\rho_0) = 0,\nonumber\\ \mathrm{Phase\,\,transition\,\,(PT):} && \bar\rho_0 = 0,\quad U'(0) = 0,\nonumber\\ \end{eqnarray} where $\bar\rho_0 = \bar\phi_0^*\bar\phi_0$ denotes the location of the minimum of $U(\bar\rho)$. For high temperatures, the minimum of the effective potential occurs at $\bar \phi =0$ and we deal with a normal gas phase. For low enough $T$, on the other hand, we expect the minimum of $U(\sigma, \bar{\phi})$ to occur for $\bar{\phi}_0\neq 0$. The spontaneous breaking of the $U(1)$ symmetry is signalled by a nonzero field expectation value and indicates the condensation phenomenon. This has an important aspect of universality: one and the same criterion can be used for the whole parameter space, both for the BCS-type condensation of Cooper pairs and for the BEC of microscopic molecules. In the remainder of this section we will evaluate the effective action in the mean field approximation (MFT). This scheme is defined by only considering the effects generated by fermion fluctuations. We will include bosonic fluctuations in later sections. Beyond MFT, we first include the contribution to the density from dressed molecules. This contribution derives from the connected part of the effective bosonic two-point function, i.e. the fluctuation part of the bosonic particle density. This effect is included in different current approaches to the crossover problem in the limit of broad Feshbach resonances \cite{CCStrinati,HHChenReview,Stoof05}. We will refer to it as extended Mean Field Theory. Furthermore, we include in sect. \ref{sec:beyond} the modifications of the effective potential due to bosonic fluctuations, using suitable Schwinger-Dyson equations.
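As a concrete illustration of the criterion (\ref{CharPhases}), consider a toy quartic potential $U(\bar\rho) = \bar m^2\,\bar\rho + \frac{\lambda}{2}\bar\rho^2$ with $\lambda>0$. The following minimal sketch classifies the phase from the location of the minimum; the quartic form and all parameter values are assumptions for illustration only, not the potential computed below:
\begin{verbatim}
# Classify SYM / SSB / PT for a toy potential U(rho) = m2*rho + (lam/2)*rho^2,
# following eq. (CharPhases). Parameter values are arbitrary illustrations.
def classify(m2, lam, tol=1e-12):
    rho0 = max(0.0, -m2 / lam)    # minimum of U on rho >= 0 (requires lam > 0)
    if rho0 > tol:
        return rho0, "SSB"         # U'(rho0) = 0 with rho0 > 0
    return 0.0, ("PT" if abs(m2) < tol else "SYM")   # U'(0) = m2

print(classify(+0.5, 1.0))   # -> (0.0, 'SYM')
print(classify(-0.5, 1.0))   # -> (0.5, 'SSB')
print(classify( 0.0, 1.0))   # -> (0.0, 'PT')
\end{verbatim}
The same logic applies unchanged to the full potential computed in the following, with $\bar m^2$ replaced by $U'(0)$ evaluated numerically.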
In a realistic physical situation the validity of our model is restricted to momenta smaller than some microphysical ``ultraviolet cutoff'' $\Lambda$. Physically, $\Lambda$ is set by the inverse range of the van der Waals interactions; one may use $\Lambda \approx (100\, a_B)^{-1}$, with $a_B$ the Bohr radius. For practical computations it is often convenient to consider the limit $\Lambda \to \infty$, such that no explicit information about the ``cutoff physics'' is needed. This requires expressing the couplings of the theory in terms of suitably defined ``renormalized couplings'' that stay finite for $\Lambda\to \infty$. In the next subsection we will discuss the additive renormalization of the detuning. The functions $Z_\phi, \bar A_\phi$, in contrast, are UV finite. We will also subtract field-independent pieces linear in $\sigma$ and $\mu$ that obtain in a naive MFT computation. This additive ``density renormalization'' will be traced back to the relation between the functional integral and the operator formalism. \subsection{Effective potential and additive renormalization} \label{sec:renormalization} In the mean field approximation the effective potential reads, after carrying out the Matsubara summation and omitting an irrelevant infinite constant, \begin{eqnarray}\label{USigmaPhi} U_\Lambda(\sigma,\bar{\phi}) &=& (\bar{\nu}_\Lambda -2\mu) \bar{\phi}^*\bar{\phi} + \Delta U_1^{(F)},\\\nonumber \Delta U_1^{(F)} &=& - 2T\int_\Lambda\frac{d^3q}{(2\pi)^3}\ln\cosh\gamma_\phi, \end{eqnarray} where \begin{eqnarray}\label{DefGammaPhi} \gamma_\phi&=&\frac{1}{2T}\Big(\Big(\frac{q^2}{2M} -\sigma\Big)^2 + r \Big)^{1/2} = \big(\gamma^2 + \beta^2\big)^{1/2},\\\nonumber \gamma &=& \frac{\frac{q^2}{2M} -\sigma}{2T},\quad \beta = \frac{r^{1/2}}{2T}, \quad r = \bar{h}_\phi^2\bar{\phi}^*\bar{\phi}. \end{eqnarray} Here we have added an index $\Lambda$ as a reminder that this form still depends on the ultraviolet cutoff $\Lambda$. We regularize the effective potential by limiting the integration over spacelike momenta by an upper bound $\Lambda$, $q^2<\Lambda^2$. Let us now discuss the additive renormalization needed to properly describe the physics encoded in this object. It is needed in two instances: The first one concerns a zero-point shift of the two-point function and is related to the quantization via the functional integral. Its removal does not involve a physical scale and can thus be seen as a normalization of a certain observable, the particle number. The second one is related to a true ultraviolet divergence which needs to be cured by an appropriate counterterm. For the ultraviolet renormalization we can restrict ourselves to the simpler situation where there is no spontaneous symmetry breaking, since this effect occurs only in the low energy sector and cannot affect the ultraviolet physics. For the physical value of the field expectation value this implies $\bar\phi =0$ and $\gamma_\phi= \gamma$. The fermionic part of the particle number is naively obtained from the thermodynamic relation $\bar{n}_\Lambda = - \partial U_\Lambda/\partial\sigma$ in a homogeneous setting. This yields the explicit expression \begin{eqnarray}\label{BarNFMFT} \bar{n}_{F,\Lambda} = -\int_\Lambda\frac{d^3q}{(2\pi)^3}\tanh \gamma, \end{eqnarray} and we observe that this number may become negative for large negative $\sigma/T$. In order to clarify the precise relation between $\bar{n}_\Lambda$ and the particle density, we first consider the simpler situation of a single fermionic degree of freedom.
In this case the expectation value $\langle\hat\psi^\dagger\hat\psi\rangle$ can be related to the expectation values of products of the usual annihilation and creation operators $a,a^\dagger$, which obey the anticommutation relation $a^\dagger a+aa^\dagger=1$, \begin{eqnarray}\label{24} \langle\hat\psi^\dagger\hat\psi\rangle&=& \frac{1}{2}\langle \hat\psi^\dagger\hat\psi-\hat\psi\hat\psi^\dagger\rangle= \frac{1}{2}\langle a^\dagger a-aa^\dagger\rangle\nonumber\\ &=&\langle a^\dagger a\rangle -1/2=n-1/2. \end{eqnarray} Here the second equality holds since this combination of operators is covariant with respect to permutations of the ordering. For a lattice model (as, for example, the Hubbard model) with $f$ degrees of freedom per site the fermion number per site therefore reads $n=\bar{n}_\Lambda+\frac{f}{2}$. For electrons in a solid ($f=2$) one can associate $\bar{n}_\Lambda$ with the difference of the electron density from half filling (where $n=1$), i.e. the average number of electrons minus holes per site as compared to the half-filling density. For relativistic charged fermions $\hat\psi^\dagger\hat\psi$ measures the difference between particle and antiparticle density and the additive constant drops out. For nonrelativistic atoms, however, the relation between the atom density $n$ and $\bar{n}_\Lambda$ becomes \footnote{ Note that the constant shift $\hat{n}$ diverges if the ultraviolet cutoff for the momentum integration $(q^2<\Lambda^2)$ goes to infinity, $\hat{n}=\Lambda^3/(6\pi^2)$.} \begin{eqnarray}\label{25} && \bar{n}_{F,\Lambda}(x)=\langle\hat\psi^\dagger(x)\hat\psi(x)\rangle \\\nonumber &=&\frac{1}{2}\langle\int\limits_y\sum \limits_{i,j} \left[\hat\psi_i^\dagger(x)\hat\psi_j(y)-\hat\psi_j(y)\hat\psi_i^\dagger(x)\right]\delta_{ij}\delta(x-y)\rangle\\\nonumber &=&\frac{1}{2}\langle\int\limits_y\sum \limits_{i,j}\left[a_i^\dagger(x)a_j(y)-a_j(y)a_i^\dagger(x) \right]\delta_{ij}\delta(x-y)\rangle\\\nonumber &=&\frac{1}{2}\langle\int\limits_y\sum \limits_{i,j}\left[2a_i^\dagger(x)a_j(y)-\delta_{ij}\delta(x-y) \right]\delta_{ij}\delta(x-y)\rangle\\\nonumber &=&\langle a^\dagger(x)a(x)\rangle - \frac{f}{2}\delta(0) = n_F(x) - \frac{f}{2}\int \frac{d^3q}{(2\pi)^3}\\\nonumber &=& n_F(x)-\hat{n}. \end{eqnarray} The volume factor in momentum space, $\delta(0)$, diverges in the limit of infinite momentum cutoff. The physical fermionic particle density $n_F(x)$ and the relative particle density $\bar{n}_{F,\Lambda}(x)= \langle\hat\psi^\dagger (x)\hat\psi(x)\rangle$ are therefore related by an additive shift that depends on the momentum cutoff. As a consequence, one now finds a manifestly positive total fermionic particle number ($f=2$ for the two hyperfine states), \begin{equation}\label{26} N_F=\int d^3x(\bar{n}_{F,\Lambda}+\hat{n})= 2V \int\frac{d^3q}{(2\pi)^3} \big(\exp(2\gamma)+1\big)^{-1}. \end{equation} The momentum integral is now ultraviolet finite for $\Lambda\to \infty$. It becomes exponentially insensitive to the ultraviolet cutoff $\Lambda$, so that we have dropped the index $\Lambda$. We recover the familiar Fermi distribution. An analogous argument holds for the connected part of the bosonic two-point function, which will be implemented below. Formally, this additive renormalization appears in the form of field independent terms linear in $\sigma$ and $\mu$ in the classical potential. Let us now proceed to the second instance where UV renormalization is needed. The microscopic action (\ref{PhiAction}) depends explicitly on two parameters, $\bar{\nu}_\Lambda$ and $\bar{h}_\phi$.
A third parameter is introduced implicitly by the ultraviolet cutoff $\Lambda$ for the momentum integration in the fluctuation effects. (Besides this, the results will depend on the thermodynamic variables $T$ and $\sigma, \mu$.) Contact with experiment is established by relating the microscopic parameters to observables of the concrete atomic system. Here we choose these parameters to be the magnetic field dependent physical detuning $\bar\nu(B)$ and the Feshbach coupling $\bar{h}_\phi$ \footnote{For vanishing background or open channel coupling, $\bar{h}_\phi$ is a free parameter. Otherwise, a further UV renormalization of the Feshbach coupling is necessary \cite{Diehl:2006}.}. The parameters $(\bar\nu, \bar{h}_\phi)$ can, however, be replaced by an equivalent set $(a^{-1},\bar{h}_\phi)$ ($a$ the scattering length) in a second step, as pointed out below. Once the parameters $\bar{\nu}(B)$ and $\bar{h}_\phi$ are fixed by the properties of the molecules or atom scattering in empty space, we can proceed to compute the properties of the atom gas at nonzero temperature and density without further free parameters. In the vicinity of the Feshbach resonance at $B =B_0$ we may approximate $\bar{\nu}_\Lambda(B)$ by a linear behavior (linear Zeeman effect), \begin{eqnarray} \frac{\partial\bar{\nu}_\Lambda}{\partial B} = \bar{\mu}_B. \end{eqnarray} Here $\bar{\mu}_B= \mu_+ + \mu_- - \mu_M$ reflects the difference between the sum of the magnetic moments of the two atomic states ($\mu_+ + \mu_-$) and the molecule magnetic moment $\mu_M$. We relate the physical detuning $\bar{\nu}$ to $\bar{\nu}_\Lambda$ by an additive, $B$-independent shift, \begin{eqnarray}\label{Watwees} \bar{\nu} = \bar{\nu}_\Lambda - \frac{\bar{h}_\phi^2 M \Lambda}{2\pi^2}, \quad \frac{\partial\bar{\nu}_\Lambda}{\partial B} = \frac{\partial\bar{\nu}}{\partial B}. \end{eqnarray} This is motivated by a consideration of the fermionic contribution to the ``boson mass term'', \begin{eqnarray} \bar m_\phi^2 = U'(\bar\rho_0=0). \end{eqnarray} Indeed, the fluctuation contribution diverges for $\Lambda\to \infty$, \begin{eqnarray}\label{pedagogic} \bar{m}_\phi^2 &=& \bar{\nu}_\Lambda - 2\mu - \frac{\bar{h}_\phi^2}{2}\hspace{-0.1cm} \int^\Lambda\hspace{-0.2cm} \frac{d^3q}{(2\pi)^3}\big(\frac{q^2}{2M} - \sigma\big)^{-1} \tanh\frac{q^2/2M - \sigma}{2T}\nonumber\\\nonumber &=& \bar{\nu} - 2\mu - \frac{\bar{h}_\phi^2}{2}\hspace{-0.1cm} \int^\Lambda \hspace{-0.2cm} \frac{d^3q}{(2\pi)^3}\Big[\big(\frac{q^2}{2M} - \sigma\big)^{-1} \tanh\frac{q^2/2M - \sigma}{2T} \\ &&\qquad\qquad\qquad- \frac{2M}{q^2}\Big] . \end{eqnarray} In the second equation the linear dependence of the fluctuation correction on the cutoff $\Lambda$ is absorbed into the definition of the physical detuning $\bar{\nu}$. The remaining integral in the second line of (\ref{pedagogic}) is very insensitive to the precise value of $\Lambda$, and we can formally send $\Lambda$ to infinity. More generally, we choose $\bar \nu_\Lambda - \bar \nu$ such that the zero of $\bar \nu$ coincides with the Feshbach resonance at $B_0$, \begin{eqnarray} \bar{\nu} = \bar{\mu}_B(B - B_0). \end{eqnarray} In vacuum ($n=0, T=0$) the Feshbach resonance corresponds to a vanishing binding energy, $\epsilon_M = 0$. This is realized \cite{Diehl:2005an} for $\sigma=0$, $\bar{m}_\phi^2 = 0$. There is also no condensate in the vacuum, i.e. $\bar{\phi}_0=0$.
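The effect of the subtraction in eq. (\ref{pedagogic}) is easily checked numerically. The sketch below uses toy parameter values in arbitrary units with $2M=1$ (and $\sigma<0$, so that the integrand is smooth); it compares the fluctuation integral with and without the $2M/q^2$ counterterm for two cutoffs:
\begin{verbatim}
# Cutoff (in)sensitivity of the boson mass term, cf. eq. (pedagogic).
# Toy values in arbitrary units with 2M = 1; not a calibrated computation.
import numpy as np
from scipy.integrate import quad

M, T, sigma = 0.5, 0.05, -0.2

def integrand(q, subtract):
    eps = q**2 / (2 * M) - sigma
    val = np.tanh(eps / (2 * T)) / eps
    if subtract:
        val -= 2 * M / q**2
    return q**2 / (2 * np.pi**2) * val   # measure d^3q/(2 pi)^3

for Lam in (50.0, 500.0):
    raw = quad(integrand, 1e-6, Lam, args=(False,), limit=300)[0]
    sub = quad(integrand, 1e-6, Lam, args=(True,),  limit=300)[0]
    print(Lam, raw, sub)
# raw grows linearly with Lam (absorbed into nu_bar); sub converges.
\end{verbatim}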
We note that there are no boson fluctuations contributing to the renormalization of the mass term in the physical vacuum $n=T=0$, as can be seen from a diagrammatic argument \cite{Diehl:2006}. The scattering length $a$ can be defined in terms of the scattering amplitude at zero momentum and zero energy \cite{Diehl:2005an}. At the present stage we may consider a ``resonant scattering length'' $a_R$ related to $\bar{\nu}$ by \begin{eqnarray} \frac{M}{4\pi a_R} = -\frac{\bar{\nu}}{\bar{h}_\phi^2} = -\frac{\bar{\nu}_\Lambda}{\bar{h}_\phi^2} + \frac{M\Lambda}{2\pi^2}. \end{eqnarray} It accounts for the contribution of the molecule exchange to the atom scattering. For $B\neq B_0$ and low enough momenta and energies (low enough temperature) the molecule exchange can be described by an effective pointlike interaction. We therefore also define the renormalized resonant four-fermion vertex $\bar{\lambda}_R$, \begin{equation}\label{CoupPointlike} \bar{\lambda}_R = \frac{4\pi a_R}{M}. \end{equation} Comparison with eq. (\ref{BosonCond2}) yields \begin{eqnarray}\label{UVRenorm} \frac{1}{\bar{\lambda}_R}&=& -\frac{\bar{\nu}}{\bar{h}_\phi^2} =- \frac{\bar{\nu}_\Lambda}{\bar{h}_\phi^2} + \frac{M\Lambda}{2\pi^2} = \frac{1}{\bar{\lambda}_\Lambda} + \frac{M\Lambda}{2\pi^2} . \end{eqnarray} This demonstrates that the renormalization of $\bar{\nu}$ (\ref{Watwees}) corresponds to the renormalization of the effective atom interaction strength $\bar{\lambda}$. It is instructive to consider in the mean field framework the explicit equation which determines the order parameter $\bar \phi_0$ in the superfluid phase. A nonzero $\bar{\phi}_0$ obeys \begin{eqnarray}\label{eq65} \frac{\bar{\nu} - 2\mu}{\bar{h}_\phi^2}= \frac{1}{4T} \int\frac{d^3q}{(2\pi)^3}\big(\gamma_\phi^{-1}\tanh \gamma_\phi - 4M T/q^2\big). \end{eqnarray} In an alternative, purely fermionic formulation we may compute $\bar{\phi}_0$ in the local interaction approximation (\ref{BosonCond2}) by solving the Schwinger-Dyson equation \cite{Dyson49,Schwinger51}. Eq. (\ref{eq65}) would correspond precisely to the BCS gap equation (lowest order Schwinger-Dyson equation) for the purely fermionic formulation with local interaction \cite{AABiBa00,Jaeckel02}, provided we choose \begin{eqnarray}\label{SD1} \frac{1}{\bar{\lambda}_\Lambda}=-\frac{\bar{\nu}_\Lambda - 2\mu}{\bar{h}_\phi^2} \quad \mathrm{or}\quad \frac{1}{\bar{\lambda}}=-\frac{\bar{\nu} - 2\mu}{\bar{h}_\phi^2} . \end{eqnarray} This suggests the definition of a $\mu$- or density-dependent effective coupling $\bar\lambda (\mu)$ (cf. eq. (\ref{Mom4Fermion})) for the atom-molecule model. In the many-body context and in thermodynamic equilibrium, where fermions and bosons share a common chemical potential $\sigma=\mu$, the latter is determined by the density. In the physical vacuum, obtained by sending the density to zero, $\mu$ describes the binding energy of a molecule, $\epsilon_M = 2\mu = 2\sigma$, and does not vanish on the BEC side of the resonance \cite{Diehl:2005an}. We emphasize that the resonant coupling $\bar \lambda_R$ in eq. (\ref{CoupPointlike}) describes the scattering of fermions throughout the crossover and is directly related to the observed scattering length for two atom scattering. On the other hand, $\bar\lambda (\mu)$ is a universal combination characteristic for the ground state of the system \cite{Diehl:2006}. For broad Feshbach resonances ($\bar h_\phi\to \infty$) the two quantities coincide.
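Eq. (\ref{eq65}) is readily solved numerically for the order parameter. A minimal sketch follows (toy parameter values in arbitrary units with $2M=1$, not a calibrated computation; the left hand side is chosen here such that a nontrivial solution exists):
\begin{verbatim}
# Solve the MFT gap equation (eq65) for r = h_phi^2 |phi_0|^2 by bracketing.
# Toy parameters in arbitrary units (2M = 1); purely illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M, T, sigma = 0.5, 0.05, 0.5

def tanhc(x):
    """tanh(x)/x with the x -> 0 limit handled (even function)."""
    ax = abs(x)
    return 1.0 if ax < 1e-8 else np.tanh(ax) / ax

def rhs(r):
    """(1/4T) Int d^3q/(2pi)^3 [tanh(gamma_phi)/gamma_phi - 4MT/q^2]."""
    def f(q):
        gp = np.sqrt((q**2 / (2*M) - sigma)**2 + r) / (2*T)
        return q**2 / (2*np.pi**2) * (tanhc(gp) - 4*M*T / q**2)
    return quad(f, 1e-6, 60.0, limit=400)[0] / (4*T)

lhs = 0.5 * rhs(0.0)   # a value of (nu - 2 mu)/h_phi^2 admitting r > 0
r0 = brentq(lambda r: rhs(r) - lhs, 1e-12, 50.0)
print("r =", r0, " Delta =", np.sqrt(r0))
\end{verbatim}
Since the right hand side of eq. (\ref{eq65}) decreases monotonically with $r$, a simple bracketing root search is sufficient.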
Expressing the effective potential $U$ (\ref{USigmaPhi}) in terms of $\bar{\nu}$, the momentum integral becomes ultraviolet finite, \begin{eqnarray}\label{finalActionEq} U(\sigma,\bar{\phi}) &=& (\bar{\nu} - 2\mu)\bar{\phi}^*\bar{\phi} + U_1^{(F)}(\sigma,\bar{\phi}), \nonumber\\ U_1^{(F)}(\sigma,\bar{\phi})&=& - 2T\int\frac{d^3q}{(2\pi)^3}\Big[\ln\big(\mathrm{e}^{\gamma_\phi - \gamma} + \mathrm{e}^{-\gamma_\phi-\gamma}\big)\nonumber\\ &&\qquad\qquad\qquad - \frac{\bar{h}_\phi^2\bar{\phi}^*\bar{\phi} M}{2T q^2}\Big]. \end{eqnarray} The remaining cutoff dependence is $\mathcal{O}(\Lambda^{-1})$, the precise value of $\Lambda$ therefore being unimportant. For definiteness, the cutoff can be taken as $\Lambda = (3\pi^2n_{gs})^{1/3}$, with $n_{gs}$ the density in the liquid or solid ground state at $T=0$. This choice will be motivated in app. \ref{sec:Meta}. At this point we have reached a simple but nevertheless remarkable result. When expressed in terms of measurable quantities, namely the scattering length $a_R$ (or $\bar{\lambda}_R$) and the open channel atom density $\bar{n}_F$, the $\bar{\phi}$ dependence of the effective potential becomes very insensitive to the microscopic physics, i.e. the value of the cutoff $\Lambda$. Without much loss of accuracy we can take the limit $\Lambda \to \infty$ for the computation of $U$. The effective chemical potential $\sigma$ depends on $\bar{n}_F$ and $\bar{\phi}$ via $\partial U(\sigma,\bar{\phi})/\partial\sigma = -\bar{n}_F$. \subsection{Wave function renormalization} \label{sec:WFR} The wave function renormalization $Z_\phi$ is another important ingredient for the description of the crossover physics in the Yukawa model. This can be seen from the fact that, by rescaling all couplings in the effective action with the appropriate power of $Z_\phi$, we end up with an effective bosonic Bogoliubov theory in the BEC regime, as will be discussed in detail in sect. \ref{sec:renconstZR}. Here we focus on the explicit computation of the wave function renormalization. As can be read off from eq. (\ref{GammaPosSpace}), the wave function renormalization $Z_\phi$ is related to the inverse molecule propagator $\bar{\mathcal{P}}_\phi$. In app. \ref{app:WFR} we compute $\bar{\mathcal{P}}_\phi$ in the mean field approximation, \begin{eqnarray}\label{Zphi} \bar{\mathcal{P}}_\phi(Q)&=& 2\pi \mathrm{i} n T + \frac{q^2}{4M} \\\nonumber &&\hspace{-1.2cm}-\bar{h}_\phi^2\int\limits_{Q'} \frac{P_F(Q')-\sigma}{[P_F(Q')-\sigma][P_F(-Q')-\sigma] + \bar{h}_\phi^2\bar{\phi}^*\bar{\phi}}\\\nonumber &&\hspace{-1.2cm}\times\frac{P_F(-Q' + Q)-\sigma}{[P_F(Q' - Q)-\sigma][P_F(-Q' + Q)-\sigma] + \bar{h}_\phi^2\bar{\phi}^*\bar{\phi}}\\ &=& \bar{\mathcal{P}}_\phi^*(-Q),\nonumber \end{eqnarray} where the kinetic part of the inverse fermion propagator reads \begin{eqnarray} P_F(Q) = \mathrm{i}\omega_B + \frac{q^2}{2M}, \end{eqnarray} with $Q = (\omega_B, \vec{q})$; the frequency variable $\omega_B$ represents the discrete fermionic Matsubara frequencies at finite temperature, $\omega_B = (2n + 1)\pi T$. Here we interpret the wave function renormalization $Z_\phi$ as a renormalization of the term in the effective action with a timelike derivative. We may then evaluate the propagator correction (\ref{Zphi}) for analytically continued frequencies $\omega_B \to \omega_B + \mathrm{i}\omega$ and set $\omega_B=0$. The fermion-loop contribution in eq. (\ref{Zphi}), which we denote by $\Delta P_\phi$, then becomes a continuous function $\Delta P_\phi (\omega, \vec q)$ of $\omega$.
Defining \begin{eqnarray}\label{WFRDefini} Z_\phi= 1 - \frac{\partial \Delta P_\phi}{\partial\omega}\Big|_{\omega=0}, \end{eqnarray} one finds \begin{eqnarray}\label{ZRFormula} Z_\phi \hspace{-0.1cm} &=& \hspace{-0.1cm} 1 + \frac{\tilde{h}_\phi^2}{16\tilde{T}^2}\int \frac{d^3\tilde{q}}{(2\pi)^3} \gamma \gamma_\phi^{-3}\big[\tanh\gamma_\phi - \gamma_\phi\cosh^{-2}\gamma_\phi\big].\nonumber\\ \end{eqnarray} Here we have rescaled the integration variable and the Yukawa coupling as $\tilde q = q/k_F$, $\tilde{h}_\phi^2 = 4M^2 \bar h_\phi^2 /k_F$. In the symmetric phase the simplification $\gamma_\phi=\gamma$ applies. We note that $Z_\phi$ is closely related to the spectral function for the molecules. If we only consider fermionic diagrams, we can give an equivalent definition of the wave function renormalization using the mean field effective potential, \begin{eqnarray}\label{ZphiinMFT} Z_\phi^{\sigma} = 1 -\frac{1}{2} \frac{\partial^3 U}{\partial\sigma\partial\bar\phi^*\partial\bar\phi}. \end{eqnarray} The property $Z_\phi^{\sigma} = Z_\phi$ holds since the integral in eq. (\ref{Zphi}) depends on the combination $\omega - 2 \sigma$ only. \subsection{Gradient coefficient} \label{GradCoeff} In order to compute the gradient coefficient we proceed in complete analogy to the wave function renormalization: for the spacelike momenta, we define \begin{eqnarray}\label{Zphi1} \bar{A}_\phi(q) &=& \frac{\bar{\mathcal{P}}_\phi(0,q) - \bar{\mathcal{P}}_\phi(0,0)}{q^2},\\\label{Zphi2} \bar{A}_\phi &=& \lim\limits_{q^2\to 0} \bar{A}_\phi(q). \end{eqnarray} More explicit formulae are given in app. \ref{app:WFR} in eqs. (\ref{Aphi1},\ref{AphiSYM}). \begin{figure}[t!] \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,55) \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{CrossoverTcR2.eps}} \put(70,-2){$c^{-1}$} \put(72,43){$\tilde{T}_c$ } \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{Crossover phase diagram $\tilde{T}_c (c^{-1})$. For large $\tilde{h}_\phi$ we compare a calculation with a gap equation modified by boson fluctuations (dashed) to the result obtained with the standard BCS gap equation (solid). The universal narrow resonance limit $\tilde{h}_\phi \to 0$ is indicated by the dashed-dotted line. We further plot the result of the standard BCS approach (BCS gap equation, only fermionic contributions in the density equation; long-dashed rising line) and the result for noninteracting bosons (long-dashed horizontal line). The solid line at resonance, $\tilde{T}_c(c^{-1}=0)=0.292$, coincides with the result obtained in \cite{Levin052}, which omits boson fluctuations. The dashed line ($\tilde{T}_c(c^{-1}=0)=0.259$) is in good agreement with the value obtained in \cite{ZZZStrinati} ($\tilde{T}_c = 0.266$), who work in the broad resonance limit from the outset.} \label{CrossoverTcAll} \end{minipage} \end{figure} \section{Relevant Parameters, Momentum and Energy Scales} \label{sec:Relevant} \subsection{Concentration} For a mean field computation with nonzero density the effective renormalized four-fermion coupling is related to the resonant scattering length in vacuum by an equation similar to eq. (\ref{BosonCond2}). We define \footnote{For $\sigma=\mu$ this combination appears in the term quadratic in $\bar{\phi}$ in eq. (\ref{finalActionEq}).} \begin{eqnarray}\label{CoupMu} \frac{1}{\bar{\lambda}(\sigma)} = \frac{M}{4\pi a_R} + \frac{2\sigma}{\bar{h}_\phi^2}.
\end{eqnarray} Therefore the \emph{effective} scattering length in an atom gas differs from the (vacuum) scattering length which is measured by the scattering of individual atoms. This density effect is reflected by the dependence on the effective chemical potential, and we define \begin{eqnarray}\label{ScatterDens} \frac{1}{\bar{a}} = \frac{1}{a_R} + \frac{8\pi\sigma}{\bar{h}_\phi^2M}. \end{eqnarray} On the BCS side $\sigma$ is positive and $a_R$ is negative -- the size of $|\bar{a}|$ is therefore larger than $|a_R|$. A similar enhancement occurs in the BEC regime, where $\sigma$ will turn out to be negative and $a_R$ positive. Roughly speaking, on the BCS side the presence of a nonvanishing atom density favors the formation of (virtual) molecules by reducing the ``cost'' of forming molecules with positive $\bar{\nu}$ (or positive binding energy) to $\bar{\nu} - 2\sigma$. In the BEC regime the presence of molecules reduces the absolute size of the effective binding energy. It should, however, be pointed out that the effect of a density dependence is small in the case of broad resonances, $\tilde{h}_\phi\to \infty$, as visible from eq. (\ref{CoupMu}). The atom density $n$ defines a characteristic momentum scale by the Fermi momentum $k_F$, i.e. \begin{eqnarray} \label{TotalDensity} n= \frac{k_F^3}{3\pi^2}. \end{eqnarray} The inverse of the Fermi momentum defines the most important characteristic length scale in our problem. Roughly speaking, it corresponds to the average distance $d$ between two unbound atoms or molecules. We emphasize that our definition of $k_F$ involves the total density $n$ and therefore includes unbound atoms, molecules and condensed atom pairs. In terms of $k_F$ we can form a characteristic dimensionless ``concentration'' \begin{equation}\label{defconc} c = \bar{a} k_F. \end{equation} The concentration is a measure for the ratio between the in-medium scattering length $\bar{a}$ and the average distance, $c\propto \bar{a}/d$. As mentioned in the introduction, the concentration is the crucial parameter for the description of the crossover between the BEC and BCS regimes. For a small concentration $|c|$ the gas is dilute in the sense that scattering can be treated as a perturbation. In this range mean field theory is expected to work reasonably well. On the other hand, for large $|c|$ the scattering length exceeds the average distance between two atoms and fluctuation effects beyond mean field might play a crucial role. An alternative definition of the concentration could use the measured (vacuum) scattering length, $c = a(B) k_F$. This definition has the advantage of a simple relation to the magnetic field. The choice between the two definitions is a matter of convenience. We adopt here the definition (\ref{defconc}) since this reflects universality in an optimal way. For broad Feshbach resonances the two definitions coincide, since $\bar a = a(B)$. In the presence of an additional ``background scattering length'', $a(B) = a_{bg} + a_R(B)$, we will include $a_{bg}$ in the definition of $c$. The concentration $c$ is the most important parameter for the description of the crossover (besides $T$ and $n$). The inverse concentration $c^{-1}$ corresponds to a ``relevant parameter'' which vanishes at the location of the resonance. Once described in terms of $c^{-1}$, the ultracold fermionic gases exhibit a high degree of universality. We demonstrate this universality with the crossover phase diagram in fig.
\ref{CrossoverTcAll}, which plots the critical temperature $\tilde{T}_c = T_c/\epsilon_F$ for the transition to superfluidity as a function of $c^{-1}$. The narrow resonance limit $\tilde{h}_\phi \to 0$ is exact, while our results for broad resonances ($\tilde{h}_\phi \to \infty$) still have substantial uncertainties, as demonstrated by two approximations that will be explained later. For large $\tilde{h}_\phi$ the actual value of $\tilde{h}_\phi$ becomes irrelevant and all curves coincide with the broad resonance limit. Intermediate values of $\tilde{h}_\phi$ interpolate between the broad and narrow resonance limits. We have argued in sect. \ref{sec:metastabledilutegas} that in the limit of a pointlike approximation for the effective fermionic interaction all results should only depend on the effective scattering length. This only involves the ratio $\bar{\nu}/\bar{h}_\phi^2$, such that for fixed $c$ the separate value of $\bar{h}_\phi$ should not matter. On the BCS side for small $|c|$ we therefore expect a universal behavior independent of the value of the Yukawa coupling $\bar{h}_\phi$. This is clearly seen in fig. \ref{CrossoverTcAll}, where the critical line approaches the BCS result independently of $\bar{h}_\phi$. Furthermore, all results become independent of the value of $\bar{h}_\phi$ in the broad resonance limit ($\tilde{h}_\phi \to \infty$). The concentration then remains as the only parameter (besides $T$ and $n$) for the description of the crossover. These new universal aspects, adding to those that will be presented in sect. \ref{sec:Univ}, are discussed in \cite{Diehl:2005an}. In particular, the broad resonance limit $\tilde{h}_\phi\to \infty$ corresponds to a pointlike microscopic interaction. For the broad resonance limit the first approximation (solid line in fig. \ref{CrossoverTcAll}) corresponds to extended MFT and neglects the modifications of the effective potential induced by the molecule fluctuations. The second approximation (dashed line) includes these fluctuation effects via the solution of a gap equation for the molecule propagator (sect. \ref{sec:beyond}). The fast approach to the BEC value for $c^{-1} \to 0_+$ does not reflect the expected behavior $\tilde{T}_c = \tilde{T}_c^{BEC} + \kappa c$ with a dimensionless constant $\kappa$. This shortcoming of our treatment is remedied by a functional renormalization group study \cite{Diehl:2007th}. The values of $\tilde{T}_c$ at resonance for the two approximations, and the comparison with the literature, are given in the caption of fig. \ref{CrossoverTcAll}. \begin{figure}[t!] \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,52) \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{ZphiZRComp.eps}} \put(70,-2){$c^{-1}$} \put(12,42){$\tilde{A}_\phi/\tilde{h}_\phi^2$ } \put(12,22){$Z_\phi/\tilde{h}_\phi^2$ } \put(40,38){$T=0$ } \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{Gradient coefficient $\tilde{A}_\phi = 2M\bar{A}_\phi$ and wave function renormalization $Z_\phi$ as functions of $c^{-1}$. We divide by $\tilde{h}_\phi^2$ in order to get numbers $\mathcal{O}(1)$ and use $\tilde{h}_\phi^2 = 3.72\cdot 10^5$ as appropriate for $^6\mathrm{Li}$ \cite{Diehl:2005an}. The ratio $A_\phi=\tilde{A}_\phi/Z_\phi$ is displayed in fig.
\ref{BoseCoeffs}.} \label{UnrenormWFR} \end{minipage} \end{figure} In fig. \ref{UnrenormWFR} we show the wave function renormalization $Z_\phi$ and the dimensionless gradient coefficient $\tilde A_\phi = 2M \bar A_\phi$ as a function of $c^{-1}$ for $T=0$. We use the large value for the Feshbach coupling in $^6\mathrm{Li}$, $\tilde{h}_\phi^2 =3.72 \cdot 10^5$ for $k_F = 1 \mathrm{eV}$ (cf. eq. (\ref{DimlessDimful})). We note that for large $\tilde{h}_\phi$, as appropriate for the Feshbach resonances in $^6\mathrm{Li}$ or $^{40}\mathrm{K}$, the wave function renormalization $Z_\phi$ is large, the ratio $Z_\phi/\tilde{h}_\phi^2$ being an $\mathcal{O}(1)$ quantity. We observe a strong increase of $\tilde A_\phi$ when evolving to the BCS side of the resonance. This strongly suppresses the propagation of the effective bosonic degrees of freedom, leading to a situation where they are completely irrelevant. This is an aspect of universality, where the system ``loses memory'' of the bosonic degrees of freedom and a purely fermionic BCS-type description becomes appropriate. These curves are essentially identical in the limit $\tilde{h}_\phi^2 \to \infty$. Indeed, large values of $\tilde{h}_\phi^2$ exhibit an ``enhanced universality'' \cite{Diehl:2005an}. In this case, all ``microscopic quantities'' depend only on the concentration $c$. In the limit $\tilde{h}_\phi \to \infty$ at fixed scattering length there is a loss of memory concerning the details of the bosonic sector in the microscopic action (\ref{PhiAction}). These aspects are worked out in more detail in \cite{Diehl:2005an}. In a renormalization group treatment this universality will be reflected in strongly attractive partial infrared fixed points, similar to the quark meson model in strong interactions \cite{Jungnickel95}. This universality property will be valid beyond mean field theory. \subsection{Dimensionless parameters} The characteristic energy scales for $T$, $\sigma$ and the gap $\Delta = \bar{h}_\phi\bar{\phi}$ are set by the Fermi energy $\epsilon_F= k_F^2/(2M)$. It is appropriate to define the dimensionless quantities \begin{eqnarray}\label{DimlessDimful} \tilde{T} &=& 2MT/k_F^2= T/\epsilon_F, \quad \tilde{\sigma}= 2M\sigma/k_F^2,\nonumber\\\nonumber \tilde{q}&=& q/k_F, \quad \tilde{\Delta}=\Delta/\epsilon_F= 2M \bar{h}_\phi \bar{\phi}/k_F^2, \quad \tilde{r} = \tilde{\Delta}^*\tilde{\Delta},\\ \tilde{h}_\phi &=& 2M k_F^{-1/2}\bar{h}_\phi,\quad \tilde{A}_\phi = 2M\bar{A}_\phi. \end{eqnarray} Once all quantities are expressed in this way in units of the Fermi momentum or the Fermi energy, the atom mass $M$ will no longer be present in our problem. Indeed, in dimensionless units one has for the expressions (\ref{DefGammaPhi}) \begin{eqnarray}\label{GammaGammaPhiDimless} \gamma&=&\frac{1}{2\tilde{T}}\big(\tilde{q}^2 -\tilde{\sigma}\big),\\\nonumber \gamma_\phi&=&\frac{1}{2\tilde{T}}\big((\tilde{q}^2 -\tilde{\sigma})^2 +\tilde{r}\big)^{1/2}, \end{eqnarray} such that the atom mass $M$ drops out in the computation (\ref{USigmaPhi},\ref{finalActionEq}) of the appropriately rescaled effective potential $U$. All dimensionless quantities in $\gamma_\phi$ are typically of the order one. For practical computations we may therefore choose units $k_F=1\, \mathrm{eV}$. The rescaled potential \begin{eqnarray} \label{SigEq} \tilde{u} = k_F^{-3}\frac{\tilde{T}}{T}U=2M k_F^{-5} U \end{eqnarray} is composed of a classical contribution \footnote{In the formulation of sect.
\ref{EffAtDens} $\hat{c}$ will be replaced by $c$ since $\sigma$ and $\mu$ will be identified.} and a contribution from the fermion fluctuations ($\tilde{u}_1^{(F)}$) \begin{eqnarray}\label{MFTFermionPot} \tilde{u} = -\frac{\tilde{r}}{8\pi \hat{c}} + \tilde{u}_1^{(F)},\\\nonumber \frac{1}{8\pi \hat{c}} = \frac{1}{8\pi c} -\frac{\sigma - \mu}{\bar{h}_\phi^2 M k_F}. \end{eqnarray} Then the equation determining $\tilde{\sigma}(\tilde{r},\tilde{T})$ becomes \begin{eqnarray}\label{SigEq2} \frac{\partial\tilde{u}}{\partial\tilde{\sigma}}= -\frac{1}{3\pi^2}\frac{\bar{n}_F}{n} \end{eqnarray} and is indeed independent of $M$. It depends, however, on the ratio $\bar{n}_F/n$ since we have defined in eq. (\ref{TotalDensity}) $k_F$ as a function of $n$ rather than $\bar{n}_F$. In the mean field approximation the effective chemical potential $\tilde{\sigma}$ obeys, using the dimensionless shorthands (\ref{GammaGammaPhiDimless}), \begin{eqnarray}\label{anotherDens} \int\limits_0^\infty d\tilde{q}\,\tilde{q}^2\big\{\frac{\gamma}{\gamma_\phi}\tanh\gamma_\phi - 1\big\} = -\frac{2}{3}\frac{\bar{n}_F}{n}. \end{eqnarray} In particular, for $\tilde{r}=0$ this determines $\tilde{\sigma}$ as a function of $\tilde{T}$ and $\bar{n}_F/n$. For given $\tilde{\sigma}$, $\tilde{T}$ and $c$ one can compute the order parameter $\tilde{r}$ in the low temperature phase according to \begin{eqnarray}\label{dimlessFieldEq} \frac{\partial\tilde{u}}{\partial\tilde{r}}=0 , \quad \frac{\partial\tilde{u}_1^{(F)}}{\partial\tilde{r}}= \frac{1}{8\pi \hat{c}}. \end{eqnarray} The critical temperature for the phase transition corresponds to the value $\tilde{T}_c$ where $\tilde{r}$ vanishes. This part of the mean field computation is independent of the Yukawa coupling $\bar{h}_\phi$. In sect. \ref{sec:MolFrac} we will determine $\bar{n}_F/n$ as a function of $\tilde{\sigma}, \tilde{T}$ and $c$ such that eq. (\ref{SigEq2}) can be used to determine $\tilde{\sigma}$ as a function of $c$ and $\tilde{T}$. As an alternative, we will modify in sect. \ref{EffAtDens} the definition of $U$ such that a modified equation $\partial\tilde{u}/\partial\tilde{\sigma} = -1/3\pi^2$ becomes independent of $\bar{n}_F/n$. Again, $\tilde{\sigma}$ can now be fixed as a function of $c$ and $\tilde{T}$. We will see that the relation $\tilde{\sigma} (c,\tilde{T})$ depends on the choice of the Yukawa coupling $\bar{h}_\phi$. The results will therefore depend on the additional dimensionless parameter $\tilde{h}_\phi$ (\ref{DimlessDimful}). Away from the narrow and broad resonance limits, the model is therefore characterized by two dimensionless quantities, $c$ and $\tilde{h}_\phi$. \subsection{Universality} \label{sec:Univ} All observables can be expressed in terms of $c$, $\tilde{T}$ and $\tilde{h}_\phi$. For example, using the definition \begin{eqnarray} \tilde{\nu} = \frac{2M}{k_F^2}\bar{\nu}, \end{eqnarray} one finds the relation \begin{eqnarray}\label{NuMuC} \tilde{\nu} - 2\tilde{\sigma} = -\frac{\tilde{h}_\phi^2}{8\pi c}. \end{eqnarray} In particular, the phase diagram $\tilde{T}_c(c)$ depends only on $\tilde{h}_\phi$ as shown in fig. \ref{CrossoverTcAll} for the case of narrow and broad resonances. For $\tilde{T}=\tilde{T}_c$ we find that the relation $\tilde{\sigma} (c)$ depends only mildly on the value of $\tilde{h}_\phi$ as shown in fig. \ref{CrossoverPhaseDiagramSigmaAll} where we compare narrow and broad resonance limits again. 
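The dimensionless formulation lends itself directly to numerical evaluation. At $\tilde r = 0$ and $\sigma=\mu$ (i.e. $\hat c = c$), carrying out the rescalings in eqs. (\ref{MFTFermionPot}), (\ref{dimlessFieldEq}) brings the gap condition to the explicit form $c^{-1} = -(\pi\tilde{T})^{-1}\int_0^\infty d\tilde{q}\,\big(\tilde{q}^2 \tanh\gamma/\gamma - 2\tilde{T}\big)$, which we quote here for the purpose of the following sketch. The sketch estimates $\tilde{T}_c(c)$ in the simplest approximation $\bar{n}_F \approx n$ (neglecting the molecule density, cf. sect. \ref{sec:MolFrac}), which is quantitatively sensible only towards the BCS side:
\begin{verbatim}
# Sketch: T_c(c) from the density equation (anotherDens) with n_F ~ n and
# the gap condition (dimlessFieldEq) at r = 0. Units of k_F, eps_F.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def tanhc(x):
    ax = abs(x)
    return 1.0 if ax < 1e-8 else np.tanh(ax) / ax

def sigma_of_T(Tt):
    """Solve Int_0^inf dq q^2 (tanh(gamma) - 1) = -2/3 for sigma-tilde."""
    dens = lambda s: quad(lambda q: q**2*(np.tanh((q**2 - s)/(2*Tt)) - 1.0),
                          0.0, 30.0, limit=300)[0] + 2.0/3.0
    return brentq(dens, -40.0, 5.0)

def c_inverse(Tt):
    """Value of 1/c for which Tt is the critical temperature."""
    s = sigma_of_T(Tt)
    I = quad(lambda q: q**2 * tanhc((q**2 - s)/(2*Tt)) - 2*Tt,
             0.0, 30.0, limit=400)[0]
    return -I / (np.pi * Tt)

Tc = brentq(lambda Tt: c_inverse(Tt) - (-1.5), 0.01, 0.5)
print("T_c/eps_F at 1/c = -1.5:", Tc)
\end{verbatim}
On the BCS side ($c^{-1}=-1.5$) this simple routine should land close to the value $\tilde{T}_c \approx 0.057$ quoted in the caption of fig. \ref{CrossoverTcAll}, since there the molecule density is negligible.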
Furthermore, for small $|c|$ the universal curves $T_c (c)$ and $\tilde{\sigma} (c,\tilde{T})$ become independent of $\tilde{h}_\phi$: both in the BEC and BCS regime an enhanced universality sets in, making $\tilde{h}_\phi$ irrelevant for all its possible values! These issues are investigated more systematically in \cite{Diehl:2005an}. \begin{figure}[t!] \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,55) \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{CrossoverSnR2.eps}} \put(70,-2){$c^{-1}$} \put(72,43){$\tilde{\sigma}$ } \put(15,25){$T=T_c$ } \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{Crossover at the critical temperature in the broad resonance limit: Effective dimensionless chemical potential $\tilde{\sigma}$ at the critical temperature as a function of the inverse concentration $c^{-1}$. We compare the results for two versions of the gap equation as in fig. \ref{CrossoverTcAll} (solid and dashed line). Additionally, the result for the narrow resonance limit is indicated (dashed-dotted).} \label{CrossoverPhaseDiagramSigmaAll} \end{minipage} \end{figure} In summary, we now have an effective low energy formulation where neither $M$ nor $\Lambda$ enters anymore. Everything is expressed in terms of $k_F$ and three dimensionless parameters, namely $c,\tilde{h}_\phi$ and $\tilde{T}$. This scaling property is an important aspect of universality. In \cite{Diehl:2005an} we discuss the relation of the parameters $c$ and $\tilde{h}_\phi$ to the physical observables for an experimental setting, namely the magnetic field $B$ and the binding energy in vacuum. In the present paper we treat $\tilde{h}_\phi$ as a free parameter. The values of $\tilde{h}_\phi$ for $^6\mathrm{Li}$ and $^{40}\mathrm{K}$ turn out to be large, such that the broad resonance limit applies to these systems. We display in tab. \ref{Scales} the values of the dimensionless scale ratios $M/k_F$, $\Lambda/k_F$ and $\epsilon_F/k_F$. The first two give an idea of how well the detailed microphysics decouples for realistic ultracold fermionic gases. We use a density $n = 4.4\cdot 10^{12}\mathrm{cm}^{-3}$, $k_F=1\mathrm{eV}\hat{=}(3.7290\cdot 10^3 a_B)^{-1}$. \begin{table}[h] \caption{\label{Scales} Typical values of the dimensionless scale ratios for $^6$Li and $^{40}\mathrm{K}$ ($k_F=1\mathrm{eV}\hat{=}(3.7290\cdot 10^3 a_B)^{-1}$).} \begin{ruledtabular} \begin{tabular}{cccc} & $M/k_F$ & $\Lambda/k_F$ & $\epsilon_F/k_F$ \\\hline $^6\mathrm{Li}$& $5.65\cdot 10^{9}$ & $1.6\cdot 10^3$ & $8.9\cdot 10^{-11}$ \\\hline $^{40}\mathrm{K}$& $40.0\cdot 10^{9}$ & $1.2\cdot 10^3$ & $1.2\cdot 10^{-11}$ \\ \end{tabular} \end{ruledtabular} \end{table} \section{Molecule and Condensate Fraction} \label{sec:MolFrac} At this point the functional integral setting for the ultracold fermionic atom gases is fully specified. The parameters $a_R$ and $\bar h_\phi^2$ can be extracted from a computation of two-atom scattering in the vacuum, taking the limit $T\to 0$, $n\to 0$ \cite{Diehl:2005an}. Rescaling with appropriate powers of $k_F$ yields the parameters $c$ and $\tilde{h}_\phi^2$ for the many body system. The relation between $\sigma$ and $n$ is determined by eq. (\ref{TheDensEq}), and we identify $\mu = \sigma$ at the end of the computation. An approximate solution of the functional integral for the effective action $\Gamma$ gives access to thermodynamic quantities and correlation functions.
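For orientation, the scales quoted above are easily reproduced: with $n = 4.4\cdot 10^{12}\,\mathrm{cm}^{-3}$, eq. (\ref{TotalDensity}) indeed gives $k_F \approx 1\,\mathrm{eV}$. A quick sketch, using only the Bohr radius and $a_B^{-1}\approx 3.73\,\mathrm{keV}$ as external inputs:
\begin{verbatim}
# Consistency check: n = 4.4e12 cm^-3 with eq. (TotalDensity) should give
# k_F ~ 1 eV = (3.7290e3 a_B)^-1.
import numpy as np
aB_cm     = 0.52918e-8    # Bohr radius in cm
inv_aB_eV = 3.727e3       # 1/a_B in eV (= alpha * m_e)
n  = 4.4e12               # atom density in cm^-3
kF = (3 * np.pi**2 * n * aB_cm**3) ** (1.0/3.0)   # k_F in units of 1/a_B
print("k_F  =", kF * inv_aB_eV, "eV")             # ~ 1.0
print("1/k_F =", 1.0/kF, "a_B")                   # ~ 3.7e3
\end{verbatim}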
The relation (\ref{TheDensEq}) between $\sigma$ and $n$ seems to be rather formal at this stage. In this section we will develop the physical interpretation of this formula. In this context the important distinction between microscopic and dressed molecules will appear. In a quantum mechanical computation for the physics of a Feshbach resonance the concept of dressed molecules arises from mixing effects between the open and closed channels. This is an important ingredient for the understanding of the crossover. Our functional integral formalism has to reproduce this channel mixing in vacuum and to extend it to the many body situation. In the functional integral formulation the quantum mechanical ``mixing effects'' are closely related to the wave function renormalization $Z_\phi$. The concept of dressed molecules and their contribution to the density is directly related to the interpretation of eq. (\ref{TheDensEq}). We stress, however, that the functional integral evaluation of $n$ for given $\sigma = \mu$ can be done completely independently of this interpretation. \subsection{Exact expression for the bare molecule density} \begin{figure} \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,52) \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{ZqAllR.eps}} \put(70,-2){$q/k_F$} \put(70,42){$A_\phi$ } \put(40,38){$T=T_c$ } \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{Momentum dependence of the gradient coefficient in the broad resonance regime $\tilde{h}_\phi^2\gg 1$. We plot $A_\phi= 2M\bar{A}_\phi/Z_\phi$ for $T=T_c$ in the different regimes: BCS ($c^{-1}=-1.5, \tilde{T}_c = 0.057$, topmost), crossover ($c^{-1}=0, \tilde{T}_c = 0.29$) and BEC ($c^{-1}=1.5, \tilde{T}_c=0.218$). The dashed line is the value for elementary pointlike bosons, $A_\phi=1/2$.} \label{Zqtot} \end{minipage} \end{figure} The total density of atoms is composed of three components according to eq. (\ref{TotDens}), \begin{eqnarray}\label{DensComps} n=\bar{n}_F + 2\bar{n}_M +\bar{n}_C. \end{eqnarray} Here we have split up the contribution $\bar{n}_B$ from (\ref{TotDens}) into a contribution from uncondensed bare molecules (connected two-point function), \begin{eqnarray}\label{MolDens} \bar{n}_M &=& \langle\hat{\phi}^*\hat{\phi}\rangle - \langle\hat{\phi}^*\rangle\langle\hat{\phi}\rangle - \hat{n}_B\\\nonumber &=& \langle\hat{\phi}^*\hat{\phi}\rangle_c - \hat{n}_B, \end{eqnarray} and one from the condensate, which only occurs below the critical temperature $T_c$, \begin{eqnarray} \bar{n}_C= 2\langle\hat{\phi}^*\rangle\langle\hat{\phi}\rangle = \frac{k_F^4}{2\bar{h}_\phi^2 M^2}\, \tilde{r} = 2 k_F^3\frac{\tilde{r}}{\tilde{h}_\phi^2}. \end{eqnarray} The fraction of atoms in the condensate arising from bare molecules is defined as \begin{eqnarray}\label{CondFrac} \bar{\Omega}_C = \frac{\bar{n}_C}{n} = \frac{6\pi^2}{\tilde{h}_\phi^2} \,\tilde{r} \end{eqnarray} and involves the Yukawa coupling $\tilde{h}_\phi=2M \bar{h}_\phi/k_F^{1/2}$. The density of uncondensed bare molecules $\bar{n}_M$ (\ref{MolDens}) can be written in terms of the bare molecule propagator or connected two-point function, \begin{eqnarray}\label{ExactDens} \bar{n}_M(x) = T \int d\tau \bar{G}_\phi(x,\tau;x,\tau) - \hat{n}_B. \end{eqnarray} This formula is exact \footnote{Eq. (\ref{ExactDens}) is valid for the normal phase $T\geq T_c$. For the superfluid phase $\bar{G}_\phi$ becomes a $2\times 2$ matrix and the corresponding generalization of eq.
(\ref{ExactDens}) is discussed in sect. \ref{sec:MolDensSFL}.}. It involves an additive renormalization $\hat n_B$ of a similar origin as for the fermionic density, cf. sect. \ref{sec:renormalization}, but with opposite sign due to Bose statistics. In the homogeneous limit we can write eq. (\ref{ExactDens}) as a sum over integer Matsubara frequencies $2\pi n T$, \begin{eqnarray}\label{MolProExact} \bar{n}_M = T \sum\limits_n\int \frac{d^3q}{(2\pi)^3}\bar{G}_\phi(q,n) - \hat{n}_B. \end{eqnarray} The ratio of bare molecules to open channel atoms, $\bar{n}_M/\bar{n}_F$, is important for the understanding of the atom gas. Technically, it enters the field equation for the effective chemical potential (\ref{SigEq2}), which involves $\bar{n}_F/n$. In the regions in parameter space where $\bar{n}_M/n$ is not very small, a reliable computation of $\bar{n}_M$ is crucial for a quantitative understanding of the phase diagram. Such situations are e.g. realized in the BEC regime for narrow and intermediate resonances. However, for the crossover region for a broad Feshbach resonance (as for $^6\mathrm{Li}$ and $^{40}\mathrm{K}$) it turns out that $\bar{n}_M/n$ is small, such that the approximation $\bar{n}_F =n$ already yields a reasonable result. Nevertheless, the mean field approximation (\ref{anotherDens}) does not remain valid in all regions of the phase diagram. This is due to an additional $\sigma$-dependence of $u$ arising from the boson fluctuations, which are neglected in MFT. We will argue below that this contribution from the bosonic fluctuations can be interpreted as the density of dressed molecules. Thus, again, an estimate of the molecule density will be mandatory for the understanding of the crossover physics. A simple estimate of the dressed molecule density is the minimal ingredient beyond mean field theory needed for a qualitatively correct description. We will refer to this as ``extended mean field theory''. As a first step in the evaluation of the molecule density we may consider the classical approximation, where the Yukawa interaction between open and closed channel atoms is neglected. This corresponds to the limit $\bar{h}_\phi\to 0$ (for fixed $\bar{a}$). Then $G_\phi^{(cl)}(q,n)$ is the free propagator \footnote{Note that we use $\bar{\nu}$ instead of $\bar{\nu}_\Lambda$ -- this already includes the dominant fluctuation effects, as motivated by the following paragraphs. Strictly speaking, the classical propagator features $\bar{\nu}_\Lambda$.}, \begin{eqnarray}\label{BosPropClass} G_\phi^{(cl)} (q,n) = \Big(2\pi \mathrm{i}n T + \frac{q^2}{4M} + \bar{\nu} -2\mu \Big)^{-1}. \end{eqnarray} Performing the Matsubara sum and inserting $\hat{n}_B = \hat{n}/2= \frac{1}{2}\int d^3q/(2\pi)^3$, one obtains the familiar expression involving the occupation numbers for bosons, \begin{eqnarray}\label{BosNumber} \bar{n}_M&=& \int \frac{d^3q}{(2\pi)^3}\Big[\exp\Big(\frac{\bar{P}_\phi^{(cl)}(q)}{T}\Big) - 1\Big]^{-1},\\\nonumber \bar{P}_\phi^{(cl)}(q)&=&\frac{q^2}{4M} + \bar{\nu} -2\mu. \end{eqnarray} We note the role of $\hat{n}_B$ for the removal of pieces that do not vanish for $\Lambda\to \infty$. \subsection{Fluctuation effects} The fluctuation effects will change the form of $\bar{G}_\phi(q,n)$. Details of the computation of $\bar{G}_\phi = \bar{\mathcal{P}}_\phi^{-1}$ are presented in app. \ref{app:WFR}.
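The subtraction $\hat{n}_B = 1/2$ per momentum mode can be verified directly: the symmetrically regularized Matsubara sum of the free propagator yields the Bose function plus $1/2$. A minimal numerical sketch:
\begin{verbatim}
# Check: T * sum_n (i omega_n + E)^(-1), summed symmetrically over n,
# approaches n_B(E) + 1/2, cf. the discussion around eq. (BosNumber).
import numpy as np

def matsubara_sum(E, T, N=200000):
    n  = np.arange(1, N)
    wn = 2.0 * np.pi * n * T
    # pairing +n and -n: 1/(i w + E) + 1/(-i w + E) = 2E/(E^2 + w^2)
    return T * (1.0/E + np.sum(2.0*E / (E**2 + wn**2)))

E, T = 0.7, 0.3   # toy values in arbitrary units
print(matsubara_sum(E, T))             # -> n_B(E) + 1/2
print(1.0/(np.exp(E/T) - 1.0) + 0.5)
\end{verbatim}
This is precisely the counting that renders eq. (\ref{MolProExact}) finite after the removal of $\hat{n}_B$.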
In the symmetric phase we may account for the fluctuations by using \begin{eqnarray}\label{Pphifull} \bar{\mathcal{P}}_\phi(Q)&=& ( 2\pi \mathrm{i} n T Z_\phi(\sigma,T) + \bar{P}_\phi(q))\delta_{ab},\nonumber\\ \bar{P}_\phi(q)&=& \bar{A}_\phi(\sigma,T,q^2)q^2 +\bar{m}_{\phi}^2(\sigma,T). \end{eqnarray} We include here the fluctuations of the fermions (open channel atoms) for the computation of $\bar{A}_\phi$ and $Z_\phi$. The mass term $\bar{m}_\phi^2$ will be evaluated by a gap equation which also includes the bosonic molecule fluctuations. In the superfluid phase ($\bar{\phi} \neq 0$) the diagonalization of the propagator has to be performed carefully, as discussed below in subsects. G,H. The gradient coefficient $\bar{A}_\phi(\sigma,T,q^2)$ is defined by eq. (\ref{Zphi1}) and depends on the Yukawa coupling $\bar{h}_\phi$. For large $q^2$ it comes close to the classical value $1/4M$. We plot the renormalized gradient coefficient $A_\phi(q^2)= 2M\bar{A}_\phi(q^2)/Z_\phi$ for different $c$, corresponding to the BCS, BEC and crossover regimes, in fig. \ref{Zqtot}. It is obvious that this gradient coefficient plays a major role. Large $A_\phi$ leads to an additional \emph{suppression} of the occupation number for modes with high $q^2$, as anticipated in sect. \ref{GradCoeff}. \begin{figure}[t!] \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,155) \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{nqBCS10.eps}} \put(70,-2){$\tilde{q}$} \put(60,27){BCS} \put(55,22){$c^{-1}=-1.5$} \put(-3,1){(c)} \put(60,41){$\tilde{q}^3N_M(\tilde{q})$ } \end{picture} }} \put (0,52){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{nqCross10.eps}} \put(70,-2){$\tilde{q}$} \put(60,41){$\tilde{q}^3N_M(\tilde{q})$ } \put(57,30){crossover} \put(58,25){$c^{-1}=0$} \put(-3,1){(b)} \end{picture} }} \put (0,104){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{nqBEC10.eps}} \put(70,-2){$\tilde{q}$} \put(60,41){$\tilde{q}^3N_M(\tilde{q})$ } \put(59,30){BEC} \put(55,25){$c^{-1}=1.5$} \put(-3,1){(a)} \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{Weighted Bose distribution $N_M^{T=T_c}(\tilde{q})=(\exp(A_\phi \tilde{q}^2/\tilde{T}) - 1)^{-1}$ as a function of the dimensionless momentum $\tilde{q} = q/k_F$, for $T=T_c$. We show the three regimes: (a) BEC ($c^{-1}=1.5, \tilde{T}_c=0.218$), (b) crossover ($c^{-1}=0, \tilde{T}_c = 0.292$) and (c) BCS ($c^{-1}=-1.5, \tilde{T}_c = 0.057$). We compare our best estimate using $A_\phi(\tilde{q})=\tilde{A}_\phi(\tilde{q})/Z_\phi$ (solid curve) with the approximation $A_\phi =A_\phi(\tilde{q} =0)$ (dashed curve) that we employ for the numerical estimates in this paper. Also indicated is the result for the classical gradient coefficient $A_\phi =1/2$ (dashed-dotted), which strongly overestimates the molecule number in the crossover and BCS regimes. } \label{nqtot} \end{minipage} \end{figure} In the symmetric phase ($\bar{\phi}_0=0$) the ``mass term'' is given by \begin{eqnarray} \bar{m}_{\phi}^2 = \frac{\partial^2 U}{\partial\bar{\phi}^*\partial\bar{\phi}}\Big|_{\bar{\phi}=0}. \end{eqnarray} We use here a language familiar in quantum field theory, since the molecule field $\bar{\phi}$ behaves like a massive field for $\bar{m}_\phi^2 >0$. (The propagator $G_\phi$ has a ``gap''.) For $\bar{m}_\phi^2 >0$ the symmetric solution of the field equation, $\bar{\phi}_0 =0$, is stable, whereas it becomes unstable for negative $\bar{m}_\phi^2$.
Since the symmetric solution is stable for positive and unstable for negative $\bar{m}_\phi^2$, for a second order phase transition the critical temperature corresponds precisely to a vanishing mass term, $\bar{m}_\phi^2(T=T_c) =0$. The mass term reads in the mean field approximation \begin{eqnarray}\label{MFTMass} \bar{m}_{\phi}^{(F) 2}&=& \frac{\partial^2 U_{MFT}}{\partial \bar{\phi}^*\partial\bar{\phi}}\Big|_{\bar{\phi}=0} = \bar{\nu} -2\mu+ \frac{\partial^2 U_1^{(F)}}{\partial\bar{\phi}^*\partial\bar{\phi}}\Big|_{\bar{\phi}=0}. \end{eqnarray} For $T=0$, $\sigma =0$ the definition of $\bar{\nu}$ (\ref{Watwees}) implies $\bar{m}_\phi^{(F) \,2}=\bar{\nu} -2\mu$. We note that the fermion fluctuations lower $\bar{m}_{\phi}^2$ as compared to the microscopic term $\bar{m}_{\phi, \Lambda}^2=\bar{\nu}_\Lambda -2\mu$, cf. eq. (\ref{USigmaPhi}). This effect \emph{enhances} the occupation number for molecules - using $\bar{m}_{\phi, \Lambda}^2$ instead of $\bar{m}_\phi^2$ would yield a much too small density of molecules! At the critical temperature the mass term $\bar{m}_\phi^2$ vanishes (see below). For $T=T_c$ the fluctuation effects therefore concern the size and shape of $\bar{A}_\phi(q)$. As we have seen in our mean field computation in sect. \ref{sec:EffActMFT} the quantities $Z_\phi, \bar A_\phi$ and $\bar m_\phi^2$ depend strongly on $\sigma$. We may imagine first integrating out the fermion fluctuations in the functional integral (\ref{GammaFuncInt}) (with $\eta^\dagger =\eta =0$). The result is an intermediate ``mean field action'' for the remaining functional integral over bosonic fluctuations. The quadratic part $\sim \phi^*(Q) \phi(Q)$ in this action will be of the type of eq. (\ref{Pphifull}). Performing now the integral over the boson fluctuations induces an additional contribution to $\partial U /\partial \sigma = - \bar n_F$. This will be interpreted below as the density of dressed molecules. \subsection{Dressed Molecules} Dressed molecules are quasi-particles with atom number two. They are described by renormalized scalar fields $\phi$ with a standard non-relativistic $\tau$-derivative in the effective action. This allows for a standard association of the number density of quasi-particles with the correlation function for renormalized fields. With the effective action (\ref{GammaPosSpace}) the relation between the fields for dressed and bare molecules reads \begin{eqnarray} \phi_R = Z_\phi^{1/2} \bar \phi. \end{eqnarray} Correspondingly, the dressed molecule density $n_M$ becomes \begin{eqnarray} n_M = Z_\phi \bar n_M, \quad \Omega_M = \frac{2 n_M}{n} \end{eqnarray} and we find for large $Z_\phi$ a very substantial enhancement as compared to the bare molecule density $\bar n_M$. We may define a renormalized gradient coefficient and mass term by \begin{eqnarray} \bar{A}_{\phi,R} = \frac{\bar{A}_\phi}{Z_\phi}, \quad \bar{m}^2_{\phi,R} = \frac{\bar{m}_{\phi}^2}{Z_\phi}, \end{eqnarray} or, in dimensionless units, \begin{eqnarray}\label{DimlessRenorm} A_\phi = \frac{\tilde{A}_\phi}{Z_\phi}, \quad m_\phi^2 = \frac{\tilde{m}_{\phi}^2}{Z_\phi} = \frac{2M}{k_F^2Z_\phi} \bar{m}_\phi^2. \end{eqnarray} Then the quadratic part in the effective action for the bosons can be written in terms of $\phi$ and $m_\phi^2, A_\phi$, without explicit reference to $Z_\phi$. (Similar rescalings can be made for any quantity entering our calculations. For a complete list of dimensionful, dimensionless and dimensionless renormalized quantities, cf. app. \ref{app:numerics}.) Using eq. (\ref{Pphifull}) in eq. 
(\ref{MolProExact}) the wave function renormalization $Z_\phi$ can be factored out in $\bar{P}_\phi$ such that \begin{eqnarray}\label{RenormBosDens} n_M &=& Z_\phi \bar{n}_M = \int\frac{d^3q}{(2\pi)^3} \big[\exp\big(\frac{\bar{P}_{\phi,R}(q)}{T}\big) - 1\big]^{-1}. \end{eqnarray} This is a standard bosonic particle number without the appearance of $Z_\phi$, now expressed in terms of the effective renormalized inverse boson propagator (dimensionful and dimensionless version displayed) \begin{eqnarray}\label{MFTCorrProp} \bar{P}_{\phi,R}(q) &=& \frac{\bar{P}_{\phi}(q)}{Z_\phi} = \bar{A}_{\phi,R} q^2 + \bar{m}_{\phi,R}^2,\nonumber\\ P_\phi(q) &=& \frac{2M}{k_F^2}\frac{\bar{P}_{\phi}(q)}{Z_\phi} = A_\phi \tilde{q}^2 + m_\phi^2. \end{eqnarray} The ``dressed molecules'' \cite{XXStoofBos,Stoof05} include both bare and fluctuation-induced effective molecules (cf. eq. (\ref{WFRDefini})). We will see below how the dressed molecule density $n_M$ emerges naturally in the equation of state for the particle density. We plot in fig. \ref{nqtot} the mode occupation number for the dressed molecules ($\tilde{q} = |\vec{q}\,|/k_F$) \begin{eqnarray}\label{116} N_M(\tilde{q}) &=& (\exp (\bar{P}_{\phi,R}(|\vec{q}\,|)/T)-1)^{-1}\\ &=& (\exp (P_\phi(\tilde{q})/\tilde{T})-1)^{-1}\nonumber \end{eqnarray} (weighted by a volume factor $\tilde{q}^3$). There we compare the ``classical case'' $\bar{P}_\phi = q^2/4M$ with the result including the fluctuation corrections. The normalization in the figure directly reflects the relative contribution to $n_M$ \begin{eqnarray}\label{117} n_M = \frac{k_F^3}{2\pi^2} \int d (\ln \tilde{q} ) \,\,\tilde{q}^3N_M(\tilde{q}). \end{eqnarray} We observe a large fluctuation effect in the BCS regime (beyond the renormalization of $\bar{m}_\phi^2$). In this regime, however, the overall role of the molecules is subdominant. On the other hand, in the BEC regime the molecule distribution is rather insensitive to the details of the treatment of the fluctuations. The most important uncertainty from the ``molecule'' fluctuations therefore concerns the crossover regime. In the BEC and crossover regime the replacement of $\bar{A}_\phi(q^2)$ by $\bar{A}_\phi (q^2=0)$ results only in a moderate error. For simplicity we neglect the momentum dependence of $\bar{A}_\phi(q^2)$ for the numerical results in this work. \begin{figure}[t!] \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,105) \put (0,55){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{CrossoverFractionsRbroad.eps}} \put(70,-2){$c^{-1}$} \put(-3,1){(a)} \put(10,43){$\Omega_F$ } \put(69,43){$\Omega_M$ } \put(60,23){$T=T_c$ } \end{picture} }} \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{CrossoverFractionsRnarrow.eps}} \put(70,-2){$c^{-1}$} \put(-3,1){(b)} \put(10,41){$\bar{\Omega}_F$ } \put(69,41){$\bar{\Omega}_M$ } \put(60,23){$T=T_c$ } \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{Crossover at the critical temperature: Contributions to the total particle number, showing the crossover from fermion to boson dominated physics. (a) Fractions of ``dressed'' densities in the large $\tilde{h}_\phi$ limit. We compare the results for two versions of the gap equation as in fig. \ref{CrossoverTcAll}. (b) Fractions of ``bare'' densities in the exact narrow resonance limit $\tilde{h}_\phi\to 0$. Though the pictures are similar, the physical interpretation of the two plots differs as described in the text. 
} \label{CrossoverDensAll} \end{minipage} \end{figure} \subsection{Contribution of molecule fluctuations to the effective potential} \label{sec:ContribMolFluct} The computation of $n_M$ evaluates a one loop integral which involves the molecule fluctuations. (Graphically, eqs. (\ref{MolProExact}), (\ref{BosNumber}) correspond to a closed loop for the molecule fluctuations with an insertion of a $\mu$ - derivative.) A self-consistent approximation should therefore also include the effects of the molecule fluctuations in the computation of $U$ and therefore of $\bar{\phi}_0$ or $T_c$. Our functional integral approach makes this necessity particularly apparent: the computation of the partition function $Z$ involves the fluctuations of open channel atoms and molecules in a completely symmetric way (the variables $\hat\psi$ and $\hat{\phi}$ in eq. (\ref{PhiAction})). There is no reason why the fluctuations of the fermionic atoms should be included and not the ones for the bosonic molecules. In particular, the critical region very close to $T_c$ will be dominated by the boson fluctuations. For the effective potential the incorporation of the molecule fluctuations is achieved by adding to $U$ the one loop contribution $U_1^{(B)}$ from the fluctuations of $\hat{\phi}$ \begin{eqnarray} \hspace{-0.1cm}U = U_{MFT} + U_1^{(B)} \hspace{-0.06cm}= (\bar{\nu} - 2 \mu)\bar{\phi}^*\bar{\phi} + U_1^{(F)}\hspace{-0.1cm}+U_1^{(B)}. \end{eqnarray} We can construct the bosonic contribution as the leading order correction to the mean field result which omits boson fluctuations completely. For this purpose, we note that the fermion fields appear only at quadratic order in the classical action (\ref{PhiAction}). We can therefore integrate them out, which turns eq. (\ref{GammaFuncInt}) into a purely bosonic functional integral, \begin{eqnarray}\label{PurelyBosonic} \Gamma[\psi=0,\bar\phi] = - \ln \int \mathcal D \delta \phi \exp\big( - S_{MFT}[ \bar\phi + \delta\phi]\\\nonumber \qquad\qquad+ j^*\delta\phi + \mathrm{h.c.} \big) \end{eqnarray} with an intermediate action $ S_{MFT}$ depending on the field $\hat \phi = \bar\phi + \delta\phi$. This is given by the exact expression \begin{eqnarray}\label{IntermediateAct} S_{MFT} [\hat{\phi}] &=& S_\phi^{(cl)}[\hat{\phi}] - \frac{1}{2}\ln\det S ^{(\psi\psi)}[\hat{\phi}]\\\nonumber &=& S_\phi^{(cl)}[\hat{\phi}] - \frac{1}{2}\mathrm{Tr} \ln S ^{(\psi\psi)}[\hat{\phi}] \end{eqnarray} where $S^{(\psi\psi)}$ denotes the second variation w.r.t. the fermion fields. In the ``classical approximation'' one has $\Gamma[\bar\phi] = S_{MFT}[\bar\phi]$, while the one loop approximation corresponds to an expansion of $S_{MFT}[\bar\phi + \delta\phi]$ to second order in $\delta\phi$. The mean field effective potential (\ref{USigmaPhi}) is obtained from eq. (\ref{PurelyBosonic}) in the classical approximation. The next order contribution takes the Gaussian approximation for the fluctuations of the molecule field into account. In principle, this requires the evaluation of highly nonlocal objects -- the one-loop fermion fluctuations encoded in the $\mathrm{Tr}\ln$-term in eq. (\ref{IntermediateAct}) feature a complex frequency and momentum dependence. However, since we are interested in the observable low energy properties of the system, we may apply a derivative expansion which only keeps the leading order terms in the frequency and momentum dependence. 
This precisely generates the wave function renormalization $Z_\phi$ (\ref{WFRDefini},\ref{ZRFormula}) and the gradient coefficient (\ref{Zphi1},\ref{Aphi1}). In this approximation, we find (up to an irrelevant infinite constant and evaluated in the symmetric phase) the one loop result \begin{eqnarray}\label{BosPotNew} U_1^{(B)} = T \int\frac{d^3q}{(2\pi)^3} \ln \big|1- \mathrm{e}^{-\bar{P}_{\phi,R}/T}\big|, \end{eqnarray} where the spacelike part of the boson propagator is precisely given by eq. (\ref{MFTCorrProp}) in the symmetric phase (for the result in the symmetry broken phase, cf. eq. (\ref{MFTCorrBroken})). This one loop formula has shortcomings and we will improve on it by solving appropriate gap equations in sect. \ref{sec:beyond}. Nevertheless, it already contains the essential information for the different contributions to the density. Let us compare the effect of the $\mu$- and $\sigma$-derivatives on the bosonic part of the effective potential, eq. (\ref{BosPotNew}). With the ``classical'' inverse boson propagator $\bar{P}_\phi^{(cl)} = q^2/4M + \bar{\nu} - 2\mu$ one has $\partial\bar{P}_\phi/\partial \mu = -2$ and we note the simple relation \begin{eqnarray}\label{simplerel} \frac{\partial U_1^{(B)}}{\partial \mu } = - 2 \bar{n}_M. \end{eqnarray} On the other hand, the fermion loop corrections induce a $\sigma$ - dependence (at fixed $\mu$) of $U_1^{(B)}$, which contributes to $\bar{n}_F$. This contribution can be interpreted as the number of open channel atoms that are bound in the dressed molecules \begin{eqnarray} 2n_{FM} = -\frac{\partial U_1^{(B)}}{\partial \sigma}\big|_\mu. \end{eqnarray} The total number of dressed molecules is then given by \begin{eqnarray} n_M = \bar{n}_M + n_{FM} = Z_\phi \bar{n}_M. \end{eqnarray} This could be taken as a possible alternative definition of $Z_\phi$ \begin{eqnarray} Z_\phi -1 = \frac{n_{FM}}{\bar{n}_M} = \frac{\partial U_1^{(B)}/\partial \sigma}{\partial U_1^{(B)}/\partial \mu}. \end{eqnarray} In the limit where the $\sigma$ - dependence of $Z_\phi$ and $\bar{A}_\phi$ can be neglected and $\partial \bar{P}_\phi/\partial \mu =-2$ one finds from \begin{eqnarray} \bar{P}_\phi = \bar{A}_\phi q^2 + \bar{m}_\phi^2 \end{eqnarray} (cf. eq. (\ref{Pphifull})) the relation \begin{eqnarray}\label{ZRdef} Z_\phi - 1 = \frac{\partial \bar{P}_\phi/\partial \sigma}{\partial \bar{P}_\phi/\partial \mu} =-\frac{1}{2}\frac{\partial\bar{m}_\phi^2} {\partial\sigma}\big|_\mu = -\frac{1}{2}\frac{\partial^3 U}{\partial \sigma\partial\bar{\phi}^*\partial\bar{\phi}}\big|_\mu.\nonumber\\ \end{eqnarray} We recover the MFT result in eq. (\ref{ZphiinMFT}). The combined effect of the derivatives with respect to $\sigma$ and $\mu$ yields directly the number of dressed molecules \begin{eqnarray}\label{DressedMol} n_M = Z_\phi \bar{n}_M = -\frac{1}{2}\Big(\frac{\partial U_1^{(B)}}{\partial \mu}\big|_\sigma + \frac{\partial U_1^{(B)}}{\partial \sigma}\big|_\mu\Big). \end{eqnarray} The quantitative results shown in the figures are obtained by including the $\sigma$ - and $\bar{\phi}$ - dependence of $P_\phi$. They will be discussed in more detail in the next sections. 
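The relation (\ref{simplerel}) can be checked numerically by comparing a finite difference of $U_1^{(B)}$ with the Bose integral for $\bar{n}_M$. A minimal sketch (Python, for the classical propagator in units where the gradient coefficient is set to one; the values of $T$, $\bar{\nu}$ and $\mu$ are illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T, nu = 1.0, 2.0     # illustrative values; P = q^2 + nu - 2*mu > 0

def U1B(mu):         # one loop potential, cf. eq. (BosPotNew)
    f = lambda q: q**2*np.log1p(-np.exp(-(q**2 + nu - 2*mu)/T))
    return T/(2*np.pi**2)*quad(f, 0, 20)[0]

def nbarM(mu):       # Bose integral, cf. eq. (BosNumber)
    f = lambda q: q**2/np.expm1((q**2 + nu - 2*mu)/T)
    return 1/(2*np.pi**2)*quad(f, 0, 20)[0]

mu, eps = 0.3, 1e-5
print((U1B(mu + eps) - U1B(mu - eps))/(2*eps))  # dU_1^(B)/dmu
print(-2*nbarM(mu))                             # agrees: -2 nbar_M
\end{verbatim}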
\subsection{Open and closed channel atoms} \label{sec:OpenandClosed} Some characteristic properties of the ultracold gas involve the fractions of open channel atoms, uncondensed bare molecules and condensed bare molecules \begin{eqnarray} \bar{\Omega}_F &=& \frac{\bar{n}_F}{n}, \quad \bar{\Omega}_M = \frac{2\bar{n}_M}{n},\quad\bar{\Omega}_C = \frac{\bar{n}_C}{n},\nonumber\\ &&\bar{\Omega}_F + \bar{\Omega}_M + \bar{\Omega}_C = 1. \end{eqnarray} For example, the sum $\bar{\Omega}_B = \bar{\Omega}_M + \bar{\Omega}_C$ measures the fraction of closed channel atoms, as observed in \cite{Partridge05}. The formal use of two different effective chemical potentials in the action $S_B$ (\ref{PhiAction}), i.e. $\sigma$ for the fermions and $\mu$ for the bosons, allows the simple association \begin{eqnarray} -3\pi^2 \bar{\Omega}_F = \frac{\partial U}{\partial \sigma}\Big|_\mu , \quad -3\pi^2 \bar{\Omega}_B = \frac{\partial U}{\partial \mu}\Big|_\sigma, \end{eqnarray} and we recall that $\bar{\Omega}_F$ also receives a contribution from $U_1^{(B)}$. We have computed $\bar \Omega_B$ in \cite{Diehl:2005an} and the results agree well with \cite{Partridge05} over several orders of magnitude. In the symmetric phase for $T\geq T_c$ one has $\bar{\Omega}_F + \bar{\Omega}_M =1$. For small $\tilde{h}_\phi\lesssim 1$ the BEC-BCS crossover is the crossover from small to large $\bar{\Omega}_F$. In fig. \ref{CrossoverDensAll} (b) we plot $\bar{\Omega}_F$ and $\bar{\Omega}_M$ as a function of the inverse concentration $c^{-1}$ for $T=T_c$. The modifications of $Z_\phi$, $\bar{A}_\phi$ and $\bar{m}_\phi^2$ in the inverse propagator $\bar{P}_\phi$ (\ref{Pphifull}) depend on the Yukawa coupling $\tilde{h}_\phi$. However, this influences the precise shape of the crossover between the BEC and BCS regime only for moderate $\tilde{h}_\phi\gtrsim 1$. For smaller $\tilde{h}_\phi$, the density fractions are insensitive to the fluctuation modifications. For the large values of $\tilde{h}_\phi$ encountered for the broad Feshbach resonances in $^6\mathrm{Li}$ and $^{40}\mathrm{K}$ the contributions from the closed channel molecules $\bar{\Omega}_M, \bar{\Omega}_C$ become very small (cf. fig. \ref{FractionsT0R} (b)). The dressed molecules differ substantially from the bare molecules (large $Z_\phi$) and the crossover physics is better described in terms of dressed molecules. We display in fig. \ref{CrossoverDensAll} (a) the fraction of dressed unbound atoms $\Omega_F$ and dressed molecules $\Omega_M$ for large values of $\tilde{h}_\phi$ and $T= T_c$. The fractions are not sensitive to the precise value of $\tilde{h}_\phi$ in the broad resonance limit $\tilde{h}_\phi\to \infty$, similar to the behavior found for small $\tilde{h}_\phi$. \subsection{Condensate fraction} \label{sec:CondFrac} The total number of atoms in the condensate depends on the expectation value of the \emph{renormalized} field $\phi_R = \langle \hat\phi_R\rangle = Z_\phi^{1/2} \bar \phi$. We will mainly use the dimensionless field \begin{eqnarray} \phi = k_F^{-3/2} \langle\hat{\phi}_R \rangle = k_F^{-3/2}Z_\phi^{1/2}\bar{\phi}. \end{eqnarray} With $\rho = \phi^*\phi$, $\rho_0 = \phi_0^*\phi_0$ the dressed condensate fraction can be defined as \begin{eqnarray}\label{OmCRDef} \Omega_C =\frac{2\langle\hat{\phi}^*_R \rangle\langle\hat{\phi}_R \rangle}{n} = 2 k_F^3 \phi_0^*\phi_0 /n = 6 \pi^2 \rho_0 = Z_\phi\bar{\Omega}_C. \end{eqnarray} In fig. \ref{FractionsT0R} (a) we plot the condensate fraction $\Omega_C$ as a function of $c^{-1}$ for $T=0$. 
We also show the fraction of closed channel atoms \footnote{The figure for $\bar{\Omega}_C$ includes renormalization effects for $\tilde{h}_\phi$ discussed in \cite{Diehl:2005an}.} in the condensate, $\bar{\Omega}_C$, in fig. \ref{FractionsT0R} (b). Both correspond to the broad resonance limit and we choose the Yukawa coupling appropriate for $^6\mathrm{Li}$, $\tilde{h}_\phi=610$. For large $Z_\phi$ one has $\Omega_C \gg \bar{\Omega}_C$ - indeed, the probability $Z_\phi^{-1}$ that a condensed di-atom state contains a bare molecule is small. In this case the major part of the condensate is due to open channel atoms, i.e. $n_C$ is dominated by a contribution from $\bar{n}_F$ \begin{eqnarray} n_{FC} &=& n_C - \bar{n}_C= (Z_\phi-1) \bar{n}_C \\\nonumber &=& \frac{\bar{\Omega}_C}{\bar{\Omega}_C +\bar{\Omega}_M}\big( 1- \frac{ \Omega_F}{\bar{\Omega}_F}\big)\bar{n}_F. \end{eqnarray} Here the second line defines implicitly $\Omega_F$ by the requirement \begin{eqnarray} \Omega_F + \Omega_M + \Omega_C = 1. \end{eqnarray} (From $n_C + n_M \leq n$, $\bar{\Omega}_C + \bar{\Omega}_M \leq Z_\phi^{-1} \ll 1$ one concludes $\bar{\Omega}_F \approx 1$, and for low enough $T$ one further expects $\bar{\Omega}_M < \bar{\Omega}_C$.) In the BCS limit this result is not surprising since we could have chosen a formulation without explicit molecule fields such that all atoms are described by $\bar{n}_F$. (If the chemical potential multiplies only $\hat\psi^\dagger\hat\psi$ the total condensate fraction must be $\Omega_C = 1- \Omega_F$ \footnote{The difference between $\Omega_C$ and $1- \Omega_F$ can be traced back to the appearance of the chemical potential $\mu$ in the effective four-fermion interaction (\ref{Mom4Fermion}).}.) The total number of open channel atoms can therefore be found in three channels, $\bar{n}_F = n_F+ n_{FC}+ 2n_{FM}$. Here $n_F$ denotes the unbound dressed fermionic atoms, $2n_{FM}$ the open channel atoms contained in dressed molecules and $n_{FC}$ the ones in the condensate. As long as the fermionic and bosonic contributions to $U$ can be separated we have the identities \begin{eqnarray} n_{F,0} &=& n_F + n_{FC} = -\frac{\partial U_1^{(F)}}{\partial \sigma},\\\nonumber n_M &=& -\frac{1}{2}\Big(\frac{\partial U_1^{(B)}}{\partial \mu}\big|_\sigma + \frac{\partial U_1^{(B)}}{\partial \sigma}\big|_\mu\Big), \end{eqnarray} and \begin{eqnarray} n &=& n_{F,0} + 2 n_M + \bar{n}_C\\\nonumber &=& n_F + 2 n_M + n_C\\\nonumber &=& \bar{n}_F + 2 \bar{n}_M + \bar{n}_C. \end{eqnarray} The definition of a condensate fraction in terms of the superfluid order parameter $\rho_0$ is rather simple and appealing. Nevertheless, this may not correspond precisely to the condensate fraction as defined by a given experimental setup. The ambiguity is even larger when we come to the concepts of uncondensed fermionic atoms and molecules. The distinction becomes somewhat arbitrary if we include higher loops in its computation, where bosonic and fermionic fluctuations are mixed. As an example of the ambiguities in the definition one may try to extract the number of open channel atoms in the condensate directly from the $\phi$ - dependence of $\bar{n}_F$. We can decompose the fermion contribution into a part for vanishing condensate and a ``condensate contribution'' due to $\phi \neq 0$ \begin{eqnarray} U_1^{(F)} = U_1^{(F)} (\bar{\phi} = 0) + \Delta U_1^{(F)} . 
\end{eqnarray} The association \begin{eqnarray} n_{FC} = - \frac{\partial\Delta U_1^{(F)}}{\partial \sigma} \end{eqnarray} yields an alternative definition of $Z_\phi$, \begin{eqnarray}\label{ZRaltern} Z_\phi' - 1 = \frac{n_{FC}}{\bar{n}_C} = -\frac{1}{2\bar{\phi}^*\bar{\phi}} \frac{\partial\Delta U_1^{(F)}}{\partial \sigma}. \end{eqnarray} With this definition the number of unbound atoms $n_F$ becomes \begin{eqnarray} n_F = \bar{n}_F - n_{FC} - n_{FM} = -\frac{\partial U_1^{(F)}(\bar{\phi} = 0)}{\partial \sigma} \end{eqnarray} which amounts to the standard number density in a Fermi gas. We note that $Z_\phi'$ (\ref{ZRaltern}) coincides with $Z_\phi$ (\ref{ZRdef}) in the limit where $\partial^3 U/\partial\sigma\partial\bar{\phi}^*\partial\bar{\phi}$ is dominated by the term quadratic in $\bar{\phi}$ in $U_1^{(F)}$. From fig. \ref{FractionsT0R} (a) we see that this is the case in the BEC and BCS regimes. \begin{figure}[t!] \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,110) \put (0,54){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{CrossoverFractionsT0R3.eps}} \put(66,-2){$c^{-1}$} \put(-3,1){(a)} \put(10,30){BCS} \put(12,20){$T=0$} \put(65,12){$\Omega_M$ } \put(43,28){$\Omega_C$ } \put(25,23){$\Omega'_C$ } \put(69,30){BEC} \end{picture} }} \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{BareFracsLarge2.eps}} \put(70,-3){$c^{-1}$} \put(-3,1){(b)} \put(68,39){$\bar{\Omega}_C$ } \put(66,7){$\bar{\Omega}_M$ } \put(15,15){$T =0$ } \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{(a) Contributions to the total particle density for $T=0$ in the large $\tilde{h}_\phi$ limit: The fraction of dressed molecules $\Omega_M$ (dashed line) is largest in the crossover regime. The condensate fraction $\Omega_C$ (solid line) grows to one in the BEC regime. The solid line corresponds to $\Omega_C = Z_\phi \bar{\Omega}_C$ whereas the dashed-dotted line uses $Z_\phi'$ (eq. (\ref{ZRaltern})) instead of $Z_\phi$. (b) Fractions of the bare or closed channel molecules. In contrast to the dressed molecules, they are $\mathcal{O} (\tilde{h}_\phi^{-2})$. The dominant contribution arises from the condensed bare molecules $\bar{\Omega}_C$ (solid line). The contribution from noncondensed bare molecules $\bar{\Omega}_M$ at $T=0$ remains very small. } \label{FractionsT0R} \end{minipage} \end{figure} \subsection{Excitations in the superfluid phase} \label{sec:Excitations} The bosonic excitations in the superfluid phase are analogous to a purely bosonic theory as for superfluidity in $^4\mathrm{He}$ \cite{Gollisch02}. Due to the nonvanishing order parameter the matrix for the inverse renormalized propagator $\mathcal{P}_\phi$ now contains ``off-diagonal'' entries from terms $\sim \rho_0 \phi\phi$ or $\rho_0\phi^*\phi^*$ where $\rho_0 = k_F^{-3}Z_\phi\langle \hat{\phi} \rangle^* \langle \hat{\phi}\rangle$. It is convenient to use a basis of real fields $\phi_1,\phi_2$ \begin{eqnarray} \phi = \frac{1}{\sqrt{2}}(\phi_1 + \mathrm{i}\phi_2) , \quad \rho = \frac{1}{2} (\phi_1^2 + \phi_2^2). \end{eqnarray} In this basis $\mathcal{P}_\phi$ and $G_\phi =\mathcal{P}_\phi^{-1}$ are $2\times 2$ matrices and the exact eq. (\ref{MolProExact}) for the bare molecule density is replaced by \begin{eqnarray} \bar{n}_M (x) = \frac{T}{2}\mathrm{tr} \int d\tau \bar{G}_\phi(x,\tau; x,\tau ) - \hat{n}_B. 
\end{eqnarray} We expand in the superfluid phase around the minimum of the potential at $\bar{\rho} = \bar{\rho}_0$ \begin{eqnarray} \tilde{u} = \frac{\lambda_\phi}{2}(\rhoR - \rhoR_0)^2 + ... \end{eqnarray} such that the mass matrix \begin{eqnarray} \big(m_\phi^2\big)_{ab} = \frac{\partial^2 \tilde{u}}{\partial \phi_a\partial\phi_b}\Big|_{\rhoR=\rhoR_0} \end{eqnarray} becomes \begin{eqnarray} m_\phi^2 =\left( \begin{array}{cc} 2\lambda_\phi \rhoR_0 & 0 \\ 0 & 0 \end{array}\right). \end{eqnarray} Without loss of generality we have taken here the order parameter in the $\phi_1$ direction, $\phi_{1,0} = \sqrt{2\rhoR_0}$, $\phi_{2,0} =0$, and we recognize the flat direction in the potential (vanishing eigenvalue of $m_\phi^2$) in the ``Goldstone direction'' $\phi_2$. In contrast, the ``radial mode'' $\phi_1$ has a nonvanishing mass term $2\lambda_\phi\rhoR_0$. In the basis $(\phi_1,\phi_2)$ the term containing the $\tau$ - derivative is off-diagonal (neglecting total derivatives) \begin{eqnarray} \int d\tau \phi^*\partial_\tau \phi = \mathrm{i} \int d\tau \phi_1\partial_\tau \phi_2 = - \mathrm{i} \int d\tau \phi_2\partial_\tau \phi_1. \end{eqnarray} In momentum space one therefore finds for the renormalized inverse propagator $\mathcal{P}_\phi = 2M/(Z_\phi k_F^2) \bar{\mathcal{P}}_\phi$\footnote{Note the structure $\Gamma_2 \approx 1/2 \int_q \phi^T(-Q) \mathcal{P}_\phi(Q) \phi (Q)$.} \begin{eqnarray}\label{MFTCorrBroken} \mathcal{P}_\phi = \left( \begin{array}{cc} A_\phi \tilde{q}^2 + 2\lambda_\phi \rhoR_0 & -\tilde{\omega} \\ \tilde{\omega} & A_\phi \tilde{q}^2 \end{array}\right) \end{eqnarray} where we use eq. (\ref{DimlessRenorm}) and the renormalized order parameter and quartic coupling \begin{eqnarray} \rhoR_0 = Z_\phi \tilde{\rho}_0 , \quad \lambda_\phi = \tilde{\lambda}_\phi/Z_\phi^2. \end{eqnarray} (For a list of the relations between dimensionless and dimensionless renormalized parameters cf. app. \ref{app:numerics}.) This has an important consequence: The propagating excitations correspond to frequencies $\omega$ which are obtained from the Matsubara frequencies $\omega_B$ by analytic continuation $\omega_B \to \mathrm{i} \omega$. This corresponds to the analytic continuation from Euclidean time $\tau$ to real or ``Minkowski'' time $t = -\mathrm{i} \tau$. A propagating excitation then corresponds to a pole of $G_\phi$, i.e. to a zero eigenvalue of $\mathcal{P}_\phi$. The eigenvalues $\lambda$ of $\mathcal{P}_\phi$ obey \begin{eqnarray} (A_\phi \tilde{q}^2 + 2\lambda_\phi \rhoR_0 - \lambda)(A_\phi \tilde{q}^2- \lambda) - \tilde{\omega}^2 =0. \end{eqnarray} Vanishing eigenvalues $\lambda$ therefore lead to the dispersion relation \begin{eqnarray} \tilde{\omega}^2 = A_\phi \tilde{q}^2 ( 2\lambda_\phi \rhoR_0 + A_\phi \tilde{q}^2 ). \end{eqnarray} For small $\tilde{q}^2$ this yields the linear dispersion relation characteristic for superfluidity \begin{eqnarray} \tilde{\omega} = \sqrt{2A_\phi\lambda_\phi\rhoR_0}\sqrt{\tilde{q}^2}, \end{eqnarray} from which we can read off the speed of sound \begin{eqnarray} v_s =k_F/(2M) \, \sqrt{2A_\phi\lambda_\phi\rhoR_0}. 
\end{eqnarray} \subsection{Molecule density in the superfluid phase} \label{sec:MolDensSFL} The density of dressed molecules in the superfluid phase obeys \begin{eqnarray} n_M &=& Z_\phi \bar{n}_M = k_F^3\frac{\tilde{T}}{2} \mathrm{tr} \sum\limits_n \int\frac{d^3\tilde{q}}{(2\pi)^3} \mathcal{P}_\phi^{-1}(\tilde{q}, \tilde{\omega}_n) - \hat{n}_B\nonumber\\ &=& k_F^3 \tilde{T}\sum\limits_n \int\frac{d^3\tilde{q}}{(2\pi)^3}\frac{A_\phi \tilde{q}^2 + \lambda_\phi\rhoR_0}{\tilde{\omega}_n^2 + A_\phi \tilde{q}^2(A_\phi \tilde{q}^2 + 2\lambda_\phi\rhoR_0)} \nonumber\\ && - \hat{n}_B, \end{eqnarray} where the Matsubara summation can be performed analytically \begin{eqnarray} n_M &=&\frac{k_F^3}{2} \int\frac{d^3\tilde{q}}{(2\pi)^3}\Big\{\frac{A_\phi \tilde{q}^2 + \lambda_\phi\rhoR_0}{ \sqrt{A_\phi \tilde{q}^2(A_\phi \tilde{q}^2 + 2\lambda_\phi\rhoR_0)}}\nonumber\\ &&\times\coth\frac{\sqrt{A_\phi \tilde{q}^2(A_\phi \tilde{q}^2 + 2\lambda_\phi\rhoR_0)}}{2\tilde{T}} - 1\Big\}.\label{SuperFlDens1} \end{eqnarray} Due to the subtraction of $\hat{n}_B$ (the term $-1$ in the curly bracket) the momentum integral is UV finite. We have used dimensionless units and may introduce \begin{eqnarray}\label{DefAlpha} \alpha &=&\left\{ \begin{array}{c} {(A_\phi \tilde{q}^2 + m_\phi^2)/(2 \tilde{T}) \quad \text{symmetric phase}} \\ {A_\phi \tilde{q}^2/(2 \tilde{T}) } \qquad \,\qquad\text{superfluid phase} \end{array}\right. \\ \kappa &=&\left\{ \begin{array}{c} {0 \qquad\qquad\qquad \,\,\,\qquad \text{symmetric phase}} \\ {\lambda_\phi\rho_0/(2 \tilde{T}) \,\qquad\quad \quad\text{superfluid phase}} \end{array}\right.\label{DefKappa} \end{eqnarray} where (for SSB) \begin{eqnarray}\label{DefAlphaPhi} \alpha_\phi &=&\sqrt{A_\phi \tilde{q}^2(A_\phi \tilde{q}^2 + 2\lambda_\phi\rho_0)}/(2\tilde{T})\\\nonumber &=& \sqrt{\alpha^2 + 2\kappa\alpha} \end{eqnarray} is the bosonic analog to eq. (\ref{DefGammaPhi}). This allows us to write the dressed molecule density in both phases as \begin{eqnarray} n_M^{(SYM)} &=& \int\frac{d^3q}{(2\pi)^3}\Big(\exp 2\alpha - 1\Big)^{-1},\label{SymmDens}\\ n_M^{(SSB)} &=& \frac{1}{2} \int\frac{d^3q}{(2\pi)^3}\Big(\frac{\alpha + \kappa}{\alpha_\phi}\coth\alpha_\phi - 1\Big). \label{SuperFlDens} \end{eqnarray} At the phase boundary the two definitions coincide since $\alpha_\phi = \alpha$ ($m_\phi^2 = 0, \rho_0 = 0$ and $\kappa=0$). \begin{figure}[t!] \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,108) \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{CrossoverGapT0R.eps}} \put(72,-2){$c^{-1}$} \put(72,42){$\tilde{\Delta}$ } \put(38.5,11.5){$\nwarrow$ } \put(36.8,12.9){{\large $\star$} } \put(41,8){QMC } \put(-3,1){(b)} \put(20,35){$T=0$} \end{picture} }} \put (0,54){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{CrossoverSnT0R.eps}} \put(72,-3){$c^{-1}$} \put(72,42){$\tilde{\sigma}$ } \put(40,33){$\nearrow$ } \put(43,35.2){{\large $\star$} } \put(32,32){QMC } \put(20,15){$T=0$} \put(-3,1){(a)} \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{Solutions of the coupled gap and density equations at $T=0$, (a) effective chemical potential, (b) gap parameter $\tilde{\Delta} = \sqrt{\tilde{r}}$. We also show (dashed) the mean field result (obtained by setting the bosonic contribution $n_M =0$ in the density equation). 
Our result can be compared to QMC calculations \cite{Carlson03} performed at $c^{-1} =0$ which find $\tilde{\sigma}=0.44, \tilde{\Delta} =0.54$. Our solution yields $\tilde{\sigma}=0.50, \tilde{\Delta} =0.53$, improving on the MFT result $\tilde{\sigma}=0.63, \tilde{\Delta} =0.65$. } \label{CrossoverSnGap} \end{minipage} \end{figure} The equations (\ref{SymmDens}) for the density of dressed molecules involve the renormalized coupling $\lambda_\phi$. This coupling describes the full vertex and is therefore a momentum dependent function. The same holds for $A_\phi$. We will neglect the momentum dependence of $A_\phi$, as motivated by fig. \ref{nqtot} for $T= T_c$. For $\lambda_\phi$ this issue is more involved since $\lambda_\phi(q)$ vanishes for $q\to 0$ ($T\leq T_c$) due to the molecule fluctuations, as shown in app. \ref{app:SDE}. We observe that for small $T$ the momentum integration in eq. (\ref{SuperFlDens1}) is dominated by the range $\tilde{q}^2 \approx \lambda_\phi\rho_0/A_\phi$. The infrared suppression of $\lambda_\phi(q\to 0)$ is therefore not effective. For the density equation we will simply omit the contribution of the molecule fluctuations to $\lambda_\phi$ and approximate $\lambda_\phi = \lambda_\phi^{(F)}$ with $\lambda_\phi^{(F)}$ evaluated at $q^2 =0$. The inclusion of the molecule density $n_M$ is important for quantitative accuracy even at $T=0$. This is demonstrated in fig. \ref{CrossoverSnGap} where we show the crossover for the effective chemical potential $\tilde{\sigma}$ and the gap $\tilde{\Delta}$ as a function of $c^{-1}$. The agreement with quantum Monte Carlo simulations \cite{Carlson03} is substantially improved as compared to mean field theory, and also compares reasonably well with other analytical approaches (Strinati \emph{et al.} \cite{StrinatiPieri04}, $\tilde \Delta = 0.53, \tilde{\sigma} = 0.445$). \section{Effective Field for Atom Density} \label{EffAtDens} Before proceeding in sect. \ref{sec:beyond} to a description of our computation of the effects from the molecule fluctuations on the effective potential, we present in this section an improvement of our formalism which treats the fermionic and bosonic fluctuations in an even more symmetric way. Indeed, in thermodynamic equilibrium the molecule propagator should involve the same effective chemical potential $\sigma$ as the propagator of the unbound atoms. (So far it involves $\mu$ instead of $\sigma$.) If one is interested in the separate contributions from open and closed channel atoms one may formally consider different chemical potentials $\sigma$ and $\mu$ multiplying $\hat\psi^\dagger\hat\psi$ and $\hat\phi^*\hat\phi$ in the action. Then $\bar{n}_F$ and $\bar{n}_B$ can be associated to the variation of the effective action with respect to $\sigma$ and $\mu$. At the end of the computations one has to identify $\sigma = \mu$. For many purposes, however, one only needs the total number of atoms $n$. It then seems advantageous to modify our formulation such that the field $\sigma$ is associated to $n$ instead of $\bar{n}_F$. In this section, we treat $\sigma$ as a classical field such that its role is reduced to a source term $\sigma =J$. In app. \ref{sec:partial} we will consider the more general case where $\sigma$ is treated as a fluctuating field. 
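For orientation we also sketch how the purely fermionic mean field limit of coupled gap and density equations of the type discussed above can be solved numerically; the dashed curves of fig. \ref{CrossoverSnGap} correspond to this level of approximation. The fragment below (Python) solves the textbook $T=0$ BCS-type gap and density equations at unitarity in units $\hbar = 2M = k_F = 1$; it is meant only as an illustration of the structure of the problem, does not include the molecule contribution $n_M$, and its conventions need not match our $\tilde{\sigma}$, $\tilde{\Delta}$ exactly:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# T=0 mean field equations at 1/a = 0; eps_q = q^2, n = k_F^3/(3 pi^2)
def equations(x):
    mu, delta = x
    E = lambda q: np.sqrt((q**2 - mu)**2 + delta**2)
    # gap equation, UV regularized by the subtraction of 1/(2 q^2);
    # the finite cutoff is adequate for a sketch (tail ~ mu/(2 q))
    gap = quad(lambda q: q**2*(1/(2*E(q)) - 1/(2*q**2)), 0, 2000,
               limit=200)[0]
    dens = quad(lambda q: q**2*(1 - (q**2 - mu)/E(q)), 0, 2000,
                limit=200)[0]/(2*np.pi**2)
    return [gap, dens - 1/(3*np.pi**2)]

mu, delta = fsolve(equations, [0.6, 0.7])
print(mu, delta)    # approx. 0.59 and 0.69 in these units
\end{verbatim}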
Identifying $\sigma = \mu$ in the bare action (\ref{PhiAction}), the $\sigma$-derivative of the effective potential generates the total particle number density, \begin{eqnarray} -\frac{\partial U}{\partial\sigma} = n = \bar n_F + 2 \bar n_M + 2 \bar\phi^*\bar\phi. \end{eqnarray} In the presence of interactions it is hard to evaluate the full bare correlation functions in the above form explicitly -- it presupposes the solution of the full quantum field theory. As we have discussed in sects. \ref{sec:ContribMolFluct}, \ref{sec:OpenandClosed} it is more practicable to decompose $n$ according to \begin{eqnarray}\label{RenDensConstr} n &=& -\frac{\partial(U_{MFT} + U_1^{(B)})}{\partial\sigma} = \bar{n}_C + n_{F,0} + 2 n_M \end{eqnarray} which is a ``mixed'' representation involving both bare and renormalized quantities. In the next section we will see that this equation reduces to an equation of state for ``fundamental'' bosons in the BEC regime, which involves dressed quantities only (eq. (\ref{BogoliubovDens})). Beyond the classical bosonic propagator we have implemented \footnote{A more rigorous derivation of the equation of state via the Noether construction for the conserved charge associated to the global $U(1)$-symmetry reveals that this choice is uniquely fixed by requiring consistency within our truncation (linear frequency dependence) \cite{Diehl:2006}.} the $\sigma$ - dependence of $U_1^{(B)}$ by the $\sigma$ - dependence of $\bar{m}_\phi^2$. This yields \begin{eqnarray} Z_\phi = -\frac{1}{2} \frac{\partial \bar{m}_\phi^2}{\partial\sigma} = -\frac{1}{2} \frac{\partial^3 U} {\partial\sigma\partial\bar{\phi}^*\partial\bar{\phi}} \end{eqnarray} where we note that the contribution from the bare molecules is already included in the classical $\sigma$ - dependence of $\bar{m}_\phi^2$, leading to the replacement $Z_\phi \to Z_\phi - 1$ in eq. (\ref{ZRdef}). In practice, we use the approximation \begin{eqnarray}\label{ZRApprox} Z_\phi = 1 -\frac{1}{2} \frac{\partial^3 U_1^{(F)}}{\partial\sigma\partial\bar{\phi}^*\partial\bar{\phi}}. \end{eqnarray} This has a simple interpretation: due to the fermionic fluctuations the classical ``cubic coupling'' $- 2 \sigma\bar{\phi}^*\bar{\phi}$ is replaced by a renormalized coupling $- 2 Z_\phi \sigma\bar{\phi}^*\bar{\phi}$. We can compute $n_M$ directly from eqs. (\ref{117}), (\ref{116}), (\ref{RenormBosDens}), (\ref{Pphifull}). \section{BEC Limit} \label{sec:renconstZR} In sect. \ref{sec:Excitations} we have defined a multiplicative renormalization scheme for the bosons by rescaling all couplings and fields with an appropriate power of $Z_\phi$ such that the timelike derivative in the effective action has a unit coefficient. This leads to the notion of dressed molecules in our formalism. In this section we discuss the implications of this prescription in the BEC regime. Here we will see that the dressed molecules behave just as ``elementary'' bosons. \begin{figure}[t!] \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,55) \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{BoseCoeffZphiR.eps}} \put(70,-2){$c^{-1}$} \put(65,10){BEC} \put(17,40){$T=0$} \put(8,10){BCS} \put(63,41){$A_\phi$ } \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{Gradient coefficient $A_\phi = \tilde{A}_\phi/Z_\phi$ for $T=0$, as a function of the inverse concentration. 
In the BEC limit, $A_\phi$ takes the classical value $1/2$ as appropriate for elementary bosons of mass $2M$.} \label{BoseCoeffs} \end{minipage} \end{figure} The propagator for the dressed molecules still has a nontrivial renormalization factor for its dependence on the spacelike momenta (\ref{Pphifull}). The ratio $A_\phi = \tilde{A}_\phi/Z_\phi$ is shown as a function of $c^{-1}$ in fig. \ref{BoseCoeffs}. It is instructive to evaluate $A_\phi$ in the BEC limit where the propagator should describe the propagation of dressed molecules. The integrals in $\tilde{A}_\phi$ and $Z_\phi$ can be evaluated analytically (cf. app. \ref{AnalBEC}, eqs. (\ref{ZphiBEC}, \ref{ZRBEC})) irrespective of the thermodynamic phase of the system. The universal result does not depend on $\tilde{h}_\phi$, \begin{eqnarray} A_\phi \to \frac{1}{2}. \end{eqnarray} This makes it particularly clear that for \emph{both} the limit of small $\tilde{h}_\phi \to 0$ \emph{and} the limit of large $\tilde{h}_\phi$ we recover composite, but pointlike bosons with gradient coefficient $A_\phi = 1/2$, corresponding to a kinetic energy $p^2/4M$. In the first case of small $\tilde{h}_\phi$ we deal with microscopic closed channel molecules, while in the second case of large $\tilde{h}_\phi$ the emerging bosons are open channel pairs, which nevertheless behave as pointlike bosons in the BEC limit. For large $\tilde{h}_\phi$ we could equally well drop the classical piece in $\bar{A}_\phi$ and $Z_\phi$. This corresponds to a purely pointlike interaction for the fermions without any explicit reference to molecules at all, cf. eq. (\ref{Mom4Fermion}) and the discussion in \cite{Diehl:2005an}. Nevertheless, bound atom pairs emerge that behave just as pointlike particles. A similar aspect of universality is found for the four-boson coupling $\lambda_\phi = \tilde\lambda_\phi/Z_\phi^2$. In the approximation where we neglect bosonic contributions to the four-boson coupling and to the gap equation for the mass (BCS gap equation), we find $\lambda_\phi = \lambda_\phi^{(F)} = 8\pi/\sqrt{-\tilde{\sigma}}$ such that the bosonic scattering length becomes $a_{M} = 2a_R$. This is the Born approximation for the scattering of the composite bosons. However, more accurate treatments of this issue have shown $a_M/a_R =0.6$. This result is obtained from the solution of the four-body Schr\"odinger equation \cite{Petrov04} and from a numerically demanding diagrammatic approach \cite{Kagan05}, and has also been confirmed in QMC simulations \cite{Giorgini04}. A resummation of the effective boson interaction vertices in vacuum as done in \cite{AAAAStrinati} yields $a_M/a_R =0.75(4)$. Our present Schwinger-Dyson approach does not correctly account for the real situation since the contribution from molecule fluctuations in the vacuum is missing. In the framework of functional renormalization group equations, this deficiency has been remedied: simple truncations yield $a_M/a_R = 0.71 - 0.92$ \cite{Diehl:2007th}. It is instructive to investigate eq. (\ref{SymmDens}) in the limit $c^{-1} \to \infty$ (BEC limit) where we have $A_\phi \to 1/2, a_M=2a_R$. 
The density in the superfluid phase $n_M$ (\ref{SymmDens}) then coincides precisely with the Bogoliubov formula for bosons of mass $2M$ and the above scattering length, \begin{eqnarray} n_M &=& \frac{1}{2} \int\frac{d^3q}{(2\pi)^3}\Big(|v_q|^2 + \frac{|u_q|^2 + |v_{-q}|^2}{\exp 2\alpha_\phi -1} \Big)\label{BogoDens} \end{eqnarray} where we used the relations connecting the Bogoliubov transformation coefficients and our expressions (\ref{DefAlpha}, \ref{DefAlphaPhi}), \begin{eqnarray} |v_q|^2 &=& \frac{1}{2}\big(\frac{\alpha + \kappa}{\alpha_\phi} -1\big),\quad |u_q|^2 + |v_{-q}|^2 =\frac{\alpha + \kappa}{\alpha_\phi}. \end{eqnarray} We emphasize again that we can first perform the limit $\tilde{h}_\phi \to \infty$, where we recover a purely fermionic model with pointlike interaction and no explicit molecule degrees of freedom (cf. \cite{Diehl:2005an}). Subsequently we may consider large $c^{-1}$ where the approximations leading to eq. (\ref{BogoDens}) become valid. This shows that the Bogoliubov formula for weakly interacting ``fundamental'' bosons can be recovered from a purely fermionic model! In our approach, this result emerges in the simultaneous limit $c^{-1}\to \infty$ (BEC regime), $\tilde{h}_\phi \to \infty$ (broad resonance regime). A similar result has been established by Strinati \emph{et al.} \cite{AAAAStrinati,BBStrinati,CCStrinati,ZStrinati}, who work in a purely fermionic setting, or, in our language, in the broad resonance limit $\tilde{h}_\phi\to \infty$ from the outset. This observation is strengthened by an investigation of the dimensionless density equation (\ref{RenDensConstr}) at $T=0$ in the BEC limit: \begin{eqnarray}\label{BogoliubovDens} 1 &=&3\pi^2\Big( \frac{\tilde{r}}{16\pi\sqrt{-\tilde{\sigma}}} + 2n_M\Big)\\ &=& 3\pi^2\Big(2Z_\phi\frac{\tilde{r}}{\tilde{h}_\phi^2} + 2n_M\Big)\nonumber\\ &=& \Omega_C + \Omega_M.\nonumber \end{eqnarray} Here we have dropped the term $\bar{\Omega}_C = \mathcal{O}(\tilde{h}_\phi^{-2})$ and we may similarly neglect $\bar{\Omega}_M$. In the first line of eq. (\ref{BogoliubovDens}) the first term is the explicit result for $\Omega_{F,0}$. The second line uses the explicit result for $Z_\phi$ (\ref{ZRBEC}) and we recover the definition of $\Omega_C$ (\ref{OmCRDef}). This is precisely the density equation one obtains when assuming fundamental bosons of mass $2M$. This is an important result: our $Z_\phi$ - renormalization procedure generates precisely the macrophysics we would have obtained when starting microscopically with a purely bosonic action. In our approach, however, the bosons emerge dynamically. \section{Gap Equation for the Molecule Propagator}\label{sec:beyond} The formulation of the problem as a Euclidean functional integral for fermionic and bosonic fields permits the use of many of the highly developed methods of quantum field theory and statistical physics. It is an ideal starting point for systematic improvements beyond MFT. We have mentioned in the previous sections that one possible alternative to the standard one loop approximation could be to first integrate out the fermions and then perform the remaining $\hat{\phi}$-integral in one loop order. This procedure has, however, some problems. Let us consider the dimensionless renormalized inverse bosonic propagator (\ref{Pphifull}) ($\omega_B=0$) in the approximation \begin{eqnarray}\label{MFTtildeMass} \mathcal{P}_\phi(\vec{\tilde{q}}) = A_\phi \tilde{q}^2\delta_{ab} + m^2_{\phi , ab}. 
\end{eqnarray} The exact mass matrix \begin{eqnarray} m_{\phi, ab}^2 = \frac{\partial^2 \tilde{u}}{\partial \phi_a\partial\phi_b}\Big|_{\phi=0} \end{eqnarray} is diagonal but in general non-degenerate in the $\phi_1,\phi_2$ basis. It vanishes at the critical temperature of a second order phase transition. Already after the first step of the $\hat\psi$ - integration $\mathcal{P}_\phi$ differs from the classical inverse molecule propagator. The full molecule propagator enters the molecule fluctuation integral described by the partition function \begin{eqnarray}\label{Intermediate} Z = \int \mathcal{D} \hat{\phi} e^{-S_{MFT}[\hat{\phi}]}, \end{eqnarray} where $S_{MFT}$ is the intermediate action resulting from the Gaussian integration of the fermion fields. It is composed of the classical boson piece and a loop contribution from the fermion fields, \begin{eqnarray} S_{MFT} &=& \int d^4x \big\{\hat{\phi}^*\big(Z_\phi[\hat{\phi}]\partial_\tau -\bar{A}_\phi[\hat{\phi}]\triangle + \bar{\nu}- 2\sigma\big)\hat{\phi} \nonumber\\ &&+U_1^{(F)}[\hat{\phi}^*\hat{\phi}] + ... \big\}. \end{eqnarray} At this stage the contribution from the fermion loop still depends on the fluctuating boson field. For example, the formula for the dressed molecule density in the symmetric phase \begin{eqnarray}\label{MolPro} n_M = \int\frac{d^3q}{(2\pi)^3}\Big(\mathrm{e}^{P_\phi(\vec{\tilde{q}})/\tilde{T}} - 1\Big)^{-1}= -\frac{1}{2} \frac{\partial U_1^{(B)}}{\partial\tilde{\sigma}} \end{eqnarray} involves $P_\phi$ (\ref{MFTtildeMass}). It generalizes \footnote{Eq. (\ref{MolPro}) approximates the exact eq. (\ref{MolProExact}) in the limit where corrections to the dependence of $P_\phi$ on the Matsubara frequencies can be neglected, see app. \ref{app:WFR}. The second part can be viewed as a Schwinger-Dyson equation for the $\sigma$ - dependence of $U$.} eq. (\ref{BosNumber}) with $\mu$ replaced by $\sigma$. We see that the inverse propagator $P_\phi$ appears in the bosonic fluctuation contribution $U_1^{(B)}$ to the effective potential and influences the field equation for $\phi$. For a simple one loop evaluation of the functional integral (\ref{Intermediate}) one would have to use a propagator with $m_\phi^{(F) \,2}$ instead of $m_\phi^2$ in eq. (\ref{MFTtildeMass}). This has an important shortcoming. Due to the difference between $U_{MFT}$ and $U$ the masslike term $m_\phi^{(F)\,2}$ vanishes at some temperature different from $T_c$. As a consequence of this mismatch one observes a first order phase transition for sufficiently large values of $\tilde{h}_\phi$. Clearly such a first order phase transition may be suspected to arise from an insufficient approximation. (A similar fake first order transition has been observed for relativistic scalar theories. There it is well understood \cite{ATetradis93,CTetradis92} that an appropriate resummation (e.g. by renormalization group methods) cures the disease and the true phase transition is second order. For the crossover problem the second order nature of the phase transition has recently been established by functional renormalization group methods for the whole range of concentrations, cf. \cite{Diehl:2007th}.) In order to improve this situation we use for the bosonic fluctuations Schwinger-Dyson type equations where the inverse propagator $P_\phi$ involves the second derivative of the full effective potential $U$ (rather than $U_{MFT}$). 
It is obvious that this is needed for a reliable estimate of $n_M$ since the exact expression (\ref{ExactDens}) and therefore also (\ref{MolPro}) involves the full molecule propagator including the contribution of the molecule fluctuations. \begin{figure}[t!] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,40) \put (0,0){ \makebox(84,38){ \begin{picture}(84,38) \put(0,0){\epsfxsize84mm \epsffile{SDEGr.eps}} \put(42,2){$\bar{\lambda}_\phi^{(F)}$} \put(17,30){$^{-1}$} \put(22,28){$=$} \put(48,28){$-$} \put(28,6.5){$+$} \put(60,25){$\bar{h}_\phi$} \put(73,25){$\bar{h}_\phi$} \end{picture} }} \end{picture} \end{center} \vspace*{-0.25ex} \caption{Lowest order Schwinger-Dyson equation for the inverse molecule propagator (double dashed line) in the symmetric phase. The first two terms on the rhs denote the ``mean field inverse propagator'' after integrating out the fermionic fluctuations, with a dashed line for the classical inverse molecule propagator and a solid line for the fermion propagator. The third term on the rhs accounts for the molecule fluctuations. Here $\bar{\lambda}_\phi^{(F)}$ is the molecule self-interaction induced by the fermion fluctuations.} \label{SDEGraphic} \end{figure} Since our functional integral treats the molecule fluctuations exactly on the same footing as the fermionic atom fluctuations we can derive the lowest order Schwinger-Dyson or gap equation for $m_\phi^2$ in the standard way. For the symmetric phase it is graphically represented in fig. \ref{SDEGraphic} and involves the full propagator and therefore $m_{\phi, ab}^2$. \subsection{Symmetric phase} We will consider a Taylor expansion of $\tilde{u} = 2Mk_F^{-5} U$ in terms of the invariant \begin{eqnarray} \rho =k_F^{-3} Z_\phi \bar{\phi}^*\bar{\phi}. \end{eqnarray} Correspondingly we use a renormalized Yukawa coupling and four-boson vertex, \begin{eqnarray} h_\phi = 2M\bar{h}_\phi/(k_F Z_\phi)^{1/2}, \quad \lambda_\phi = \tilde{\lambda}_\phi/Z_\phi^2. \end{eqnarray} For the symmetric phase we expand \begin{eqnarray} \tilde{u} &=& m_\phi^2 \rhoR +\frac{1}{2} \lambda_\phi \rhoR^2 + ...\\\nonumber \tilde{u}_{MFT} &=& m_\phi^{(F)\,2} \rhoR + \frac{1}{2} \lambda_\phi^{(F)} \rhoR^2 + ... \end{eqnarray} The gap equation for $m_\phi^2$ takes the form \begin{eqnarray} m_\phi^2&=& m_\phi^{(F)\,2} + m_\phi^{(B)\,2},\\ m_\phi^{(B)\,2} &=& 2\sum\limits_n\tilde{T} \int\frac{d^3\tilde{q}}{(2\pi)^3}\lambda_\phi^{(F)}(\tilde{q},\tilde{\omega}_n) P_\phi^{-1}(\tilde{q},\tilde{\omega}_n).\nonumber \end{eqnarray} Here $\lambda_\phi^{(F)}$ is the effective vertex involving four bosonic fields $\sim (\phi^*\phi)^2$, as induced by the fermion fluctuations. It depends on $\tilde{q}^2$ and $\tilde{\omega}_n$. Neglecting the $\tilde{\omega}_n$ - dependence, i.e. replacing $\lambda_\phi^{(F)}(\tilde{q},\tilde{\omega}_n) \to \lambda_\phi^{(F)}(\tilde{q}) \equiv\lambda_\phi^{(F)}(\tilde{q},\tilde{\omega}_n = 0)$, one can perform the Matsubara sum \begin{eqnarray}\label{BosonMassIntegral} m_\phi^{(B)\,2} &=& \int\frac{d^3\tilde{q}}{(2\pi)^3}\lambda_\phi^{(F)}(\tilde{q}) \coth \big( \frac{A_\phi \tilde{q}^2 + m_\phi^2}{2\tilde{T}}\big). \end{eqnarray} We have not yet computed the momentum dependence of $\lambda_\phi^{(F)}(\tilde{q})$ but a simple qualitative consideration of the relevant diagram shows that for large $\tilde{q}^2$ one has a fast decay $\lambda_\phi^{(F)}(\tilde{q}) \propto \tilde{q}^{-4}$. This makes the momentum integral (\ref{BosonMassIntegral}) ultraviolet finite. 
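For the reader's convenience we display the standard summation formula underlying this step: for a single bosonic mode with $E = A_\phi\tilde{q}^2 + m_\phi^2 > 0$ and $\tilde{\omega}_n = 2\pi n\tilde{T}$ one has \begin{eqnarray} \tilde{T}\sum\limits_{n} \frac{2E}{E^2 + \tilde{\omega}_n^2} = \coth\Big(\frac{E}{2\tilde{T}}\Big) = 1 + \frac{2}{\mathrm{e}^{E/\tilde{T}} - 1}. \end{eqnarray} The constant piece is responsible for the ultraviolet behavior of the momentum integral addressed below, while the second term involves the Bose occupation number.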
The integral (\ref{BosonMassIntegral}) is dominated by small values of $\tilde{q}^2$. For our purpose we consider a crude approximation where we replace $\lambda_\phi^{(F)}(\tilde{q}) \to \lambda_\phi^{(F)} \equiv \lambda_\phi^{(F)} (q=0)$. Of course, we now have to restrict the momentum integration to low momenta. This can be done efficiently by subtracting the leading UV divergence as in the computation of $n_M$, i.e. replacing $\coth x$ by $\coth x -1$. This procedure yields \begin{eqnarray}\label{BMIUV} m_\phi^{(B)\,2}&=& 2\lambda_\phi^{(F)} \int\frac{d^3\tilde{q}}{(2\pi)^3} \Big(\exp 2\alpha - 1\Big)^{-1}. \end{eqnarray} We recognize on the rhs of eq. (\ref{BMIUV}) the expression for the number density of dressed molecules and obtain the gap equation \begin{eqnarray}\label{MBU} m_\phi^2 &=& m_\phi^{(F)\,2} + \frac{\lambda_\phi^{(F)}\Omega_M}{3\pi^2} \end{eqnarray} where we recall that $\Omega_M$ depends on $m_\phi^2$. Our gap equation has a simple interpretation: The bosonic contribution to $m_\phi^2$ vanishes in the limit where only very few dressed molecules play a role ($\Omega_M\to 0$) or for vanishing coupling. We are aware that our treatment of the suppression of the high momentum contributions is somewhat crude. It accounts, however, for the relevant physics and a more reliable treatment would require a quite involved computation of $\lambda_\phi^{(F)}(\tilde{q},\tilde{\omega})$. This complication is an inherent problem of gap equations which often require the knowledge of effective couplings over a large momentum range. As an alternative method one may employ functional renormalization \cite{Tetradis} for the crossover problem \cite{Diehl:2007th}, where only the knowledge of couplings in a narrow momentum interval is required at every renormalization step. The fermionic contribution to the mass term $m_\phi^{(F)\,2}$ (cf. eq. (\ref{MFTMass})) reads explicitly \begin{eqnarray}\label{FermSYMMass} m_\phi^{(F)\,2} &=& (Z_\phi\epsilon_F)^{-1} \bar{m}_\phi^{(F)\,2} = \frac{\tilde{\nu} - 2\tilde{\sigma}}{Z_\phi} + \frac{\partial \tilde{u}_1^{(F)}}{\partial \rho}\\\nonumber &=& \frac{\tilde{\nu} - 2\tilde{\sigma}}{Z_\phi} - \frac{h_\phi^2}{4\tilde{T}} \int\frac{d^3\tilde{q}}{(2\pi)^3} \Big[\frac{1}{\gamma_\phi}\tanh\gamma_\phi -\frac{2\tilde{T}}{\tilde{q}^2}\Big]. \end{eqnarray} The expression in the last line has to be evaluated with $\gamma_\phi =\gamma$ in the symmetric phase. The r.h.s. of the gap equation (\ref{MBU}) involves the coupling $\lambda_\phi^{(F)}$ for the molecule-molecule interactions \begin{eqnarray}\label{147} \lambda_\phi^{(F)} &=& \frac{2Mk_F}{Z_\phi^2}\bar{\lambda}_\phi^{(F)} =\frac{\partial^2 \tilde{u}_1^{(F)}}{\partial \rho^2} \\\nonumber &=& \frac{h_\phi^4}{32\tilde{T}^3}\int\frac{d^3\tilde{q}}{(2\pi)^3} \big\{\gamma_\phi^{-3}\tanh\gamma_\phi - \gamma_\phi^{-2}\cosh^{-2}\gamma_\phi \big\}. \end{eqnarray} Again, in the symmetric phase $\lambda_\phi^{(F)}$ has to be evaluated at $\rhoR =0$, i.e. $\gamma_\phi =\gamma$. The molecule fluctuations give a positive contribution to $m_\phi^2$, opposite to the fermionic fluctuations. This has a simple interpretation. The fermion fluctuations induce a self-interaction between the molecules $\sim \lambda_\phi^{(F)}$. In turn, the fluctuations of the molecules behave similarly to those of interacting fundamental bosons and modify the two point function for the molecules. The quantum corrections from the fermionic and bosonic fluctuations to the inverse molecule propagator are represented graphically in fig. \ref{SDEGraphic}. 
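Since $\Omega_M$ on the right hand side of eq. (\ref{MBU}) itself depends on $m_\phi^2$, the gap equation can be closed by a simple fixed-point iteration. A minimal sketch (Python; the inputs $m_\phi^{(F)\,2}$, $\lambda_\phi^{(F)}$, $A_\phi$ and $\tilde{T}$ are illustrative placeholders and not the fermion loop results of eqs. (\ref{FermSYMMass}), (\ref{147})):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

A_phi, T, lam_F, mF2 = 0.5, 0.3, 5.0, -0.05  # illustrative inputs

def Omega_M(m2):
    # dressed molecule fraction Omega_M = 2*n_M/n, with q = |q|/k_F
    f = lambda q: q**2/np.expm1((A_phi*q**2 + m2)/T)
    return 3*quad(f, 0, 20)[0]

m2 = 0.1
for _ in range(200):       # damped fixed-point iteration of eq. (MBU)
    m2_new = mF2 + lam_F*Omega_M(m2)/(3*np.pi**2)
    m2_new = 0.5*(m2 + max(m2_new, 1e-12))
    if abs(m2_new - m2) < 1e-10:
        break
    m2 = m2_new
print(m2)   # the molecule fluctuations lift m^2 above mF2
\end{verbatim}
A positive solution $m_\phi^2>0$ corresponds to the symmetric phase; the iteration makes explicit how the positive bosonic contribution counteracts the fermionic lowering of the mass term.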
We emphasize that an additional microscopic molecule interaction could now easily be incorporated by adding to the mean field value for $\lambda_\phi^{(F)}$ a ``classical part'' $\lambda_\phi^{(cl)}$. In this case the renormalization of $\bar{\nu}_\Lambda$ discussed in sect. \ref{sec:renormalization} would be modified. For a constant $\lambda_\phi^{(cl)}$ the UV - divergent part would contribute to $m_\phi^{2}(T=0,n=0)$. One would again end up with a contribution of the form (\ref{BMIUV}), now with $\lambda_\phi^{(F)}$ replaced by $\lambda_\phi^{(cl)}+\lambda_\phi^{(F)}$. Our approximation (\ref{BMIUV}) therefore treats the interactions of dressed molecules similarly to those of fundamental interacting bosons. Actually, the Schwinger-Dyson equation for the molecule propagator (fig. \ref{SDEGraphic}) also describes the contribution of molecule fluctuations to the momentum dependent part encoded in $A_\phi$. In the limit of a momentum independent $\bar{\lambda}_\phi^{(F)}$ the contribution of the boson loop to $\bar{A}_\phi$ vanishes in the symmetric phase. \subsection{Superfluid phase} \begin{figure}[t!] \begin{minipage}{\linewidth} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(85,54) \put (0,0){ \makebox(80,49){ \begin{picture}(80,49) \put(0,0){\epsfxsize80mm \epsffile{GapTDepR2.eps}} \put(73,-2){$\tilde{T}$} \put(70,43){$\tilde{\Delta}$ } \put(20,20){$c^{-1} =0$} \put(58.7,28.5){$\bullet$} \put(66.3,27){$\bullet$} \put(58.7,17){$\bullet$} \end{picture} }} \end{picture} \end{center} \vspace*{-1.25ex} \caption{Temperature dependence of the gap $\tilde{\Delta}=\sqrt{\tilde{r}}$ at the resonance. The role of molecule fluctuations and the uncertainties in their treatment for $T\to T_c$ are demonstrated by four choices of $\lambda_\phi$ in the gap and density equation. The critical temperatures are indicated by vertical dashed lines, with values $\tilde{T}_c = 0.259, 0.292$. } \label{Tdependence} \end{minipage} \end{figure} In the superfluid phase we choose an expansion of $\tilde{u}$ around the minimum at \begin{eqnarray} \bar{\rho}_0 = r/\bar{h}_\phi^2=\frac{k_F^3\tilde{r}}{\tilde{h}_\phi^2},\quad \rhoR_0 = \tilde{r}/h_\phi^2, \end{eqnarray} namely \begin{eqnarray} \tilde{u} = \frac{1}{2}\lambda_\phi(\rhoR - \rhoR_0)^2 + ... \end{eqnarray} and define correspondingly \begin{eqnarray}\label{MLamNu} \hat{m}_\phi^{(F)\,2} &=& \frac{\partial \tilde{u}_{MFT}}{\partial\rhoR}\Big|_{\rhoR_0},\quad \lambda_\phi^{(F)} = \frac{\partial^2 \tilde{u}_{MFT}}{\partial\rhoR^2}\Big|_{\rhoR_0}. \end{eqnarray} Our truncation of $\tilde{u}_1^{(B)}$ approximates \begin{eqnarray} \tilde{u}_1^{(B)} = \hat{m}_\phi^{(B)\,2}(\rhoR - \rhoR_0) + \frac{1}{2}\lambda_\phi^{(B)} (\rhoR - \rhoR_0)^2 + ... \end{eqnarray} The location of the minimum $\rhoR_0$ is determined by the condition \begin{eqnarray}\label{Rho0Cond} \hat{m}_\phi^{(F)\,2} +\hat{m}_\phi^{(B)\,2} =0, \quad \hat{m}_\phi^{(B)\,2} = \frac{\partial \tilde{u}_1^{(B)}} {\partial\rhoR} \Big|_{\rhoR=\rhoR_0}. \end{eqnarray} This defines the gap equation for $\rhoR_0$, which is the equivalent of eq. (\ref{MBU}) for the superfluid phase. The computation of the bosonic contribution $\hat{m}_\phi^{(B)\, 2}$ encounters the same problems as for $m_\phi^{(B)\, 2}$ in the symmetric phase. Again we replace $\lambda_\phi^{(F)}(q)$ by a constant $\lambda_\phi^{(F)}$ evaluated for $q=0$ and subtract the leading UV divergence of the momentum integral. 
This results in \begin{eqnarray} \hat{m}_{\phi}^{(B)\,2} &=& 2\lambda_\phi^{(F)}\int\frac{d^3\tilde{q}}{(2\pi)^3} \frac{A_\phi\tilde{q}^2+ \lambda_\phi\rho_0/2}{\sqrt{A_\phi\tilde{q}^2(A_\phi\tilde{q}^2 +2\lambda_\phi\rho_0)}}\\ &&\times \big(\exp\big[\sqrt{A_\phi\tilde{q}^2(A_\phi\tilde{q}^2 +2\lambda_\phi\rho_0)}/\tilde{T}\big] -1 \big)^{-1}.\nonumber \end{eqnarray} In terms of the shorthands $\alpha, \kappa,\alpha_\phi$ (\ref{DefAlpha},\ref{DefKappa},\ref{DefAlphaPhi}) we arrive at the gap equation for $\rho_0$ \begin{eqnarray}\label{MBUB} \hat{m}_\phi^{(F)\,2} &+& 2\lambda_\phi^{(F)}\int\frac{d^3\tilde{q}}{(2\pi)^3} \frac{\alpha + \kappa/2}{\alpha_\phi}\big(\exp 2\alpha_\phi -1 \big)^{-1}= 0.\nonumber\\ \end{eqnarray} The quantity $\alpha_\phi$ contains a mass term $2\lambda_\phi\rho_0$ which involves the ``full'' vertex $\lambda_\phi = \lambda_\phi^{(F)} + \lambda_\phi^{(B)}$. We have computed the Schwinger-Dyson equation for $\lambda_\phi$ in app. \ref{app:SDE}. For zero momentum $\tilde{q}\to 0$ we find that $\lambda_\phi$ vanishes in the superfluid phase. This is, however, only part of the story, since the gap equation involves a momentum dependent vertex $\lambda_\phi(\tilde{q})$. In order to demonstrate the uncertainty arising from our lack of knowledge of $\lambda_\phi(\tilde{q})$ we present in fig. \ref{Tdependence} our results for different choices of $\lambda_\phi$ in the gap eq. (\ref{MBUB}). In detail, we show in fig. \ref{Tdependence} four approximation scenarios: (i) the ``standard'' BCS gap equation (long dashed) neglects the molecule fluctuations, i.e. $\lambda_\phi = \lambda_\phi^{(F)} =0$ in both the gap and the density equation. This yields a second order transition but disagrees with QMC results for $T\to 0$. (ii) Bogoliubov density (short dashed), with $\lambda_\phi^{(F)}$ in the density equation, while the molecule fluctuations in the gap equation are neglected. This improves the behavior for $T\to 0$, but induces a fake first order phase transition for $T\to T_c$. (iii) Neglect of molecule fluctuations in the effective coupling $\lambda_\phi$ (dashed-dotted), i.e. we use $\lambda_\phi = \lambda_\phi^{(F)}$ in the density and gap equation. (iv) Our best estimate (solid line) includes also corrections from molecule fluctuations for $\lambda_\phi$. As described in this section, we use $\lambda_\phi$ in the propagator of the diagram in the gap equation, whereas the coefficient multiplying the diagram is given by $\lambda_\phi^{(F)}$. (We use $\lambda_\phi^{(F)}$ in the density equation.) The first order nature is weaker than in (iii), but still present. In a recent functional renormalization group treatment we have established the second order nature of the phase transition throughout the crossover, indicating that we now control the universal long range physics governing the phase transition \cite{Diehl:2007th}. \subsection{Phase transition} For $T=T_c$ the gap equations (\ref{MBU}) and (\ref{MBUB}) match, since $\rho_0 = 0$, $\alpha_\phi=\alpha$, $\kappa=0$. Also the expression for the molecule density becomes particularly simple \begin{eqnarray}\label{UBDens} n_M = k_F^{3}\frac{\Gamma(3/2) \zeta(3/2)}{4\pi^2}\Big(\frac{\tilde{T}}{A_\phi}\Big)^{3/2}. \end{eqnarray} \section{Conclusions} \label{sec:conclusions} Our functional integral investigation of ultracold fermionic atoms strengthens the picture of a smooth crossover between Bose-Einstein condensation of molecules and BCS type superfluidity.
One and the same field $\phi$ can describe both molecules and collective atom excitations of the type of Cooper pairs. The two different pictures correspond to two regions in the space of microscopic couplings and external parameters ($T,n$). Depending on a concentration parameter $c$, which is related to the magnetic field $B$, one can continuously change from one to the other region. Away from the critical temperature the ``macroscopic observables'' are typically analytic in $c^{-1}$, despite the divergence of the scattering length for two-atom scattering at $c^{-1}=0$. Only for $T\to T_c$ does one expect to encounter the non-analytic critical behavior which is characteristic for a second order phase transition in the universality class of $O(2)$ - Heisenberg magnets. For small and moderate Yukawa couplings $\tilde{h}_\phi$ the physical picture of the crossover for $T>T_c$ is rather simple. Far on the BEC side the low temperature physics is dominated by tightly bound molecules. As the concentration increases, the molecule waves start to mix with collective di-atom states. More precisely, the molecule state with momentum $\vec{p}$ mixes with pairs of atoms with momenta $\vec{q}_1,\vec{q}_2$ that are correlated in momentum space such that $\vec{q}_1 + \vec{q}_2 = \vec{p}$ (Cooper pairs). On the BCS side the relevant state becomes dominantly a Cooper pair. Our formalism uses only one field $\phi$ for both the molecule and Cooper pair states. Nevertheless, the relative importance of the microscopic molecule versus the collective Cooper pair is reflected in the propagator of $\phi$, i.e. the wave function renormalization $Z_\phi$ and the gradient coefficient $\bar{A}_\phi$. On the BEC side the propagation of $\phi$ corresponds to free molecules and $\bar{A}_\phi$ takes the classical value $\bar{A}_\phi^{(cl)} = 1/4M$, and similarly for the wave function renormalization, $Z_\phi^{(cl)} =1$. The increasing mixing in the crossover region results in $\bar{A}_\phi$ and $Z_\phi$ growing larger than $\bar{A}_\phi^{(cl)},Z_\phi^{(cl)}$. Indeed, the diagram responsible for the increase of $\bar{A}_\phi,Z_\phi$ precisely corresponds to a molecule changing to a virtual atom pair and back. Finally, far in the BCS regime one has $\bar{A}_\phi \gg \bar{A}_\phi^{(cl)}, Z_\phi \gg Z_\phi^{(cl)}$ such that the classical contribution is negligible. The existence of a microscopic molecule is not crucial anymore. At long distances the physics loses memory of the microscopic details - the approximation of a pointlike effective interaction between atoms becomes valid. For a broad Feshbach resonance the situation is, in principle, similar. However, the region where $Z_\phi$ and $\bar{A}_\phi/\bar{A}_\phi^{(cl)}$ are large now covers both sides of the Feshbach resonance. The difference between the bare (microscopic) and dressed molecules is a crucial feature. For a broad Feshbach resonance the microscopic molecules are irrelevant for the whole crossover region. For this limit we have demonstrated in sect. \ref{sec:renconstZR} that our formalism is equivalent to a purely fermionic model with a pointlike interaction. This demonstrates that in the broad resonance limit, the use of the two channel model is rather a question of computational ease than a physical issue -- it describes the same physics as a single channel model. On the other hand, the two channel model is more general and also covers the case of narrow resonances. This issue is discussed in more detail in \cite{Diehl:2005an}.
It remains to be seen whether narrow resonances can also be investigated experimentally in the future. The universality of the low temperature physics and the crossover can be traced back to the ``renormalizable couplings'' of an effective non-relativistic quantum field theory for long distances. More precisely, the long distance physics will only depend on the relevant (or marginal) couplings in the infrared renormalization flow. These are precisely the dimensionless parameters $c$ and $h_\phi$. Improved approximations will influence the relation between microscopic atomic and molecular physics and $(c,h_\phi)$. At this point also ``subleading interactions'' like the $\sigma$-exchange or other interactions contributing to the background scattering length $a_{bg}$ will play a role. However, universality predicts that the relations between long-distance (``macroscopic'') quantities should become computable only in terms of the ``renormalizable couplings'' $m_\phi^2$ (or $c$) and $h_\phi$. In consequence, we find a universal phase diagram in terms of these parameters, where the ``memory'' of the microscopic physics only concerns the values of $c$ and $h_\phi$. In addition, the universal critical exponents and amplitude ratios for $T\to T_c$ will be independent of $c$ and $h_\phi$. Furthermore, the BCS limit is independent of $h_\phi$, since only one parameter ($c$) characterizes the effective pointlike atom interaction. Also the BEC limit does not depend on $h_\phi$, since the fluctuations of unbound atoms become irrelevant. On the other hand, the behavior in the crossover region can depend on $h_\phi$ as an important universal parameter. This demonstrates that a pointlike approximation for the effective atom interaction is not always applicable for small $|c^{-1}|$. The couplings $h_\phi$ and $\tilde{h}_\phi$ are related by the multiplicative factor $Z_\phi^{1/2}$. In the broad resonance limit $\tilde{h}_\phi\to \infty$ one finds that the renormalized coupling $h_\phi$ is given by an infrared fixed point. It therefore ceases to be an independent coupling. For a more general form of the microscopic action we expect that deviations from universality become most visible far in the BCS regime, where further channels may play a role for the effective interaction, as well as in the far BEC regime, where more details of the microscopic dispersion relation for the molecules and their microscopic interactions may become relevant. Going beyond our particular ansatz for the microscopic action, universality should actually work best in the crossover region of small $|c^{-1}|$. On the other hand, for a given microscopic action the results are most accurate in the BEC and BCS regimes where fluctuation effects are most easily controlled. It is precisely the presence of strong fluctuation or renormalization effects in the crossover regime that is responsible for the ``loss of microscopic memory'' and therefore for universality! The formulation as a Yukawa model solves the problems with a large effective scattering length for the atoms in the crossover region in a simple way. The large scattering length is simply due to the small ``mass term'' $\bar{m}_\phi^2$ of the molecule field. It does not require a large atom-molecule interaction $\propto \tilde{h}_\phi$.
As long as the dimensionless Yukawa or Feshbach coupling $\tilde{h}_\phi$ remains small, the region of small and vanishing $|c^{-1}|$ (large or diverging effective atom interaction) poses no particular problem. In the symmetric phase all quantities can be computed in a perturbative expansion in $\tilde{h}_\phi$. For $\tilde{h}_\phi \to 0$ a nontrivial exact limiting solution of the functional integral has been found \cite{Diehl:2005an}. Nevertheless, for large $\tilde{h}_\phi$ a new strong interaction appears and the bosonic fluctuations become important. Indeed, the crossover for a broad Feshbach resonance amounts to a theoretical challenge. One has to deal with a genuinely non-perturbative setting. In the present paper we have included the molecule fluctuations by the solution for Schwinger-Dyson or gap equations. For $T=0$ the results are quite satisfactory as shown by the comparison with quantum Monte Carlo simulations \cite{Carlson03} in fig. \ref{CrossoverSnGap}. However, as $T$ increases towards the critical temperature $T_c$ our method becomes less reliable since the details of the treatment of the molecule fluctuations play an increasing role, cf. fig. \ref{Tdependence}. The main problem results from the neglected momentum dependence of the four-boson-vertex $\lambda_\phi$. This is related to a general difficulty for Schwinger-Dyson equations: the momentum integrals involved require the knowledge of effective couplings in a large momentum range. A promising alternative may be the use of functional renormalization \cite{Tetradis}. At every renormalization step only a narrow momentum range plays a role and the momentum dependence of suitably chosen effective vertices is much less important. First results of a functional renormalization group treatment can be found in \cite{Diehl:2007th}. In this light the present paper should be viewed as a starting point for more accurate investigations with further functional integral techniques. It is well suited for systematic extensions, among which we would like to stress a more appropriate treatment of the momentum dependence of the couplings, an extended inclusion of the effect of boson fluctuations, and the modification of the fermion propagator by the renormalization effects originating from mixed fermion-boson contributions. It remains to be seen if theoretical improvements, together with a reduction of systematic experimental uncertainties, will finally lead to an understanding of ultracold fermionic atoms with high quantitative precision.\\\\ \textbf{Acknowledgement} \\\\ We would like to thank T. Gasenzer, H. Stoof and M. Zwierlein for useful discussions. \begin{appendix} \section{Partial bosonization and particle density} \label{sec:partial} In this appendix we extend the formulation for the functional integral in such a way that we are able to discuss situations which are (i) inhomogeneous and (ii) beyond the ``small density approximation'' (SDA). The treatment of inhomogeneities is particularly desirable in the context of ultracold gases which are prepared in traps of various shapes. In particular, we show how the ``local density approximation'' emerges from our formalism. Further we consider situations beyond small densities. The price to pay is the inclusion of a further functional integration over a now fluctuating field $\hat \sigma$. In the SDA, treating the chemical potential as a source term is appropriate, and we will quantify this statement here. We will start the discussion for a single fermion field. 
This describes a situation far off a Feshbach resonance, where molecules or other effective collective states are unimportant. Actually, the discussion in this appendix covers a very large class of fermionic systems with effective pointlike interactions. Besides ultracold fermionic atoms, it may be applied to neutrons (e.g. in neutron stars), dilute gases with short range interactions (e.g. dipole interactions) and also covers certain aspects of electron gases (where the Coulomb interaction is replaced by a pointlike repulsion). We then extend the discussion to the case of fermions and bosons in order to account for strong interactions via a Feshbach resonance. The concept presented here will also be technically useful for the analysis of strongly interacting ultracold atoms in the frame of the functional renormalization group. \subsection{Functional integral} \label{partial2} For an arbitrary fermionic theory with ``classical'' action $S_F[\hat\psi]$ we define the partition function as a functional of the local source $J(x)$ \begin{eqnarray}\label{1} Z_F[J]=\int {\cal D}\hat\psi \exp \Big\{-S_F[\hat\psi]+\int\hspace{-0.12cm} dx J(x)\hat\psi^\dagger(x)\hat\psi(x)\Big\}.\nonumber\\ \end{eqnarray} With $W_F[J]=\ln Z_F[J]$ the (relative) particle density becomes \begin{equation}\label{2} \bar{n}_\Lambda(x)=\langle\hat\psi^\dagger(x)\hat\psi(x)\rangle= \frac{\delta W_F}{\delta J(x)}. \end{equation} The physical particle density is related to $\bar{n}_\Lambda(x)$ by a constant shift that we have discussed in sect. \ref{sec:EffActMFT}. The source may be composed \footnote{The split between a constant part in $V_l$ and $\bar{\mu}_\Lambda$ is arbitrary. For interacting atoms in a homogeneous situation ($V_l\equiv 0$), $\bar{\mu}_\Lambda$ is related to the true chemical potential $\mu$ by a constant shift (see below). For electronic systems $V_l$ typically corresponds to an electrostatic potential.} of a bare chemical potential $\bar{\mu}_\Lambda$ and a local potential $V_l(x)$, \begin{equation}\label{3} J(x)=\bar{\mu}_\Lambda-V_l(x). \end{equation} For ultracold atoms the local potential $V_l(x)$ represents the trapping potential. In this section we develop a general functional integral approach for the computation of the density $n(x)$. For this purpose we introduce a field $\sigma(x)$ which is conjugate to $J(x)$. It will play the role of an effective chemical potential which differs from eq. (\ref{3}) unless the density is small. Our approach can be used for arbitrary $n$ and is not restricted to a small density approximation. The effects of density fluctuations can be incorporated by functional integration over $\sigma$ via partial bosonization. \subsection{Partial bosonization} \label{Partial3} Partial bosonization is achieved by inserting a Gaussian integral which equals one up to an irrelevant multiplicative constant (Hubbard-Stratonovich transformation \cite{Stratonovich}, \cite{Hubbard59}) \begin{eqnarray}\label{4} Z_F[J]&=&\int\hspace{-0.1cm} {\cal D}\hat\psi{\cal D} \hat{\sigma} \exp \Big\{\hspace{-0.1cm}-S_F[\hat\psi]+\hspace{-0.1cm}\int\hspace{-0.12cm} dx\Big[J(x)\hat\psi^\dagger(x)\hat\psi(x)\nonumber\\ &&-\frac{1}{2}m^2\Big(\hat{\sigma}(x) -J(x)-\frac{1}{m^2}\hat\psi^\dagger(x)\hat\psi(x)\Big)^2\Big]\Big\}.
\end{eqnarray} The partition function $Z_F[J]$ can now be computed from an equivalent functional integral which also involves bosonic fields $\hat{\sigma}(x)$ \begin{eqnarray}\label{5A} Z_F[J]&=&\int {\cal D}\hat\psi{\cal D}\hat{\sigma}\exp \Big\{-S_B[\hat{\sigma},\hat\psi] \nonumber\\ &&+m^2\int\hspace{-0.12cm} dx\Big(J(x)\hat{\sigma}(x)-\frac{1}{2}J^2(x)\Big)\Big\} \end{eqnarray} with \begin{equation}\label{5} S_B=S_F[\hat\psi]+\int\hspace{-0.12cm} dx\left\{\frac{1}{2}m^2\hat{\sigma}^2-\hat{\sigma}\hat\psi^\dagger\hat\psi +\frac{1}{2m^2}(\hat\psi^\dagger\hat\psi)^2\right\}. \end{equation} Here the expectation value $\sigma(x)=\langle\hat{\sigma}(x)\rangle$ is related to the density $\bar{n}_\Lambda(x)$ by \begin{eqnarray}\label{6} \bar{n}_\Lambda(x)&=&\frac{\delta W_F}{\delta J(x)}=m^2\Big(\sigma(x)-J(x)\Big),\nonumber\\ \sigma(x)&=&\langle\hat{\sigma}(x)\rangle=\frac{1}{m^2} \langle\hat\psi^\dagger(x)\hat\psi(x)\rangle+J(x). \end{eqnarray} After partial bosonization the new action $S_B$ contains a mass term for $\hat{\sigma}$, a Yukawa interaction $\sim\hat{\sigma}\hat\psi^\dagger\hat\psi$ and an additional four-fermion interaction $\sim(\hat\psi^\dagger\hat\psi)^2/(2m^2)$. The resulting explicit four-fermion vertex in $S_B$ therefore becomes \begin{eqnarray} \bar{\lambda}_\psi = \bar{\lambda} + \frac{1}{m^2}. \end{eqnarray} We can choose $m^2$ such that $\bar{\lambda}_\psi$ vanishes by requiring \begin{eqnarray} \label{BosonCond} \frac{1}{m^2} = -\bar{\lambda}. \end{eqnarray} The limit of non-interacting fermions is then obtained for $m^2\rightarrow\infty$. We note that such a cancellation is possible only for $\bar{\lambda} <0$. It is not compulsory for our formalism and we will not always assume the relation (\ref{BosonCond}) in the following. The partition function $Z_F$ (\ref{5A}),(\ref{5}) is closely related to the standard formulation of a scalar-fermion model with \begin{equation}\label{7} Z_B=\int {\cal D}\hat\psi {\cal D}\hat{\sigma}\exp \Big\{-S_B[\hat{\sigma},\hat\psi]+\int\hspace{-0.12cm} dxj(x)\hat{\sigma}(x)\Big\} \end{equation} by $(W_B=\ln Z_B)$ \begin{equation}\label{8} j(x)=m^2J(x), \hspace{0.2cm} W_B=W_F+ \frac{m^2}{2}\int\hspace{-0.12cm} dx J^2(x). \end{equation} In this formulation we may define the (one particle irreducible) effective action $\Gamma$ by the usual Legendre transformation \begin{equation}\label{9} \Gamma=-W_B+\int\hspace{-0.12cm} dxj(x)\sigma(x), \hspace{0.2cm}\sigma(x)= \frac{\delta W_B}{\delta j(x)}. \end{equation} For a given source $j(x)$ the expectation value $\sigma(x)$ obeys the field equation \begin{equation}\label{10} \frac{\delta\Gamma}{\delta\sigma(x)}=j(x). \end{equation} The effective action may be decomposed into a classical part $\Gamma_{cl}[\sigma]=S_B[\hat{\sigma}=\sigma,~\psi=0]$ and a quantum part $\Gamma_q$ \begin{equation}\label{11} \Gamma=\Gamma_{cl}+\Gamma_q= \frac{m^2}{2}\int\hspace{-0.12cm} dx \sigma^2(x)+\Gamma_q[\sigma]. \end{equation} We next turn to the relative particle density $\bar{n}_\Lambda(x)$. It can be computed from $\Gamma$ by decomposing the field equation \begin{eqnarray}\label{12} \sigma(x)&=&J(x)+\sigma_1(x),\nonumber\\ \frac{\delta\Gamma}{\delta\sigma(x)} &=&m^2\sigma(x)+\frac{\delta\Gamma_q}{\delta\sigma(x)}=m^2J(x) \end{eqnarray} as \begin{equation}\label{13} \bar{n}_\Lambda(x)=m^2\sigma_1(x)=- \frac{\delta\Gamma_q}{\delta\sigma(x)} \Big(J(x)+\sigma_1(x)\Big).
\end{equation} This establishes the exact general relation between $\bar{n}_\Lambda(x)$ and the $\sigma$ - functional derivative of the quantum part of the effective action. The limit $|\sigma_1|\ll|J|$ corresponds (for fixed $J(x)$) to small densities or small interactions $(m^2\rightarrow \infty)$ according to \begin{equation}\label{14} |\bar{n}_\Lambda(x)|\ll m^2|J(x)|. \end{equation} In this limit we may expand \begin{equation}\label{15} \frac{\delta\Gamma_q}{\delta\sigma(x)} \Big(J(x)+\sigma_1(x)\Big) =-n_0(x)+b(x)\sigma_1(x)+\dots \end{equation} with \begin{equation}\label{16} n_0(x)=- \frac{\delta\Gamma_q}{\delta\sigma(x)}\Big|_{J(x)}~,~b(x)= \frac{\delta^2\Gamma_q}{\delta\sigma(x)^2}\Big|_{J(x)}. \end{equation} This yields \begin{equation}\label{17} \bar{n}_\Lambda(x)= \frac{n_0(x)}{1+b(x)/m^2}. \end{equation} We emphasize, however, that eq. (\ref{13}) remains valid for arbitrary densities. The explicit computation of $\bar{n}_\Lambda(x)$ needs an evaluation of $\Gamma_q$. In mean field theory the fluctuation part $\Gamma_q$ is estimated by including only the fermionic fluctuations in the functional integral (\ref{7}), while keeping $\hat{\sigma}(x)=\sigma(x)$ as a fixed ``background''. In this scheme, the generalization of eq. (\ref{BarNFMFT}) for the relative particle density reads, according to eq. (\ref{13}), \begin{equation}\label{23} \int d^3x\bar{n}_\Lambda(x)=-\int d^3x\frac{\delta\Gamma_q}{\delta\sigma(x)} =-\frac{1}{2}\tilde{\text{Tr}}\tanh \left(\frac{\bar{P}_F-\sigma}{2T}\right) \end{equation} where the remaining trace $\tilde{\text{Tr}}$ is over three dimensional phase space and spinor indices $\alpha$ - in momentum space it reads $\tilde{\text{Tr}} \hat{=}V_3\int \frac{d^3q}{(2\pi)^3}\sum_\alpha$, with $V_3$ the volume of (three-dimensional) space. At this point the construction from sect. \ref{sec:renormalization} can be performed, replacing the relative particle number density ($\bar n_\Lambda$) by the physical one, $\bar n_\Lambda + \hat n$. This yields \begin{equation}\label{26} N=\int d^3x(\bar{n}_\Lambda+\hat{n})=\tilde{\text{Tr}} \Big(\exp\Big(\frac{\bar{P}_F-\sigma}{T}\Big)+1\Big)^{-1}. \end{equation} This formula has a simple interpretation. The trace $\tilde{\text{Tr}}$ over an operator $\hat{A}$ can be evaluated in a complete orthonormal system of complex functions $f_m(x)$, \begin{eqnarray}\label{27} &&\int d^3xf^*_{m'}(x)f_m(x)=\delta_{m'm},\nonumber\\ &&\int d^3xf^*_{m'}(x)\hat{A}f_m(x)=A_{m'm},\nonumber\\ &&\tilde{\text{Tr}}\hat{A}=\sum_mA_{mm}. \end{eqnarray} With $\sigma(x)=\mu-\hat{V}(x)$ (in analogy to eq. (\ref{3})) we can define the Hamilton operator $\hat{H}=\bar{P}_F+\hat{V}$ (with $\bar{P}_F=-\Delta/(2M)$ for nonrelativistic atoms). Choosing the $f_m$ to be eigenfunctions of the Hamiltonian with eigenvalues $E_m$, eq. (\ref{26}) becomes \begin{equation}\label{28} N=\sum_m\left(\exp \left(\frac{E_m-\mu}{T}\right)+1\right)^{-1} \end{equation} and we recognize the well known fermionic occupation number. In the limit of noninteracting atoms ($m^2\rightarrow\infty)$ we can cut the Taylor expansion in eq. (\ref{15}) at the lowest term, $\bar{n}_\Lambda=n_0$. In consequence, $\delta\Gamma_q/\delta\sigma$ is evaluated with the local potential $V_l$ in eq. (\ref{3}). In this limit one has $\sigma = \mu - V_l$ and $\hat{V}$ therefore equals the trap potential $V_l$. As it should, the Hamiltonian $\hat{H}$ reduces to the quantum Hamiltonian of a single atom in a potential, and the density becomes independent of $m^2$.
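Eq. (\ref{28}) is straightforward to evaluate once a single-particle spectrum is specified. Since no trap geometry is fixed at this point, the following minimal sketch assumes, purely for illustration, an isotropic harmonic trap with $E_n=\omega(n+3/2)$ and degeneracy $(n+1)(n+2)/2$ per spin state, and inverts $N(\mu)$ for the chemical potential:
\begin{verbatim}
# Sketch of eq. (28): particle number from Fermi occupation numbers,
# for an assumed isotropic harmonic trap (illustrative choice only).
import numpy as np
from scipy.optimize import brentq

omega, T, g_spin = 1.0, 0.5, 2          # trap frequency, temperature, spin

def particle_number(mu, n_max=400):
    n = np.arange(n_max)
    E = omega * (n + 1.5)
    deg = g_spin * (n + 1) * (n + 2) / 2
    # 1/(exp(x)+1) written via tanh to avoid overflow at large E
    return np.sum(deg * 0.5 * (1.0 - np.tanh((E - mu) / (2.0 * T))))

N_target = 1.0e4
mu = brentq(lambda m: particle_number(m) - N_target, 0.0, 200.0)
E_F = omega * (6.0 * N_target / g_spin)**(1.0 / 3.0)  # T = 0 estimate
print(f"mu(T={T}) = {mu:.3f},  T=0 Fermi energy = {E_F:.3f}")
\end{verbatim}
In the noninteracting limit discussed above the potential entering the $E_m$ is simply the trap potential $V_l$, so this inversion directly yields $\mu(N,T)$ for trapped ideal fermions.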
\subsection{Universal field equation for the density} \label{universalfieldequation} The crucial advantage of our formalism is the possibility to compute the effective action $\Gamma[\sigma]$ without any specification of the local potential (trapping potential) $V_l(x)$. The detailed geometry of the trap only enters at a second step when one solves the field equation (\ref{10}). This offers the great advantage that the fluctuation problem can be solved in a homogeneous setting, i.e. one encounters standard momentum integrals in the loop expressions rather than summations over spherical harmonics or other function systems adapted to the geometry of the trap. The results are universal and apply to all geometries, provided the inhomogeneity is sufficiently weak. For $\sigma(\vec{x})$ mildly varying in space and independent of $\tau$ one may use a derivative expansion \begin{equation}\label{29} \Gamma[\sigma]=\int\hspace{-0.12cm} dx\left\{U_\sigma(\sigma)+ \frac{1}{2}A_\sigma(\sigma)\vec{\nabla}\sigma\vec{\nabla}\sigma+\dots\right\}. \end{equation} We emphasize that the derivative expansion always applies for situations close enough to homogeneity. The following results are therefore valid independently of how $\Gamma[\sigma]$ is computed - in particular, they do not rely on MFT. However, for definiteness, let us give the result for $A_\sigma$ in the latter scheme ($U(\sigma)$ can be read off from eq. (\ref{USigmaPhi}) for $\gamma_\phi=\gamma$). For this purpose, we proceed along the lines in sect. \ref{GradCoeff} and extract from $\Gamma_{q,MFT}$ the term quadratic in $\delta\sigma_Q$ \begin{eqnarray}\label{40} \Gamma^{(2)}_{q,MFT}&=&\frac{V_3}{T}\delta\sigma^*_Q P_\sigma(Q)\delta\sigma_Q~,\\\nonumber P_\sigma(Q)&=& \int\limits_{Q'}\Big\{\frac{1}{(P_F(Q')-\sigma)(P_F(Q'+Q)-\sigma)}\\\nonumber && + \frac{1}{(P_F(Q')-\sigma)(P_F(Q'-Q)-\sigma)}\Big\} \label{41} \end{eqnarray} with $\int_{Q'} = \sum_{n'} T \int \frac{d^3q'}{(2\pi)^3}$. Inserting the background $\sigma(X)= \sigma+\exp(iQX)\delta\sigma_Q + \exp(-iQX)\delta\sigma^*_Q$ into eq. (\ref{29}) yields \begin{eqnarray}\label{42} &&A_\sigma(\sigma)=\lim\limits_{q^2\rightarrow 0} \hspace{0.5cm}\frac{\partial}{\partial q^2}P_\sigma(0,q)\\\nonumber &&=\frac{1}{4MT^2}\int\frac{d^3q}{(2\pi)^3}\left\{\frac{ \tanh~\gamma}{ \cosh ^2\gamma}-\frac{q^2}{9MT} \frac{2 \cosh ^2\gamma-3}{ \cosh ^4\gamma}\right\}. \end{eqnarray} We find that $A_\sigma(\sigma)$ is ultraviolet finite. The zero temperature limit of this expression reads \footnote{The prefactor has been determined numerically with an error of less than $0.3\%.$} (for $\sigma >0$) \begin{eqnarray}\label{Zsigma0} A_\sigma = \frac{1}{12\pi^2}\sqrt{\frac{2M}{\sigma}}, \end{eqnarray} and diverges for $\sigma\to 0$. \subsubsection{Density field equation} We are interested in time-independent situations and therefore consider $\sigma(x) = \sigma(\vec{x})$ independent of $\tau$. Variation with respect to $\sigma$ yields the stationary field equation (with $U_\sigma'=\partial U_\sigma/\partial\sigma$ etc.) \begin{equation}\label{30} U_\sigma'(\sigma)-A_\sigma(\sigma)\triangle\sigma- \frac{1}{2}A'_\sigma(\sigma)\vec{\nabla}\sigma\vec{\nabla}\sigma=m^2(\bar{\mu}_\Lambda-V_l) \end{equation} where $\sigma$ and $V_l$ depend on $\vec{x}$.
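Before turning to the solution of eq. (\ref{30}), we note that the numerically determined prefactor in the limit (\ref{Zsigma0}) can be cross-checked directly by evaluating eq. (\ref{42}) at decreasing temperatures. A minimal sketch, assuming the symmetric-phase form $\gamma=(q^2/(2M)-\sigma)/(2T)$ and the illustrative units $2M=\sigma=1$:
\begin{verbatim}
# Cross-check of the T -> 0 limit (Zsigma0) of A_sigma, eq. (42).
# Assumption: gamma = (q^2/(2M) - sigma)/(2T); units 2M = sigma = 1.
import numpy as np
from scipy.integrate import quad

M, sigma = 0.5, 1.0

def sech(x):  # overflow-safe 1/cosh
    ax = np.abs(x)
    return 2.0 * np.exp(-ax) / (1.0 + np.exp(-2.0 * ax))

def A_sigma(T):
    def integrand(q):
        g = (q**2 / (2.0 * M) - sigma) / (2.0 * T)
        # (2 cosh^2 g - 3)/cosh^4 g rewritten as 2 sech^2 g - 3 sech^4 g
        return q**2 * (np.tanh(g) * sech(g)**2
                       - q**2 / (9.0 * M * T)
                         * (2.0 * sech(g)**2 - 3.0 * sech(g)**4))
    val, _ = quad(integrand, 0.0, 6.0,
                  points=[np.sqrt(2.0 * M * sigma)], limit=500)
    return val / (4.0 * M * T**2) / (2.0 * np.pi**2)

target = np.sqrt(2.0 * M / sigma) / (12.0 * np.pi**2)
for T in (0.05, 0.02, 0.01):
    print(f"T = {T:5.3f}: A_sigma = {A_sigma(T):.6f} (limit {target:.6f})")
\end{verbatim}
For decreasing $T$ the output should approach $(1/12\pi^2)\sqrt{2M/\sigma}$, in line with the accuracy quoted in the footnote.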
Once $\sigma(\vec{x})$ is found one can compute the particle density from \begin{eqnarray}\label{31} n&=&-U'+A_\sigma\Delta\sigma+\frac{1}{2} A'_\sigma\vec{\nabla}\sigma\vec{\nabla}\sigma\nonumber\\ &=&m^2\Big(\sigma-\bar{\mu}_\Lambda+V_l+\frac{\hat{n}}{m^2}\Big),\\ U&=&U_\sigma-\frac{1}{2}m^2\sigma^2-\hat{n}\sigma. \end{eqnarray} Here we have chosen the definition of $U$ such that the physical particle density from eq. (\ref{25}) is reproduced in eq. (\ref{31}). In the second line of eq. (\ref{31}) appears a shifted or ``additively renormalized chemical potential'', \begin{equation}\label{33} \mu=\bar{\mu}_\Lambda-\frac{\hat{n}}{m^2}. \end{equation} This combination is the true ``physical'' chemical potential as can be seen along the lines of eqs. (\ref{24},\ref{25}). Indeed, the interaction term in the operator language $\propto (a^\dagger (x) a (x))^2$ is translated to the interaction term in the functional integral (\ref{1}) for the microscopic action by the use of \begin{eqnarray}\label{33X} \frac{1}{2m^2}\left(a^\dagger(x)a(x)\right)^2 \rightarrow \frac{1}{2m^2} \left(\hat\psi^\dagger(x)\hat\psi(x) + \hat{n}\right)^2. \end{eqnarray} The term $(\hat{n}/m^2)\hat\psi^\dagger(x)\hat\psi(x)$ precisely shifts $\mu$ to $\bar{\mu}_\Lambda$ in the functional integral (\ref{1}). A detailed account for a similar ``chemical potential shift'' for bosonic atoms can be found in \cite{Gollisch01}. Hence the second part of eq. (\ref{31}) yields a linear relation between the physical particle density $n$ and $\sigma$, \begin{equation} \sigma=\mu-V_l+\frac{n}{m^2}.\label{34.2} \end{equation} This will be our central equation relating the density $n$, the effective chemical potential $\sigma$, the chemical potential $\mu$ and the local trap potential $V_l$. We observe that a great part of the ``ultraviolet divergencies'' (for $\Lambda \to \infty$) present in the functional integral description is related to $\bar{\mu}_\Lambda$ and $\hat{n}$. These divergencies do not appear if the ``physical quantities'' $\mu$ and $n(x)$ are used. Eq. (\ref{34.2}) can now be used to eliminate $\sigma$ in favor of the physical particle density. Insertion into the first eq. (\ref{30}) yields the central field equation for the density in an inhomogeneous situation, \begin{eqnarray}\label{34} &&n-\frac{1}{m^2}A_\sigma\left(\mu-V_l + \frac{n}{m^2}\right)\triangle n\nonumber\\ &&\qquad\quad+\frac{1}{m^2}A'_\sigma(\mu-V_l + \frac{n}{m^2}) \left(\vec{\nabla}V_l-\frac{1}{2m^2}\vec{\nabla}n\right)\vec{\nabla}n\nonumber\\ &&\qquad=-U'(\mu-V_l + \frac{n}{m^2}) -A_\sigma(\mu-V_l + \frac{n}{m^2})\Delta V_l\nonumber\\ &&\qquad\quad+\frac{1}{2}A'_\sigma(\mu-V_l + \frac{n}{m^2})\vec{\nabla}V_l\vec{\nabla}V_l. \end{eqnarray} Once the functions $U'(\sigma)$ and $A_\sigma(\sigma)$ have been computed this equation permits the determination of $n(\vec{x})$ as a function of $\mu$. For its solution the boundary conditions of $n(\vec{x})$ have to be specified appropriately - for trapped atoms $n(\vec{x})$ has to vanish away from the trap. In practice, one often does not know $\mu$ in a given experimental situation, but rather has information about the total particle number $N=\int d^3xn(x)$. In this case one may compute $N(\mu)$ by integrating the solution of eq. (\ref{34}). Our setting can be reformulated in a more intuitive form in terms of a ``density functional'' $\hat{\Gamma}[n]$ to which we proceed in the next section. 
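In the homogeneous limit all gradient terms in eq. (\ref{34}) drop out and the density follows from the self-consistency relation $n=-U'(\mu+n/m^2)$ discussed in app. \ref{sec:Meta}. A minimal sketch, using the $T=0$ free-fermion form of $U'$ from app. \ref{sec:Meta} and illustrative parameters:
\begin{verbatim}
# Homogeneous limit of eq. (34): n = -U'(mu + n/m^2), solved by damped
# fixed-point iteration.  T = 0 free-fermion form
# -U'(sigma) = (2 M sigma)^(3/2)/(3 pi^2); illustrative parameters.
import numpy as np

M = 0.5

def n_of_sigma(sigma):
    return (2.0 * M * max(sigma, 0.0))**1.5 / (3.0 * np.pi**2)

def solve_density(mu, m2, mixing=0.5, tol=1e-12):
    n = 0.0
    for _ in range(10000):
        n_new = n_of_sigma(mu + n / m2)
        if abs(n_new - n) < tol:
            return n_new
        n = (1.0 - mixing) * n + mixing * n_new
    raise RuntimeError("no convergence -- beyond the metastable branch?")

mu, m2 = 0.1, 1.0
kappa = 9.0 * np.pi**4 * m2**2 / (8.0 * M**3 * mu)
print(f"kappa = {kappa:.0f} (dilute branch requires kappa > 27/4)")
print(f"n = {solve_density(mu, m2):.6f} (ideal gas: {n_of_sigma(mu):.6f})")
# merging of the stable and unstable branches at kappa_c = 27/4:
print("roots at kappa_c:", np.roots([1.0, 3.0 - 27.0 / 4.0, 3.0, 1.0]))
\end{verbatim}
The last line confirms that the cubic $(y+1)^3-\kappa y^2=0$ of app. \ref{sec:Meta} develops a double root at $y_c=2$ for $\kappa_c=27/4$, with the third root negative.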
\subsubsection{Density Functional} \label{app:DensFunc} We can write the effective action in a more intuitive form as a functional of the particle density. It is convenient to add the source explicitly such that the field equation becomes \begin{eqnarray} \label{FieldEqGeneral} \frac{\delta \hat{\Gamma}}{\delta n(x)} = 0. \end{eqnarray} Using eq. (\ref{34.2}) one has \begin{eqnarray} \label{FreeEnergy} \hat{\Gamma}[n] &=& \Gamma[\sigma] + \int\hspace{-0.12cm} dx [m^2(V_l - \mu) - \hat{n}]\sigma \\\nonumber &=& \int\hspace{-0.12cm} dx\left\{U( \frac{n}{m^2}+\mu-V_l )+\frac{n^2}{2m^2} \right.\\\nonumber && \quad\left. +\frac{1}{2}A_\sigma(\frac{n}{m^2}+\mu-V_l )\left(\vec{\nabla}(\frac{n}{m^2} - V_l)\right)^2 +...\right\}, \end{eqnarray} where again a density independent term has been dropped. The explicit form of eq. (\ref{FieldEqGeneral}) is given by eq. (\ref{34}). In particular, a homogeneous situation is governed by the effective potential \begin{eqnarray} U_n(n)= \frac{n^2}{2m^2} + U(n). \end{eqnarray} In this case we can use $n$ as the independent variable, replacing $\mu$. In turn, the chemical potential $\mu (n)$ follows from the minimum condition $\partial U_n/\partial n =0$. In practice, one first solves for $\sigma (n)$ by inverting \begin{eqnarray} \label{DensSelfCons} U'(\sigma) = - n. \end{eqnarray} In MFT this step does not depend on $m^2$. The interaction strength $m^{-2}$ only enters the determination of $\mu$ through eq. (\ref{34.2}). Similarly, in the local density approximation we can trade $\mu$ for the density at a given reference location $x_0$, $n_0=n(x_0)$. Using again eq. (\ref{DensSelfCons}), the computation of the density profile employs \begin{eqnarray}\label{SigmaDensRelat} \sigma(x)= \sigma (n_0) + V_l(x_0) - V_l(x) + \frac{n(x)-n_0}{m^2}. \end{eqnarray} For a given central density $n_0$ (e.g. $x_0=0$) and given trap potential $V_l(x) - V_l(0)$ we can now evaluate $U'$ as a function of $n$ and determine $n(x)$ from $U'(\sigma (n))=-n$. On the other hand, if $n(x)$ is known we may use a functional of the variable $\sigma$ for fixed $n(x)$. Using \begin{eqnarray} \frac{\delta\Gamma}{\delta\sigma}= m^2\sigma - n + \hat{n} \end{eqnarray} we may define \begin{eqnarray} \bar{\Gamma} &=& \Gamma + \int dx \Big\{ \big(n-\hat{n}\big) \sigma - \frac{m^2}{2} \sigma^2\Big\},\nonumber\\ \frac{\delta\bar{\Gamma}}{\delta\sigma}&=&0. \end{eqnarray} Here $n(x)$ is considered as a fixed function. The corresponding potential \begin{eqnarray} \bar{U} = U_\sigma - \frac{m^2}{2}\sigma^2 + \big(n-\hat{n}\big) \sigma = U + n\sigma \end{eqnarray} is particularly useful for a homogeneous situation where $n$ is a fixed constant. The solution for $\sigma (n)$ corresponds to the minimum of $\bar{U}$. \subsubsection{Small and local density approximations} In the \emph{small density approximation} (SDA) $\sigma$ is given by \begin{eqnarray} \sigma= \mu - V_l(x). \end{eqnarray} We can now specify the validity of the small density limit more precisely and replace the condition (\ref{14}) by \begin{equation}\label{35} n\ll m^2|\mu-V_l|. \end{equation} In lowest order in the small density approximation one obtains the density of non-interacting fermions in a local potential as \begin{eqnarray}\label{36} n&=&-U'(\mu-V_l) -A_\sigma(\mu-V_l)\Delta V_l\nonumber\\ &&+\frac{1}{2}A'_\sigma(\mu-V_l) \vec{\nabla}V_l \vec{\nabla}V_l. \end{eqnarray} We emphasize that eq. 
(\ref{34}) and its low density limit (\ref{36}) are rather universal formulae which determine the density of fermions in a local potential. Their possible applications reach far beyond the particular problem of ultracold trapped atoms. Another useful approximation, the \emph{local density approximation} (LDA), is obtained by setting $A_\sigma = 0$ in eq. (\ref{34}), \begin{eqnarray}\label{LDAEq} n&=&-U'(\mu_l + \frac{n}{m^2}), \end{eqnarray} where we define the ``local chemical potential'' \begin{eqnarray} \mu_l(x) = \mu - V_l(x). \end{eqnarray} Its validity does not require a small density, but rather that the derivative terms in eq. (\ref{34}) (or (\ref{29})) are small compared to $U'$. For a given size of $A_\sigma$ this always applies for a sufficiently homogeneous trap. Indeed, the local character of eq. (\ref{LDAEq}) guarantees that weak changes in $\mu_l(x)$ result in weak changes of $n(x)$. The error of the LDA can now easily be estimated by an iterative procedure. Inserting the solution of eq. (\ref{LDAEq}) into eq. (\ref{34}) allows for a simple direct computation of the subsequent terms. Obviously, the error depends on the size of $A_\sigma$, which is often a rather small quantity (see below). If the density is sufficiently small and the trap is sufficiently homogeneous, one can work with the \emph{small local density approximation} (SLDA). This results in the simple formulae \begin{equation} \label{nUeq} n = - \frac{\partial U}{\partial\sigma} \end{equation} and \begin{equation}\label{SigMuEq} \sigma = \mu_l. \end{equation} In this approximation we can regard $\sigma(x)$ as a fixed external parameter and compute $n(x)$ by eq. (\ref{nUeq}). We will find that for realistic ultracold atom systems like $^6\mathrm{Li}$ the LDA is valid, whereas eqs. (\ref{nUeq}), (\ref{SigMuEq}) are oversimplifications (except for the BEC regime far away from the Feshbach resonance). We emphasize again that our method is valid quite generally and not bound to the case of small density. If necessary, mean field theory can be improved by including the bosonic fluctuations in the computation of $U$ and $A_\sigma$ by performing the functional integral over $\hat{\sigma}$. Our method can also be applied in the presence of additional condensate fields. As a simple application we compute in appendix \ref{sec:Meta} the density at low $T$ in the mean field approximation. In particular, this shows that the dilute gas of ultracold atoms is a metastable state, the ground state being a liquid or solid. \subsection{Fermions and Bosons} We can proceed in complete analogy to sects. \ref{partial2}, \ref{Partial3} with $Z_F$ replaced by \begin{eqnarray}\label{ZFM} Z_{FM}[J]&=&\int\hspace{-0.1cm} {\cal D}\hat\psi{\cal D} \hat{\phi} \exp \Big\{\hspace{-0.1cm}-S_{FM}[\hat\psi,\hat{\phi}]\\\nonumber &&+\hspace{-0.1cm}\int\hspace{-0.12cm} dx\Big[J(x)\big(\hat\psi^\dagger(x)\hat\psi(x)+2\hat{\phi}^*(x)\hat{\phi}(x)\big)\Big]\Big\}, \end{eqnarray} and \begin{equation} \frac{\delta \ln Z_{FM}}{\delta J(x)}=n(x),\quad J(x) = \mu +(1-\tilde{\beta})\frac{\hat{n}}{m^2}-V_l(x).
\end{equation} The classical action now reads \begin{eqnarray} S_{FM}\hspace{-0.12cm} &=&\hspace{-0.12cm}\int \hspace{-0.12cm} dx \Big[\hat\psi^\dagger\big(\partial_{\tau} -\frac{\triangle}{2M} \big)\hat\psi- \frac{1}{2}\big(\frac{1}{m^2} - \bar{\lambda}_\psi\big)\big( \hat\psi^\dagger\hat\psi\big)^2\nonumber\\ &&\hspace{-1.5cm}+\hat{\phi}^*\Big(\hspace{-0.05cm}\partial_\tau\hspace{-0.12cm} -\hspace{-0.05cm}\frac{\triangle}{4M}\hspace{-0.05cm} +\hspace{-0.05cm} \bar{\nu}_\Lambda +\frac{2\hat{n}}{m^2}\big(1-2\tilde{\beta}+\beta\big) + V_M(x) -2V_l(x)\Big)\hat{\phi} \nonumber\\ &&\hspace{-1.5cm}-\frac{2\beta}{m^2}\big(\hat{\phi}^*\hat{\phi}\big)^2 -\frac{2\tilde{\beta}}{m^2}\big(\hat\psi^\dagger\hat\psi\big)\hat{\phi}^*\hat{\phi} -\frac{\bar{h}_\phi}{2}\big(\hat{\phi}^*\hat\psi^T\epsilon\hat\psi - \hat{\phi}\hat\psi^\dagger\epsilon\hat\psi^*\big) \Big]\nonumber\\ \end{eqnarray} and the partition function describes the coupled system of atoms in a local potential $V_l$ and molecules in a ``molecule potential'' $V_M$. (The term $\propto -2V_l\hat{\phi}^*\hat{\phi}$ in $S_{FM}$ cancels the corresponding term from $-2J\hat{\phi}^*\hat{\phi}$.) The atoms are coupled to the molecules by the Yukawa coupling $\bar{h}_\phi$. Again, the bare parameters $\bar{\nu}_\Lambda$, $\bar{h}_\phi$ and $m^2$ have to be fixed by appropriate observable ``renormalized'' parameters. We have also included a possible local self-interaction of the molecules $\sim 2\beta/m^2$ and between free atoms and molecules $\sim 2\tilde{\beta}/m^2$. We concentrate on $V_M= 2V_l$ and $\beta=\tilde{\beta}=1$. This simply counts the molecules as two atoms as far as the local interactions are concerned, e.g. the local self-interaction is $\sim (\bar{n}_F+ 2\bar{n}_M)^2$ and the energy in the trap potential is $\sim \bar{n}_F + 2\bar{n}_M$. We note that in this particular case $\hat{n}$ drops out. (The corrections to $\mu$ are computed similarly to eqs. (\ref{33},\ref{33X}).) Replacing in the Hubbard-Stratonovich transformation (\ref{4}) $\hat\psi^\dagger\hat\psi\to \psi^\dagger\psi + 2\hat{\phi}^* \hat{\phi}$, all steps in sec. \ref{universalfieldequation} can be performed in complete analogy, with the only difference that $Z_B$ in (\ref{7}) now also involves an integration over $\hat{\phi}$ and an appropriate source for $\hat{\phi}$, and $S_B$ reads (for $m^{-2} = \bar{\lambda}_\psi$) \begin{eqnarray}\label{YukawaAction} S_B\hspace{-0.15cm}&=&\hspace{-0.15cm}\int \hspace{-0.12cm} dx \Big[\psi^\dagger\big(\partial_{\tau} -\frac{\triangle}{2M} -\hat{\sigma}\big)\psi\nonumber\\ &&+\hat{\phi}^*\big(\partial_\tau -\frac{\triangle}{4M} + \bar{\nu}_\Lambda - 2\hat{\sigma}\big)\hat{\phi} \nonumber\\ &&\hspace{-0.12cm}-\frac{\bar{h}_\phi}{2}\Big(\hat{\phi}^*\psi^T\epsilon\psi - \hat{\phi}\psi^\dagger\epsilon\psi^*\Big) + \frac{m^2}{2}\hat{\sigma}^2\Big]. \end{eqnarray} This is a simple model for fermions with Yukawa coupling to scalar fields $\hat{\phi}$ and $\hat{\sigma}$. \subsubsection{Effective action} The effective action $\Gamma [\sigma,\bar{\phi}]$ is again obtained by a Legendre transform similar to eq. (\ref{9}) and now obeys \begin{eqnarray} \frac{\delta\Gamma}{\delta\sigma}= m^2\mu_l = m^2\sigma- n. \end{eqnarray} In particular, a homogeneous situation is characterized by an extremum of $\bar{U} = U_\sigma - (m^2/2)\sigma^2 + n\sigma$, \begin{eqnarray} \label{Ubar} \bar{U}=n \sigma - \frac{\bar{h}_\phi^2M}{4\pi a} \bar{\phi}^*\bar{\phi} - 2\sigma\bar{\phi}^*\bar{\phi} +U_1 = U + n\sigma . \end{eqnarray} In analogy to sect.
\ref{sec:ContribMolFluct} we may proceed beyond the Gaussian functional integration for $\hat\psi$ (MFT) by adding to $U_1$ a piece from the one loop contribution of the $\hat\phi$-fluctuations \begin{eqnarray} U_1&=& U_1^{(F)}+U_1^{(B)}.\label{BosonicPot} \end{eqnarray} The contribution from dressed uncondensed molecules is now given by \begin{eqnarray}\label{nRenorm} \frac{\partial U_1^{(B)}}{\partial\sigma}=-2n_M. \end{eqnarray} Indeed, eq. (\ref{nRenorm}) follows from eq. (\ref{DressedMol}) if $\mu$ is replaced in the classical bosonic propagator by the effective chemical potential $\sigma$. \subsubsection{Field equations} Collecting the different pieces, the extremum of $\bar{U}$ (eq. (\ref{Ubar})) occurs for \begin{eqnarray} \frac{\partial \bar{U}}{\partial \sigma} = 0 = n - 2\bar{\phi}^*\bar{\phi} - n_{F,0} - 2n_M. \end{eqnarray} This is precisely the relation (\ref{RenDensConstr}), as it should be. (Recall $n_{F,0} = -\partial U_1^{(F)}/\partial \sigma$.) In other words, the density obeys \begin{equation} n= - \frac{\partial U }{\partial \sigma}. \end{equation} With $J=\mu$ we may derive the relation between the effective chemical potential $\sigma$ and the chemical potential $\mu$, \begin{eqnarray} \mu =\sigma - n/m^2, \end{eqnarray} directly from the identity \begin{eqnarray} Z_B^{-1}\int\mathcal{D}\hat\psi\mathcal{D}\hat\sigma\mathcal{D}\hat{\phi} \frac{\delta}{\delta\hat\sigma}\exp \big\{-S_B + m^2 \int J\hat\sigma\big\} = 0.\nonumber\\ \end{eqnarray} The evaluation of $U(\sigma, \phi)$ is now sufficient for the computation of the total atom density $n$. For the homogeneous setting one can therefore determine $\tilde{\sigma}$ by \begin{equation}\label{DensHom} \frac{\partial \tilde{u} }{\partial \tilde{\sigma}} = -\frac{1}{3\pi^2}. \end{equation} \subsubsection{Effective chemical potential} In summary, our problem is now formulated as a functional integral for a Yukawa model. In the small density approximation we can treat $\sigma (x) = \mu - V_l(x)$ as an external parameter. The partition function becomes \begin{eqnarray}\label{ZDecoup} Z = \int \mathcal{D}\hat\psi\mathcal{D}\hat{\phi} \exp \big[- S_B[\hat\psi, \hat{\phi}; \sigma] + \mathrm{ source \, terms}\big] \end{eqnarray} where $S_B$ is given by eq. (\ref{YukawaAction}) and the density obtains from eq. (\ref{DensHom}). Beyond the small density approximation $\sigma$ is treated as a field and the partition function involves an additional integration over $\hat{\sigma}$. In the limit where the $\hat{\sigma}$ - fluctuations can be neglected we may consider the effective chemical potential $\sigma$ instead of $\mu$ as a free ``external parameter''. In particular, this offers for the homogeneous case the advantage that we are not bound to the validity of the small density approximation (which may not be accurate for realistic systems such as $^6\mathrm{Li}$). It is sufficient to compute the value of $\tilde{\sigma} (\tilde{T})$ for a given density or given $k_F$. For the homogeneous case we can deal with situations either at fixed effective chemical potential $\sigma$ or at fixed density $n$. For a fixed $\sigma$ we may choose an arbitrary fiducial $\bar{k}_F$ (not determined by $n$) and do all the rescalings with $\bar{k}_F$ (instead of $k_F$). One can then work with a fixed value of $\tilde{\sigma}$ and compute $n$ or $k_F$ by the relation \begin{eqnarray} \frac{\partial\tilde{u}}{\partial \tilde{\sigma}} = -\frac{1}{3\pi^2} \frac{k_F^3}{\bar{k}_F^3}.
\end{eqnarray} Many experimental settings can be idealized, however, by a fixed value of $n$. Then the choice $\bar{k}_F = k_F$ and the determination of $\tilde{\sigma}$ via eq. (\ref{DensHom}) provides directly all results for fixed $n$. For an inhomogeneous setting it is sensible to use a suitable fiducial $\bar{k}_F$. \section{Metastability} \label{sec:Meta} At low $T$ the thermal equilibrium state of atoms is a liquid or solid, with density essentially determined by the ``size'' of the atoms (typically set by the van der Waals length). When we deal with a dilute gas of atoms at ultracold temperature we obviously do not consider the stable thermal equilibrium state which minimizes the free energy. Indeed, we will see in this section that the metastability can be captured within our formalism if the weak cutoff dependence $\mathcal{O}(\Lambda^{-1})$ is not neglected. For this purpose, we consider a homogeneous system and neglect the effects from molecules. Under the above circumstances the field equation (\ref{34}) reduces to a simple self-consistency relation for the density, \begin{eqnarray}\label{43} n&=&-U'\left(\mu+\frac{n}{m^2}\right)\nonumber\\ &=&2\int\frac{d^3q}{(2\pi)^3}\frac{1}{\mathrm{e}^{(q^2/(2M)-\mu-n/m^2)/T}+1}. \end{eqnarray} For the gross features we first consider the $T\to 0$ limit, where the last equation reduces to \begin{equation}\label{44} n=\frac{1}{3\pi^2}\left\{2M\left(\mu+\frac{n}{m^2}\right)\right\}^{3/2}. \end{equation} The resulting cubic equation for $y=n/(\mu m^2)$, \begin{equation}\label{45} (y+1)^3-\kappa y^2=0~,~\kappa=\frac{9\pi^4m^4}{8M^3\mu}, \end{equation} may have several solutions, depending on $\kappa$. We concentrate on $\mu>0$ and consider first $\kappa\gg 1$. The solution with small $y\approx\kappa^{-1/2}$, \begin{equation}\label{46} n_1=\frac{1}{3\pi^2}(2M\mu)^{3/2}, \end{equation} describes a dilute gas of atoms. In lowest order the relation between $n$ and $\mu$ is indeed independent of $m^2$ and $\Lambda$. For large $\kappa$ the second solution with positive $y$ occurs for $y\approx\kappa$, \begin{equation}\label{47} n_2=\frac{9\pi^4m^6}{8M^3}, \end{equation} while the third solution has negative $y$ and should be discarded. As $\kappa$ decreases (i.e. $\mu$ increases) the two solutions approach each other and merge for $\kappa_c=27/4~,~y_c=2$, or \begin{equation}\label{48} n_c=2\mu m^2=\frac{\pi^4m^6}{3M^3}, \end{equation} which delimits the metastable gas phase. No solution with $y>0$ exists for $\kappa<\kappa_c$. The stability of the solution depends on the second derivative of $U_n$ with respect to $n$, i.e. for $T=0$ \begin{equation}\label{49} \frac{\partial^2U_n}{\partial n^2}=\frac{1}{m^2} \left\{1-\frac{3}{2}\left(\frac{1+y}{\kappa}\right)^{1/2}\right\}. \end{equation} The solution $n_1$ (with small $y$) turns out to be stable $(\partial^2U_n/\partial n^2>0)$ whereas the solution $n_2$ (with $y\approx\kappa)$ is unstable. For $\kappa_c$ one has $\partial^2U_n/\partial n^2=0$. What happens for $n>n_c$? In order to study this question we cannot neglect the ultraviolet cutoff $\Lambda$ anymore. In the evaluation of eq. (\ref{43}) the upper bound of the momentum space integral is actually given by $q_{max}=\Lambda$ instead of infinity. For $\Lambda<(2M(\mu+n/m^2))^{1/2}$ this multiplies the r.h.s. of eq. (\ref{44}) by an additional factor $\Lambda^{3}/(2M(\mu+n/m^2))^{3/2}$ such that \begin{equation}\label{50} n=\frac{\Lambda^3}{3\pi^2} \hspace{0.2cm} \text{for} \hspace{0.2cm} \frac{n}{m^2}+\mu>\frac{\Lambda^2}{2M}.
\end{equation} This solution is again stable with $\partial^2U_n/\partial n^2=1/m^2$. We associate it with the liquid or solid thermal equilibrium state. Indeed, the free energy for $n_{gs}=\Lambda^3/(3\pi^2)$ is much lower than for the solution $n_1$. Also the density is determined by the effective size of the atoms $\sim\Lambda^{-1}$. Inversely, we can define our cutoff by the density $n_{gs}$ of the liquid or solid ground state for $T=0$, $\Lambda = (3\pi^2n_{gs})^{1/3}$. Of course, in this region our approximations are no longer valid, but the detailed properties of the liquid or solid state are not the purpose of this paper. Beyond the limit $T\to 0$, a computation of the phase boundary for the existence of a metastable gas for arbitrary temperature requires the solution of the condition for the merging of the stable and unstable extrema, \begin{eqnarray} \label{ncrit} 0&\stackrel{!}{=}&\frac{\partial^2U_n}{\partial n^2}=\frac{1}{m^4}M_\sigma^2 \end{eqnarray} with \begin{eqnarray} M_\sigma^2 &=& m^2 + U''\nonumber\\ &=& m^2 - \frac{1}{2T}\int\frac{d^3q}{(2\pi)^3}\cosh^{-2}\gamma. \end{eqnarray} One of the dependent variables, $\mu$ or $n$, has to be eliminated via the equation of state (\ref{43}). For the range of $T$ considered in this paper the influence of the temperature is negligible and the qualitative result (\ref{47}) is not affected. \section{Boson Propagator in MFT} \label{app:WFR} In mean field theory the gradient terms for the fields $\sigma$ and $\bar{\phi}$ are obtained by evaluating the fermion loop in an inhomogeneous background field. We write the inverse fermion propagator as $ \mathcal{P}+\mathcal{F}$ with $\mathcal{P}$ the part independent of $\sigma$ and $\bar{\phi}$. We decompose the fields into a position independent part and a harmonic (single momentum mode) small local perturbation, \begin{eqnarray} \mathcal{F}(X,Y)&=&\mathcal{F}_{h} +\mathcal{F}_{in} \\\nonumber &=& \left( \begin{array}{cc} {-\epsilon_{\alpha\beta}\bar{h}_\phi\bar{\phi}^*(X)} & {\delta_{\alpha\beta}\sigma(X)} \\ {-\delta_{\alpha\beta}\sigma(X)} & {\epsilon_{\alpha\beta}\bar{h}_\phi\bar{\phi}(X)} \end{array} \right)\delta (X-Y),\nonumber \end{eqnarray} where $\epsilon_{\alpha\beta}$ is the two-dimensional totally antisymmetric symbol and \begin{eqnarray}\label{ansatzWFR} \bar{\phi}(X)&=&\bar{\phi}+ \delta\bar{\phi} \exp(\mathrm{i}K X),\\\nonumber \sigma(X)&=& \sigma + \delta\sigma \exp(\mathrm{i}K X)+\delta\sigma^* \exp(-\mathrm{i}K X). \end{eqnarray} The one-loop fermionic fluctuations can now be expanded around the homogeneous fields, \begin{eqnarray}\label{InhomExp} \Gamma_q^{(MFT)}\hspace{-0.25cm}&=& \hspace{-0.15cm}- \frac{1}{2}\mathrm{Tr}\ln(\mathcal{P}+\mathcal{F}_{h} +\mathcal{F}_{in})\\ \hspace{-0.25cm}&=& \hspace{-0.15cm}- \frac{1}{2}\mathrm{Tr}\ln(\mathcal{P}\hspace{-0.05cm}+\mathcal{F}_{h}) -\frac{1}{2}\mathrm{Tr}\ln(1\hspace{-0.12cm}+\mathcal{F}_{in}(\mathcal{P}\hspace{-0.05cm}+\mathcal{F}_{h})^{-1})\nonumber\\ \hspace{-0.25cm}&=&\hspace{-0.15cm}V U_{MFT} + \frac{1}{4}\mathrm{Tr}([\mathcal{F}_{in}(\mathcal{P}+\mathcal{F}_{h})^{-1}]^2) + \mathcal{O}(\mathcal{F}_{in}^4).\nonumber \end{eqnarray} The terms with odd powers in $\mathcal{F}_{in}$ vanish due to translation invariance.
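The expansion in eq. (\ref{InhomExp}) is a standard $\mathrm{Tr}\ln$ manipulation and can be checked in a toy setting with finite matrices, where both sides are computable exactly. A minimal sketch, with random matrices standing in for the physical propagator and the linear term removed by hand in place of translation invariance:
\begin{verbatim}
# Toy check of the expansion (InhomExp): for Tr[F_in R] = 0,
# -1/2 Tr ln(P + F_h + F_in) + 1/2 Tr ln(P + F_h) equals
# 1/4 Tr([F_in R]^2) up to O(F_in^3).  Structural check only.
import numpy as np

rng = np.random.default_rng(1)
dim = 8
A = rng.normal(size=(dim, dim))
P = A @ A.T + dim * np.eye(dim)        # positive definite "P + F_h"
R = np.linalg.inv(P)

F = 1e-3 * rng.normal(size=(dim, dim))
F -= (np.trace(F @ R) / dim) * P       # enforce Tr[F R] = 0 (no linear term)

def half_trln(mat):
    _, logdet = np.linalg.slogdet(mat)
    return -0.5 * logdet

exact = half_trln(P + F) - half_trln(P)
second_order = 0.25 * np.trace(F @ R @ F @ R)
print(f"exact: {exact: .6e}   1/4 Tr([F R]^2): {second_order: .6e}")
print(f"difference (should be O(F^3)): {abs(exact - second_order):.1e}")
\end{verbatim}
The printed difference is of cubic order in $\mathcal{F}_{in}$, as the expansion requires.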
For a practical computation, we switch to Fourier space where \begin{eqnarray} \mathcal{P}(Q_1,Q_2)= \left( \begin{array}{cc} {\hspace{-0.15cm}0} & {\hspace{-0.3cm}-P_F(-Q_1)} \\ {\hspace{-0.15cm}P_F(Q_1)} & {\hspace{-0.3cm}0} \end{array} \right) \delta_{\alpha\beta}\delta(Q_1 - Q_2),\nonumber\\ \end{eqnarray} and \begin{eqnarray} &&(\mathcal{P} + \mathcal{F}_h)^{-1}(Q_1,Q_2)=\mathcal{R}(Q_1)\delta (Q_1- Q_2),\\\nonumber &&\mathcal{R}(Q_1)=\left( \begin{array}{cc} {\epsilon_{\alpha\beta}\bar{h}_\phi\bar{\phi}} & {\delta_{\alpha\beta}(P_F(-Q_1)-\sigma)} \\ {-\delta_{\alpha\beta}(P_F(Q_1)-\sigma)} & {-\epsilon_{\alpha\beta}\bar{h}_\phi\bar{\phi}^*} \end{array} \right) \\ \nonumber &&\qquad \times \frac{1}{[P_F(Q_1)-\sigma][P_F(-Q_1)-\sigma] + \bar{h}_\phi^2\bar{\phi}^*\bar{\phi}}. \end{eqnarray} We concentrate first on the gradient term for the molecules ($\bar{A}_\phi$) and set $\delta\sigma =0$. Then the inhomogeneous part reads in momentum space \begin{eqnarray} \mathcal{F}_{in}(Q_1,Q_2) = \bar{h}_\phi\epsilon_{\alpha\beta}\left( \begin{array}{cc} {\hspace{-0.25cm}-\delta\bar{\phi}^*\delta_{Q_1,Q_2 + K} } & {\hspace{-0.25cm}0} \\ {\hspace{-0.25cm}0} & {\hspace{-0.25cm}\delta\bar{\phi}\delta_{Q_1,Q_2 - K}} \end{array} \right)\nonumber\\ \end{eqnarray} ($\delta_{R,S}=\delta(R-S)$). This yields \begin{eqnarray} &&\frac{1}{4}\mathrm{Tr}([\mathcal{F}_{in}(\mathcal{P}+\mathcal{F}_{h})^{-1}]^2)\\\nonumber &=&\frac{1}{4}\mathrm{tr}\int\limits_{Q_1,Q_2}\mathcal{F}_{in}(Q_1,Q_2)\mathcal{R}(Q_2) \mathcal{F}_{in}(Q_2,Q_1)\mathcal{R}(Q_1) \end{eqnarray} where $\mathrm{tr}$ is the trace over the ``internal'' $4\times 4$ matrix. One obtains \begin{eqnarray}\label{BosPropGen} &&\Gamma_q^{MFT}=\\\nonumber &&-\frac{V_3}{T} \bar{h}_\phi^2\delta\bar{\phi}^*\delta\bar{\phi} \int\limits_{Q'} \frac{P_F(-Q')-\sigma}{[P_F(Q')-\sigma] [P_F(-Q')-\sigma] + \bar{h}_\phi^2\bar{\phi}^*\bar{\phi}}\\\nonumber &&\times\frac{P_F(Q' - K)-\sigma}{[P_F(K- Q' )-\sigma][P_F(Q' - K)-\sigma] + \bar{h}_\phi^2\bar{\phi}^*\bar{\phi}}. \end{eqnarray} Insertion of the ansatz (\ref{ansatzWFR}) into the effective action \footnote{Note the slight abuse of conventions. Here, $\bar{\mathcal{P}}$ stands for the $(1,1)$ entry of the inverse propagator matrix in the $\phi^*,\phi$ basis, instead of the full matrix.} \begin{eqnarray}\label{SlightAbuse} \Gamma = \frac{V_3}{T} \delta\bar{\phi}^*\delta\bar{\phi} \bar{\mathcal{P}}_\phi(K) \end{eqnarray} yields eq. (\ref{Zphi}). We note the simpler form for $\bar{\phi}=0$ (symmetric phase) where \begin{eqnarray}\label{ZphiSYM} \bar{\mathcal{P}}_\phi(K)&=& 2\pi \mathrm{i} n T +\frac{q^2}{4M} \\\nonumber &&\hspace{-1.2cm}-\int\limits_{Q} \frac{\bar{h}_\phi^2}{[P_F(Q)-\sigma][P_F(K-Q)-\sigma]}. \end{eqnarray} The quantum corrections differ from $P_\sigma$ (eq. (\ref{40})) by the overall factor $-\bar{h}_\phi^2/2$ and the different momentum structure in the denominator. We note that in the superfluid phase the symmetries would also be consistent with a gradient term of the form $\int_Q \bar{\phi}^*\delta\bar{\phi}(Q)\bar{\phi}^*\delta\bar{\phi}(-Q)$. The coefficient of this term cannot be computed with the ansatz (\ref{ansatzWFR}) - it would require a more general ansatz $\bar{\phi}(X) = \bar{\phi} + \delta\bar{\phi}_+\exp(\mathrm{i}K X) + \delta\bar{\phi}_-\exp(-\mathrm{i}K X)$. We will omit this term in the present paper. \subsection{Momentum dependence} For spacelike momenta the loop integral depends only on the square of the external momentum and we define $\bar{A}_\phi(q)$ and $\bar{A}_\phi$ by eqs.
(\ref{Zphi1},\ref{Zphi2}). The fluctuation correction to $\bar{P}_\phi$ at $n=0$ is given by \begin{eqnarray}\label{DeltaPfull} \Delta \bar{P}_\phi &=& -\bar{h}_\phi^2 T\sum\limits_m \int \frac{d^3q'}{(2\pi)^3}\frac{f(q')}{[((2m+1)\pi T)^2 + f^2(q') + r]}\nonumber\\ &&\,\, \times \frac{f(q' -q)}{[((2m+1)\pi T)^2 + f^2(q' -q) + r]}\\\nonumber &=& -\frac{\bar{h}_\phi^2}{2}\hspace{-0.1cm}\int\hspace{-0.1cm} \frac{d^3q'}{(2\pi)^3}\frac{f(q')f(q' -q)}{f(q')-f(q'-q)}\\\nonumber && \,\,\times\Big\{\frac{1}{\sqrt{b}}\tanh\frac{\sqrt{b}}{2T} - \frac{1}{\sqrt{a}}\tanh \frac{\sqrt{a}}{2T}\Big\} \end{eqnarray} with $f(q)=q^2/2M - \sigma$, $a=f^2(q')+r$, $b=f^2(q' -q)+r$. The momentum dependent gradient coefficient reads \begin{eqnarray} \bar{A}_\phi (q^2) &=& \frac{1}{4M} + \frac{\Delta \bar{P}_\phi(q^2) -\Delta \bar{P}_\phi(0)}{q^2} \nonumber\\ &=& \bar{A}_\phi^{(cl)} + \Delta \bar{A}_\phi(q^2). \end{eqnarray} As argued in sects. \ref{sec:MolFrac} and \ref{EffAtDens}, the physically relevant quantity is $\bar{A}_\phi/Z_\phi$. We have plotted the momentum dependence of $\bar{A}_\phi (q)/Z_\phi$ in the broad resonance limit $\tilde{h}_\phi^2 \gg 1$ in fig. \ref{Zqtot}. At large momenta, the gradient coefficient slowly tends to zero. This has no impact on observables, since the thermal distribution functions are already suppressed at lower momenta (see below, also fig. \ref{nqtot}). It might be an artefact of the neglected momentum dependence of $Z_\phi$. For the numerical results in the present paper we neglect the momentum dependence of $\bar{A}_\phi$ and approximate $\bar{A}_\phi (q) = \bar{A}_\phi\equiv\bar{A}_\phi (q=0)$. Let us discuss here the validity of this approximation in the broad resonance regime at the critical temperature. The impact of the momentum dependence of $\bar{A}_\phi (q)/Z_\phi$ is most clearly seen for the Bose distribution in fig. \ref{nqtot}. There we explicitly compare the results with a momentum dependent $\bar{A}_\phi (q)/Z_\phi$ and an approximation of constant $\bar{A}_\phi/Z_\phi$. In the BEC regime the error is very small, with $\bar{A}_\phi (q)/Z_\phi$ close to the classical value $1/2$ for all $q$. The main difference from the classical propagator concerns the renormalization of the mass term $\bar{m}_\phi^2$. In the crossover regime the approximation of constant $\bar{A}_\phi/Z_\phi$ underestimates the number of molecules with large $q^2$, but the error remains acceptable. In contrast, the deviation from the result with the classical molecule propagator is already substantial. In the BCS regime the underestimate of $N_M(q)$ for the dominant range in $q$ is quite substantial. Though the overall effect of the boson fluctuations is small in the BEC regime, this may affect the quantitative results for the number density of molecules and the condensate. In view of fig. \ref{nqtot} (c), the estimates of $\bar{\Omega}_M$, $\bar{\Omega}_C$ and $\Omega_M$, $\Omega_C$ in the present paper are most likely too small. We next discuss $\bar{A}_\phi = \bar{A}_\phi (q=0)$ more explicitly. With $\bar{A}_\phi = 1/4M + \Delta \bar{A}_\phi$ we can compute $\Delta \bar{A}_\phi$ as the term linear in $q^2$ in the Taylor expansion of $\Delta \bar{P}_\phi$ (\ref{DeltaPfull}).
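Before quoting the analytic result, we note that the $q^2$ slope can also be extracted from eq. (\ref{DeltaPfull}) by evaluating the integral at two small external momenta and taking the difference, in which the $q$-independent UV divergent constant cancels. A minimal sketch in the symmetric phase, assuming $r=0$, $f(q)=q^2-\sigma$, units $2M=k_F=1$ and illustrative parameters:
\begin{verbatim}
# Sketch: q^2 slope of Delta P_phi, eq. (DeltaPfull), by finite
# differences, compared with the symmetric-phase formula (AphiSYM).
# Assumptions: r = 0, f(q) = q^2 - sigma, units 2M = k_F = 1.
import numpy as np
from scipy.integrate import quad, dblquad

h_phi, T, sigma = 3.0, 0.4, -0.6
q1, q2 = 0.15, 0.30                      # small external momenta

def F(z):                                # tanh(sqrt(z)/2T)/sqrt(z)
    s = np.sqrt(z)
    return np.tanh(s / (2.0 * T)) / s

def D(a, b):                             # stable (F(b)-F(a))/(b-a)
    if abs(b - a) < 1e-8 * (a + b):
        h = 1e-6 * (1.0 + a)
        return (F(a + h) - F(a - h)) / (2.0 * h)
    return (F(b) - F(a)) / (b - a)

def term(c, p, q):                       # angular-reduced integrand
    f1 = p**2 - sigma
    f2 = p**2 - 2.0 * p * q * c + q**2 - sigma
    return f1 * f2 * (f1 + f2) * D(f1**2, f2**2)

integrand = lambda c, p: p**2 * (term(c, p, q2) - term(c, p, q1))
val, _ = dblquad(integrand, 0.0, 8.0, -1.0, 1.0)
slope = h_phi**2 / (8.0 * np.pi**2) * val / (q2**2 - q1**2)

gam = lambda q: (q**2 - sigma) / (2.0 * T)
sech2 = lambda g: (2.0 * np.exp(-g) / (1.0 + np.exp(-2.0 * g)))**2
f_sym = lambda q: q**4 / gam(q)**3 * (np.tanh(gam(q))
                                      - gam(q) * sech2(gam(q)))
val_sym, _ = quad(f_sym, 0.0, np.inf, limit=300)
dA_sym = h_phi**2 / (48.0 * T**3) * val_sym / (2.0 * np.pi**2)
print(f"finite-difference slope: {slope:.5f}  eq. (AphiSYM): {dA_sym:.5f}")
\end{verbatim}
The two numbers should agree up to the small curvature induced by the finite values of $q_{1,2}$.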
Using $\tilde{A}_\phi= 2M \bar{A}_\phi$ we find the result \begin{eqnarray}\label{Aphi1} \tilde{A}_\phi &=&\frac{1}{2} + \frac{\tilde{h}_\phi^2}{288\tilde{T}^3}\int \frac{d^3\tilde{q}}{(2\pi)^3}\,\,\tilde{q}^2\gamma_\phi^{-7}\big[3(5\gamma^4 - 5\gamma^2\gamma_\phi^2+2\gamma_\phi^4)\nonumber\\ &&\times[\tanh\gamma_\phi -\gamma_\phi\cosh^{-2}\gamma_\phi]+ 2\gamma^2\gamma_\phi(\gamma^2 -\gamma_\phi^2)\nonumber\\ && \times[\gamma_\phi \cosh^{-4}\gamma_\phi - 6\tanh\gamma_\phi -2\gamma_\phi\tanh^2\gamma_\phi]\big], \end{eqnarray} simplifying in the symmetric phase $\gamma_\phi=\gamma$ to \begin{eqnarray}\label{AphiSYM} \tilde{A}_\phi &=&\frac{1}{2} + \frac{\tilde{h}_\phi^2}{48\tilde{T}^3}\int \frac{d^3\tilde{q}}{(2\pi)^3}\,\,\tilde{q}^2\gamma^{-3}\big[\tanh\gamma -\gamma\cosh^{-2}\gamma\big].\nonumber\\ \end{eqnarray} The loop correction to the gradient coefficient is strictly positive and monotonically growing for $\tilde{\sigma}$ increasing from negative values (BEC side) to its saturation value $\tilde{\sigma}=1$ on the BCS side. For the BEC regime it vanishes in the limit $\tilde{\sigma}\to -\infty$. However, the physical quantity $A_\phi = \tilde{A}_\phi/Z_\phi$ approaches the finite value $1/2$ in the BEC regime - this is an indicator of the emergence of an effective bosonic theory. For $\tilde{\sigma}\approx 1$ instead, $\tilde{A}_\phi/Z_\phi$ is much larger than the value for elementary pointlike bosons, $1/2$. Indeed the integral is dominated by modes peaked around $\tilde{\sigma}$, explaining the strong increase as $\tilde{\sigma} \to 1$. In the limiting BEC and BCS cases, $\tilde{A}_\phi$ can be approximated by much simpler formulae. The BEC result is given in app. \ref{AnalBEC}, eq. (\ref{ZphiBEC}). In the BCS regime, the critical temperature is very small ($\tilde{T}\lesssim 0.1$) and $0.4\lesssim\tilde{\sigma}\lesssim 1$. We then find an approximate behavior for the $\tilde{r}$-dependent gradient coefficient \begin{eqnarray}\label{Zphi02} \tilde{A}_\phi &=& \frac{1}{2}+\frac{7\zeta(3)}{12\pi^4}\frac{\tilde{h}_\phi^2\tilde{\sigma}^{3/2}}{4\tilde{T}^2 + \tilde{r}}. \end{eqnarray} In the temperature range of interest the classical contribution to $\bar{A}_\phi$ is small ($\tilde{A}_\phi^{(cl)}=1/2$). Neglecting it and restricting to the symmetric phase ($\bar{\phi} =0$), this is consistent with the symmetric BCS result in \cite{CMelo93}, and we see how the condensate regulates the divergence for $T\to 0$. \subsection{Frequency dependence} Similarly, the loop correction can be evaluated as a function of the external Matsubara frequency $\omega_m$ ($Q = (\omega_m,\vec{q}\,)$) for vanishing external spacelike momentum $q$. This amounts to the renormalization correction to the operator $\phi^*\partial_\tau\phi$. In momentum space this corresponds to the part $\sim \mathrm{i}\omega_m$ in the inverse molecule propagator. Hence we only need to consider the imaginary part of the loop integral. The denominator in the integrand in eq. (\ref{BosPropGen}) is real and the imaginary part of the numerator becomes $2\pi \mathrm{i} n T f(q')$. This yields ($\omega = 2\pi n T$)\footnote{As in eq.
(\ref{SlightAbuse}), $\bar{\mathcal{P}}$ stands for the 11 - entry of the inverse propagator matrix in the $\phi^*,\phi$ basis} \begin{eqnarray} \mathrm{Im} \Delta \bar{\mathcal{P}}_\phi(\omega, q=0) &=& \omega \bar{h}_\phi^2 T \sum\limits_m \int\frac{d^3q'}{(2\pi)^3} f(q') \\\nonumber &&\hspace{-0.5cm}\times\{((2m+1)\pi T)^2 + f^2(q') +r \}^{-1}\\\nonumber &&\hspace{-0.5cm}\times\{((2m+1)\pi T + \omega)^2 + f^2(q') +r \}^{-1}\hspace{-0.1cm}. \end{eqnarray} Obviously $\mathrm{Im}\Delta P_\phi$ vanishes for $n=0$ ($\omega=0$). For $n\neq 0$ we can perform the Matsubara sum (we suppress the argument of $f$), \begin{eqnarray} \mathrm{Im}\Delta \bar{\mathcal{P}}_\phi = \frac{\omega \bar{h}_\phi^2}{4} \int\frac{d^3q'}{(2\pi)^3}\frac{\tanh (\sqrt{f^2 +r}/2T)} {(\omega/2)^2 + f^2 + r}. \end{eqnarray} We may define the coefficient of the Matsubara frequencies as ($\omega_1 =2\pi T$) \begin{eqnarray} Z_{\phi,\tau}(\omega) &=& \frac{\mathrm{Im} \big(\bar{\mathcal{P}}_\phi(\omega,0)\big)}{\omega},\\\nonumber Z_{\phi,\tau}(\omega_1) &=& \mathrm{Im} \frac{\bar{\mathcal{P}}_\phi(\omega_1,0)}{\omega_1}\\\nonumber &=& 1 + \frac{\bar{h}_\phi^2}{4} \int\frac{d^3q'}{(2\pi)^3}\frac{\tanh (\sqrt{f^2 +r}/2T)}{(\pi T)^2 + f^2 + r}. \end{eqnarray} We can study the bosonic propagator as a function of $m$. The loop corrected imaginary part of the inverse boson propagator can be brought into the form \begin{eqnarray} \mathrm{i}\omega_m(1 + c(q^2, T,\sigma,m^2)), \quad \omega_m =2\pi m. \end{eqnarray} Each Matsubara mode is renormalized by an $m$-dependent quantity. In the present paper we neglect these corrections, i.e. we take $c(q^2,T, \sigma, m^2)=0$. \section{Analytical results in the BEC regime} \label{AnalBEC} We exploit the fact that in the BEC regime $\tilde{\sigma}/2\tilde{T} \to -\infty$, which means that we can replace the functions $\tanh\gamma \to 1, \gamma \cosh^{-2}\gamma\to 0$. In the superfluid phase, we use additionally $\tilde{r}/|\tilde{\sigma}| \to 0$. The loop integrals can then be evaluated analytically. They are temperature independent. Furthermore, their values coincide in the symmetric and superfluid phase, if terms of $\mathcal{O}(\tilde{r}/\tilde{\sigma})$ or higher order in $\tilde{\sigma}^{-1}$ are ignored. We find \begin{eqnarray} \Delta\tilde{m}_\phi^{(F)\,2} &=& \frac{\tilde{h}_\phi^2\sqrt{-\tilde{\sigma}}}{8\pi},\label{MassBCS}\\ \tilde{\lambda}_\phi^{(F)} &=& \frac{\tilde{h}_\phi^4}{128\pi\sqrt{-\tilde{\sigma}}^3},\\ \Delta \tilde{A}_\phi &=& \frac{\tilde{h}_\phi^2}{64\pi\sqrt{-\tilde{\sigma}}},\label{ZphiBEC}\\ \Delta Z_\phi &=& \frac{\tilde{h}_\phi^2}{32\pi\sqrt{-\tilde{\sigma}}}\label{ZRBEC}. \end{eqnarray} For the fermionic particle density contribution we find a term $\mathcal{O}(\tilde{r}/\sqrt{-\tilde{\sigma}})$, \begin{eqnarray} n_{F,0} &=& k_F^3\frac{\tilde{r}}{16\pi\sqrt{-\tilde{\sigma}}}. \end{eqnarray} The BCS gap equation is solved by using (\ref{MassBCS}), \begin{eqnarray} c^{-1} =\sqrt{-\tilde{\sigma}}, \end{eqnarray} independently of the value of $\tilde{r}$. Hence in the BEC limit, the relation between binding energy and scattering length \cite{Diehl:2005an} is independent of the density scale $k_F$. Indeed, many body effects should be unimportant in this regime. Approaching the resonance, the impact of the pairing gap $\tilde{r}$ becomes important and $\tilde{\sigma} \neq \epsilon_M /\epsilon_F$. In the limit of large Yukawa couplings, we can then evaluate the gradient coefficient of the effective Bose distribution (cf. eqs. 
(\ref{SymmDens},\ref{SuperFlDens})): \begin{eqnarray} A_\phi=\frac{\tilde{A}_\phi}{Z_\phi} = \frac{1}{2}. \end{eqnarray} Similarly, the fermionic contribution to the four-boson coupling evaluates to \begin{eqnarray} \lambda_\phi^{(F)} =\frac{\tilde{\lambda}_\phi^{(F)}}{Z_\phi^2} =\frac{8\pi}{\sqrt{-\tilde{\sigma}}}. \end{eqnarray} Using the relation $\lambda_p = 4\pi a_p/M_p$ between coupling strength and scattering length, the molecular scattering length in the BEC limit is given by \begin{eqnarray} a_M = 2 a_R \end{eqnarray} where we have changed back to dimensionful quantities. This reproduces the Born approximation for the scattering of particles of mass $M_p = 2M$. In approaching the resonance for the fermionic scattering length (crossover regime) $c^{-1} = 0$, the bosonic scattering length however remains finite. Note that both $A_\phi$ and $\lambda_\phi^{(F)}$ are effectively independent of $\tilde{h}_\phi$ in the broad resonance limit. \section{Schwinger-Dyson Equations for the molecule couplings} \label{app:SDE} In this appendix we provide details of our computation of the effective molecule-molecule interaction $\lambda_\phi$. We work in dimensionless renormalized units. In the symmetric phase we expand the effective potential $u$ and the mean field effective potential $\tilde{u}_{MFT}$ in powers of $\rho$, \begin{eqnarray} \tilde{u} &=& m_\phi^2\rho + \frac{1}{2}\lambda_\phi\rho^2 + ...\\\nonumber \tilde{u}_{MFT} &=& m_\phi^{(F)\,2}\rho + \frac{1}{2}\lambda_\phi^{(F)}\rho^2 + ... \end{eqnarray} The contribution from the boson loop reads \begin{eqnarray}\label{U1B} \tilde{u}_1^{(B)} &=& m_\phi^{(B)\,2}\rho + \frac{1}{2}\lambda_\phi^{(B)}\rho^2 + ...\\\nonumber &=& \big(m_\phi^2 - m_\phi^{(F)\,2}\big)\rho + \frac{1}{2}\big(\lambda_\phi - \lambda_\phi^{(F)} \big)\rho^2 + ... \end{eqnarray} (see below). In the symmetric phase we evaluate $\tilde{u}_1^{(B)}$ in an approximation where we truncate in quadratic order in $\rho$ (\ref{U1B}). \begin{eqnarray}\label{MLambda} m_\phi^2 &=& m_\phi^{(F)\,2}+ m_\phi^{(B)\,2},\nonumber\\ \lambda_\phi &=& \frac{\partial^2 U}{\partial \rho^2}\Big|_{\rho=0}= \lambda_\phi^{(F)} + \lambda_\phi^{(B)}. \end{eqnarray} We determine the coupling $\lambda_\phi$ from the ``Schwinger-Dyson'' equation \begin{eqnarray}\label{LBU} \lambda_\phi\hspace{-0.1cm} &=& \hspace{-0.1cm}\lambda_\phi^{(F)} + \frac{\partial^2 U_1^{(B)}}{\partial \rho^2}\Big|_{\rho=0}\\\nonumber &=& \lambda_\phi^{(F)} - \frac{3\lambda_\phi^{(F)} \lambda_\phi}{2\tilde{T}}\int\hspace{-0.15cm}\frac{d^3\tilde{q}}{(2\pi)^3}\alpha^{-1} \big[(\exp 2\alpha - 1\big)^{-1}\nonumber\\ &&\qquad + 2 \alpha \sinh^{-2}\alpha \big]\nonumber\\\nonumber &=& \lambda_\phi^{(F)} + \lambda_\phi \cdot I_\lambda. \end{eqnarray} which has the solution \begin{eqnarray} \lambda_\phi = \frac{\lambda_\phi^{(F)}}{1- I_\lambda}. \end{eqnarray} For $m_\phi^2 \to 0$ the last term in eq. (\ref{LBU}) becomes infrared divergent. Divergences of this type of quantum corrections to quartic couplings are familiar from quantum field theory and statistical physics of critical phenomena. Indeed, the point $m_\phi^2 = 0$ corresponds to the critical line (or hypersurface) for the phase transition to superfluidity - for negative $m_\phi^2$ the symmetric phase becomes unstable. The remedy to this infrared problem has been well understood from the solution of functional renormalization group equations: the strong fluctuation effects drive $\lambda_\phi$ to zero at the critical line \cite{ATetradis93,BTetWet94,CTetradis92}. 
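To illustrate how the infrared suppression emerges from the gap equation, the following Python sketch evaluates $I_\lambda$ numerically and solves $\lambda_\phi = \lambda_\phi^{(F)}/(1-I_\lambda)$; we assume the Bose argument $\alpha = (A_\phi\tilde{q}^2 + m_\phi^2)/(2\tilde{T})$, consistent with the small-$m_\phi$ expansion quoted below, and all parameter values are illustrative.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Hedged sketch of the gap equation lam = lamF/(1 - I_lam) from eq. (LBU),
# showing lam_phi -> 0 as m_phi^2 -> 0. The form of alpha is an assumption
# consistent with the small-m expansion; T, A_phi, lamF are illustrative.
T, A_phi, lamF = 0.2, 0.5, 30.0

def I_lam(m2):
    def integrand(q):
        a = (A_phi * q**2 + m2) / (2.0 * T)
        return (q**2 / (2.0 * np.pi**2)
                * (1.0 / np.expm1(2.0 * a) + 2.0 * a / np.sinh(a)**2) / a)
    # the integrand is exponentially small beyond q ~ a few
    return -3.0 * lamF / (2.0 * T) * quad(integrand, 0.0, 8.0, limit=200)[0]

for m2 in [1.0, 0.1, 0.01, 1e-4]:
    print(m2, lamF / (1.0 - I_lam(m2)))
\end{verbatim}

The printed coupling decreases towards zero as $m_\phi^2 \to 0$, in line with the renormalization group result quoted above.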
Our gap equations recover this important feature in a direct way. As $m_\phi^2$ approaches zero the negative last term in eq. (\ref{LBU}) becomes more and more important as compared to $\lambda_\phi$ on the left hand side. The solution to eq. (\ref{LBU}) implies \begin{eqnarray}\label{LambdaLimit} \lim\limits_{m_\phi \to 0} \lambda_\phi(m_\phi) = 0. \end{eqnarray} For small values of $m_\phi^2$ in the vicinity of the phase transition we can expand the integral in eq. (\ref{LBU}) as \begin{eqnarray} I_\lambda = -15\tilde{T} \lambda_\phi^{(F)} \int \frac{d^3\tilde{q}}{(2\pi)^3}\big(A_\phi \tilde{q}^2 +m_\phi^2\big)^{-2}. \end{eqnarray} One infers $\lambda_\phi\propto m_\phi$ according to \begin{equation}\label{FixedPoint} \lambda_\phi = \frac{8\pi}{15 \tilde{T}}A_\phi^{3/2} m_\phi. \end{equation} In the superfluid phase, we expand the effective potential around $\rho_0$, \begin{eqnarray}\label{TruncPotSSB} \tilde{u} = m_\phi^2(\rho - \rho_0) + \frac{\lambda_\phi}{2}(\rho-\rho_0)^2. \end{eqnarray} We choose again a basis of real renormalized fields $\phi_1, \phi_2$ according to \begin{eqnarray} \phi = \frac{1}{\sqrt{2}}\big(\phi_1 + \mathrm{i}\phi_2\big) , \quad \rho = \frac{1}{2}\big( \phi^2_1 + \phi_2^2\big). \end{eqnarray} Without loss of generality we may consider a background of real $\phi$, i.e. $\phi_{1,0}^2 =2\rho_0, \phi_{2,0}=0$. With $\phi_1' = \phi_1 - \phi_{1,0}$ the potential (\ref{TruncPotSSB}) becomes \begin{eqnarray}\label{TruncPotSSB2} &\tilde{u}&= \frac{1}{2}m_\phi^2\phi^2_2 +\frac{1}{2}(m_\phi^2 + 2\lambda_\phi \rho_0)\phi_1'^2 + \lambda_\phi\sqrt{\rho_0/2}\phi_1'\phi_2^2 \nonumber\\ &+&\frac{\lambda_\phi}{8}\phi^4_2 + \frac{\lambda_\phi}{4} \phi_2^2\phi_1'^2 + \frac{\lambda_\phi}{8} \phi_1'^4+ ...\nonumber\\ \end{eqnarray} We can associate $m_\phi^2$ and $\lambda_\phi$ with the terms quadratic and quartic in $\phi_2$. The dots denote cubic and quintic terms $\propto \phi_1'^3, \phi_2^4\phi_1', \phi_2^2\phi_1'^3$ that will not contribute in our approximation and we neglect terms of $\mathcal{O}(\phi^6)$. We use the Schwinger-Dyson equations for the $\phi_2^2$ and $\phi_2^4$ vertices which result in eq. (\ref{MBUB}) and \begin{eqnarray}\label{LBUB} \lambda_\phi \hspace{-0.2cm}&=&\hspace{-0.1cm}\lambda_\phi^{(F)} \hspace{-0.1cm}- \frac{3\lambda_\phi^{(F)} \lambda_\phi}{2\tilde{T}}\hspace{-0.2cm}\int\hspace{-0.2cm}\frac{d^3\tilde{q}}{(2\pi)^3}\alpha_\phi^{-3}\big[ \big(\alpha -\kappa\big)^2 \big(\exp 2\alpha_\phi - 1\big)^{-1} \nonumber\\ &&+ 2\big(\alpha + \kappa/2\big)^2 \alpha_\phi \sinh^{-2}\alpha_\phi\big]. \end{eqnarray} Again, we observe in eq. (\ref{LBUB}) the appearance of infrared divergences in the contribution $\propto \lambda_\phi^{(F)} \lambda_\phi$ from the Goldstone fluctuations. If we defined $\lambda_\phi(\rho)$ by the $\phi_2^4$ vertex evaluated at some value $\rho > \rho_0$, these divergences would be regulated. Taking the limit $\rho\to \rho_0$ we obtain, in analogy to eq. (\ref{LambdaLimit}), $\lambda_\phi (\rho\to \rho_0)\to 0$. The Goldstone boson fluctuations renormalize $\lambda_\phi$ to zero, as found in \cite{ATetradis93,BTetWet94,CTetradis92}. In our approximation $\lambda_\phi$ vanishes for all $T < T_c$ in the superfluid phase. In consequence, the mass term $2\lambda_\phi \rho_0$ of the radial mode vanishes for all $T <T_c$. However, as we have discussed in the main text, this vanishing of $\lambda_\phi$ concerns only the effective vertex at zero external momentum, whereas the loop integrals often involve vertices at nonzero momentum.
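As a cross-check of the coefficient in eq.~(\ref{FixedPoint}), the small-$m_\phi$ loop integral above can be done in closed form; the following short Python sketch (with illustrative values of $A_\phi$ and $m_\phi$) compares the numerical integral with $1/(8\pi A_\phi^{3/2} m_\phi)$.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Hedged check of the small-m loop integral:
# Int d^3q/(2pi)^3 (A q^2 + m^2)^(-2) = 1/(8 pi A^(3/2) m).
# A and m are illustrative values.
A, m = 0.5, 0.01
numeric = quad(lambda q: q**2 / (2.0 * np.pi**2 * (A * q**2 + m**2)**2),
               0.0, np.inf)[0]
print(numeric, 1.0 / (8.0 * np.pi * A**1.5 * m))  # the two numbers agree
\end{verbatim}

Inserting this closed form into $\lambda_\phi = \lambda_\phi^{(F)}/(1-I_\lambda)$ and taking $m_\phi \to 0$ reproduces the linear law (\ref{FixedPoint}).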
Furthermore, the contributions from the fluctuations of the radial mode in eqs. (\ref{LBU}) and (\ref{LBUB}) are not treated very accurately. First, the $\phi_1'^2\phi_2^2$ vertex contains in principle a contribution $\propto \nu_\phi\rho_0$ ($\nu_\phi$ the coefficient of the contribution $\propto (\rho - \rho_0)^3$) which shifts $\lambda_\phi^{(F)} \to \lambda_\phi^{(F)} + 2\nu_\phi^{(F)} \rho_0$ and is neglected here. Second, the structure of the inverse propagator of the radial mode is actually not simply $\bar{A}_\phi q^2$ with constant $\bar{A}_\phi$. Indeed, the effective quartic coupling $\lambda_\phi$ only vanishes for zero external momentum, with a typical momentum dependence $\lambda_\phi\propto |q|$. For a definition of a mass term at $q=0$ this is consistent with $\lambda_\phi\rho_0\to 0$. However, this effect will then become visible as an infrared divergence of the gradient coefficient for the radial mode, $\bar{A}_{\phi, r} \propto \rho_0 |q|^{-1}$ (which differs from $\bar{A}_\phi$ for the Goldstone mode). In consequence, one has $\bar{P}_\phi(q\to 0) \propto |q|$ and the ``radial mode contribution'' to the Schwinger-Dyson equation is not infrared divergent. We note, however, that the radial mode contribution is subleading as compared to the Goldstone mode contribution such that our approximation catches the dominant features for the behavior of $\lambda_\phi$. Finally, in the superfluid phase the potential (\ref{TruncPotSSB2}) also contains a cubic term $\propto \phi_1'\phi_2^2 \propto \lambda_\phi \rho_0^{1/2}$. This term contributes to the Schwinger-Dyson equation for $m_\phi^{(B)\, 2}$. The coefficient of this contribution $\propto \lambda_\phi^{(F)} \lambda_\phi\rho_0$ vanishes for $\lambda_\phi=0$. Nevertheless, for a momentum dependent $\lambda_\phi$ one has to take it into account, as well as similar corrections to the Schwinger-Dyson equation for $\lambda_\phi$ which involve quintic couplings. \section{Numerical Procedures} \label{app:numerics} In this section we give a short summary of which equations are actually used for the numerical solutions. All quantities are given in dimensionless renormalized units. We first give a complete list relating dimensionful, dimensionless and dimensionless renormalized parameters, couplings and fields.\\ (i) Relations between dimensionful and dimensionless parameters (the Fermi energy is given by $\epsilon_F = k_F^2/2M$) \begin{eqnarray} \tilde{q} &=& q/k_F, \quad \tilde{T} = T/\epsilon_F, \quad \tilde{\nu}=\bar{\nu}/\epsilon_F,\quad c = \bar{a} k_F\nonumber,\\\nonumber \tilde{\sigma} &=& \sigma/\epsilon_F,\quad \tilde{h}_\phi = 2M\bar{h}_\phi/k_F^{1/2}, \\\nonumber \tilde{A}_\phi &=& 2M \bar{A}_\phi, \quad \tilde{m}_\phi^2 = \bar{m}_\phi^2/\epsilon_F, \quad \tilde{\lambda}_\phi = 2M k_F \bar{\lambda}_\phi\\\nonumber \tilde{\psi} &=&k_F^{-3/2}\psi,\quad \tilde{\phi} = k_F^{-3/2}\bar{\phi}, \\ \tilde{\rho} &=&\tilde{\phi}^*\tilde{\phi}= k_F^{-3}\bar{\rho}, \quad \tilde{r}= \tilde{h}_\phi^2\tilde{\rho} = r/\epsilon_F^2. \end{eqnarray} (ii) Relations between dimensionless and dimensionless renormalized parameters \begin{eqnarray} A_\phi &=& \tilde{A}_\phi/Z_\phi , \quad m_\phi^2 = \tilde{m}_\phi^2/Z_\phi,\quad \nu = \tilde{\nu}/Z_\phi,\nonumber\\ \lambda_\phi &=& \tilde{\lambda}_\phi/Z_\phi^2,\quad h_\phi = \tilde{h}_\phi /Z_\phi^{1/2}, \quad \rho = Z_\phi \tilde{\rho}. \end{eqnarray} All other parameters, couplings and fields are invariant under a rescaling with $Z_\phi$.
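For bookkeeping, the two conversion lists above can be collected in a small helper; the following Python sketch encodes only the relations stated in lists (i) and (ii) (with $\tilde{q}=q/k_F$ taken from list (i)), while all numerical inputs are up to the user.

\begin{verbatim}
import numpy as np

# Hedged helper collecting the conversion lists (i) and (ii). Only the
# relations themselves are taken from the text; inputs are illustrative.
class CrossoverUnits:
    def __init__(self, kF, M, Zphi):
        self.kF, self.M, self.Zphi = kF, M, Zphi
        self.epsF = kF**2 / (2.0 * M)          # Fermi energy

    # (i) dimensionful -> dimensionless
    def tilde(self, q, T, sigma, h_bar, A_bar, m2_bar, lam_bar):
        return dict(q=q / self.kF, T=T / self.epsF, sigma=sigma / self.epsF,
                    h=2 * self.M * h_bar / np.sqrt(self.kF),
                    A=2 * self.M * A_bar, m2=m2_bar / self.epsF,
                    lam=2 * self.M * self.kF * lam_bar)

    # (ii) dimensionless -> dimensionless renormalized
    def renorm(self, t):
        Z = self.Zphi
        return dict(A=t["A"] / Z, m2=t["m2"] / Z,
                    h=t["h"] / np.sqrt(Z), lam=t["lam"] / Z**2)
\end{verbatim}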
In particular, note the invariance of the concentration $c$ and the dimensionless superfluid gap parameter $\tilde{r}$. In our approximation, we neglect details of the renormalization coefficients for the Matsubara frequencies and use \begin{eqnarray} Z_{\phi,\tau}/Z_\phi = 1. \end{eqnarray} The input parameters are the detuning $\tilde{\nu}$, the temperature $\tilde{T}$ and the Yukawa coupling $\tilde{h}_\phi$, with $c$ following from eq. (\ref{NuMuC}). Alternatively, we could choose $c^{-1},\tilde{h}_\phi,\tilde{T}$. We have to solve the field equations for the effective chemical potential $\tilde{\sigma}$ (density equation) and the field expectation value $\phi$. The latter has the general form \begin{eqnarray} \frac{\partial u}{\partial\rho}(\rho_0) \cdot \phi_0 = 0\,. \end{eqnarray} In the superfluid phase it determines the expectation value $\phi_0$. In the normal or symmetric phase the field expectation value vanishes, $\phi_0=0$. The field equation for $\phi$ is now replaced by the gap equation for the boson mass term $m_\phi^2= \partial u(0)/\partial\rho$. The two equations provide a closed system, whose solution for $\tilde{\sigma}$ and $\rho_0$ or $m_\phi^2$ determines all observables of the crossover problem. In the superfluid or symmetry-broken phase we have $\phi_0 \neq 0$ such that $ \partial u(\rho_0)/\partial\rho =0$ must hold. The density equation and the latter condition are then solved for $\tilde{\sigma}$ and $\tilde{r} = \tilde{h}_\phi^2\tilde{\phi}^*\tilde{\phi}$. However, the full four-boson coupling $\lambda_\phi$ enters both the density equation and the gap equation. Therefore, an additional gap equation for $\lambda_\phi$ is needed, cf. sect. \ref{sec:beyond} and app. \ref{app:SDE}. Hence the solution is given by $\tilde{\sigma}, \tilde{r}, \lambda_\phi$. In the ``zero momentum approximation'' we find $\lambda_\phi=0$. All other quantities of interest can then be reconstructed from the solution. For example, we can obtain the bare noncondensed molecule fraction and the condensate fraction by \begin{eqnarray}\label{barOm} \bar{\Omega}_M = 6\pi^2 k_F^{-3} \bar{n}_M,\quad \bar{\Omega}_C = 6\pi^2 \frac{\tilde{r}}{\tilde{h}_\phi^2} \end{eqnarray} where $\bar{n}_M$ is given by eqs. (\ref{SymmDens}). The dressed noncondensed molecule fraction and condensate fraction are then given by \begin{eqnarray} \Omega_M = Z_\phi \bar{\Omega}_M ,\quad \Omega_C = Z_\phi \bar{\Omega}_C. \end{eqnarray} Let us now give the explicit formulae used in the different regions of the phase diagram.\\ (i) Normal phase.\\ The full boson mass is determined by the condition \begin{eqnarray}\label{Cross1SYM} m_\phi^2 = m_\phi^{(F)\, 2} + \frac{1}{3\pi^2}\lambda_\phi^{(F)}\Omega_M. \end{eqnarray} Here $\Omega_M$ depends on the full boson mass term $m_\phi^2$, and $m_\phi^{(F)\, 2},\lambda_\phi^{(F)}$ (eqs. (\ref{FermSYMMass}, \ref{147})) have to be evaluated at $\tilde{r}=0$. The field equation for $\tilde{\sigma}$ is equivalent to the condition \begin{eqnarray}\label{DensSYM} 1 = \Omega_F + \Omega_M \end{eqnarray} which uses the dressed density fractions. Here \begin{eqnarray} \Omega_F &=& 6\pi^2\int \frac{d^3\tilde{q}}{(2\pi)^3}\big(\mathrm{e}^{ 2\gamma} + 1\big)^{-1},\label{OmegF}\\ \Omega_M &=& 6\pi^2\int\frac{d^3\tilde{q}}{(2\pi)^3} \Big(\mathrm{e}^{(A_\phi \tilde{q}^2 + m_\phi^2)/\tilde{T}} - 1\Big)^{-1}\nonumber\\ &=& \frac{3\Gamma(3/2)}{2}\Big(\frac{\tilde{T}}{A_\phi}\Big)^{3/2}\mathrm{Li}_{3/2} \big(e^{-m_\phi^2/\tilde{T}}\big)\label{OmegM}.
\end{eqnarray} (ii) Superfluid phase.\\ Now $\tilde{r}$ and $\tilde{\sigma}$ are determined by the equations \begin{eqnarray}\label{Cross1SSB} m_\phi^{(F)\, 2} + \frac{1}{3\pi^2}\lambda_\phi^{(F)} \Omega_M &=&0,\\ \Omega_{F,0} + \Omega_M + \bar{\Omega}_C &=& 1. \end{eqnarray} These equations are still coupled to a gap equation for $\lambda_\phi$ (cf. app. \ref{app:SDE} for details). Here $\Omega_M$ is given by eq. (\ref{SymmDens})($\Omega_M = 6\pi^2k_F^{-3}n_M$), $\bar{\Omega}_C$ by eq. (\ref{barOm}) and \begin{eqnarray} \Omega_{F,0} = -3\pi^2\int \frac{d^3\tilde{q}}{(2\pi)^3}\big(\frac{\gamma}{\gamma_\phi}\tanh\gamma_\phi -1\big). \end{eqnarray} (iii) Phase transition.\\ At the critical line $T=T_c$ we have $m_\phi^2=\phi_0 =0$ and we solve \begin{eqnarray} m_\phi^{(F)\, 2} + \frac{1}{3\pi^2}\lambda_\phi^{(F)}\Omega_M =0,\\ \Omega_F + \Omega_M = 1 \end{eqnarray} at $\tilde{r} =0$ for $\tilde{\sigma}$ and $T_c$. Here $\Omega_M$ is evaluated at $m_\phi^2 = 0$ and reads \begin{eqnarray} \Omega_M = \frac{3\Gamma(3/2)\zeta(3/2)}{2}\Big(\frac{\tilde{T}}{A_\phi}\Big)^{3/2}\label{OmegMPT}. \end{eqnarray} \end{appendix} \bibliographystyle{apsrev}
\section{Introduction} Dunes are naturally occurring, beautifully shaped sand deposits. Since the middle of the previous century, they have attracted the attention of scientists who have been seeking to model them and understand the processes leading to their formation. From the point of view of the physicist, sand dunes constitute a variable boundary problem: The air flow is determined by the shape of the dune and in turn influences the dune shape by transporting sand grains. Therefore the air flow over dunes is of great importance for understanding dune formation and evolution. Consequently, this topic has aroused a great deal of interest since the days of Bagnold~\cite{Bagnold41,Bagnold51} and led to a significant number of publications \cite{Benjamin59,Sutherland67,Brown79,Haff83,McLean86,Rubin87,Wood95,NelsonSmith89,Weng91,Ayotte04}. Since the start of scientific interest in dunes, there has been some work on the topic of flow separation in the lee of dunes, both theoretical~\cite{NelsonSmith89,Parsons04} and experimental (e.\ g.\ \cite{Engel81,SweetKocurek90,Rasmussen96}). However, due to the difficult nature of the problem, these papers have only tackled part of the problem. In several publications, transverse dunes have been modelled as triangular structures~\cite{Engel81,Parsons04,Parsons04a}. Field measurements of air flow over dunes, on the other hand, tend to lack measurements of the dune profile~\cite{SweetKocurek90,FrankKocurek96}. A recent field measurement~\cite{Parteli04} suggests that the shape of transverse dunes has significant influence on the length of the recirculation region. Since the sand transport in the recirculation region in the lee of a dune is negligible, the foot of the following dune shape is located at or downwind of the flow reattachment point (if one assumes the dune shapes to be stable). Therefore the distance of closely spaced dunes is a limiting measure of the length of the recirculation region. In reference~\cite{Parteli04} the separation length after different dunes was determined in this way. In this paper we will present results for widely spaced or isolated transverse dunes. This is to some extent an idealisation. However, we think it is a useful idealisation: We want to concentrate on the effect of the dune shape of a single dune, and taking into account the presence and shape of neighbouring dunes would introduce additional parameters. In Section~\ref{sec:close} we discuss the effect of considering transverse dunes which are part of a dune field. This text is organised as follows: In the following Section~\ref{sec:method} the models and parameters of our CFD simulations are described. The geometry of the dune shapes we modelled is also presented there. Section~\ref{sec:seplen} presents our results for the length of flow separation and the phenomenological formula we found. In Section~\ref{sec:sepline} the shape of the separating streamline extracted from the simulation is modelled mathematically. In Section~\ref{sec:close} we briefly discuss the situation of a transverse dune in a field of closely spaced dunes. Section~\ref{sec:discuss} compares our results with previous work. The last section presents a summary. \begin{figure} \hbox to \textwidth{ \hfill \input{lencomp4.ptex} \hfill \input{lencomp7.ptex} \hfill} \caption{Realistic profiles of transverse dunes can be described approximately by two circle segments. 
Data from \protect{\cite{Parteli04}}.} \label{fig:lencomp} \end{figure} \section{Method} \label{sec:method} Our simulations were performed with the computational fluid dynamics software FLUENT \cite{FLUENT}. This software simulates the Reynolds-averaged Navier-Stokes equations complemented by a turbulence model. The simulations were two-dimensional. This implies translationally invariant dune shapes, i.\ e.\ perfectly straight transverse dunes, and a wind direction perpendicular to the dunes. The simulation grid was square. We refined it near the ground to model the near-wall flow as accurately as possible using wall functions. Second-order discretisation schemes were used for all quantities for which this was possible. Besides the Reynolds-averaged Navier-Stokes equations, an additional set of equations called the turbulence closure is required to determine a solution. We use the $k$-$\epsilon$ model with renormalisation group (RNG) extensions. This variant of the $k$-$\epsilon$ model was found to yield the most accurate results in flow separation situations \cite{Lien94,Bradbrook98,WalkerNickling02}. The cross sections of the dune shapes were constructed from two circle segments, a concave one modelling the foot of the dune and a convex one for the crest. This shape was chosen for reasons of convenience --- the program we used to create the geometry supports circle segments. But as can be seen from Figure~\ref{fig:lencomp}, our shape provides a reasonable fit for real dunes. The figure displays dunes number 4 and 7 from Reference~\cite{Parteli04}. Given the great variety of shapes found in nature, the measured shape may not be universal. But our geometric construction reflects the fact that the dune profile is curved upward at its foot and downward at its crest and therefore constitutes an improvement over the triangular shapes used previously. To obtain different shapes, the position of the slip face was varied from the start to the end of the convex part, see Figure~\ref{fig:shape}. Note that this has the consequence that not all the dunes have the same height. The heights and other geometrical data are given in Table~\ref{tab:geometry}. The simulation results for the length of flow separation, our main quantity of interest, were found to depend slightly on the spacing of the simulation grid. To account for this small grid dependence, we performed the simulation of the flow over each dune with three different grid sizes and extrapolated the separation lengths to the continuum. The average grid spacings were 10, 7 and 5~cm, respectively. \begin{figure} \hbox to \textwidth{\hss\input{shape.pstex_t}\hss} \caption{The seven different dune shapes investigated. The scale displays the brink position $d$. The crest height of the dunes with positive brink position is 3 metres; the height of those with negative brink position equals the brink height, which becomes smaller the more negative $d$ is.} \label{fig:shape} \end{figure} \begin{figure} \input{geometry.pstex_t} \caption{The geometric variables characterising the dune shapes.
The brink angle is positive for dunes with a sharp brink and negative for round dunes such as the one shown in this figure.} \label{fig:geometry} \end{figure} \begin{table} {\offinterlineskip \halign{\vrule height2.5ex depth1ex width 0.7pt \hskip 5pt plus 1fil \relax#\qquad\qquad&\vrule\qquad \hskip 5pt plus 1fil \relax#&#\hskip 5pt plus 1fil \relax &\vrule\qquad\quad \hskip 5pt plus 1fil \relax#&#\hskip 5pt plus 1fil \relax &\vrule\qquad\quad\hskip 5pt plus 1fil \relax#&#\hskip 5pt plus 1fil \relax \vrule width 0.7pt\cr \noalign{\hrule height 0.7pt} \omit\vrule height3ex depth 1.5ex width 0.7pt\hskip 5pt plus 1fil \relax Brink position $d$ [m] \hskip 5pt plus 1fil \relax\ &\omit\span\omit\vrule\hskip 5pt plus 1fil \relax Height $H$ [m] \hskip 5pt plus 1fil \relax &\omit\span\omit\vrule\hskip 5pt plus 1fil \relax Brink height $\Delta$ [m] \hskip 5pt plus 1fil \relax &\omit\span\omit\vrule\hskip 5pt plus 1fil \relax Brink angle $\alpha$ [$^\circ$] \hskip 5pt plus 1fil \relax \vrule width 0.7pt \cr \noalign{\hrule height 0.7pt} $-$15 & 1&.5 & 1&.5 & 11&.4 \cr \noalign{\hrule} $-$10 & 2&.337 & 2&.337 & 7&.6 \cr \noalign{\hrule} $-$5 & 2&.835 & 2&.835 & 3&.78 \cr \noalign{\hrule} 0 & 3& & 3& & 0& \cr \noalign{\hrule} 5 & 3& & 2&.835 & $-$3&.78 \cr \noalign{\hrule} 10 & 3& & 2&.337 & $-$7&.6 \cr \noalign{\hrule} 15 & 3& & 1&.5 & $-$11&.4 \cr \noalign{\hrule height 0.7pt} }} \vskip 3pt \caption{Geometric variables of the simulated dunes. See Figure~\protect{\ref{fig:geometry}} for a definition of the geometric variables. The brink angle is defined to be positive if the upwind slope is positive at the brink.} \label{tab:geometry} \end{table} The region around the dune in which the flow was simulated was chosen large enough so that the boundaries did not influence the results. This was verified by performing simulations with larger simulation areas for some dune shapes and comparing the results. The simulation region extends 45\,m to the left and 70\,m to the right from brink position 0 (see Figure~\ref{fig:simregion}). The height of the simulated region was chosen to be 30\,m for all dunes except the one with the most negative brink position, which had the smallest height; there 20\,m was found to be sufficient. \begin{figure}[h] \hbox to \textwidth{\hss\input{simregion.pstex_t}\hss} \caption{The simulation region around the dune.} \label{fig:simregion} \end{figure} The velocity profile at the influx boundary of the simulation region was set to the logarithmic profile which forms in flow over a plane in neutral atmospheric conditions: \dmath1{ v(z) &=& \frac{u_*}{\kappa} \ln \frac z{z_0}\,, &eq:logprof} where $\kappa\approx0.4$ is the von K\'arm\'an constant. The shear velocity was chosen to be $u_*=0.4$ m/s. The size of the roughness elements on the ground, i.\ e.\ the sand grains, was chosen as $250\,\mu$m. The roughness length is 1/30 of the grain size, $z_0\approx 8.33\,\mu$m \cite{Bagnold41,Wright97}. \section{The flow separation length} \label{sec:seplen} The length of flow separation, our quantity of interest, was measured from the slip face brink, where the flow separates, to the flow reattachment point (see Figure~\ref{fig:geometry}), defined to be the position at which the velocity near the ground changes direction from against the flow to in flow direction. The separation lengths determined from simulations with different grid spacings were extrapolated to the continuum with the standard linear regression formulas.
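As a practical illustration of this continuum extrapolation, the following Python sketch fits the separation length linearly in the grid spacing and reads off the intercept at zero spacing; the sample lengths are illustrative placeholders, not our simulation output.

\begin{verbatim}
import numpy as np

# Hedged sketch of the continuum extrapolation: fit ell(h) linearly in the
# grid spacing h and take the intercept at h = 0. The ell values below are
# illustrative placeholders, not simulation results.
h   = np.array([0.10, 0.07, 0.05])      # average grid spacings [m]
ell = np.array([18.9, 19.2, 19.3])      # separation lengths [m] (placeholders)

slope, intercept = np.polyfit(h, ell, 1)
print("continuum estimate:", intercept, "m")
\end{verbatim}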
To non-dimensionalise the separation length $\ell$, it was divided by the height of the slip face. Table~\ref{tab:seplen} shows the results for all simulated dunes. \begin{table} \hbox to \textwidth{\hss\vbox{\offinterlineskip \halign{\vrule height2.5ex depth1ex width 0.7pt \quad \hskip 5pt plus 1fil \relax#&#\hskip 5pt plus 1fil \relax &\vrule\quad \hskip 5pt plus 1fil \relax#&#\hskip 5pt plus 1fil \relax &\vrule\quad \hskip 5pt plus 1fil \relax#&#\ \ \hskip 5pt plus 1fil \relax \vrule width 0.7pt\cr \noalign{\hrule height 0.7pt} \omit\span\omit\vrule height2.5ex depth 1ex width 0.7pt\hskip 5pt plus 1fil \relax Brink angle $\alpha$ [$^\circ$] \hskip 5pt plus 1fil \relax\ &\omit\span\omit\vrule\hskip 5pt plus 1fil \relax Separation length $\ell$ [m] \hskip 5pt plus 1fil \relax &\omit\span\omit\vrule\hskip 5pt plus 1fil \relax $\ell/\Delta$ \hskip 5pt plus 1fil \relax \vrule width 0.7pt \cr \noalign{\hrule height 0.7pt} 11&.4 & 13&.22\,$\pm$\, 0.5 & 8&.81\,$\pm$\, 0.33 \cr \noalign{\hrule} 7&.6 & 19&.16\,$\pm$\, 0.5 & 8&.20\,$\pm$\, 0.21 \cr \noalign{\hrule} 3&.78 & 20&.78\,$\pm$\, 0.5 & 7&.33\,$\pm$\, 0.18 \cr \noalign{\hrule} 0& & 19&.47\,$\pm$\, 0.5 & 6&.49\,$\pm$\, 0.17 \cr \noalign{\hrule} $-$3&.78 & 15&.90\,$\pm$\, 0.53 & 5&.61\,$\pm$\, 0.19 \cr \noalign{\hrule} $-$7&.6 & 11&.20\,$\pm$\, 0.73 & 4&.79\,$\pm$\, 0.31 \cr \noalign{\hrule} $-$11&.4 & 5&.91\,$\pm$\, 0.5 & 3&.94\,$\pm$\, 0.33 \cr \noalign{\hrule height 0.7pt} }}\hss} \vskip 3pt \caption{Results for the flow separation length. The error is composed of the discretisation error of the determination of the flow reattachment point and a systematic error (see text).} \label{tab:seplen} \end{table} The error in the separation length was calculated as follows: The determination of the flow reattachment position for one particular simulation was accurate to one grid spacing. The corresponding error in the continuum limit results from the linear regression formulas. This error does not account for biases which may be inherent in the turbulence model, the kind of grid and the parameter settings used. We estimate this systematic error in the absolute separation length to be 0.5\,m. The errors given in Table~\ref{tab:seplen} are the result of adding these errors quadratically. The systematic error dominates in most cases. Our main interest here is in the dependence of the separation length on the dune shape. We find that $\ell/\Delta$ is larger for dunes with a sharp brink than for rounded dunes. It depends linearly on the brink position~$d$, or rather on the angle of the dune shape at the brink,~$\alpha$. As can be seen in Figure~\ref{fig:brinkangle}, the linear relation holds for the whole range of brink angles investigated here. Fitting the relation \begin{equation} \ell(\alpha)/\Delta(\alpha)= A\cdot \alpha + B\,, \label{eq:brinkangle} \end{equation} we obtain $A=0.22/^\circ$ and $B=6.473$. \begin{figure} \input{brinkangle.ptex} \caption{Dependence of the non-dimensionalised separation length on the angle~$\alpha$. The relationship is remarkably linear. Note that the rightmost value of $\alpha$ belongs to the dune with the sharpest brink, i.\ e.\ the shortest dune.} \label{fig:brinkangle} \end{figure} \begin{figure} \input{brinkpos.ptex} \caption{Dependence of the absolute flow separation length on the brink position.
The expression derived from the linear angle dependence displayed in Figure~\protect{\ref{fig:brinkangle}} provides a much better fit than a parabola. Note that only the dunes with $d\geq 0$ have the same height, while the others become smaller with decreasing $d$ (see Figure~\protect{\ref{fig:shape}}).} \label{fig:brinkpos} \end{figure} To give the reader an idea of the actual separation lengths we obtained, we also give our results for the absolute separation length. The length of flow separation decreases both for large and for very negative brink positions. As one can see from Figure~\ref{fig:brinkpos}, the maximum does not coincide with $d=0$ but lies to the left of that value. We compute the absolute separation lengths from Equation (\ref{eq:brinkangle}) by using the geometrical relation between the brink angle $\alpha$ and the brink position $d$. This relation can be obtained from the geometry of our dune profiles described above. \begin{eqnarray} \label{eq:anglefitpos} \ell(\alpha(d))&=& (A\cdot\alpha(d) + B)\;\Delta(\alpha(d)) \nonumber\\ &=& \left(-A \,\arcsin \frac dR + B\right)\cdot \left(H_{\hbox{\scriptsize max}} - d\,\tan \left( \frac12 \,\arcsin\frac dR \right) \right) \end{eqnarray} This equation contains the crest height of the round dunes, $H_{\hbox{\scriptsize max}}$, and the curvature radius of the dune shape at the crest, $R$. Both are quantities related to the set of dunes we study here, not single dunes, and therefore stay constant during our investigation. Out of curiosity, we can also try a fit different from the one in Equation~\ref{eq:anglefitpos} and compare the quality of the two. As the data have a maximum and everywhere negative curvature, the most obvious candidate for a fit is a polynomial of second order, that is a parabola. It is plotted in Figure~\ref{fig:brinkpos} but does not fit particularly well, even though it has three parameters compared to two for our fit. The angle-based fit (\ref{eq:anglefitpos}) has a mean deviation of $1.65$ compared to $5.1$ for the parabola fit. \section{The separating streamline} \label{sec:sepline} In order to model the formation and evolution of sand dunes, it is necessary to calculate the ground shear stress on which the flux of transported sand crucially depends. While analytic derivations of the shear stress on landforms exist \cite{Hunt88,Weng91}, they apply to round hills from which air flow does not separate. The sand flux over dunes has been computed without taking into account flow separation by Weng et al.~\cite{Weng91}. One can go one step further and compute the shear stress over a shape which for the most part follows the dune shape, but coincides with the separating streamline in the region of flow separation \cite{Sauermannphd,KroySauer03}. Since the shape of stationary dunes depends sensitively on the shear stress, it is of great importance to know the shape of the separating streamline. \begin{figure} \hbox to \textwidth{\hss\hskip -5cm\input{ellipse.pstex_t}\hss} \caption{The parametrisation of the ellipse describing the separating streamline. In this example, $d>0$ and both $x_0$ and $y_0<0$. $C$ is the centre of the ellipse, and $O$ is the origin, at ground level and at the horizontal position of the dune crest.} \label{fig:ellipse} \end{figure} From our CFD simulation, we extracted the streamline which just touches the brink of the dune and which therefore represents a very good approximation of the separating streamline. In each case, we used the simulation with the smallest grid spacing, 5\,cm.
The simulation streamline does not separate directly at the brink, but a small distance down the slip face. But since this distance amounted to two grid spacings in all simulations, independent of which grid spacing was chosen, this is a numerical effect due to the difficult numerics at the flow separation point. Therefore we aim to model only the part of the separating streamline which curves downwards, not the dip near the separation point. \begin{figure} \input{allbubfit_-15.ptex} \input{allbubfit_-10.ptex} \linebreak \input{allbubfit_-5.ptex} \input{allbubfit_0.ptex} \linebreak \input{allbubfit_5.ptex} \input{allbubfit_10.ptex} \linebreak \input{allbubfit_15.ptex} \input{allbubfit_key.ptex} \caption{Fit for the separating streamlines. All coordinates are rescaled using the maximal height of the dunes, 3\,m.} \label{fig:allbubfit} \end{figure} We found that the shape of the separating streamline is well described by an ellipse. An ellipse is determined by four parameters, the coordinates of the centre and the two semiaxes (see Figure~\ref{fig:ellipse}): \dmath2{ \frac{(y-y_0)^2}{a^2} + \frac{(x-x_0)^2}{b^2} &=& 1 &eq:ellipse} Both the brink and the reattachment point have to lie on this ellipse, so there remain two free parameters which have to be fitted. We choose to fit $x_0$ and $b$ and calculate $y_0$ and $a$ from them using the position of the brink and the reattachment point. The brink is located at the point $(d, \Delta)$, the reattachment point is $(d+\ell, 0)$. Putting these two points into the ellipse equation (\ref{eq:ellipse}) and performing some algebra, we obtain a biquadratic equation for $a$: \dmath1{ a^4 + \frac{4\,\Delta^2\,b^2}{(2\,\delta+\ell)^2\,\ell^2}\, \bigg[\,(\delta+\ell)^2-b^2 &-& \frac12\,(2\,\delta+\ell)\,\ell\,\bigg]\;a^2 + \frac{\Delta^4\,b^4}{(2\,\delta+\ell)^2\,\ell^2} = 0\,, &eq:a4\cr \hbox{where}\quad\delta &=& d - x_0\,. } The biquadratic equation (\ref{eq:a4}) can be solved with the standard formula. Choosing the solution for which the ellipse intersects the ground with negative slope, we obtain: \dmath1{ a^2 &=& - \frac{2\,\Delta^2\,b^2}{(2\,\delta+\ell)^2\,\ell^2}\,\bigg[\ldots\bigg] +\sqrt{\left(\frac{2\,\Delta^2\,b^2}{(2\,\delta+\ell)^2\,\ell^2}\, \bigg[\ldots\bigg]\right)^2 - \frac{\Delta^4\,b^4}{(2\,\delta+\ell)^2\,\ell^2}} \;,&eq:a2} where the expression in square brackets is the same as in Equation~\ref{eq:a4}. Since $a$ is positive by definition, it is thereby uniquely determined. $y_0$ can then be computed from $a$ and the constraints, giving: \dmath1{ y_0 &=& \frac{a^2}{2\,\Delta}\,\left(\frac{\Delta^2}{a^2} - \frac{(2\,\delta+\ell)\,\ell}{b^2}\right)\,. &eq:y0} It remains to determine the unknown variables in Equation~\ref{eq:a4}. Besides the measures given by the geometry of the dune, the equation contains $\ell$, $b$ and $x_0$. $\ell$ is given by Equation~\ref{eq:brinkangle}. The other two quantities have to be fitted. We obtain the best overall fit with the following expressions: \dmath{1}{ x_0 &=& \left\{\;\vcenter{\halign{ # \hfil & # \hfil \cr 0 & \quad $d\geq 0$\,, \cr\noalign{\vskip -2mm} $-7\;(H_{\hbox{\scriptsize max}}-\Delta)$ & \quad $d<0$ \cr }}\right. &eq:x0\cr b &=& (d + \ell - x_0) + 0.04\cdot H_{\hbox{\scriptsize max}} &eq:b} It is clear that the difference between the $x$ coordinates of the ellipse's centre and the reattachment point, $d + \ell - x_0$, is a lower bound for the horizontal semiaxis. 
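To make the construction concrete, the following Python sketch assembles the ellipse parameters from Equations~\ref{eq:brinkangle} and \ref{eq:a2} to \ref{eq:b} for the dune with $d=0$ and checks that the brink and the reattachment point lie on the resulting ellipse; the input values are taken from Tables~\ref{tab:geometry} and \ref{tab:seplen}, and this is only an illustrative evaluation of the formulas above.

\begin{verbatim}
import numpy as np

# Hedged sketch assembling the separating-streamline ellipse from
# eqs. (brinkangle), (x0), (b), (a2) and (y0). Inputs: the d = 0 dune.
H_max = 3.0
d, Delta, alpha = 0.0, 3.0, 0.0     # brink position [m], height [m], angle [deg]

ell = (0.22 * alpha + 6.473) * Delta                  # eq. (brinkangle)
x0  = 0.0 if d >= 0 else -7.0 * (H_max - Delta)       # eq. (x0)
b   = (d + ell - x0) + 0.04 * H_max                   # eq. (b)
delta = d - x0
pref  = 2.0 * Delta**2 * b**2 / ((2*delta + ell)**2 * ell**2)
brack = (delta + ell)**2 - b**2 - 0.5 * (2*delta + ell) * ell
a2 = -pref * brack + np.sqrt((pref * brack)**2
                             - Delta**4 * b**4 / ((2*delta + ell)**2 * ell**2))
y0 = a2 / (2.0 * Delta) * (Delta**2 / a2
                           - (2*delta + ell) * ell / b**2)   # eq. (y0)

# sanity check: brink and reattachment point lie on the ellipse
for (xx, yy) in [(d, Delta), (d + ell, 0.0)]:
    print((yy - y0)**2 / a2 + (xx - x0)**2 / b**2)    # both ~ 1
\end{verbatim}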
The additional term in Eq.~\ref{eq:b} is required for the ellipse to intersect the line $y=0$ at an angle rather than vertically. It does not depend on the brink position. The fit together with the data from both sets of simulations is displayed in Figure~\ref{fig:allbubfit}. It can be seen that the fit is very accurate. The upward curvature of the simulation streamlines close to the brink is associated with the delay of flow separation by two grid spacings. This is a numerical effect which we do not model. \section{Closely spaced dunes} \label{sec:close} In the previous sections we have considered single transverse dunes. This was done to be able to make a statement about the shape dependence of flow separation without at the same time dealing with complications due to potential neighbouring dunes. In reality, this corresponds to the case of isolated dunes, which have a distance to their neighbours of around three times their length or more. To get an idea of the influence of close neighbouring dunes, we performed a simulation of closely spaced dunes. The shape of the dunes was the same as for $d=0$ in Figure~\ref{fig:shape}. The dunes were set next to each other so that the foot of the upwind slope of each following dune coincided with the slip face foot of the previous one, as shown in Figure \ref{fig:multiple}. The simulation parameters were the same as previously. This simulation was only done with the grid spacing 0.1\,m. It should be understood that our geometrical construction leads to the upwind side of a dune rising immediately at the foot of the previous dune. Since the sand cannot be moved within the separation region, this means that this profile is not stable. However, as we can know the length of flow separation only after our simulation, we cannot know in advance what a stable profile would look like. \begin{figure} \hbox to \textwidth{\hfill \includegraphics[height=14cm,angle=-90]{simregion10.eps}\hfill} \caption{The simulation region in the simulation of multiple dunes.} \label{fig:multiple} \end{figure} \begin{table} \halign{ \vrule height 2.5ex depth 1ex width 0.7pt \ # \hfil \vrule &\ \hfil # \hfil \vrule &\ \hfil # \hfil \vrule & \hfil\ # \hfil \vrule &\ \hfil # \hfil \vrule &\ \hfil # \hfil \vrule & \hfil\ # \hfil \vrule &\ \hfil # \hfil \vrule &\ \hfil # \hfil \vrule & \hfil\ # \hfil \vrule width 0.7pt\cr\noalign{\hrule height 0.7pt} Dune number & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \cr\noalign{\hrule} Separation length [m] & 17.77 & 15.78 & 15.41 & 15.19 & 15.00 & 14.90 & 14.80 & 14.71 & 14.71 \cr \noalign{\hrule height 0.7pt} } \caption{The separation lengths obtained from the simulation of closely spaced dunes. The errors are a statistical error of 0.05\,m in the determination of the reattachment point and a systematic error of 0.5\,m.} \label{tab:multisep} \end{table} \begin{figure} \hbox to \textwidth{\hfil\input{closebubble.ptex}\hfil} \caption{The ellipse fit also describes the separating streamline of closely spaced dunes. This figure shows the last of the dunes in Figure~\protect{\ref{fig:multiple}}. Both $h$ and $x$ are normalised by dividing by the crest height, 3 m.} \label{fig:closebubble} \end{figure} Table~\ref{tab:multisep} shows the separation lengths in the lee of the nine dunes in the simulation. One can see that the values converge towards the downwind end of the simulation area. Therefore we take the separation length of the last dune as the value for a dune in an extended dune field.
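As a simple consistency check of this convergence, one can fit an asymptotic approach to the values of Table~\ref{tab:multisep}; the exponential form used in the following Python sketch is purely a model assumption made for illustration.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit an exponential approach ell_n = ell_inf + c*exp(-n/n0)
# (a model assumption) to the values of Table (multisep).
n   = np.arange(1, 10)
ell = np.array([17.77, 15.78, 15.41, 15.19, 15.00,
                14.90, 14.80, 14.71, 14.71])

def model(n, ell_inf, c, n0):
    return ell_inf + c * np.exp(-n / n0)

(ell_inf, c, n0), _ = curve_fit(model, n, ell, p0=[14.7, 3.0, 1.0])
print(ell_inf)   # close to the value of the last dune used in the text
\end{verbatim}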
By comparison with Table~\ref{tab:seplen}, it is $\approx 25$\,\% smaller than for an isolated dune. We can now fit the separating streamline in the same way as for the isolated dunes. The formulas (\ref{eq:ellipse}) to (\ref{eq:b}) apply unchanged except for one modification: Since the foot of the downwind dune curves upwards, the separating streamline intersects the ground upwind of where it would for an isolated dune. We account for this by replacing the separation length $\ell$ by $\tilde\ell$, which is larger than the separation length. $\tilde\ell$ is the intercept of the separation bubble shape with the line $h=0$, while for closely spaced dunes the reattachment position at $x=\ell$ has a height $h>0$. The best fit is obtained for $\tilde\ell=16\,$m. It is shown in Figure~\ref{fig:closebubble}. The upward curvature of the simulated streamline close to the brink is especially pronounced because of the large grid spacing of this simulation. \section{Discussion} \label{sec:discuss} This section compares our results to previous work. A recent review of air flow over transverse dunes \cite{WalkerNickling02} cites values of 4--10 for $\ell/\Delta$. Our results also lie within that range (see Figure~\ref{fig:brinkangle}). Engel \cite{Engel81} finds values for the non-dimensionalised separation length between 4 and a little over 6, depending on the roughness and the aspect ratio of triangular dunes. In Reference~\cite{Parsons04} a wide range between 3 and 15 is given for the same quantity. Their values for an aspect ratio of 0.1, which applies to our dune with $\alpha=0$, are 5.67 and 8.13, depending on the height. This compares well with our value of 6.49. The remaining discrepancy can be explained by the different shape, in particular the fact that our dune shape for $\alpha=0$ has a horizontal tangent at the brink, whereas the dunes in Ref.~\cite{Parsons04} are triangular. The field measurements \cite{Parteli04} were performed in a closely spaced dune field. The dune profile was measured along a straight line in wind direction, perpendicular to the dunes. The authors find that the distance between the brink of each dune and the foot of the following one is typically four times the height or below. Under the assumption that the dune field is stationary, this distance is an upper limit of the separation length. We found a separation length of 4.9 times the height for closely spaced dunes with a horizontal tangent at the brink. Considering that only two of the six dunes measured in Ref.~\cite{Parteli04} had a positive slope at the brink and that the dunes with the shortest separation length were indeed very round, the agreement is not bad. Last but not least, our results are supported by a recent fit to experimental data \cite{Paarlberg05}. The authors obtain a non-dimensionalised separation length in the range from 4 to 7.5 for a brink angle ranging from $-10^\circ$ to $10^\circ$. This is similar to our results, and Figure 12 in \cite{Paarlberg05} strongly resembles our Figure~\ref{fig:brinkangle}. The authors present a polynomial fit for the separating streamline. Unfortunately, a re-parametrisation of either fit, which would be necessary for a quantitative comparison, is beyond the scope of this paper. The authors of \cite{Paarlberg05} find that the separating streamline intersects the ground at the reattachment point at an angle. This is contrary to what previous fits of the separating streamline have assumed, but is again in line with our findings.
\section{Summary and outlook} \label{sec:concl} We have investigated the air flow over transverse dunes of different shapes using the commercial CFD software FLUENT. The variation in shape was achieved by moving the position of the slip face of the dune to different places. We have determined the length of flow separation in the lee of these dunes. For each dune shape, six simulations were performed, with two absolute sizes of the dune and three different grid spacings, to be able to remove the remaining influences of the grid spacing. The maximal separation length does not occur for dune shapes with a horizontal tangent at the brink, but for shapes with a somewhat sharper brink. The separation length, non-dimensionalised through division by the slip face height, was found to depend linearly on the position of the slip face as represented in Equation~\ref{eq:brinkangle}. This linear law can be rewritten with the help of geometric properties of the dune to give the absolute separation length. The shape of the separating streamline, that is the boundary of the recirculation region, is well approximated by an ellipse. This ellipse is constrained by the requirement that the brink and the flow reattachment point lie on it. The horizontal position of its centre is at the crest for rounded dunes. For dunes with a sharp brink, it lies to the left of the crest of the rounded dunes, and its position is proportional to the difference in height between the round dunes and the sharp dune in question (see Equation~\ref{eq:x0}). The horizontal semiaxis of the ellipse has to be chosen so that its rightmost point lies to the right of the flow reattachment point by 0.04 times the height of the rounded dunes, independent of the brink position. Lastly, we have extended our investigation by simulating the flow over a field of ten closely spaced transverse dunes. Here we restricted ourselves to one dune shape with a horizontal tangent at the brink. The separation length reached an asymptotic value behind the ninth dune, which was 25\,\% less than the value for an isolated dune. There still remain many open questions concerning the air flow over dunes. The most obvious restriction of our results is that they were obtained for transverse dunes only. The three-dimensional shape of other dunes, for instance barchans, calls for a three-dimensional description of their recirculation region. Furthermore, one should investigate how accurate the concept of a separation bubble is: It has been assumed for the purpose of sand transport simulations \cite{KroySauer03} that the wind shear stress on a dune shape is the same as the shear stress over a shape composed of a dune and the recirculation region in its lee. While good results for dune shapes support this assumption, it should be verified from fluid dynamics. Finally, the influence of the dune size should be investigated. The flow over dunes is fully turbulent and therefore scale invariant. However, if the dune is scaled up while the ground roughness and the inflow velocity profile are kept invariant, the separation length can change. This was found for instance by Engel \cite{Engel81}. It bears investigation how the phenomenological laws and constants found in this work depend on the dune size. \section*{Acknowledgements} We thank Martin Winter, Jos\'e Soares de Andrade Jr.\ and Murilo Pereira de Almeida for helpful comments and discussions and for information on the FLUENT software.
We thank the Volkswagen Stiftung and the Max Planck Prize for funding much of our research in this field.
\section{Introduction} In the past, the self-organized microdomain structures of diblock copolymers have been the target of extensive studies \cite{Matsen Bates,Hamley book,Bates Fredrickson,Fredrickson}. In particular, the order-order transitions (OOTs) between the microdomain structures are one of the central issues of current experimental and theoretical studies. Among various microdomain structures, the bicontinuous double gyroid (G) structure has attracted great interest because of its complex structure (space group $Ia\bar{3}d$) \cite{Hajduk}. Although this G phase exists only in a narrow region of the phase diagram of a diblock copolymer, i.e. the region between the lamellar (L) phase and the hexagonally packed cylinder (C) phase \cite{Matsen Bates 96}, its complex domains are expected to have wide applicability in various technologies, for example, microporous systems, nano-reactors, and so on \cite{Hashimoto,Zhao,Chan}. Experimentally, the OOT from the G phase to the C phase is believed to be an epitaxial transition where the created cylindrical domains are commensurate with the original gyroid domains. However, the detailed microscopic process of this epitaxial transition has not been understood yet. Epitaxial relationships between the G and C structures were observed in several experiments. Upon a temperature change, Ran\c{c}on and Charvolin have observed that the \{10\} plane of the C is commensurate with the \{211\} plane of the G domains in a surfactant system \cite{Y. Rancon J. Charvolin}. In the present paper, we will use the simple notation C \{10\} $\rightarrow$ G \{211\} for such epitaxial relationships between planes. An external shear flow also accelerates the OOTs. Under a shear flow and a temperature change, Schulz et al. have found different epitaxial relationships C \{10\} $\rightarrow$ G \{220\} and C \{11\} $\rightarrow$ G \{211\} in a block copolymer mixture by small-angle neutron scattering (SANS) experiments \cite{M. F. Schulz F. S. Bates K. Almdal K. Mortensen}. Under similar experimental conditions using small-angle X-ray scattering (SAXS) diffraction techniques, F\"orster et al. have observed the epitaxial relationship C \{10\} $\rightarrow$ G \{211\}, similar to Ran\c{c}on and Charvolin's observation \cite{Forster}. The same epitaxial relationship was also observed by Vigild et al. in a block copolymer system using SANS \cite{M. E. Vigild}. A cyclic transition C $\rightarrow$ G $\rightarrow$ C in a block copolymer solution has been studied by Wang and Lodge, who supported the epitaxial relationship G \{211\} $\rightarrow$ C \{10\} \cite{C. -Y. Wang T. P. Lodge}. From the point of view of kinetics, a long-lived coexistence between the C phase and the G phase has been found in the C $\rightarrow$ G transition of a block copolymer system under a shear flow and a temperature change \cite{Floudas Ulrich Wiesner Chu}. Furthermore, a grain boundary between the C phase and the G phase has been observed by polarized optical microscopy in a quenched polymer solution \cite{T. Q. Chastek T. P. Lodge}. These observations suggest the existence of a stable boundary between the C phase and the G phase. On the theoretical side, mean field theories have been used to investigate microdomain structures of diblock copolymers.
Using the self-consistent field (SCF) technique, Helfand and Wasserman have evaluated the free energy and predicted the equilibrium domain sizes of the classical phases in the strong segregation regime such as the body centered cubic crystal of spherical domains (BCC), the C phase, and the lamellar (L) phase \cite{Helfand Wasserman 4,Helfand Wasserman 5, Helfand Wasserman 6}. On the other hand, the phase diagram of a diblock copolymer in the weak segregation regime was predicted by Leibler using the random phase approximation (RPA) \cite{Leibler}. Leibler's phase diagram is composed of classical phases and the disordered (D) phase depending on the values of the block ratio and the $\chi N$, i.e. the product of the Flory-Huggins interaction parameter $\chi$ and the total degree of polymerization of diblock copolymer $N$. The entire phase diagram including both the weak segregation regime and the strong segregation regime has been constructed by Matsen and Schick using the SCF technique in the reciprocal lattice space. Besides the classical phases, they predicted the complex G phase in the weak and intermediate segregation regime \cite{Matsen Bates,Matsen Schick}. This theoretical phase diagram was confirmed experimentally \cite{Khandpur et al.}. Despite the success of the mean field theories for the equilibrium phase behavior, the investigation of the dynamic properties has not been fully developed yet. There have been a few trials on the dynamics of OOTs and order-disorder transitions (ODTs) of the microdomain structures of diblock copolymers using the mean field approximation. A time dependent Ginzburg-Landau (TDGL) model was used to investigate the instability in the OOTs and ODTs such as the OOTs L $\rightarrow$ C, L $\rightarrow$ S, and C $\rightarrow$ S \cite{Qi Wang 1,Qi Wang 2,Qi Wang 3}. In these studies, the authors retained the most unstable modes in the Fourier amplitudes of the density fluctuations emerging in the vicinity of the critical point. A TDGL model described in terms of Fourier modes with two sets of wave vectors with different magnitudes has been used to study the transitions D $\rightarrow$ G, G $\rightarrow$ C, and so on \cite{Nonomura Ohta 1,Nonomura Ohta 2,Nonomura Ohta 3}. Although the TDGL theory is efficient in investigating large-scale systems, it is in principle applicable only to the weak segregation regime. On the other hand, the SCF theory can be used to study the phase transitions in weak, intermediate and strong segregation regimes. The quantitative accuracy of the SCF theory is another advantage compared to the TDGL theory. This is because the SCF theory takes the conformational entropy of the polymer chains into account precisely \cite{Helfand Wasserman 4,Hong,Fleer,kawakatsu book}. Using the SCF theory, Laradji et al. have investigated the epitaxial transitions such as L $\leftrightarrow$ C, C $\leftrightarrow$ S, and G $\rightarrow$ C taking the anisotropic fluctuations into account \cite{Laradji,MatsenComment}. Matsen has also studied the transitions C $\leftrightarrow$ S and C $\leftrightarrow$ G using the SCF theory and has proposed a nucleation and growth model of the epitaxial transitions \cite{Matsen Cylinder,Matsen Gyroid}. All of the theoretical studies mentioned above rely on reciprocal space representations \cite{Nonomura Ohta 3,Laradji,Matsen Gyroid}, and most experimental studies \cite{Y. Rancon J. Charvolin,Forster,M. E.
Experimentally, the epitaxial OOTs G \{211\} $\leftrightarrow$ C \{10\} and G \{220\} $\leftrightarrow$ C \{10\} are recognized as the same epitaxial relationship \cite{M. E. Vigild}, because the diffraction peaks from both G \{211\} and G \{220\} match well with the diffraction peaks from C \{10\}. In this argument, however, the kinetic pathway in real space was not considered. In Figure 1, we show a projection of the G structure along the [111] direction in real space. The epitaxial relations G \{211\} $\leftrightarrow$ C \{10\} and G \{220\} $\leftrightarrow$ C \{10\} are shown in Figures 1(a) and 1(b), respectively, where the spacing of the G planes and the epitaxial cylindrical domains are shown. As the directions of the two planes G $\{211\}$ and G $\{220\}$ are perpendicular to each other and the spacings between adjacent planes are also different, the two growth mechanisms shown in Figures 1(a) and (b) should be regarded as different ones. Furthermore, the OOTs of block copolymer melts are first order phase transitions, so a nucleation and growth process of the domains is expected. Since this process is spatially inhomogeneous, such a transition is not compatible with a treatment in reciprocal lattice space (Fourier space), where spatially periodic lattice structures are assumed. Therefore, dynamical simulations in real space, such as dynamical SCF simulations, are necessary to investigate this transition correctly \cite{Fraaije,Zvelindovsky,Hasegawa Doi,Hamley latest}. In the present paper, we study the epitaxial OOT G $\rightarrow$ C using the dynamical SCF theory under shear flows. In order to treat the first order transition, we introduce a system size optimization (SSO) method, in which the side lengths of the simulation box are automatically adjusted so that the size and shape of the simulation box fit the lattice spacing and lattice axes of the ordered structures. Recently, Barrat et al. proposed a similar technique to study the equilibrium domain morphology of block copolymer systems \cite{Barrat}. In the course of the transition, we observe a complex transient state that is composed of cylindrical domains parallel to the G \{220\} plane. We also confirm that our SSO method can reproduce spatially inhomogeneous nucleation and growth processes. Indeed, we observe coexistence between the G phase and the C phase, which is consistent with the experimentally observed first order transition behavior \cite{T. Q. Chastek T. P. Lodge}. Furthermore, we find that the G structure shows different deformation behaviours depending on the direction of the velocity gradient of the shear flow. We cannot confirm the scenario of the transition proposed by Matsen \cite{Matsen Gyroid}, where the three fold junctions transform into five fold junctions. Finally, we clarify the kinetic pathway from the G phase to the C phase under a shear flow. To the best of our knowledge, this kinetic pathway in real space has not been reported in the literature. \section{Theory} \subsection{Dynamical self-consistent field theory} Here, we briefly summarize the SCF theory for an A-B diblock copolymer \cite{Fredrickson,Fleer,Fraaije,Morita Kawakatsu}. Let us consider a melt of A-B diblock copolymers.
Due to the screening effect in the melt, we can assume Gaussian statistics for the chain conformation. Within these Gaussian statistics, a $K$-type ($K=$A or B) segment is characterized by the effective bond length $b_K$, and the $K$-type block is characterized by its degree of polymerization $N_K$. The total degree of polymerization $N$ is then defined as $N \equiv N_A+N_B$. We introduce an index $s$ to specify each segment, where $s=0$ corresponds to the free end of the $A$-block and $s=N$ corresponds to the other free end, that of the $B$-block. Therefore, $0 \le s \le N_A$ and $N_A \le s \le N$ correspond to the $A$-block and the $B$-block, respectively. In order to evaluate the conformational entropy, we need the statistical weight of an arbitrary subchain. Let us use the notation $Q(s', {\bf r}'; s, {\bf r})$ to denote the statistical weight of a subchain between the $s'$-th and $s$-th segments ($0 \le s' \le s \le N$) that are fixed at the positions ${\bf r}'$ and ${\bf r}$, respectively. This statistical weight can be obtained by solving the following Edwards equation within the mean-field approximation \begin{equation} \frac{\partial}{\partial s } Q(s', {\bf r}'; s, {\bf r}) = \Bigl[ \frac{b(s)^2}{6} \nabla^2 - \beta V(s, {\bf r}) \Bigr] Q(s', {\bf r}'; s, {\bf r}), \label{Schroedinger equation1} \end{equation} where $\beta = 1/(k_B T)$, $b(s) = b_K$ if the $s$-th segment is a $K$-type segment, and $V(s, {\bf r})$ is an external potential acting on the $s$-th segment at ${\bf r}$ imposed by the surrounding segments. Here, we assume that the external potential $V(s, {\bf r})$ depends only on the segment species ($A$ or $B$). Thus, \begin{eqnarray} V(s, {\bf r}) = \left\{ \begin{array}{rl} V_A({\bf r}) & \mbox{if $s$ indicates an $A$-segment} \\ V_B({\bf r}) & \mbox{if $s$ indicates a $B$-segment}. \\ \end{array} \right. \end{eqnarray} Equation (\ref{Schroedinger equation1}) should be supplemented by the initial condition $Q(0, {\bf r}'; 0, {\bf r})= \delta( {\bf r}' - {\bf r} )$. As the two ends of the block copolymer are not equivalent, we should introduce another statistical weight $\widetilde{Q}(s', {\bf r}'; s, {\bf r})$, which is calculated in the opposite direction along the chain, starting from the free end $s=N$. To reduce the computational cost, we define the integrated statistical weights $q(s, {\bf r})$ and $\widetilde{q}(s, {\bf r})$ as follows: \begin{eqnarray} q(s, {\bf r}) \equiv \int d{\bf r}' Q(0, {\bf r}'; s, {\bf r}) \nonumber \\ \widetilde{q}(s, {\bf r}) \equiv \int d{\bf r}' \widetilde{Q}(0, {\bf r}'; s, {\bf r}). \label{normal path integral of subchain} \end{eqnarray} It is easy to confirm that $q(s, {\bf r})$ and $\widetilde{q}(s, {\bf r})$ also satisfy eq.~(\ref{Schroedinger equation1}). By using eqs. (\ref{normal path integral of subchain}), the density of the $K$-type segments at position ${\bf r}$ is given by \begin{eqnarray} \phi_K({\bf r}) = C \int_{s \in K{\rm -block}} ds \ q(s, {\bf r}) \widetilde{q}(N-s, {\bf r}), \label{concentration field of polymer} \end{eqnarray} where $C$ is the normalization constant: \begin{eqnarray} C = \frac{M} { {\cal Z} }. \end{eqnarray} The parameter $M$ is the total number of chains in the system and ${\cal Z}$ is the single chain partition function, which is independent of the contour position $s$, i.e. $ {\cal Z}=\int d{\bf r}\, q(s, {\bf r}) \widetilde{q} (N-s, {\bf r})= \int d{\bf r}\, q(N, {\bf r}) = \int d{\bf r}\, \widetilde{q}(N, {\bf r})$.
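As an illustration of how eqs. (\ref{Schroedinger equation1})--(\ref{concentration field of polymer}) are used in practice, the following minimal one-dimensional Python sketch integrates the Edwards equation for the two propagators and assembles the $A$-segment density. The grid, the toy potentials and all numerical parameters here are illustrative assumptions only (our actual calculations are three dimensional and use the SUSHI code described below); the contour update anticipates the symmetrized finite-difference scheme of eq.~(\ref{multistate eq}) below.
\begin{verbatim}
# Minimal 1D sketch: integrate the Edwards equation for q(s,x) and
# q~(s,x) and assemble the A-segment density.  Grid, potentials and
# parameters are illustrative assumptions only.
import numpy as np

Nx, Lx = 64, 8.0                    # grid points, box length (assumed)
dx = Lx / Nx
NA, NB, ds = 7.0, 13.0, 0.2         # N = 20, f = 0.35 as in Sec. 2.2
b = 1.0                             # effective bond length
x = np.arange(Nx) * dx
VA = 0.05 * np.cos(2.0 * np.pi * x / Lx)  # toy potentials (beta V)
VB = -VA

def step(q, V):
    """One contour step ds (symmetrized potential/diffusion split)."""
    e = np.exp(-0.5 * V * ds)
    q = e * q
    lap = (np.roll(q, 1) + np.roll(q, -1) - 2.0 * q) / dx**2
    return e * (q + (b**2 / 6.0) * lap * ds)

def integrate(V_of_s):
    """Return q(s,x) for s = 0, ds, ..., N, with q(0,x) = 1."""
    q = [np.ones(Nx)]
    for i in range(int(round((NA + NB) / ds))):
        q.append(step(q[-1], V_of_s(i * ds)))
    return q

q  = integrate(lambda s: VA if s < NA else VB)   # from the A end
qt = integrate(lambda s: VB if s < NB else VA)   # from the B end

Z = np.sum(q[-1]) * dx              # single-chain partition function
n = len(q) - 1                      # contour index of s = N
sA = int(round(NA / ds))
# A-segment density per chain (i.e. C = M/Z with M = 1):
phiA = sum(q[i] * qt[n - i] for i in range(sA + 1)) * ds / Z
\end{verbatim}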
The external potential $V_K({\bf r})$ can be decomposed into two terms as follows \begin{eqnarray} V_K({\bf r}) = \sum_{K'} \epsilon_{KK'} \phi_{K'}({\bf r}) - \mu_{K}({\bf r}). \label{external potential} \end{eqnarray} The first term is the interaction energy between segments, where $\epsilon_{KK'}$ is the nearest-neighbor pair interaction energy between a $K$-type segment and a $K'$-type segment, which is related to the Flory-Huggins interaction parameter via $\chi_{AB} \equiv z \beta \bigl[ \epsilon_{AB} - (1/2) (\epsilon_{AA} + \epsilon_{BB}) \bigr] $, where $z$ is the number of nearest neighbor sites. The second term, $\mu_{K}({\bf r})$, is the chemical potential of the $K$-type segments, i.e. the Lagrange multiplier that fixes the density of the $K$-type segments at the position ${\bf r}$ to the specified value. The $V_{K}({\bf r})$ must be determined in a self-consistent manner so that this constraint is satisfied. Such a self-consistent condition is achieved by an iterative refinement of $V_{K}({\bf r})$. To improve the stability of the numerical scheme, we used the following finite difference scheme for the Edwards equation, eq. (\ref{Schroedinger equation1}) \begin{equation} q(s+ \Delta s, {\bf r}) = \exp \bigl[ - \frac{\beta V(s, {\bf r}) \Delta s } {2} \bigr] \Bigl( 1 + \frac{b(s)^2}{6} \nabla ^2 \Delta s \Bigr) \exp \bigl[ - \frac{\beta V(s, {\bf r}) \Delta s } {2} \bigr] q(s, {\bf r}). \label{multistate eq} \end{equation} The Helmholtz free energy of the system is given as follows \begin{equation} {\cal F} = - k_{\rm B}T M \ln{{\cal Z}} + \frac{1}{2} \sum_{K} \sum_{K'} \int d{\bf r}\epsilon_{ KK'} \phi_K({\bf r}) \phi_{K'}({\bf r}) - \sum_K \int d{\bf r}V_K({\bf r}) \phi_K({\bf r}). \label{free energy} \end{equation} To introduce dynamics into the model, we assume Fick's law of linear diffusion for the segment densities, supplemented by a flow advection term, as follows \begin{equation} \frac{\partial}{\partial t} \phi_K({\bf r},t) = L_K \nabla^2 \mu_{K}({\bf r}) - \nabla \cdot \{ {\bf v}({\bf r},t) \phi_K({\bf r},t) \}, \label{dynamical equation of diffusion} \end{equation} where $L_K$ is the mobility of the $K$-type segments and ${\bf v}({\bf r},t)$ is the local flow velocity, such as the velocity of the externally imposed shear flow. \subsection{System size optimization method} Periodic microdomain structures of diblock copolymers have crystal symmetry. To obtain equilibrium states of these periodic structures using the mean field theory, the free energy density of the system must be minimized with respect to the lattice structure of the ordered microdomains. The same is true for two-phase coexisting states, where the system size should be optimized with respect to the two coexisting periodic structures. For this purpose we introduce the system size optimization (SSO) method, which minimizes the free energy density of the system by optimizing the side lengths of the simulation box on which periodic boundary conditions are imposed. This method is similar to the constant pressure molecular dynamics simulation proposed by Andersen \cite{Andersen}. In the static SCF calculations, this optimization can be performed by requiring the following local equilibrium condition for each side length of the simulation box: \begin{equation} \frac {\partial \cal F}{\partial \mathcal{L}_i} = 0, \label{SSO static} \end{equation} where $\mathcal{L}_i(i=x,y,z)$ is the side length of the simulation box.
The left-hand side of eq. (\ref{SSO static}) can be evaluated numerically using the following central difference approximation \begin{equation} \frac {\partial \cal F}{\partial \mathcal{L}_i} = \frac { {\cal F} (\mathcal{L}_i + \Delta \mathcal{L}_i ) - {\cal F} ( \mathcal{L}_i - \Delta \mathcal{L}_i ) } { 2 \Delta \mathcal{L}_i }, \label{SSO static numerial} \end{equation} where $\Delta \mathcal{L}_i$ is a small variation of $\mathcal{L}_i$. We used the parabolic optimization method \cite{Numerical Recipe in C} to solve eq. (\ref{SSO static}). On the other hand, when the dynamical SCF calculation is performed, we should regard $\mathcal{L}_i$ as a dynamical variable whose dynamics is described by the following fictitious equation of motion \begin{equation} \frac {\partial \mathcal{L}_i} {\partial t} = - \zeta_i \frac {\partial \cal F}{\partial \mathcal{L}_i}, \label{dynamics_L} \end{equation} where $\zeta_i$ is a positive coefficient whose value is chosen so that the local equilibrium condition eq.~(\ref{SSO static}) for $\mathcal{L}_i$ is maintained at every time step. We checked the validity of our dynamical SSO method by using an A-B diblock copolymer melt whose stable equilibrium phase is the C phase. We performed two dimensional simulations where we assumed $\zeta_x = \zeta_y = \zeta$ for simplicity, and we changed $\zeta$ from 0.0 to 0.5. The parameters characterizing the A-B diblock copolymer are as follows: the total length of the copolymer is $N = 20$, the block ratio of the A block is $f = N_A/N = 0.35$, and the effective bond lengths of both segment types are unity. The interaction parameter is set to $\chi N = 15$, which corresponds to the C phase in the equilibrium state \cite{Matsen Schick}. The initial state is set to the D phase, to which we added small random noise with standard deviation 0.0006. The initial shape of the simulation box is a square with side length $32.0$. As the square shape of the simulation box is not compatible with the perfect C phase, the SSO method adjusts the side lengths of the simulation box automatically. In Figure 2, we show a comparison of the domain morphologies in the late stage ($t=5000$) between the two cases (a) with $\zeta = 0.001$ and (b) with $\zeta=0.05$. In case (a), the C structure is distorted because the rate of change of the side lengths of the simulation box is too slow to catch up with the change in the domain periodicity. On the other hand, in case (b), a perfect C phase is realized. When $\zeta=0.5$, we observed that the dynamical scheme eq.~(\ref{dynamics_L}) becomes unstable. Other dynamical quantities that depend on the value of $\zeta$ are shown in Figures 3 and 4. Figure 3 shows the time evolution of the free energy. The dotted line is the reference case with $\zeta=0$ (i.e. the case without SSO), which reaches the distorted morphology shown in Figure 2(a). When the value of $\zeta$ is small ($\zeta=0$ and 0.001), it takes a longer time for the free energy to relax, and finally the system is trapped in a local minimum of the free energy. For intermediate values of $\zeta$ ($\zeta=0.05$, 0.1, and 0.2), the system reaches the perfect C phase as shown in Figure 2(b). When the value of $\zeta$ is large ($\zeta=0.5$), the free energy initially drops rapidly and then the system is trapped in a local minimum of the free energy. These results mean that choosing an appropriate value of $\zeta$ accelerates the relaxation of the system to the equilibrium domain morphology without distortions and defects.
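The essence of the SSO update, eqs. (\ref{SSO static numerial}) and (\ref{dynamics_L}), can be summarized by the following schematic Python sketch. Here the SCF free energy of eq.~(\ref{free energy}) is replaced by a toy free energy with minima at integer multiples of a preferred spacing $D_0$; all numbers are illustrative assumptions, not the values used in the SUSHI implementation.
\begin{verbatim}
# Schematic SSO relaxation: central difference of dF/dL_i plus the
# fictitious dynamics dL_i/dt = -zeta dF/dL_i.  A toy free energy
# stands in for the SCF result; D0 and all numbers are assumptions.
D0 = 11.0                                   # preferred spacing (assumed)

def free_energy(L):
    """Toy stand-in: penalize deviation from a multiple of D0."""
    return sum((Li - D0 * round(Li / D0))**2 for Li in L)

def dF_dL(L, i, dL=1e-3):
    """Central difference approximation of dF/dL_i."""
    Lp = L.copy(); Lp[i] += dL
    Lm = L.copy(); Lm[i] -= dL
    return (free_energy(Lp) - free_energy(Lm)) / (2.0 * dL)

zeta, dt = 0.05, 1.0                        # zeta as selected in the text
L = [32.0, 32.0]                            # initial square box
for step in range(100):                     # applied every 100 SCF steps
    for i in range(len(L)):
        L[i] -= zeta * dF_dL(L, i) * dt     # fictitious equation of motion
print(L)                                    # -> close to [33.0, 33.0]
\end{verbatim}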
Figure 4 shows the time evolution of the side lengths of the simulation box. The solid curves and the dotted curves indicate $\mathcal{L}_x$ and $\mathcal{L}_y$, respectively. In all cases, the side lengths increase in the initial stage. After this initial stage, the side lengths reach their maximum values and then, for large values of $\zeta$, decrease. When $\zeta = 0.05$, the curve does not show an overshoot, and the system smoothly reaches the perfect C phase. Thus, we judge that $\zeta=0.05$ is the most appropriate value for our system. The above-mentioned dynamical SCF simulation can be performed with the ``Simulation Utilities for Soft and Hard Interfaces'' (SUSHI) of the OCTA system \cite{SUSHI}. The simulation results reported in this article are obtained using SUSHI. \section{Simulation Results} We simulated the epitaxial OOT G $\rightarrow$ C by imposing an external shear flow on an A-B diblock copolymer melt characterized by the parameters given in Section 2.2, using the techniques described in the previous section. The details of the simulation procedure are given below. \subsection{Initial gyroid structure and final cylindrical structure} The initial state of the simulation is chosen as the equilibrium G structure at $\chi N=20$. To generate such an equilibrium G structure, we used the following procedure. Let us denote the equilibrium (or steady state) side length of the unit cell of the G structure as $D_G$, and the equilibrium (steady state) spacing of the lamellar structure formed by the same block copolymer at $\chi N=20$ as $D_L$. The value of $D_L$ can easily be obtained using a one dimensional SCF calculation with SSO. Then, assuming an epitaxial relationship in the transitions L \{10\} $\rightarrow$ C \{10\} $\rightarrow$ G \{211\} at a fixed value of $\chi N$, we can obtain an approximate value for $D_G$ as follows \begin{equation} D_G = \sqrt{ 6 } D_L. \label{D_G definition} \end{equation} Using this value of $D_G$ as the initial size of the simulation box, we set the SCF potential with the G symmetry as \begin{equation} V(x,y,z) = V_0 \Bigl( \cos \frac{2 \pi x}{D_G} \sin \frac{2 \pi y}{D_G} + \cos \frac{2 \pi y}{D_G} \sin \frac{2 \pi z}{D_G} + \cos \frac{2 \pi z}{D_G} \sin \frac{2 \pi x}{D_G} \Bigr) ^2, \label{G_level_surface} \end{equation} where $x$, $y$, and $z$ are the Cartesian coordinates, and $V_0$ is an arbitrary small coefficient which we take to be 0.001 for the minor segments and $-0.001$ for the major segments. The use of the squared form on the right-hand side of eq.~(\ref{G_level_surface}) originates from the fact that the gyroid structure in a block copolymer melt is formed by a pair of networks, each with the G symmetry. By assigning opposite signs to the $V_0$'s for the major and minor segments, we can make the minor phase gather inside the gyroid networks while the major phase is enriched in the matrix region. Starting from the SCF potential given by eq.~(\ref{G_level_surface}), we perform a three dimensional static SCF calculation with SSO, which gives the equilibrium G structure. Figure 5 shows the optimized bicontinuous double gyroid structure obtained using the above method, where the parameter $\Delta s$ in eq. (\ref{multistate eq}) is taken as 0.2; shown is the super cell composed of eight optimized conventional unit cells of the G structure. The side length of the optimized G unit cell is $D_G^0 = 17.2$.
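For reference, a minimal sketch of this initialization step (the level-surface potential of eq.~(\ref{G_level_surface}) evaluated on a cubic grid) might look as follows; the grid resolution and the value of $D_L$ are illustrative assumptions.
\begin{verbatim}
# Sketch of the initial SCF potential with G symmetry on a cubic grid.
# Grid size and D_L are assumed for illustration; V0 = +/-0.001 for
# minor/major segments as in the text.
import numpy as np

n = 32                                  # grid points per side (assumed)
D_L = 7.0                               # lamellar spacing (assumed)
D_G = np.sqrt(6.0) * D_L                # D_G = sqrt(6) D_L
r = np.linspace(0.0, D_G, n, endpoint=False)
x, y, z = np.meshgrid(r, r, r, indexing='ij')
k = 2.0 * np.pi / D_G

g = (np.cos(k * x) * np.sin(k * y)
     + np.cos(k * y) * np.sin(k * z)
     + np.cos(k * z) * np.sin(k * x))
V_minor = 0.001 * g**2                  # potential for the minor segments
V_major = -0.001 * g**2                 # opposite sign for the major block
\end{verbatim}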
From this G super cell, we can extract another unit cell as shown in Figure 6(a), where the $X$, $Y$ and $Z$ axes are chosen to be parallel to the $[1\bar{1}0]$, $[11\bar{2}]$, and $[111]$ directions, respectively. The unit cell in Figure 6(a) is the minimal periodic unit cell with the $Z$ axis oriented along the [111] direction of the G unit cell. The side lengths of this unit cell are $\sqrt{2} D_G^0$, $\sqrt{6} D_G^0$, and $(\sqrt{3}/2) D_G^0$, respectively, and its volume is three times that of the cubic G unit cell. Figure 7 shows the projections of the G structure along three different directions. The bicontinuously arranged rods are the domains composed of the minor A phase. Figure 7(a) shows the projection along the [111] direction, which is the same as the left-hand picture in Figure 6(a). Figures 7(b) and 7(c) show the projections along the $[1\bar{1}0]$ and the $[11\bar{2}]$ directions, respectively. In Figure 7, we can see the edges of the G unit cell (tilted cube) drawn with dotted lines and the extracted unit cell (cuboid) drawn with solid lines. The self-consistent field on the $X$-$Y$ plane in Figure 6(a) is used as the initial condition for the static two dimensional SCF calculation for the C structure at $\chi N=15$, where we assumed an epitaxial OOT G $\rightarrow$ C. The optimized two dimensional C structure is shown in Figure 6(b), where we used the same scale as in Figure 6(a) for a direct comparison. The lengths of the vertical and horizontal axes of the two dimensional C structure shown in Figure 6(b) are 2.0\% and 3.2\% larger than those of the G structure shown in Figure 6(a), respectively. As the changes in the side lengths are rather minor, we expect an epitaxial transition from the G structure at $\chi N=20$ to the C structure at $\chi N =15$. The direction of the \{10\} plane of the cylindrical domains in Figure 6(b) coincides with that of the cylindrical domains in Figure 1(b). This result contradicts the standard explanation of the epitaxial transition G \{211\} $\rightarrow$ C \{10\} which was proposed in the previous experimental works and mean field calculations. Instead, we expect that the actual epitaxial transition should be G \{220\} $\rightarrow$ C \{10\}, as shown in Figure 1(b). \subsection{The epitaxial OOT from the G structure to the C structure} The OOT G $\rightarrow$ C is induced by a sudden increase in the temperature from $\chi N=20$ to $\chi N=15$, the former and the latter corresponding to the G and C phases, respectively \cite{Matsen Schick}. This phase transition is believed to be first order and should basically be driven by thermal fluctuations. The introduction of an external flow accelerates the transition \cite{Zvelindovsky}. We introduce a shear flow whose direction is oriented along the [111] direction of the G unit cell. The velocity field ${\bf v}({\bf r})$ of this external shear flow is given by \begin{equation} {\bf v}({\bf r}) = \Bigl( 0, 0, \dot{\gamma} (\frac{\mathcal{L}_y}{2} - y) \Bigr), \end{equation} where $y$ is the Cartesian coordinate along the $Y$ axis and $ \mathcal{L}_y /2$ is the $Y$-coordinate of the center of the system. This flow field is indicated in Figure 6(a) by the arrows. The Lees-Edwards boundary condition was employed in the $Y$ direction \cite{Computer Simulation of Liquids}, and periodic boundary conditions were employed in the other directions.
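A schematic Python fragment of one explicit update of eq.~(\ref{dynamical equation of diffusion}) under this shear field may clarify the structure of the dynamical scheme. The chemical potential array is a placeholder that, in the real calculation, is supplied by the SCF loop; the Lees-Edwards treatment of the $Y$ boundary is omitted for brevity, and the grid sizes are assumptions.
\begin{verbatim}
# One explicit Euler update of d(phi)/dt = L_K lap(mu) - div(v phi)
# with the shear field v = (0, 0, gdot (Ly/2 - y)).  mu is a
# placeholder (it comes from the SCF loop); Lees-Edwards handling of
# the Y boundary is omitted and simple periodic wrapping is used.
import numpy as np

n, dx, dt = 16, 1.0, 0.01
L_K, gdot = 1.0, 0.001                  # mobility, shear rate (text values)
Ly = n * dx
y = (np.arange(n) + 0.5) * dx
rng = np.random.default_rng(0)
phi = 0.35 + 0.01 * rng.standard_normal((n, n, n))  # axes (x, y, z)
mu = np.zeros((n, n, n))                # placeholder chemical potential

def lap(f):
    """Periodic 7-point Laplacian."""
    out = -6.0 * f
    for axis in range(3):
        out += np.roll(f, 1, axis) + np.roll(f, -1, axis)
    return out / dx**2

vz = gdot * (Ly / 2.0 - y)[None, :, None]           # v_z(y)

# advection term -d(v_z phi)/dz, central difference, z periodic
adv = -(np.roll(vz * phi, -1, 2) - np.roll(vz * phi, 1, 2)) / (2.0 * dx)

phi += dt * (L_K * lap(mu) + adv)
\end{verbatim}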
For the dynamical SCF calculation, the parameters were set as follows: the criterion for the convergence of the segment density is $\Delta \phi = 0.0005$, i.e. if the difference between the segment density fields at consecutive steps in the SCF iteration becomes everywhere smaller than $\Delta \phi$, we regard the segment density field as converged. The mobility $L_K$ in eq. (\ref{dynamical equation of diffusion}) is set to $L_K = 1.0$, the shear rate to $\dot{ \gamma } = 0.001$, and the time step to $\Delta t = 0.01$. The parameter $\zeta_i$ for the SSO is set to 0.05, with which the SSO can reproduce the complete C domain in the two dimensional system, as described in Section 2.2. With this parameter, the SSO is performed every 100 time steps. The temporal change of the microphase structure is shown in Figure 8. The G structure is deformed by the shear flow, as shown in Figures 8(a) and (b). Suddenly, a grain boundary is generated in Figure 8(c), indicated by a white arrow. This grain boundary consists of several cylinders parallel to the [111] direction of the G unit cell, and this boundary region separates the upper G phase from the lower G phase. The transition from the G structure to the C structure takes place in this lower phase, as shown in Figures 8(d)-(f). The cylinders are tilted away from the [111] direction of the G unit cell, as shown in the side view of Figure 8(f). The tilting of the cylinders is caused by the constant shear flow, because a steady shear flow is composed of two contributions, a uniaxial extension and a rotation \cite{kawakatsu book}; the rotational contribution tilts the cylinders. Such a tilting is suppressed in experiments by using oscillatory shear flows. The three fold junctions in the upper G phase shown in Figure 8(f) are stable and are advected by the shear flow. Even in the late stage ($t$=10000), we cannot obtain the final equilibrium C structure. This result suggests that the boundary between the upper G phase and the lower C phase is stable and that the separated G and C phases coexist stably. Indeed, a clear boundary between G and C grains has been observed by polarized optical microscopy in a polymer solution \cite{T. Q. Chastek T. P. Lodge}, and a long-lived coexistence between the G and the C phases has been observed experimentally in a block copolymer \cite{Floudas Ulrich Wiesner Chu}. The changes in the side lengths of the simulation box are shown in Figure 9. The side lengths in the $X$ and $Z$ directions are almost constant, which means that the epitaxial condition is satisfied. On the other hand, the side length in the $Y$ direction increases with time. The reason for this increase is explained below. To check the effect of the SSO method, we carried out the same dynamical SCF simulation but without the SSO method. The time evolution of the domain morphology is shown in Figure 10. In this case, as shown in Figures 10(a)-(d), the OOT occurs at the center of the system, where the G structure transforms into the C structure. The stable grain boundary shown in Figure 8 is, however, not observed in this case without SSO. The cylindrical domains are again tilted away from the [111] direction of the G unit cell, as shown in the side view of Figure 10(c). After such a transient state, the system reaches the complete C structure. A characteristic phenomenon is observed near the center of the system, where the cylindrical domains reconnect as shown in Figures 8(d)-(f). Such reconnections continue steadily for a certain time duration.
This reconnection phenomenon means that the system is in a dynamical steady state, in which the energy injected into the system by the shear flow is released through the energy dissipation accompanying the periodic reconnections of the cylindrical domains. In order to check the stability of the systems, we show in Figure 11 the time evolution of the free energy density during the phase transition. The free energy density in the case with SSO shows a moderate change compared to that in the case without SSO, which shows an oscillation synchronized with the reconnections of the cylindrical domains. Such a periodic change in the free energy density is observed after $t=3000$, when the system reaches the almost perfect C structure without defects. In order to reach such a perfect C phase, the system must go over energy barriers, driven by the shear flow. In the case without SSO, however, the energy barriers should be much higher than in the case with SSO, because the condition of constant system size imposes severe restrictions on the reconnection of the cylindrical domains. In the case with SSO, such a restriction is avoided by the increase in the side length of the simulation box in the $Y$ direction, as shown in Figure 9. We also tried simulations under a shear flow whose velocity gradient is set parallel to the $X$ direction. In this case, the free energy of the system increases slightly, but the nucleation and growth of cylindrical domains cannot be observed even in the late stage ($t$=6000), either with or without SSO. The epitaxial condition for the OOT is expected to be satisfied more precisely for this direction of the velocity gradient than for the velocity gradient in the $Y$ direction. This is because the periodicity of the G structure in the $X$ direction matches the periodicity of the C structure better than that in the $Y$ direction does, which should promote the generation of the C structure. Contrary to this expectation, three fold junctions perpendicularly oriented to the [111] direction of the G unit cell continue to disconnect and reconnect due to the shear flow. Figure 12 shows this phenomenon. The circles in Figure 12 indicate the three fold junctions where the disconnections and the reconnections take place. Figure 12(a) shows the structure after the disconnections, where we can observe the remains of the three fold junctions indicated by the circles. Figure 12(b) shows the structure after the reconnections, where the three fold junctions have been regenerated. This result indicates that the G structure has different stabilities for different directions of the shear velocity gradient. The reason for this difference in stability is explained using Figure 13. Figure 13 shows three fold junctions perpendicularly oriented to the [111] direction of the G unit cell, with different rotational angles with respect to the direction of the shear gradient. Figure 13(a) shows a three fold junction under a shear flow with its velocity gradient in the $Y$ direction. The three domains extending from the center of the three fold junction are subjected to different shear flow velocities, i.e. the three domains do not move with the same flow velocity $v_z(y)$. Thus, the three domains are elongated in different directions at different rates, and the elongation finally makes the three fold junction disconnect. On the other hand, in the case of a three fold junction under a shear flow with its velocity gradient in the $X$ direction, as shown in Figure 13(b), two of the domains have the same flow velocity $v_z(x)$.
In this situation, the three fold junction is elongated in the positive direction of the $X$-axis. Even after the disconnection, the separated domains remain close to each other and easily reconnect to re-form the three fold junction structure, as shown in Figure 12. \section{Discussions} Our simulation showed that the epitaxial OOT G $\rightarrow$ C takes place in the [111] direction of the G unit cell and that the epitaxial relation of the G \{220\} $\rightarrow$ C \{10\} transition is achieved. The transition does not occur uniformly, as shown in Figures 8(d)-(f) and Figure 10(c), where the C domain nucleates and grows. Most of the experiments have reported the epitaxial OOT C \{10\} $\leftrightarrow$ G \{211\}, which disagrees with our simulation result. A possible reason for this discrepancy is as follows. The first diffraction peak from a G structure is the peak from \{211\}, and the intensity of this peak is stronger than that of the \{220\} peak (the secondary peak). Moreover, the positions of the peaks from G \{220\}, G \{211\} and C \{10\} are so close that it is not easy to judge which of the peaks from G \{220\} and G \{211\} epitaxially matches the peak from C \{10\}. We calculated the three dimensional scattering function of the optimized G structure obtained in the simulation, and confirmed that the \{211\} spots are dominant and their intensities are about four times larger than those of the \{220\} spots. If the G \{211\} $\rightarrow$ C \{10\} transition were realized under a shear flow with the velocity gradient in the $Y$ direction, the planes composed of cylinders would be oriented parallel to the shear plane, i.e. the $XZ$ plane. Thus, the friction between the reconnecting domains and the shear flow would be expected to be smaller than in the G \{220\} $\rightarrow$ C \{10\} case, where the cylinder planes are perpendicular to the shear plane. In our simulations, however, the system prefers the pathway G \{220\} $\rightarrow$ C \{10\}. Therefore, we conclude that the direction of the velocity gradient is not an important factor in determining the direction of the C planes in the epitaxial transition. On the other hand, we confirmed that the matching between the lattice constants is more important. As is shown in Figure 6(b), the origin of the selection of the generated C structure from the G [111] plane (Figure 6(a)) is the matching between the lattice constants. That is, the system prefers the kinetic pathway that minimizes the free energy of the system by matching the lattice constants, which leads to the G \{220\} $\rightarrow$ C \{10\} transition. Previous theoretical studies have also supported the OOT G \{211\} $\rightarrow$ C \{10\}; these studies relied on reciprocal space representations. However, most of the experiments have been performed under a shear flow combined with a temperature change. We succeeded in reproducing such experimental conditions in our simulation, and could thereby reproduce the correct kinetic pathway of the epitaxial OOT, i.e. the nucleation and growth process of the C domains. We found a difference in the stability of the G domains with respect to the direction of the shear gradient, owing to the different velocities of the shear flow imposed on the three domains meeting at a three fold junction, as shown in Figures 12 and 13. The G structure is stable under the shear flow with the velocity gradient in the $X$ direction. There has been no answer to the question of why the complex G phase, with its three dimensional bicontinuous structure, is generated under a shear flow.
Our result demonstrates that the G structure is actually stable under a shear flow. Although the details of the transition process are complex, we understand that the three fold junctions with domains perpendicular to the [111] direction of the G unit cell do not play an important role in the transformation from G to C. Such three fold junctions are simply disconnected and vanish during the phase transition. This observation does not agree with the model of the epitaxial transition proposed by Matsen, where a three fold junction connects to one of the nearest neighbor three fold junctions to form a five fold junction. In our observation, three fold junctions are stable and are not connected to any other junctions. Here, we propose a model of the kinetic pathway. Figure 14(a) shows a projection of the G unit cell along the [111] direction, where the bold triangles are the projections of three consecutive domains. Figure 14(b) is the same structure as Figure 14(a) observed from a different direction. We can confirm that the triangles in Figure 14(a) are formed by three consecutive domains (shown in black) connected by three fold junctions. When a shear flow is imposed, these black domains in Figure 14(b) are elongated and form cylinders, as shown in Figure 14(c). These cylinders are rearranged to form a hexagonally packed cylindrical structure whose lattice spacings satisfy the epitaxial relation G \{220\} $\rightarrow$ C \{10\}, as shown in Figure 14(d). This model of the kinetic pathway can be verified in Figures 8(d)-(e) and in Figures 10(b)-(c). \section{Conclusion} The epitaxial OOT G $\rightarrow$ C was studied using the real space dynamical SCF technique with the SSO method. With the SSO method, we succeeded in reproducing a realistic kinetic pathway of the first order phase transition G $\rightarrow$ C. On the other hand, in the absence of SSO, we found that the kinetic pathway is very different from what we observe with SSO. We also found that the G structure shows different responses to different directions of the velocity gradient of the shear flow. Using this technique, we studied the kinetic pathway of the G $\rightarrow$ C transition induced by a shear flow in the [111] direction of the unit cell of the G structure. We observed the following kinetic pathway: the G domains perpendicularly oriented to the [111] direction of the G unit cell do not contribute to the formation of the cylindrical domains; they are disconnected and vanish during the transition. The other G domains, on the other hand, are elongated by the shear flow and transform into the cylindrical domains. Such deformations occur locally, and the cylindrical domains are rearranged to form a hexagonally packed C structure. The most important result of our simulations with SSO is that we can observe the nucleation and growth of the C phase in the matrix of the G phase, i.e. behaviour characteristic of a first order phase transition, which was not observed in previous simulations performed in Fourier space. Under a steady shear flow, we observed that the G domains around the nucleus of the C phase deform and gradually join the C phase. We also observed that the domain spacing satisfies the epitaxial relationship G \{220\} $\rightarrow$ C \{10\}, as was proposed in the experimental work of Schulz et al. \cite{M. F. Schulz F. S. Bates K. Almdal K. Mortensen}. We could not observe the transformation of the domains from three fold junctions to five fold junctions that was previously proposed \cite{Matsen Gyroid}.
We found that the dynamical SCF theory with the SSO method in real space is very useful and reliable for tracing the OOTs and ODTs between the microdomain structures of block copolymer melts. \newline \\ {\Large\bf Acknowledgment} \\ T. H. thanks Dr. H. Kodama and Dr. R. Hasegawa for fruitful collaborations in coding SUSHI. The authors thank Prof. M. Doi (Tokyo University) and the members of the OCTA project for many helpful comments and discussions. This study was executed under a national project entrusted to the Japan Chemical Innovation Institute by the New Energy and Industrial Technology Development Organization (NEDO) under METI's Program for the Scientific Technology Development for Industries that Creates New Industries. This work was partially supported by a Grant-in-Aid for Science from the Ministry of Education, Culture, Sports, Science and Technology, Japan. The computation was performed in part at the Super Computer Center of the Institute of Solid State Physics, University of Tokyo. \newpage
\section{Introduction} The Perseus cluster, A\,426, is the X-ray brightest cluster in the Sky and has therefore been well studied by all X-ray telescopes. The X-ray emission is due to thermal bremsstrahlung and line radiation from the hot intracluster medium (ICM) and is sharply peaked on the cluster core, centred on the cD galaxy NGC\,1275. Jets from the nucleus of that galaxy have inflated bubbles to the immediate N and S, displacing the ICM (B\"ohringer et al 1993; Fabian et al 2000). Ghost bubbles devoid of radio-emitting electrons, presumably from past activity, are seen to the NW and S. The radiative cooling time of the gas in the inner few tens of kpc is 2--3 hundred Myr, leading to a cooling flow of a few $100\hbox{$\Msun\yr^{-1}\,$}$ if there is no balancing heat input. Energy from the bubbles or the bubble inflation process is a likely source of heat, but the energy transport and dissipation mechanisms have been uncertain. \begin{figure} \includegraphics[width=\columnwidth]{rgb_acf4.jpg.eps} \caption{Colour image assembled from separate images in the 0.3--1.2 (red), 1.2--2 (green) and 2--7 keV (blue) bands.} \end{figure} We have previously observed the Perseus cluster with the \emph{Chandra} Observatory for 25~ks (Fabian et al 2000; Schmidt et al 2002; Fabian et al 2002) and 200~ks (Fabian et al 2003a,b, Sanders et al 2004, 2005), and now present here the first results from a further 800~ks of observation. The total good exposure time is 900~ks. In the earlier work we discovered both cool gas and shocks surrounding the inner bubbles, as well as quasi-circular ripples in the surrounding gas which we interpreted as sound waves generated by the cyclical bubbling of the central radio source. Related features have been seen in the Virgo cluster (Forman et al 2003). The NW ghost bubble has a horseshoe-shaped optical H$\alpha$ filament trailing it, which we interpret as showing the streamlines in the ICM. On this basis we concluded that the ICM is not highly turbulent and thus that viscosity is high enough to dissipate the energy carried by the sound waves (Fabian et al 2003a,b). Such an energy transport and dissipation mechanism is roughly isotropic and can thereby provide the gently distributed heat source required by observations of this and other similarly X-ray peaked clusters (Ruszkowski et al 2004a,b,2005; Reynolds et al 2005; Fabian et al 2005). Our goal in the present work is to determine the temperature and pressure of the ICM accurately so that we can study the processes taking place there in more detail. We indeed confirm that the pressure jumps at the weak shock surrounding the inner bubbles, and that the surface brightness ripples correspond to significant variations in pressure. The temperature does not jump at the shock, however, which may be due to the action of efficient thermal conduction. The energy from the bubbles propagates through isothermal sound waves and conduction in the inner regions. If this is a common property of such regions then some of the otherwise puzzling behaviour can be understood. The redshift of the Perseus cluster is 0.0183, which for a Hubble constant of $71\hbox{$\kmps\Mpc^{-1}$}$ corresponds to a luminosity distance of 78.4~Mpc and an angular scale of 367~pc per arcsec. \section{The Data} The \emph{Chandra} observations used for the analysis presented in this paper are listed in Table~\ref{tab:obs}. The total exposure time of just over 1~Ms is reduced to 890~ks after removing periods containing flares.
To filter the datasets we examined the lightcurve between 2.5 and 7~keV on the ACIS-S1 CCD. The S1 CCD is back-illuminated like the S3, and so is the best CCD on which to search for flares, as the Perseus cluster emission dominates over flares on the S3 CCD. The \textsc{ciao 3.3.2} \textsc{lc\_clean} tool was used to remove periods whose count rate deviated from the median count rate of all the observations. Observations 3209 and 4289 did not use the S1 CCD and did not show any flares on the S3 CCD, and so were left unfiltered. \begin{table*} \begin{tabular}{lllllll} Obs. ID & Sequence & Observation date & Exposure (ks) & Nominal roll (deg) & Pointing RA & Pointing Dec \\ \hline 3209 & 800209 & 2002-08-08 & 95.8 & 101.2 & 3:19:46.86 & +41:31:51.3 \\ 4289 & 800209 & 2002-08-10 & 95.4 & 101.2 & 3:19:46.86 & +41:31:51.3 \\ 6139 & 800397 & 2004-10-04 & 51.6 & 125.9 & 3:19:45.54 & +41:31:33.9 \\ 4946 & 800397 & 2004-10-06 & 22.7 & 127.2 & 3:19:45.44 & +41:31:33.2 \\ 4948 & 800398 & 2004-10-09 & 107.5 & 128.9 & 3:19:44.75 & +41:31:40.1 \\ 4947 & 800397 & 2004-10-11 & 28.7 & 130.6 & 3:19:45.17 & +41:31:31.3 \\ 4949 & 800398 & 2004-10-12 & 28.8 & 130.9 & 3:19:44.57 & +41:31:38.7 \\ 4950 & 800399 & 2004-10-12 & 73.4 & 131.1 & 3:19:43.97 & +41:31:46.1 \\ 4952 & 800400 & 2004-10-14 & 143.2 & 132.6 & 3:19:43.22 & +41:31:52.2 \\ 4951 & 800399 & 2004-10-17 & 91.4 & 135.2 & 3:19:43.57 & +41:31:42.6 \\ 4953 & 800400 & 2004-10-18 & 29.3 & 136.2 & 3:19:42.83 & +41:31:48.5 \\ 6145 & 800397 & 2004-10-19 & 83.1 & 137.7 & 3:19:44.66 & +41:31:26.7 \\ 6146 & 800398 & 2004-10-20 & 39.2 & 138.7 & 3:19:43.92 & +41:31:32.7 \\ \end{tabular} \caption{The \emph{Chandra} observations included in this analysis. The exposure given is the time remaining after filtering the lightcurve for flares. The observations were taken with the aimpoint on the ACIS-S3 CCD. Positions are in J2000 coordinates.} \label{tab:obs} \end{table*} The level 1 event files were reprocessed using the PSU CTI corrector (Charge Transfer Inefficiency; Townsley et al 2002a, 2002b). Level 2 event files were produced by filtering on standard grades and removing bad time intervals. Each of the event files was then reprojected to match the coordinates of the 4952 observation. Images of the data in this paper were produced by summing all the images from the individual datasets. To correct for exposure variation we created exposure maps for each of the CCDs, for each of the datasets and for each of the bands. The summed images were then divided by the summed exposure maps. We have produced unsharp-mask images by subtracting images which have been smoothed on two lengthscales. Fig.~2 (top) shows the result using Gaussian smoothing of 2 and 20 pixels. The ripples are very clear, out to radii of 3--4 arcmin (60--80~kpc) from the nucleus. An arclike step in surface brightness occurs $\sim1.5$~arcmin S of the nucleus. A cold front is seen to the SE (we verify that the pressure is approximately continuous across the sharp surface brightness change in Section 4). Such features were first seen in \emph{Chandra} images of clusters by Markevitch et al (2000). There is a major difference for the feature seen here, however, since it is concave and so cannot be due to the core moving through wider, hotter gas. It does, however, appear to join onto the `bay' to the S of the nucleus, which connects in towards the nucleus along a narrow channel emerging to the SSW from the inner regions.
This corresponds spatially to a weak outer H$\alpha$ filament (Sanders et al 2005), although it extends much further than any optical emission. X-ray emission is also associated with a much more dominant long radial H$\alpha$ filament seen to the N of the nucleus (see e.g. Conselice et al 2001). This X-ray feature appears to break up beyond the ripples and is labelled as the H$\alpha$ fountain. As will be discussed later, we suspect that the radial features are due to cold and cooler gas dragged out from the centre by rising buoyant bubbles. They represent the main axis along which most of the bubbles rise. The S cold front could then be the edge of a giant hotter bubble, either produced by a past major outburst of the nucleus (cf. McNamara et al 2005 for Hydra~A) or located where the hot gas accumulates because the interior entropy of the bubbles matches the external value there. \begin{figure*} \includegraphics[width=2\columnwidth]{unsharp_labelled_twin.jpg.eps} \caption{Unsharp mask image made from the whole 0.3--7 keV band by subtracting an image smoothed with a Gaussian of dispersion 10 arcsec from one smoothed by 1 arcsec and dividing by the sum of the two images. Various features are labelled on the lower contrast image at the left. } \end{figure*} \begin{figure*} \includegraphics[width=2\columnwidth]{rgb_unsharp.jpg.eps} \caption{Colour image made from the 0.3--1.2 (red), 1.2--2 (green) and 2--7 keV (blue) bands. A 10 arcsec smoothed image has been scaled to 80 per cent of its intensity and then subtracted in order to bring out fainter features lost in the high intensity range of the raw images. The blue structure to the N of the nucleus is caused by absorption in the infalling high velocity system, projected at least 60~kpc in front of the nucleus of NGC\,1275 (Gillmon et al 2004).} \end{figure*} \section{Temperature and Pressure maps} The total of $\sim70$ million counts in the final all-band image from the ACIS-S3 chip means that we can measure spectral properties on unprecedentedly small scales. In order to proceed we have divided the image into bins with approximately the same number of counts and used \textsc{xspec} 11.3.2 (Arnaud 1996) with \textsc{mekal} models (Mewe, Gronenschild \& van den Oord 1985; Liedahl, Osterheld \& Goldstein 1995) to obtain spectral parameters, fitting between 0.5 and 7~keV. The temperature map shown in Fig.~\ref{fig:tmap} was derived in this way using a contour binning approach (Sanders in preparation) with 625 counts or more per spectrum. In each fit the metallicity (in Solar ratios; Anders \& Grevesse 1989) and absorption column density were fixed at values measured when fitting spectra from bins containing $10^4$ counts or more, except in the region around the High Velocity System, where the absorbing column density was allowed to be free. The results for these parameters are broadly similar to those found in our earlier work (Sanders et al 2004). Details will be given in a later paper. For the present work we concentrate on the temperature and emission measure distributions. We used standard blank sky observations to act as backgrounds for the spectral fitting. The background observations were split into sections to match the ratio of exposure times between the foreground observations. These datasets were then reprojected to match the foreground observations, and then reprojected to the 4952 observation.
The exposure times of the backgrounds were adjusted to ensure the same rate of counts between 9 and 12~keV as their respective foregrounds, in order to correct for the variation of the background with time. To create a total spectrum, the spectra from each of the individual observations were added together, excluding observations which did not have any counts in the region examined. The background spectra were added together similarly. The standard PSU CTI corrector response was used. Ancillary responses for each dataset and region were produced using the \textsc{ciao} \textsc{mkwarf} tool, weighting CCD regions using the number of counts between 0.5 and 7~keV. These ancillary responses were averaged for each region, weighting according to the number of counts between 0.5 and 7~keV for a particular dataset. \begin{figure} \includegraphics[width=\columnwidth]{Tmap4.jpg.eps} \caption{Temperature map calculated by fitting spectra with approximately 625 counts or more. The uncertainties of the individual fits range from 8~per~cent in the coolest regions to 20~per~cent in the hottest parts, ignoring the uncertainty on the metallicity and absorbing column density.} \label{fig:tmap} \end{figure} The temperature map (Fig.~4) shows in great detail the `swirl' around NGC\,1275 (Churazov et al 2000). Whether the swirl is really a single connected structure, or an outer ring partially opened on the E and connected to the rim of the inner N bubble (Dunn et al 2005b), remains unclear. Some `fountaining' can be seen to the N of this N bubble. This is associated with the N optical H$\alpha$ filaments, which are surrounded by gas at about $1\keV$ (see Fig.~3). A disruption in the outer ring is seen to the SE of the nucleus, coincident with the optical `blue loop'; this is discussed in Section 6. We now focus on measurements of the entropy $S$ and particularly the pressure $P$ of the gas. A simple method for obtaining these quantities is to assume that the density $n$ is proportional to the square root of the X-ray surface brightness and then use $P=nkT$ and $S=Tn^{-2/3}$. Here we use a slightly better approach based on the emission measure, $A$, obtained from the spectral fits. This is proportional to $n^2 V$, where $V$ is the volume along the line of sight. Since the emission is strongly peaked we ignore $V$ at this stage and produce `projected' entropy and pressure maps (Fig.~5). The entropy map (Fig.~\ref{fig:entropy_pressure} left) emphasises where gas may have cooled and resembles the temperature map. The pressure map (Fig.~\ref{fig:entropy_pressure} right), on the other hand, clearly shows a thick band around the inner radio bubbles and little sign of azimuthal asymmetry. As found by Sanders et al (2004), the pressure distribution is reasonably circularly symmetric, as expected for gas close to hydrostatic equilibrium. This is not just a consequence of our volume assumption, since we see that the `swirl' in temperature has completely disappeared, as has the arc noted in Fig.~2. A thick, higher pressure band surrounds the radio-filled cavities or bubbles (Fig.~6). This presumably is shocked gas produced by the inflation of the bubbles. It is remarkable that we see it as two mostly complete rings in the projected pressure map. This means that the two bubbles cannot lie in the plane of the Sky but must be arranged so that one is nearer to us than the other. Since the nearer radio jet is the S one, based on VLBI radio data, we suppose that the nearer bubble is the S one.
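Schematically, the map construction amounts to the following Python fragment, in which the fitted temperature and \textsc{mekal} normalisation maps are stand-in arrays (random here purely for illustration); only the scalings $S \propto kT\,A^{-1/3}$ and $P \propto kT\,A^{1/2}$, and the azimuthal-mean subtraction used for the pressure difference maps below, are the point.
\begin{verbatim}
# Pseudo-entropy and pseudo-pressure maps from fitted kT and mekal
# normalisation-per-arcsec^2 maps.  Inputs here are random stand-ins;
# only the scalings and the radial-mean subtraction matter.
import numpy as np

rng = np.random.default_rng(1)
kT = 3.0 + rng.random((64, 64))       # fitted temperatures (keV)
A = 10.0**(-4.0 - 2.0 * rng.random((64, 64)))  # norms per arcsec^2

S = kT * A**(-1.0 / 3.0)   # keV cm^{5/3} arcsec^{2/3}
P = kT * A**0.5            # keV cm^{-5/2} arcsec^{-1}

# pressure difference map: subtract the mean pressure at each radius
yy, xx = np.indices(P.shape)
r = np.hypot(xx - 32, yy - 32).astype(int)
cnt = np.maximum(np.bincount(r.ravel()), 1)
mean_at_r = np.bincount(r.ravel(), P.ravel()) / cnt
dP = P - mean_at_r[r]
\end{verbatim}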
\begin{figure*} \includegraphics[width=2\columnwidth]{proj_S_P_col2.jpg.eps} \caption{Entropy (left) and pressure (right) maps. The entropy map was calculated using $kT \: A^{-1/3}$, in units of keV~cm${}^{5/3}$~arcsec${}^{2/3}$, where $A$ is the \textsc{mekal} normalisation per square arcsecond. The pressure map was calculated using $kT \: A^{1/2}$, in units of keV~cm${}^{-5/2}$~arcsec${}^{-1}$. These maps were generated by fitting regions containing approximately 625 counts or more.} \label{fig:entropy_pressure} \end{figure*} There is some azimuthal asymmetry in the pressure map, mostly associated with the bubbles. In order to see this we have subtracted the mean pressure at each radius to produce the pressure difference map (Figs \ref{fig:deltaP_radio} and \ref{fig:deltaP}). There are clearly some lower pressure regions to the N and S, probably associated with older, outer bubbles. The region to the SSW has a higher metallicity, likely due to older bubbles dragging metal-rich gas there (Sanders et al 2005). To the south we see two further tangential arclike pressure minima beyond the outer S bubble. These coincide with the high abundance shell reported by Sanders et al (2005). To the N we also see a large arclike pressure minimum. We suspect that these arclike pressure minima are old bubbles. The large size of these bubbles could indicate that the activity was much stronger in the past, so blowing larger bubbles, or may just be due to bubbles merging. Surrounding gas may leak into the bubbles, so making them less buoyant, or magnetic fields and structures may be important. \begin{figure} \includegraphics[width=\columnwidth]{radio_press.jpg.eps} \caption{1.4~GHz radio map in blue superimposed on the pressure difference map in red, where the average pressure at each radius has been subtracted. In this map the temperatures and normalisations were measured using regions containing approximately $10^4$ counts or more.} \label{fig:deltaP_radio} \end{figure} No pressure jump is associated with the concave structure to the S, confirming that it is part of a cold front. \begin{figure} \includegraphics[width=\columnwidth]{deltaP_col.jpg.eps} \caption{Thermal pressure map where the mean pressure at each radius has been subtracted. In this map the temperatures and normalisations were measured using regions containing approximately $10^4$ counts or more. Note the `channel' caused by a sequence of four thermal pressure dips running to the S of the nucleus. The outer ones are assumed to be old ghost bubbles, and the missing pressure is assumed to be due to relativistic plasma. A twisted channel is also seen to the N. } \label{fig:deltaP} \end{figure} \section{The shock and ripples} \begin{figure*} \includegraphics[width=.9\columnwidth]{profilex_ne.eps} \includegraphics[width=.9\columnwidth]{profilex_e.eps} \includegraphics[width=.9\columnwidth]{profilex_s.eps} \includegraphics[width=.9\columnwidth]{profilex_nw.eps} \caption{Temperature, density, pressure and pressure variation profiles. The red line shows the ripples from the unsharp mask image. The top left figure shows the profile in the NE direction, the top right the E direction, the bottom left the S direction and the bottom right the NW direction. The dashed line in the NE profiles indicates the position of the shock front.} \end{figure*} We now discuss the detailed behaviour of the temperature, density and pressure around the shock surrounding the inner bubbles, and around the ripples.
Sectors have been defined to the NE, E, S and NW of the nucleus, and spectra extracted from bins spaced 5.4~arcsec in radius (Table~\ref{tab:sectors}). \begin{table} \begin{tabular}{lllll} Name & Centre RA & Centre Dec & Start angle & Stop angle \\ \hline North-east & 03:19:48.11 & +41:30:41.22 & 22.5 & 52.9 \\ East & 03:19:48.11 & +41:30:41.22 & 91 & 106 \\ South & 03:19:45.92 & +41:30:19.58 & 136.9 & 164.2 \\ North-west & 03:19:48.11 & +41:30:41.22 & 294.4 & 334.2 \\ \end{tabular} \caption{Sectors used to generate temperature, density and pressure profiles. Angles are measured from North in the eastern direction. Coordinates shown are J2000.} \label{tab:sectors} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{ratio_4_5.eps} \includegraphics[width=\columnwidth]{sector.jpg.eps} \caption{Top: The ratio of the spectrum inside the shock to that outside. The spectra have been normalised to have the same number of counts. The solid line shows the ratio expected if the abrupt density jump corresponds to a weak shock with $\gamma=5/3$. Bottom: Regions used in the above spectral analysis, shown superimposed on an unsharp-masked image. The lower region is that within the shock, the upper one is outside it. } \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{tempdist.eps} \caption{Distribution of temperature components within and outside the shock, using the same regions as shown in Fig.~9. The regions have been fitted with models having temperature components at 0.25, 0.5, 1, 2, 4, 8 and 16~keV. Note that both the 4 and 2~keV components increase within the shock, with the 2~keV one increasing proportionately more. Only upper limits are obtained for the 0.25, 1 and 8~keV components (the ends of the error bars are shown as the vertical lines).} \end{figure} The projected temperature profiles are shown in the top panels in Fig.~8. We see generally that, apart from where the inner and ghost bubbles lie, the temperature profiles increase smoothly from the inside out. We have explored deprojected temperatures using the \textsc{xspec} \textsc{projct} routine, but find them unstable, with the temperatures depending on which bins are used. Dropping bins from either the outside or the inside can have large, unpredictable effects on the results found for bins at intermediate radii. This may be due to the real geometry being obviously different from the spherical or ellipsoidal geometry assumed by the routine, or to the gas being multiphase. We therefore proceed to combine the projected temperatures with approximately deprojected densities from the emission measures to obtain pressure profiles. The deprojected densities are calculated by subtracting from the fitted normalisation at each radius the contribution expected from outer shells, assuming projection in a spherical geometry. The density, and thus pressure, profiles show variations (Fig.~8) corresponding to the ripples seen in the unsharp mask image (Fig.~2). The pressure residuals from a smooth power-law fitted from 20 to 70~kpc are shown in the lowest panel of the plots, together with the residuals predicted from the unsharp mask image. Pressure ripples ranging from $\pm 5$ to $\pm 10$ per cent are seen out to 50~kpc or more. Such pressure variations cannot be static.
They resemble sound waves so, following our earlier work (Fabian et al 2003a), we interpret them as sound waves\footnote{Or at least they operate as sound waves in a mixed thermal/relativistic plasma} produced by the cyclic bubbling behaviour, or at least major expansion episodes, of the inner bubbles. In the inner region they are high pressure regions fronted by a weak shock; further out the shocks weaken and cannot be distinguished from the overall pressure disturbance or ripple. A simple calculation serves to show the potential for the ripples to heat the gas, provided that the viscosity is high enough to dissipate their energy. Let us consider the region within 50~kpc where the ripples are most clearly seen. If the ripples move at $1000\hbox{$\km\s^{-1}\,$}$ then they cross this region in $5\times 10^7\yr$. They cause the pressure to oscillate with an amplitude of 5--10 per cent, which we conservatively take as 5 per cent of the thermal energy there. Consequently they can balance cooling provided that the cooling time (roughly the time for the gas to lose all its thermal energy) is at least about 20 times the crossing time, i.e. about a Gyr. This condition is well met, since the cooling time of the gas at 50~kpc is 2--3~Gyr, dropping to about $2\times 10^8\yr$ near the centre (for the hotter gas). The waves therefore need to dissipate about half their energy by 50~kpc. The vertical dashed line in the upper panel of Fig.~8 corresponds to the abrupt edge of the thick pressure band around the N bubble. It is most clearly seen in this direction because there is little lower energy emission superimposed upon it, as appears to happen at other azimuths. Our results show it to be associated with a jump in density, and thus pressure, and so it should be a shock front. However the projected temperature hardly changes, even dropping slightly postshock (Fig.~8), whereas it should rise on the basis of the factor 1.39 density jump, from 4.04~keV to about 5.1~keV if the gas is adiabatic (see also Fabian et al 2003a). The very deep image enables us now to examine the exact temperature structure across this shock front. Because of the geometrical restrictions on any extended deprojection approach, mentioned above, we compare spectra on either side of the shock. All methods, including single and two temperature fits, or using the outer emission as background for the inner, show no sign of any hotter temperature component within the shock in the range of 4--6~keV. The ratio of the spectra post and preshock, plotted in Fig.~9, clearly shows that the emission inside the shock is softer and inconsistent with the predicted rise, if $\gamma=5/3$. All the spectral fits have been carried out assuming that the gas is in collisional and ionization equilibrium and homogeneous within each region. We have also carried out a multi-temperature analysis of the emission both inside and outside the shock (Fig.~10). Of the increase in counts per unit area across the jump, going from outside to inside, 56 per cent is in the 4~keV component and 28 per cent in the 2~keV one. There is good evidence that the gas is multiphase. The 4~keV component, which is the volume-filling one, shows an increase of density across the shock but no increase in temperature. The soft X-ray emission from the outer radial, N optical H$\alpha$ filament stops abruptly at the shock front. It is possible that the 2~keV component arises from shock heating of this cooler (1~keV) gas. The pressure jump at the shock front indicates that this is a weak shock, so most of the heating is just $P \: \mathrm{d}V$ compression.
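As a numerical restatement of the ripple-heating estimate above (all values as quoted in the text), consider the following short check in Python.
\begin{verbatim}
# Restating the ripple-heating estimate with the quoted numbers.
yr, kpc = 3.156e7, 3.086e21       # s per yr, cm per kpc
v = 1.0e8                         # ripple speed: 1000 km/s in cm/s
t_cross = 50.0 * kpc / v / yr     # ~5e7 yr to cross the inner 50 kpc
frac = 0.05                       # energy deposited per crossing (5%)
t_needed = t_cross / frac         # ~1e9 yr: minimum cooling time that
print(t_cross, t_needed)          # the ripples can balance
\end{verbatim}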
Why the 4~keV component does not change is puzzling if the gas is adiabatic; much of the heating in a weak shock is just compressional $P \: \mathrm{d}V$ heating. A fair approximation for the temperature behaviour expected in a shock where the density jumps by the observed factor of 1.39 (Fig.~8) from 4.04~keV is that the postshock temperature $T=(2.5 + 1.56\gamma)\keV$. The observed behaviour of the gas is therefore explained if the 4~keV component is isothermal, with $\gamma\sim1$. A simple explanation for this would be that thermal conduction is operating on this volume-filling gas phase. The electrons, moving faster than the ions, can go ahead of the shock (see e.g. Zel'dovich \& Raizer 1966; Borkowski, Shull \& McKee 1989). If the magnetic field in this region is mostly radial, then conduction can eliminate temperature differences on a timescale \begin{equation} t_{\rm cond}\approx n k \ell^2/\kappa = 2.3\times 10^6 n\ell^2 T^{-5/2} \yr \end{equation} which compares with the timescale for a depth $\ell$ of matter to accumulate behind the shock (in the rest frame of that matter) \begin{equation} t_{\rm shock}=4.8\times10^6 \ell \yr \end{equation} where $\ell$ is the length inside the shock in units of 3~kpc (the bins in Figs.~8 and 9 are 2~kpc apart). The full Spitzer (1956) rate for conduction is assumed here; the total postshock density, $n$, is in units of the observed value of $1\hbox{$\cm^{-3}\,$}$ (Fig.~8), and the postshock temperature, $T$ (assuming an adiabatic gas), is in units of the expected value of $5.1\keV = 6.3\times 10^7{\rm\thinspace K}$. Magnetically-isolated blobs, such as may comprise the lower temperature component, are compressed adiabatically. The post-shock timescale for electron-ion equilibration is comparable to the above time ($\sim 2\times 10^6\yr$). This enhances the effect of conduction, since only the electron energy, about half of the total, needs to be conducted ahead of the shock. We envisage that the ion temperature jumps at the shock front but that the electron temperature varies smoothly through this region, with a hotter precursor extending into the unshocked gas (see Fig.~7.19 on p519 of Zel'dovich \& Raizer 1966); both the electron and ion densities jump at the shock front. This result introduces the possibility that thermal conduction is effective in parts of the innermost regions of clusters. It has been proposed and tested as a means for heating the gas from the outside, but found to be inadequate for clusters and regions below 5~keV (Kim \& Narayan 2003; Voigt \& Fabian 2004). What is needed in the Perseus cluster is for thermal conduction to operate throughout much of the inner hotter volume-filling phase. The ripples would therefore be {\it isothermal} sound waves (see Fabian et al 2005 for a comment on this possibility). Both sound waves and conduction are then effective in distributing the $P \: \mathrm{d}V$ work done by the bubbles into the surrounding gas. Repeated bubbling in the central region may have ordered the magnetic field into a roughly radial structure. Cooler and/or cooling temperature components embedded in the hotter gas can damp any temperature rise behind the shock if they mix with the hotter gas. The mass fraction of cooler gas required (approximately 30 per cent) appears not to be high enough (see Fig.~10) for this process to be important.
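Both the shock-temperature relation and the competition between equations (1) and (2) can be checked in a few lines, using the scaled units defined above:
\begin{verbatim}
def t_cond(n=1.0, l=1.0, T=1.0):
    """Conduction time of eq. (1), in yr (full Spitzer rate)."""
    return 2.3e6 * n * l**2 * T**-2.5

def t_shock(l=1.0):
    """Accumulation time of eq. (2), in yr."""
    return 4.8e6 * l

# Conduction outruns the accumulation of postshock material:
print(t_cond(), t_shock())              # 2.3e6 yr < 4.8e6 yr

# Postshock temperature for a 1.39 density jump from 4.04 keV:
for gamma in (5.0 / 3.0, 1.0):
    print(gamma, 2.5 + 1.56 * gamma)    # 5.1 keV adiabatic;
                                        # 4.06 keV isothermal
\end{verbatim}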
It remains possible that mixing takes place with larger masses of unseen cold gas which radiates much of the thermal energy in as yet unseen bands. An issue which could be very important for shock propagation in the inner intracluster medium is the presence of a relativistic plasma (cosmic rays and magnetic field) in the inner core of the Perseus cluster. This is evident here from the synchrotron emission seen as the radio `mini-halo' (Pedlar et al 1990; Gitti et al 2003) and the inverse Compton emission seen as a hard X-ray flux component (Sanders et al 2005; it appears as the 16~keV component in Fig.~10). In the collisionless conditions relevant to the shock it may be possible that the relativistic plasma soaks up the energy, leaving the gas isothermal. Indeed it could be repeated shocks from the bubbles which reaccelerate the relativistic particles. These particles could redistribute the energy to larger radii, serving to transport some of the energy and creating a distributed heat source for the gas. We note that the electron temperatures observed behind the strong shocks in young supernova remnants do not always fit expectations for simple hydrodynamical shocks, probably due to particle acceleration (Rakowski 2005 and references therein). Although promising as a mechanism, there are many uncertainties as to how it could operate, and why there is no sharp rise in either the synchrotron emission seen in radio maps or the inverse Compton emission at the position of the shock. Moreover, it does not explain how the electrons avoid compressional heating. The isothermal nature of the inner gas raises the possibility that the bubbles expand much faster than previously suspected from observations. Initial models for the action of a central radio source on the ICM by Heinz et al (1998) predicted that the bubbles would be surrounded by shocks, but \emph{Chandra} showed no evidence for shock-heated gas. Efficient thermal conduction will, however, eliminate shock heating as a diagnostic. Consequently the bubbles may expand, at times, faster than inferred, even supersonically. The likely behaviour, given the variability of radio sources, is that they expand in fits and starts, with each rapid expansion phase giving rise to a sound wave. The observation from the Perseus cluster of only one set of ghost bubbles within 50~kpc radius yet 3 or more ripples allows for each bubble to generate several ripples before becoming buoyant enough to separate and rise. This means that any estimate of bubbling power based simply on buoyancy times (e.g. Birzan et al 2004; Dunn et al 2004, 2005a) is a lower limit. A further issue with regard to the energy injected by the bubbles is the thickness of the postshock gas. This is very similar to the radius of the bubbles, so the shocked shell has a volume about 7 times that of the bubbles themselves. The pressure in the shocked gas is 30 per cent above that of the outer unshocked gas, $P$ (Fig.~8), so the energy content of the postshock gas is more than twice that obtained by assuming it is just $PV$, where $V$ is the volume of a bubble. The work done ($\int P\:\mathrm{d}V$) will be yet higher if some has been transported away by conduction or relativistic particles. \section{The multiphase nature of the gas} Figs.~1 and 3 clearly show filamentary soft X-ray emission which is closely associated with the optical H$\alpha$ filaments (Fabian et al 2003b).
This soft emission has a temperature of between 0.5 and 1~keV and would appear much brighter if the Galactic column density to the Perseus cluster were not as high as the observed value of $\sim 1.3\times 10^{21}\hbox{$\cm^{-2}\,$}$. The Doppler velocities determined for the filaments are $100-200\hbox{$\km\s^{-1}\,$}$ and coherent over many kpc (Hatch et al 2005b) so, given their large radial extent and likely origin as being pulled out from the centre by rising bubbles (Fabian et al 2003b), the lifetimes of the filaments are several tens of millions of years, or even longer. In order to survive in the surrounding hot gas they must be insulated from it, or thermal evaporation would have caused them to disappear within a million yr (equation 1). Conduction must therefore be highly suppressed, by at least a factor of 100, probably due to magnetic fields along their length (conduction is suppressed perpendicular to the field direction). As already mentioned, the filaments stop at the shock, which is probably disrupting them there. The filaments coincident with the shock to the SE are probably just projected in front of the shock and are not within it. Such magnetically isolated regions need not completely vanish once they are disrupted and may survive as higher temperature blobs maintaining their isolation. The gas can therefore be multiphase, not due to a thermal cooling instability, but due to the forced mixing of different components. Whether there is then slow conductive evaporation or radiative condensation (see e.g. B\"ohringer \& Fabian 1989) or turbulent mixing (e.g. Begelman \& Fabian 1990; Loewenstein \& Fabian 1990) remains to be seen. \begin{figure} \includegraphics[width=\columnwidth]{mass_tempdist.eps} \caption{Distribution of mass at the fixed temperatures from within the innermost 1.5~arcmin radius. For comparison the expected result from a constant pressure cooling flow is shown.} \end{figure} \begin{figure*} \includegraphics[width=2\columnwidth]{ha_05mass_box.jpg.eps} \includegraphics[width=2\columnwidth]{10_20_40_mass_box.jpg.eps} \caption{The H$\alpha$ filaments from the optical narrow-band image of Conselice et al (2000) are shown for comparison with maps of mass in the 0.5, 1, 2 and 4~keV components.} \end{figure*} We have therefore conducted a multi-temperature determination of the gas distribution in the Perseus core. The individual spectra, generated from regions chosen to contain $10^4$ counts or greater, have been fitted with a multi-temperature model consisting of gas at 0.5, 1, 2, 4, 8 and 16~keV (see also Sanders et al 2004 for similar fits to the 200~ks data). The results have been mapped in terms of mass, determined from the emission measure $n^2V$ divided by the density $n$ relevant for the pressure at that radius and the measured temperature of the component. Of course, the volume filling factor of gas at temperatures significantly different from the mean temperature found from the single-temperature fits (Fig.~4) is small. The widely differing mass distributions on the sky show that the gas is genuinely multiphase (i.e. having different temperatures at the same radius) and that we are not mapping a mere projection effect.
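The mass mapping just described reduces to a few lines of arithmetic. The sketch below is a minimal illustration with hypothetical input values; it assumes the emission measure comes from the fitted spectral normalisation and that each component sits at the local pressure of Fig.~8.
\begin{verbatim}
M_P, MSUN = 1.673e-24, 1.989e33     # proton mass, solar mass (g)

def component_mass(em, P_over_k, T_keV, mu=0.61):
    """Mass of one temperature component: emission measure n^2 V
    divided by the density n set by the local pressure and the
    measured component temperature.

    em       : n_e n_H V in cm^-3 (from the fitted normalisation)
    P_over_k : electron pressure n_e T in cm^-3 keV (from Fig. 8)
    T_keV    : component temperature in keV
    """
    n_e = P_over_k / T_keV          # density forced by pressure balance
    ne2V = 1.2 * em                 # n_e^2 V, taking n_H ~ n_e / 1.2
    n_totV = 1.92 * ne2V / n_e      # total particle number
    return mu * M_P * n_totV / MSUN

# Hypothetical region: em = 1e65 cm^-3, P/k = 0.2 cm^-3 keV, T = 0.5 keV
print("%.1e Msun" % component_mass(1e65, 0.2, 0.5))   # ~3e8 Msun
\end{verbatim}
Dividing such a mass by the radiative cooling time quoted below ($\approx 10^8T^2\yr$) then gives the implied cooling rate; $10^9\Msun$ of 0.5~keV gas, for example, would cool at $\sim40\hbox{$\Msun\yr^{-1}\,$}$.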
The 0.5~keV map shows a striking similarity to the optical filaments, and the total mass in this gas is much larger than the $3\times 10^7\Msun$ typically estimated for the optical-line gas from the total H$\alpha$ luminosity (Heckman et al 1989), a gas temperature of $5000{\rm\thinspace K}$ and the surrounding pressure found here for the outer filaments. The continuing pressure rise to the centre will reduce this estimate, and the addition of molecular hydrogen, seen in the infrared (e.g. Hatch et al 2005b and references therein), will tend to increase it, so it should be a reasonable estimate. The mass maps at the various temperatures are plotted in Fig.~12 and the total mass distribution, determined from the masses within 1.5~arcmin of the nucleus, is shown in Fig.~11. The different points at each temperature show the total mass including only those regions which are significant at the 1, 2 and 3-$\sigma$ levels. Noise will be a strong contaminant of the lowest-significance point. For comparison, the mass distribution expected for a steady cooling flow of $300\hbox{$\Msun\yr^{-1}\,$}$ is superimposed. Interestingly, we find that there is a large drop-off in mass at 1~keV but a recovery at around 0.5~keV. This rise is of course due to the filament region. Until we know the fate of such material, in terms of whether it is being heated or cooled by radiation or mixing, we cannot say whether the bulk of the cooler gas, which lies in an E-W extended clump around the nucleus, is the residual of a cooling flow or not. We note that Bregman et al (2005) find OVI emission (characteristic of gas at $5\times 10^5{\rm\thinspace K}$) in a 30 arcsec \emph{FUSE} aperture centred on the nucleus, consistent with a mass cooling rate of about $50\hbox{$\Msun\yr^{-1}\,$}$. This is comparable to the rate inferred from our mass determination for gas at 0.5~keV (i.e. $\sim 5\times 10^6{\rm\thinspace K}$), since the radiative cooling time of gas between 0.5 and 2~keV in the inner parts of the cluster is about $10^8 T^2\yr$, where $T$ is in keV. Peterson (2005; private communication) finds a limit of only $20\hbox{$\Msun\yr^{-1}\,$}$ from a search for FeXVII emission in \emph{XMM-Newton} RGS spectra of the inner 30 arcsec radius. The fact that we see less gas at 1~keV could be the consequence of cooling due to mixing, rather than radiation, dominating in that temperature range. Such a possibility has been discussed by Fabian et al (2002) and Soker, Blanton \& Sarazin (2002). The energy of the hotter gas could in part go to heating the cooler gas at $\sim 10^4{\rm\thinspace K}$, where there has long been a heating and excitation problem (Johnstone, Fabian \& Nulsen 1987; Heckman et al 1989; Sabra et al 2002). Indeed, a mixing solution similar to a turbulent, radiative mixing layer seems inevitable given the much lower mass in cold gas below $10^4{\rm\thinspace K}$ than at 0.5~keV. A final inter-relationship between the hotter X-ray emitting gas and the optical filaments is shown in Fig.~13. There is a partial ring structure to the SE in the temperature swirl, resembling a letter `C' written backwards. It coincides with some bright optical filaments and in particular with the `blue loop', first remarked on by Sandage (1972) and seen well in many recent images (e.g.
the blue-band Jacobus Kapteyn Telescope, JKT, image of Fig.~13). We presume that gas in the swirl at this location collapsed and formed the stars in the astronomically recent past. \begin{figure} \includegraphics[width=\columnwidth]{blueloophax.jpg.eps} \caption{The blue loop is shown in X-ray temperature (left), H$\alpha$ (from Conselice et al 2001, centre) and blue light (from JKT; right).} \end{figure} Heinz \& Churazov (2005) have proposed that the relativistic component discussed in Section 4 could exist in small blobs which could help to dissipate sound waves. We see no obvious signs of small holes in the X-ray emission larger than a few hundred pc in size. How well the relativistic and thermal components are mixed is of importance for transport processes in the region. \section{Discussion} We have found that the shock seen in our 200~ks image is isothermal. The ripples seen beyond the shock are therefore likely to be isothermal waves. Their energy is then dissipated by viscosity. Conduction and sound waves can act together to dissipate and distribute the energy from the radio source, and ultimately the central massive accreting black hole. An isothermal shock allows energetic bubbling to occur at the centre without overheating the innermost region, a problem noted by Fujita \& Suzuki (2005) and Mathews et al (2005). In the work of Fujita \& Suzuki (2005) it is assumed that all wave dissipation occurs at the shock front; no later dissipation via viscosity, as the (observed) waves propagate further, is included. In one model they include conduction at 20 per cent of the Spitzer rate and find agreement with the shape of the temperature and density profiles. However, most of the energy in their model is supplied by thermal conduction from the hotter outer gas; the AGN only dominates over the region from 20--30~kpc where the shocks occur. As they remark, a double heating model, with the AGN heating the inner regions and conduction the outer, was proposed earlier by Ruszkowski \& Begelman (2002). It is clear from the temperature profile shown in the top left panel of Fig.~8 that conduction of heat from the outer hotter gas is not important within the inner 60~kpc in the Perseus cluster, since the temperature profile is so flat. Indeed, from 40--55~kpc the gradient acts in exactly the wrong direction. As discussed in Section 4, the observed ripples (which are strong sound waves or weak shocks) have more than sufficient energy to heat the inner 50~kpc, so it is not clear that any thermal conduction of heat from the outer gas is required. What our analysis has shown is that thermal conduction acting in the inner regions can account for the observed isothermal nature of the shock and so prevent the problem of an accumulation of hot shocked gas. The conduction merely acts to mediate the shock and redistribute the energy from the central AGN. The magnetic configuration of the field in the core is crucial to the conductive behaviour. We require an approximately radial field across the shock, which is not understood. One possibility is that it arises as a consequence of cooling and compression of the inner gas in the past, which leads to the frozen-in magnetic field being predominantly radial (Soker \& Sarazin 1990). Nearby we have low temperature H$\alpha$ emitting filaments which must be many tens of millions of years old and so magnetically isolated. We also find evidence for multi-temperature, presumably multiphase, gas. The magnetic connectedness is crucial to how the gas behaves.
It raises the possibility that the swirl seen in the temperature and entropy maps is magnetically separate from the rest of the gas. Perhaps it is a fossil from the merger of a galaxy with NGC\,1275, where the incoming (less massive) galaxy `combed' the field into the apparent swirl. The 2~keV gas immediately around the rim of the N inner bubble is presumably protected from evaporation by a tangential field there. Such speculation may eventually be testable when it is possible with the Expanded Very Large Array to carry out Faraday Rotation studies in this region at higher frequencies and greater sensitivity than currently feasible. Preliminary indications from high resolution studies of the nucleus with the Very Long Baseline Array indicate fairly extreme Rotation Measures of up to $\sim$7000~rad~m$^{-2}$ (Taylor et al. 2005, in preparation). We have also found a roughly N-S channel in the pressure difference map which demonstrates the passage of a sequence of radio bubbles. The outer ones are large and could be where the bubbles accumulate, or may just represent a past, more energetic, period of activity. We also see part of an unusual cold front to the S. This region is seen clearly in the unsharp mask images (Fig.~2) and in one generated from data from all chips (Fig.~13). This structure seems to be connected to a region to the SW of the nucleus where the channel appears. It could represent gas associated with subcluster merging in the cluster. Most likely, given the relationship with the bubble channel, the gas could be evidence of past energetic bubbles. The bubble channel is good evidence that the bubbles are not easily disrupted, presumably due to the magnetic structure (De Young et al 2003) and/or viscosity in the surrounding gas (Reynolds et al 2005). We assume that the pressure dips in the channel because there is unseen buoyant relativistic plasma there from the radio outbursts. An overall picture of the region is shown in the image of Fig.~14, where data from all chips have been used. The structure of the inner regions can be seen together with the outer S bay, embedded within the more extended peak of cluster X-ray emission. The upper part of the H$\alpha$ fountain (Fig.~2) can also be more clearly seen. \section{Summary} Using a very deep, 900~ks \emph{Chandra} image of the core of the Perseus cluster, we have found new outer features 50--80~kpc from the nucleus and measured the detailed properties of gas near the centre. The features are in the form of a concave cold front and a bay-like region of hot gas which is in approximate pressure equilibrium. This could be the result of an energetic past outburst from the nucleus, or where bubbles accumulate. The inner radio bubbles are surrounded by complete higher pressure bands of gas behind a sharp front. The gas temperature does not change across the shock front, probably indicating that thermal conduction operates efficiently there, or that co-existing relativistic plasma mediates the shock. Pressure variations coincident with ripples previously found in unsharp mask images reveal the presence of isothermal sound waves. The isothermal nature of the innermost gas means that a simple temperature estimate there does not reveal the expansion velocity of the bubbles. We suspect that they expand in rapid steps associated with outbursts of activity from the central radio source. Provided that the energy in the ripples is dissipated by viscosity, the present heating rate in the ripples is sufficient to balance radiative cooling.
Larger pressure variations are seen along a N-S channel, suggesting a sequence of bubbles, revealing the activity of the central radio source for the past $10^8\yr$. The gas in the centre is significantly multiphase, with a large mass of gas ($\sim 10^9\Msun$) associated with the optical H$\alpha$ filamentary nebula, and with ten times more mass in 0.5~keV gas than in that radiating the optical emission lines. Mixing is likely occurring between the hot ICM and the cold filamentary gas, with much radiative cooling probably taking place below $10^6{\rm\thinspace K}$. Cluster cores are complicated, with the behaviour dependent on the bubbling of a central radio source and on microphysical transport processes. These in turn depend on the magnetic field structure, which itself may be a consequence of past cooling and bubbling. \begin{figure*} \includegraphics[width=2\columnwidth]{unsharp_full.jpg.eps} \caption{Total 0.5--7~keV image from all \emph{Chandra} CCD chips.} \end{figure*} \section{Acknowledgements} We thank the referee (Y. Fujita) for comments. CSC and ACF thank the Royal Society for support. GBT acknowledges support for this work from the National Aeronautics and Space Administration through Chandra Award Number GO4-5134X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory on behalf of the National Aeronautics and Space Administration under contract NAS8-03060. The work of SWA is supported in part by the U.S. Department of Energy under contract number DE-AC02-76SF00515.
\section{Introduction} Star clusters in the dwarf irregular galaxy NGC 6822 were first reported by \cite{hubble25} and systematically investigated by \cite{hodge77}. The most recent study, by \cite{KH04}, used the available HST Archive data and cataloged all the star clusters identified, including one genuine globular cluster, Hubble VII. However, no star cluster survey in the outer halo of NGC 6822 had ever been attempted before, although the existence of a rather large stellar halo around the galaxy was suggested by \cite{letarte02} from their carbon star survey. \section{Discovery} Visual inspection of the wide field survey data around NGC 6822 has revealed three new star clusters. The locations and morphologies of these clusters are shown in Figure~\ref{fig}. From morphological and photometric studies, two clusters, SC1 and SC2, are regarded as genuine old globular clusters with ages greater than 3 Gyr (see \cite[Hwang et al. 2005]{hwang05} for details). One noteworthy point is that the new star clusters are distributed out to very remote locations (\textit{Left} panel of Figure~\ref{fig}). The projected distance from the NGC 6822 center to SC1, the most remote cluster, is about 12 kpc. For comparison, NGC 1841, the outermost star cluster in the LMC, is located at about 13 kpc from the LMC center. Another important point in the images is that all three clusters are extended and are clearly resolved into stars, whereas Hubble VII is not resolved at all in our data (\textit{Right} panel of Figure~\ref{fig}). Further investigation shows that the half-light radii of these new clusters are larger than 10 pc, and even larger than 20 pc for SC1. \section{Implications} The existence of the newly discovered star clusters suggests that the underlying halo has a structure different from the giant HI disk-like cloud which extends along the NW-SE direction (\cite[de Blok \& Walter 2000]{BW2000}). These clusters also provide proof that the halo of NGC 6822 is considerably larger than previously expected (see \cite[Lee \& Hwang 2005]{mglee05} in this volume). The extended structures of these new clusters are very unusual features. SC1, among these, is found to be as extended as the new star clusters recently discovered in the halo of M31 (\cite[Huxor et al. 2005]{hux05}; \cite[Lee et al. 2005]{mglee05b}). The formation mechanism of these extended star clusters, including its correlation with the evolutionary history of the host galaxies, is not yet clearly understood, requiring further studies. \begin{figure} \includegraphics[scale=.30]{nhwang.fig1a.eps} \includegraphics[scale=.31]{nhwang.fig1b.eps} \caption{\textit{Left}: Locations of the new star clusters in NGC 6822 (stellar symbols). SC1 is about 12 kpc away from the galaxy center. Marks (triangles) inside and around the ellipse are previously known star clusters. \textit{Right}: Sloan \textit{i} band images of the three new star clusters and the known globular cluster Hubble VII in NGC 6822 (upper right; marked by an arrow). Note the resolved member stars of the new star clusters. Each image is $37'' \times 37''$ wide.} \label{fig} \end{figure} \begin{acknowledgments} N.Hwang was supported in part by the BK21 program. M.G.Lee was supported in part by the ABRL(R14-2002-058-010000-0). \end{acknowledgments}
\section{Introduction} The connection between accretion flow parameters and radio jet production is a mysterious one. It has been argued in \citet{wan04} that the jet kinetic luminosity, $Q$, is correlated with the bolometric luminosity of the thermal emission, $L_{bol}$, produced by the accretion flow in blazar type AGN. However, using virtually identical techniques to those of \citet{wan04}, \citet{cel97} came to the opposite conclusion. In order to shed some light on this issue, we explore this question from a different perspective for the particular case of quasars. The vast majority ($\sim 90\%$) of quasars are radio quiet, whether their $L_{bol}$ lies just above the Seyfert 1/quasar dividing line or at the other extreme, $L_{bol}>10^{47}\,\mathrm{ergs/s}$. This observation indicates that there are additional parameters, beyond $L_{bol}$ and $L_{bol}/L_{Edd}$, that control the power of the radio jet. We note that the very high $Q$, FRII radio source, Cygnus A, $Q \approx 1.6\times 10^{46}\,\mathrm{erg/s}$ (according to equation (3.1) of this article), harbors a hidden quasar with $L_{bol}$ just above the Seyfert 1/quasar dividing line and has a low Eddington rate, $L_{bol}/L_{Edd}\sim 0.01$ \citep{smi02,tad03}. Cygnus A is an extremely powerful FR II radio source even when compared with low frequency selected samples at high redshift \citep{wil99}. It has two orders of magnitude higher $Q$ than most FR II quasars (see Chapter 10 of \citet{pun01} and references therein). Thus, Cygnus A provides a well studied ``standard'' candle for an extremely powerful FR II source. This motivated us to explore the opposite extreme in the quasar family, the very powerful quasar PKS~0743$-$67, which is luminous in all frequency bands and seemed to be a likely candidate for extremely high $Q$ jets. In sections 3 and 4, it is shown that it has a powerful accretion luminosity, $L_{bol}> 2\times 10^{47} \mathrm{ergs/sec}$, $L_{bol}/L_{Edd}\approx 1$, a strong unresolved VLBI radio core and prominent radio lobes. Even though the quasar is at a redshift of $z=1.511$ \citep{bec02}, both the radio core and the radio lobe flux densities are $\sim 1$~Jy. \par In section 5, it is demonstrated that the high $Q$ values at these two extreme ends of the quasar range, Cygnus A and PKS 0743-67, are not out of line with the properties of the quasar population as a whole. By studying a sample of quasars from \citet{wan04}, we find that $Q$ is not correlated with $L_{bol}$ for radio loud quasars that possess blazar cores. Secondly, we demonstrate that the inverse correlation claimed between $Q/L_{bol}$ and $L_{bol}/L_{Edd}$ in \citet{wan04}, although true, is a trivial consequence of the fact that $Q$ is not correlated with $L_{bol}$ in quasars. The primary conclusion of this study is that the intrinsic power of a quasar jet is not, to first order, controlled by the accretion rate. \par We have performed deep radio observations with the Australia Telescope Compact Array (ATCA) in order to understand the radio structure of PKS~0743$-$673; the lobe emission alone would qualify it for the 3C catalogue if the source were in the Northern Hemisphere, and our observations indicate that the jet kinetic luminosity, $Q$, is far greater than that of Cygnus A. \section{The Radio Observations} Previously, \citet{ray02} imaged the radio structure of PKS~0743$-$673 at 4.8 GHz with the ATCA.
We performed deep observations at 2.496, 4.800 and 8.640 GHz in order to image the source structure and obtain higher resolution as well as spectral and polarization data. It is essential to obtain both higher resolution and accurate spectral data to assess the energy content of the extended structure. Our 8.640 GHz map is shown in Figure 1. \begin{figure} \begin{center} \includegraphics[scale=0.8]{f1.eps} \end{center} \caption{PKS 0743-673 at 8640 MHz. The peak intensity in the image is 1.25 Jy/beam. The beam-size is 0.9$''\times$1.1$''$ at a position angle of $-78.8^{\circ}$. Contour levels for the Stokes I emission are 0.0125 Jy/beam $\times$ ($-0.125$, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 64). The peak fractional polarization is 52.1\%. The vector lengths represent the electric field, with 25.4\% fractional polarization per arcsecond. The beam ellipse is plotted in the lower left hand corner of the figure.} \end{figure} The data from our ATCA observations are presented in Table 1. Quasars with a strong flat spectrum core often have Doppler enhanced kpc scale jets \citep{pun95}. Thus, an estimate of $Q$ in PKS~0743$-$67 requires an analysis of the data in Table 1 in order to determine whether or not the jet and lobe emission to the east of the nucleus is Doppler enhanced. The components E1, E2, E3 denote the eastern jet/lobe components in Figure 1, numbered consecutively from west to east. \begin{table} \begin{center} \begin{tabular}{ccccc} \hline \hline $\nu$ & Comp.& $S$ & $m$ & $\alpha$ \\ (GHz) & & (Jy) & \% & \\ \hline \hline 2.496 & W & 0.17 & 11 &...\\ & C & 1.34 & 7 &...\\ & E1 & ... & ... &...\\ & E2 & 0.53 & 11 &...\\ & E3 & 0.75 & 14 &...\\ \hline 4.800 & W & 0.08 & ... &1.16\\ & C & 1.17 & 8 &0.21\\ & E1 & 0.02 & 28 &...\\ & E2 & 0.31 & 22 &0.82\\ & E3 & 0.45 & 15 &0.78\\ \hline 8.640 & W & 0.04 & ... &1.17\\ & C & 1.25 & 5 &0.06\\ & E1 & 0.01 & 52 &...\\ & E2 & 0.16 & 26 &0.98\\ & E3 & 0.24 & 15 &0.93\\ \hline \hline \end{tabular}\\ \caption{ATCA radio data for PKS~0743$-$673. Column 1: $\nu$, radio frequency. Column 2: Component identification from Figure 1. Column 3: $S$, total flux density at $\nu$. Column 4: $m$, percentage polarization at $\nu$. Column 5: $\alpha$, two point spectral index between $\nu$ and 2.496 GHz ($S\propto\nu^{-\alpha}$).} \end{center} \end{table} In Figure 1, the magnetic field (perpendicular to the plotted electric field vectors) at the core is parallel to the jet direction and remains so along the length of the jet, even though the eastern jet goes through a large apparent bend. At the end of the eastern jet, the magnetic field switches to being perpendicular to the jet direction, typical of a radio galaxy hot spot.
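As a cross-check of Table 1, and anticipating the kinetic-luminosity estimator of equation (3.1) in the next section, the short sketch below recomputes a tabulated two-point spectral index and then reproduces the $Q$ estimate derived below, assuming the western-lobe index $\alpha=1.16$ for the extrapolation to 151 MHz.
\begin{verbatim}
import numpy as np

def alpha(S1, nu1, S2, nu2):
    """Two-point spectral index, S ~ nu^-alpha."""
    return -np.log(S2 / S1) / np.log(nu2 / nu1)

# Component C of Table 1 between 2.496 and 4.800 GHz:
print(round(alpha(1.34, 2.496, 1.17, 4.800), 2))    # 0.21, as tabulated

# Twice the western lobe flux (0.34 Jy at 2.496 GHz), extrapolated
# to 151 MHz with the steep lobe index:
a = 1.16
F151 = 0.34 * (0.151 / 2.496) ** -a
print(round(F151, 1))                               # ~8.8 Jy

def Q(F151, z, a):
    """Jet kinetic luminosity of eq. (3.1), in erg/s."""
    x = 1.0 + z
    Z = 3.31 - 3.65 * (x**4 - 0.203 * x**3 + 0.749 * x**2
                       + 0.444 * x + 0.205) ** -0.125
    return 1.1e45 * (x ** (1 + a) * Z**2 * F151) ** (6.0 / 7.0)

print("%.1e erg/s" % Q(F151, 1.511, a))             # ~4.1e46
\end{verbatim}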
\section{Estimating the Jet Kinetic Luminosity} In order to avoid the ambiguities associated with Doppler enhancement, we estimate the jet kinetic luminosity from the isotropic extended emission, applying a method that allows one to convert 151 MHz flux densities, $F_{151}$, measured in Jy, into estimates of kinetic luminosity, $Q$, from \citet{wil99,blu00} by means of the formula derived in \citet{pun05}: \begin{eqnarray} && Q \approx 1.1\times 10^{45}\left[(1+z)^{1+\alpha}Z^{2}F_{151}\right]^{\frac{6}{7}}\mathrm{ergs/sec}\;,\\ && Z \equiv 3.31-(3.65) \nonumber \\ &&\times\left(\left[(1+z)^{4}-0.203(1+z)^{3}+0.749(1+z)^{2} +0.444(1+z)+0.205\right]^{-0.125}\right)\;, \end{eqnarray} where $F_{151}$ is the total optically thin flux density from the lobes (i.e., no contribution from Doppler boosted jets or radio cores). We assume a cosmology with $H_{0}$=70 km/s/Mpc, $\Omega_{\Lambda}=0.7$ and $\Omega_{m}=0.3$. In order to implement this technique, one needs to determine which components are optically thin and which are Doppler enhanced. There are two possible interpretations of the data that one can use to calculate $Q$. The most straightforward approach is to note that all of the emission is optically thin and that the large angular size of the source, $\approx 250\,\mathrm{kpc}$, argues against significant Doppler enhancement of the large-scale structures. However, we choose the most conservative approach: assume that all of the eastern emission is part of a jetted system and is all Doppler enhanced, even the hot spot to some extent (this would explain why the eastern hot spot is more luminous than the western hot spot). If the source were symmetric and viewed in the sky plane, then an upper limit to the total flux would be twice the observed flux from the western hotspot, 340 mJy at 2.496 GHz. Extrapolating this to 151 MHz (with the western-lobe spectral index $\alpha=1.16$ from Table 1) yields a lobe flux of 8.8 Jy. Inserting this value into (3.1) yields $Q= 4.1 \times 10^{46}$ ergs/sec. This equates to 2.5 times the kinetic luminosity of Cygnus A computed by the same method. If the eastern lobe is not Doppler enhanced then the kinetic luminosity is even larger. We note that no 151 MHz observations of PKS~0743$-$673 have been made. However, \citet{lar81} measured the total flux density at 408 MHz to be 8.6 Jy. This measurement will be dominated by the extended emission of the source, making an estimate of 8.8 Jy at 151 MHz for the unbeamed emission conservative. \par A 2.3 GHz VLBI measurement of PKS 0743-67 was made in \citet{pre89}. A secondary unresolved radio structure, presumably a strong knot in a jet, lies to the east of the core, towards the base of the kpc-scale jet seen in Figure 1. The VLBI emission is dominated by an unresolved core on the 10 milliarcsecond scale with a flux density of 1.2 Jy. Not only is the time averaged $Q$ from PKS 0743-67 enormous, but the powerful parsec scale core indicates that the source is still likely to be highly energetic at the current time. \section{Estimating the Eddington Ratio} One can estimate $L_{bol}$ as in \citet{lao98}, $L_{bol}\approx 8.3\nu L_{\nu}(3000\AA)$, a method that has been applied to both radio quiet and radio loud quasars. We apply this formula to the flux density at $3000\AA$ from the spectrum of PKS 0743-67 in \citet{ali94}, yielding $L_{bol}\approx 4.7\times 10^{47} \mathrm{ergs/s}$. When making an estimate of the accretion flow luminosity, the strong radio core might raise some concern about contamination of the optical emission via a high frequency synchrotron spectrum associated with the base of the jet.
Thus, alternatively, one could estimate $L_{bol}$ using the method of \citet{wan04}, which depends on line luminosity. Following the discussion in section 3 of \citet{wan04}, the CIV/Ly$\alpha$ line strength ratio of the composite quasar spectra in \citet{fra91}, combined with eqn (1) of \citet{wan04}, implies that the total broad line luminosity is $L_{BLR}=8.83L_{CIV}$, where $L_{CIV}$ is the CIV line strength. Secondly, \citet{wan04} estimate $L_{bol}\approx 10 L_{BLR}$, thus $L_{bol}\approx 88.3 L_{CIV}$. Using the CIV line strength from \citet{ali94}, this implies $L_{bol}\approx 2.91\times 10^{47} \mathrm{ergs/sec}$, in reasonable agreement with the direct estimate from the spectrum above. \par One can estimate $L_{bol}/L_{Edd}$ using the $L_{bol}$ value above in conjunction with an estimate of the black hole mass, $M_{bh}$, from the same CIV emission line. The estimator of $M_{bh}$ of \citet{ves02} requires the luminosity at $1350\AA$, $\lambda L_{\lambda}(1350\AA)$. To be consistent with the philosophy of not using the continuum spectrum, one can instead estimate $\lambda L_{\lambda}(1350\AA)$ from the $L_{bol}$ that is derived from the CIV line strength above, with the aid of the relation from \citet{lao98}, $L_{bol}\approx 8.3\nu L_{\nu}(3000\AA)$, and assuming a typical quasar optical spectral index of 0.7, as was done in \citet{wan04} (the spectrum in \citet{ali94} yields a similar value, 0.75). One finds a central black hole mass of $M_{bh}=1.62\times 10^{9} M_{\odot}$ and $L_{bol}/L_{Edd}=0.99$. One can check this result independently using the H$\alpha$ line of PKS 0743-67 measured in \citet{esp89} and the estimators in \citet{gre05}, which give $M_{bh}=1.41\times 10^{9} M_{\odot}$ and $\lambda L_{\lambda}(5100\AA)= 2.27\times 10^{46}\mathrm{ergs/s}$. Converting the line luminosity to $L_{bol}$ as for the CIV estimate above, one finds $L_{bol}= 2.21\times 10^{47}\mathrm{ergs/s}$ and $L_{bol}/L_{Edd}=0.87$. \begin{figure} \epsscale{1.05} \plottwo{f2a.eps}{f2b.eps}\\ \epsscale{1.10} \plottwo{f2c.eps}{f2d.eps}\caption{Figure 2a (upper left hand corner) is a scatter plot of the logarithm of the kinetic luminosity, $Q$, of the radio jets in the quasar subsample of \citet{wan04} versus the logarithm of the line luminosity which they use as a measure of $L_{bol}$. The extreme estimate for $Q$ of 4C 52.27 noted in the text is omitted in this plot. The correlation is very weak. The estimates of $Q$ for Cygnus A and PKS 0743-67 are added for the sake of comparison and are based upon the isotropic methods of this paper, unlike the \citet{wan04} sample. A best fit line to the \citet{wan04} data is indicated. Figure 2b (upper right hand corner) is a scatter plot of the logarithm of $L_{bol}$ versus the logarithm of $L_{bol}/L_{Edd}$. Figure 2c (lower left hand corner) is a scatter plot of the logarithm of $Q/L_{bol}$ versus the logarithm of $L_{bol}/L_{Edd}$. Figure 2d (lower right hand corner) is a scatter plot of the logarithm of $Q$ versus the logarithm of $L_{bol}/L_{Edd}$.} \end{figure} \section{Comparison With Other Results} Ostensibly, the existence of a high $L_{bol}/L_{Edd}$ and high $Q$ source such as 0743-67 appears at odds with the result of \citet{wan04} that $Q/L_{bol}$ is inversely correlated with $L_{bol}/L_{Edd}$. The large $Q$ of Cygnus A appears at odds with the other conclusion of \citet{wan04}, that $Q$ is positively correlated with $L_{bol}$. However, closer inspection of the raw data used in \citet{wan04} indicates that this is not actually the case.
\par The virtue of the estimates in \citet{wan04} is that they use the parsec scale jet emission to estimate $Q$ contemporaneously with the estimate of $L_{bol}$. However, we warn the reader that such estimates are very sensitive to the uncertain Doppler factor. The method that \citet{wan04} adopted from \citet{cel97} assumes that the X-ray emission is from synchrotron self-Compton (SSC) emission; however, \citet{der93} showed that external Compton scattering (ECS) of quasar disk photons or broad line region photons by energetic particles in the jet will usually dominate the high energy quasar spectrum, since ECS emission is enhanced by the jet Lorentz factor to the sixth power. This type of estimator can lead to enormous errors in the estimated values of $Q$. As an example, \citet{wan04} estimate for 4C 52.27 (1317+520), $Q>250Q_{\mathrm{cygA}}$, where $Q_{\mathrm{cygA}}$ is the kinetic luminosity of Cygnus A. By contrast, using the radio maps in \citet{hin83} and the isotropic estimator in (3.1), we find a more reasonable value of $Q\approx 0.35 Q_{\mathrm{cygA}}$. \par First of all, \citet{wan04} present data in their log-log plot in figure 1a indicating that $L_{bol}$ and $Q$ have a strong linear correlation (note that they assume that $L_{bol}\approx 10 L_{BLR}$). However, if one removes the BL-Lacs from the sample and fits a line to just the quasars on a log-log plot that is otherwise identical to figure 1a of \citet{wan04}, then the squared multiple regression correlation coefficient is $R^{2}=0.12$. If one removes the extreme estimate associated with 4C 52.27 that was given above, the linear fit is even worse, $R^{2}=0.04$; this result is displayed in figure 2a. This corresponds to a correlation coefficient of $r=0.2$, and the probability of getting this by chance is $P=0.174$. The data in figure 2 are taken directly from \citet{wan04}, so all the estimates are identical. The data of \citet{wan04} actually show that $Q$ and $L_{bol}$ are very weakly correlated in quasars. \par The other result of \citet{wan04}, that $Q/L_{bol}$ and $L_{bol}/L_{Edd}$ are inversely correlated, actually follows trivially as a consequence of the fact that $Q$ is uncorrelated with $L_{bol}$ for quasars while $L_{bol}/L_{Edd}$ is strongly correlated with $L_{bol}$. This latter correlation is not surprising; it is the strongest correlation amongst quasar parameters in \citet{wan04}, and the best linear fit is $\log L_{bol}=0.9309\log[L_{bol}/L_{Edd}]+47.121$ (see figure 2b). The correlation coefficient is $r=0.820$ and $P<10^{-4}$. Since $Q$ is uncorrelated with $L_{bol}$, it follows that $Q/L_{bol}\sim\sim L_{bol}^{-1}$ (where we have introduced the symbol $\sim\sim$ to represent correlation) and $L_{bol}/L_{Edd}\sim\sim L_{bol}$ from figure 2b. Combining the two relations, it follows that $L_{bol}/L_{Edd}\sim\sim L_{bol}/Q$, i.e., $Q/L_{bol}$ and $L_{bol}/L_{Edd}$ are inversely correlated, as shown in figure 2c. The best linear fit is $\log[Q/ L_{bol}]=-0.814\log[L_{bol}/L_{Edd}]-0.673$, and the correlation coefficient for $L_{bol}/L_{Edd}$ and $Q/L_{bol}$ is $r=-0.654$ for the subsample of quasars, with $P=3\times 10^{-4}$. The anti-correlation of $L_{bol}/L_{Edd}$ and $Q/L_{bol}$ is spurious: there is no direct causal link between these two variables, as expressed statistically by the small value of the partial correlation coefficient of $Q/L_{bol}$ versus $L_{bol}/L_{Edd}$ with $L_{bol}$ held fixed, $-0.030$.
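The spurious nature of this anti-correlation is easy to demonstrate with a synthetic sample: draw $\log L_{bol}$ and $\log Q$ independently, impose the figure 2b relation (with scatter), and the figure 2c anti-correlation appears with no physical input, while the partial correlation vanishes. The following is an illustrative sketch only, not the analysis of the actual \citet{wan04} sample; the scatter amplitudes are assumed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 1000
logL = rng.normal(47.0, 0.7, n)     # log L_bol
logQ = rng.normal(45.5, 0.7, n)     # log Q, drawn independently of L_bol

# Impose the fig. 2b relation log L = 0.9309 log(L/L_Edd) + 47.121,
# with an assumed intrinsic scatter of 0.3 dex:
log_edd = (logL - 47.121) / 0.9309 + rng.normal(0.0, 0.3, n)
x = logQ - logL                     # log (Q / L_bol)

r = np.corrcoef(x, log_edd)[0, 1]
print("r(Q/L, L/L_Edd) = %.2f" % r)       # ~ -0.65, like figure 2c

def partial_r(rxy, rxz, ryz):
    """Partial correlation of x and y with z held fixed."""
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rxz = np.corrcoef(x, logL)[0, 1]
ryz = np.corrcoef(log_edd, logL)[0, 1]
print("partial r = %.3f" % partial_r(r, rxz, ryz))   # ~ 0
\end{verbatim}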
Finally, we note that this result does not imply a potentially interesting correlation between $Q$ and $L_{bol}/L_{Edd}$: as evidenced by figure 2d, that correlation is very weak, $r=0.1489$ with $P=0.244$. \section{Conclusion} PKS~0743$-$67 is an example of a quasar that has an ultra-luminous accretion flow, $L_{bol}>2\times 10^{47}\mathrm{ergs/s}$, a very high Eddington rate, $L_{bol}/L_{Edd}\approx 1$, with $Q>2.5Q_{\mathrm{cygA}}$, and is presently active, as evidenced by the powerful unresolved VLBI radio core. By contrast, the high $Q$ source Cygnus A lies at the low end of the quasar range of $L_{bol}$, has a small $L_{bol}/L_{Edd}$, and is also presently active, as evidenced by the jet extending from the lobes to within a few light years of the central black hole (see figure 1.10 of \citet{pun01}). Using a large sample of quasars in figure 2, it was shown that $L_{bol}$ is uncorrelated with $Q$. Hence, the diverse values of $Q/L_{bol}$ in Cygnus A and PKS 0743-67 should not be unexpected. It appears that, to first order, the parameters $L_{bol}/L_{Edd}$ and $L_{bol}$ are unrelated to the intrinsic quasar jet power. This is consistent with the observation that $\approx90\%$ of quasars are radio quiet, from the most luminous quasars down to the quasar/Seyfert 1 dividing line. Considering the wide range of $L_{bol}$ in quasars that are associated with very powerful jets, it is argued in \citet{sem04} and \citet{pun01} that a significant large scale magnetic flux near a rapidly spinning black hole is the missing ingredient and is the primary determinant of FRII quasar jet power, not the accretion flow.
\section{INTRODUCTION} Organic field-effect transistors (OFETs) with high charge-carrier mobilities are essential components for high-speed organic electronic applications. For this reason, it is of crucial importance to unravel the origin of charge transport in these devices. Several experimental studies have shown that the difference in the charge-carrier transport observed in bulk organic semiconductors and within the quasi-two-dimensional channel near the gate-insulator interface in OFETs is associated with the effects of disorder and interfacial traps \cite{PMB04,DML04,KSB05,DMN05}. More intriguing is a growing body of evidence demonstrating the strong influence of the dielectric permittivity of the gate-insulator on the charge-carrier mobility in OFETs \cite{VOL03,SBI04}. Veres et al. \cite{VOL03} have studied the case of triarylamine polymer transistors in which measured mobilities well below $10^{-2}\text{ cm}^{2}/\text{V s}$ can be unambiguously attributed to hopping between localized states. On the other hand, Stassen et al. \cite{SBI04} obtained much higher values, from $1$ to $20\text{ cm}^{2}/\text{V s}$, in single-crystalline rubrene transistors in which conduction is more intrinsic. A surprising finding in both cases is the drastic decrease of the mobilities as the dielectric constant of the gate-insulator is increased systematically using different dielectric materials. Such an observation in devices governed by vastly different charge-transport mechanisms strongly suggests an effect due to the interactions of the charge carriers with the gate-dielectric. These interactions can lead to the renormalization of the bare mass of the charge carriers in the conducting channel of OFETs, providing an explanation for the observed dielectric effects on the mobility as reported in Refs. \cite{VOL03,SBI04}. In this work, we present a theoretical calculation of the renormalized band mass of charge carriers in the conducting channel of OFETs by looking at two important interactions experienced by charges at the interface of an organic semiconductor with a dielectric material. The first one, purely electronic in origin, is the image force due to the polarization discontinuity at the gate-dielectric interface. The second involves the Coulomb interaction of the charge carriers with surface polar phonons of the dielectric. To illustrate the basic concepts of our calculations and their general applicability to any organic semiconductor, we choose pentacene as a model system, since its lattice structure, bandwidth, polarizabilities, and Huang-Rhys factors are well known compared with other organic semiconductors. We find that, in pentacene, the cumulative effect of the electronic and surface-lattice polaronic interactions is to reduce the effective bandwidth by a factor of two as the relative static dielectric constant of the gate materials is varied from $1$ to $25$. Within the tight-binding model, such a decrease in the transfer integral can be viewed as a concomitant increase in the effective mass by the same factor. We will suggest that such renormalization of the band properties triggers the effects reported in Refs. \cite{VOL03,SBI04}. \section{TIME SCALES} The non-interacting band properties of a perfect pentacene crystal along all crystallographic directions have been calculated by Cheng et al. \cite{CSS03}.
For the bare transfer integral, $J$, between molecules along the direction of easy propagation in the $\left( a,b\right)$-plane, they obtain $J=100\text{ meV}$, from which one gets $h/J\simeq 4\times 10^{-14}\text{ s}$ as the characteristic time for in-plane Bloch-wave formation. The corresponding transfer time in the perpendicular $c$-axis direction, $h/J_{\bot }$, is thirty times longer than in the plane. Thus, in the presence of scattering which substantially reduces the Bloch-wave lifetime, the carrier motion is essentially two-dimensional. The above considerations allow for the classification of the various interactions experienced by charge carriers in organic semiconductors. For fast interactions, with characteristic times shorter than $h/J$, the charge can be assumed to be located on a single molecular site. In pentacene, this is the situation encountered during the interaction of the carrier with the electronic polarizability of the medium, in intramolecular charge transfer, and in the coupling with intramolecular carbon stretching vibrations with frequencies around $1360\text{ cm}^{-1}$. Since fast interactions arise prior to the formation of the Bloch-wave, they have the effect of dressing the charge with a polarization cloud or a lattice deformation cloud. Slow interactions, on the other hand, have characteristic times much longer than $h/J$. They act directly on the Bloch-wave or the localized state. Such is the case for the interaction of the charge carrier with low-energy intermolecular thermal phonons and librations which, in many cases, can be considered as static with respect to the two-dimensional band motion. These interactions scatter the Bloch-wave or localize the electronic states when the disorder they introduce is large enough. The interaction of the charge with the surface polar phonons of the gate insulator occurs in the intermediate time scale regime. An interesting discussion of time scales can also be found in the first chapter of the book by Silinsh and \v{C}\'{a}pek \cite{Silinsh94}. Because they dress the charge with a polarization cloud or lattice deformation, fast processes lead to a renormalization of the bare transfer integrals $J$ and $J_{\bot }$ and consequently increase the effective mass along all crystal directions. The case involving electron-phonon interactions has been discussed by several authors, including Appel \cite{Appel} and Davydov \cite{Davydov}. The purely electronic effects were treated by three of us in an earlier work \cite{BPZ04}, in which we calculated the renormalization effect due to the electronic polarizability in the bulk of the organic semiconductor. In this work, these calculations will be extended to the situation encountered by carriers at the gate-dielectric interface in OFETs. We shall treat both electronic and lattice effects. The Fr\"ohlich surface polaron at the oxide surface was already studied by Kirova and Bussac \cite{KB03} for an isotropic organic crystal. The entire problem will be revisited here for the two-dimensional layer of the anisotropic crystal. The slow processes involving low-energy phonons, librations and other quasi-static or static sources of scattering and localization will be discussed and treated elsewhere. We emphasize that, because they are faster than $10^{-14}\text{ s}$, the polarization processes studied here involve only the high-frequency dielectric response of the materials constituting the interface and not the usual low-frequency (static) permittivity.
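The characteristic times quoted above follow directly from $h/J$; a minimal numerical check (the only constant needed is Planck's constant in eV\,s) is:
\begin{verbatim}
H = 4.136e-15       # Planck constant, eV s

J = 0.100           # in-plane transfer integral, eV (Cheng et al.)
t_inplane = H / J
print("%.1e s" % t_inplane)     # ~4e-14 s: in-plane Bloch-wave formation

# The c-axis transfer integral is ~30 times smaller, so interlayer
# transfer is ~30 times slower -- hence quasi-two-dimensional motion.
print("%.1e s" % (30 * t_inplane))
\end{verbatim}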
For the sake of clarity and to highlight the important aspects of this work, we summarize the major results of our calculations in the next section. The details of these calculations will be given in separate appendices. \section{RESULTS} In rubrene- or acene-based organic field-effect transistors, the interface with the oxide or polymer gate-insulator involves the highly conducting $\left( a,b\right)$-plane of the organic semiconductor. Our calculation of the variation of the effective transfer integral, $J_{IV}$, for conduction in this plane is shown in Fig.\ref{fig:1} as a function of the static dielectric constant of the gate-insulator. In the results of Fig.\ref{fig:1}, the charge-carrier is assumed to be located on the first monolayer close to the gate-interface. In the case where the charge is on the second monolayer, we find that there is basically no effect of the dielectric on the transfer integral, which then takes the bulk value. These results are obtained from the combined effects of four different interactions, which have been treated separately according to their time scales, from the fastest to the slowest, as discussed below. \begin{enumerate} \item \textit{Electronic polarization }$\left( J^{I}\right)$. The dynamical renormalization of the carrier motion due to electronic polarization is different in the bulk and at the interface. At an interface, an image field is generated which is attractive when the high-frequency dielectric constant, $\epsilon _{\infty }$, of the gate-insulator is greater than that of the organic semiconductor, as is the case for oxides, and is repulsive otherwise, as in the case of polymers or air-gap insulators. This image field is typically of the same order of magnitude as the applied gate fields, to which it is added or subtracted. The magnitude of the image potential under usual experimental conditions is displayed in Fig.\ref{fig:2} as a function of the distance $z$ from the interface. At large distances, the classical expression for the image field holds and the image potential is written as \begin{equation} E_{p}\left( z\right) -E_{p}\left( \infty \right) =-\frac{e^{2}}{16\pi \epsilon _{0}z}\left( \frac{\epsilon _{\infty ,2}-\epsilon _{\infty ,1}}{\epsilon _{\infty ,1}\left( \epsilon _{\infty ,2}+\epsilon _{\infty ,1}\right) }\right) \end{equation} where $\epsilon _{\infty ,1}$ and $\epsilon _{\infty ,2}$ are the high-frequency dielectric constants of the semiconductor and the gate-insulator, respectively (a numerical illustration of this expression is given after this list). However, corrections to this expression associated with lattice effects show up close to the interface. The increase in the carrier effective mass due to the electronic polarization cloud is slightly different in the bulk and when the carrier crosses the interface. The details of these calculations are given in Appendix \ref{appen:A}. They lead to a renormalized intermolecular charge-transfer integral $J^{I}<J$. \item \textit{Electronic displacement }$\left( J^{II}\right)$. The strong dependence of the mobility and the effective mass on the dielectric permittivity of the gate-insulator seen in experiments, as well as the large corrections to the bulk electronic polarization energy near the interface shown in Fig.\ref{fig:2}, suggest that the first two monolayers next to the dielectric interface dominate the charge transport, particularly in the presence of a significant gate field \cite{DML04}.
At even higher fields, one may expect that not only is the charge localized on the first monolayer, but its electronic wavefunction is squeezed towards the part of the molecule closer to the insulator. This displacement of the charge distribution on the molecule is also a fast process, controlled by the transfer integral $t_{/\!/}\sim 1\text{ eV}$ within the molecule, which is more than an order of magnitude larger than $J^{I}$. This fast process further decreases the transfer integral to $J^{II}$. The recent semi-empirical quantum-chemistry calculation performed by Sancho-Garc\'{\i}a et al. \cite{SHB03} suggests that this effect is completely negligible in pentacene, where the charge-carrier distribution remains perfectly centered even at very high fields ($100\text{ MV/cm}$). In this case $J^{II}=J^{I}$. The final result of our calculation presented in Fig. \ref{fig:1} was established within the framework of this hypothesis, which we consider the most reliable at the moment. Nevertheless, in Appendix \ref{appen:B} another approach is presented, which shows that much larger effects would be expected if the charge-carrier distribution on the molecule is allowed to be displaced by a few angstroms at high gate fields of $10\text{ MV/cm}$. \item \textit{Intramolecular vibrations} $\left( J^{III}\right) $. Intramolecular vibrations close to $1360\text{ cm}^{-1}$ in pentacene are strongly coupled to the carrier because they change the $\pi $-alternation typical of conjugated molecules. Because these atomic motions are faster than the electronic polaron motion defined by the renormalized transfer integral $J^{II}$, they also contribute a further reduction of $J^{II}$ by a constant factor of $0.75$, independent of the distance from the interface as well as of the applied field $\left( J^{III}\simeq 0.75J^{II}\right) $. Appendix \ref{appen:C} reviews how this factor is calculated. \item \textit{Fr\"ohlich polaron at the oxide surface }$\left( J^{IV}\right)$. Oxides are polar materials. Thus, the infrared-active phonon modes which modulate the metal-oxide bonds are strongly coupled to charge carriers sitting at their surface. In aluminum oxide, for instance, the most active mode of this kind is situated at $46\text{ meV}$ \cite{STH00}. This value is of the same order of magnitude as the effective, in-plane transfer integral renormalized by the corrections due to the above-mentioned interactions. The construction of a lattice deformation cloud in the oxide is the object of Appendix \ref{appen:D}. The polarization interaction energy of the charge with this cloud causes further attraction of the charge to the surface and a subsequent increase of the effective mass. The calculation is performed in the intermediate coupling regime, because here the coupling parameter which controls the process, $\alpha _{\text{eff}}\left( z\right)$, defined in Appendix \ref{appen:D}, is of the order of unity in the first monolayer, and of the order of $0.1$ in the second one. \end{enumerate} The total binding energy of the carrier in the presence of a gate-insulator includes both the electronic and surface polaron effects, arising from the electronic image force potential and the lattice deformation potential at the interface associated with the above interactions.
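As a numerical illustration of the image potential of item 1, the sketch below evaluates the classical large-distance expression. The high-frequency dielectric constants used are assumed, representative values chosen only to show the sign change between oxide and polymer gates; they are not the parameters of the full calculation of Appendix \ref{appen:A}.
\begin{verbatim}
import math

E_CHARGE = 1.602e-19    # C
EPS0 = 8.854e-12        # F/m

def image_potential_eV(z_nm, eps1, eps2):
    """Classical image potential E_p(z) - E_p(inf), in eV, for a
    charge at distance z from the interface; eps1 and eps2 are the
    high-frequency dielectric constants of the organic semiconductor
    and of the gate insulator."""
    z = z_nm * 1e-9
    prefactor = E_CHARGE / (16 * math.pi * EPS0 * z)   # volts
    fraction = (eps2 - eps1) / (eps1 * (eps2 + eps1))
    return -prefactor * fraction                       # eV for charge e

# Assumed illustrative values: organic eps ~ 3, oxide gates above it,
# a polymer gate below it.
for name, eps2 in [("high-eps oxide", 4.4), ("Al2O3-like", 3.1),
                   ("polymer", 2.4)]:
    print(name, "%+.1f meV at 1 nm"
          % (1e3 * image_potential_eV(1.0, 3.0, eps2)))
# Negative (attractive) for oxides, positive (repulsive) for polymers,
# up to a few tens of meV at ~1 nm -- comparable to gate-field
# energy scales, as stated in the text.
\end{verbatim}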
Within a tight-binding model, these effects are incorporated into a renormalized effective transfer integral $J_{IV}$ given by, \begin{equation} J_{IV}=J\left( \frac{J^{I}}{J}\right) \left( \frac{J^{II}}{J^{I}}\right) \left( \frac{J^{III}}{J^{II}}\right) \left( \frac{J^{IV}}{J^{III}}\right) \end{equation} Table \ref{tab:3} provides a summary of all these factors in order of increasing time-scale characterizing each interaction. \section{CONCLUSION} The experimental results have clearly shown that the mobilities obtained in organic field-effect transistors are much larger in devices built with an air-gap or a polymer gate-insulator \cite{PMB04,VOL03,SBI04} than in those using high-permittivity oxide gate dielectrics. Our theoretical results discussed above provide some insight into the origin of these effects. Given that the bandwidth is four times the transfer integral, as suggested by the quantum chemistry calculations of Ref. \cite{SKB05}, and starting from a value of $390\text{ meV}$ \cite{CSS03}, the effective bandwidth becomes $231\text{ meV}$ in bulk pentacene \cite{BPZ04}. It is reduced to $155\text{ meV}$ in an OFET with an aluminum oxide gate insulator $\left( \text{Al}_{2}\text{O}_{3}\right) $, to $146\text{ meV}$ close to a $\text{Ta}_{2}\text{O}_{5}$ interface and $144\text{ meV}$ close to a $\text{TiO}_{2}$ interface, while the bulk value of $231\text{ meV}$ is recovered close to a parylene gate insulator. For a given disorder potential in the organic semiconductor, localization effects roughly scale with the reciprocal bandwidth. Consequently, important bandwidth reductions enhance all localization effects and thus decrease the mobility. It is important to note that in the present work the existence of a lattice polaron in the bulk of pentacene was considered unlikely. Intramolecular vibrations are too fast to produce a polaron in a perfect pentacene crystal, and their effect has been studied in Appendix \ref{appen:C}. Intermolecular phonons and librations with energies $\hbar \omega$ of the order of $10\text{ meV}$ are not coupled strongly enough to the carrier to produce a well-defined lattice polaron, given a renormalized transfer integral of the order of $50\text{ meV}$ obtained from the initial calculation of Cheng et al. \cite{CSS03}. However, when the transfer integral experiences a further reduction close to the gate interface, as shown in Fig. \ref{fig:1}, the intrinsic lattice polaron can be excited, as suggested in a recent work \cite{HB04}. A calculation is in progress to clarify this point. Moreover, an interface is rarely perfect from a structural point of view and, due to the significant internal field present at the interface, the carrier is constrained to probe all the interface disorder more closely, which enhances the localization effects. Even ``perfect'' interfaces can be ``intrinsic'' sources of localization. In general, the electric dipole lattice induced by a charge carrier in the organic semiconductor is incommensurate with the dipole lattice induced in the gate material. In a perfect structure, the incommensurability of the electronic potential can open gaps in the semiconductor density of states. When the gate dielectric is not a single-crystal but a disordered structure, the same effect creates an electronic disorder which can be one of the sources of localization. The effective bandwidth enters the mobility law but does not define it entirely. In disordered polymer channels, Veres et al.
\cite{VOL03} have established a link between the effective width of the Gaussian distribution of electronic states and the mobility of the carrier, which jumps from state to state according to the Gaussian Disorder Model \cite{Bassler93}. In more ordered systems such as single-crystalline rubrene OFETs, a theoretical work is in progress to establish a quantitative link between the transfer integral and the mobility. It will elucidate the role of thermal, low-energy phonons, which are, on the one hand, sources of localization and, on the other, the key to the adiabatic diffusion of the carriers in the channel. \begin{acknowledgments} The authors acknowledge the financial support of the Swiss Federal Science Foundation under contract number 200020-105156. \end{acknowledgments}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Parallel computing is not a new discipline, so it is surprising that few astronomers resort to parallelism when solving standard problems in data analysis. To quantify this assertion relative to the X-ray community, in late summer of 2005 we conducted several full text searches of the NASA ADS digital library (Kurtz et al 1993), as follows:\\ \begin{tabular}{cc} Keywords & Number of Hits \\ \hline parallel AND pvm & 38 \\ message AND passing AND mpi & 21 \\ xspec & 832 \\ xspec AND parallel AND pvm & 0 \\ xspec AND message AND passing AND mpi & 0 \\\\ \end{tabular} \noindent Extra keywords were included with PVM and MPI so as to cull false matches (e.g. with the Max Planck Institute). The keyword {\tt xspec} refers to the software program of the same name (Arnaud 1996), which is generally regarded as the most widely used application for modeling X-ray spectra. Queries in ADS on other modeling tools, or with other search engines such as Google, all yield similar trends: astronomers and astrophysicists do employ parallel computing, but mainly for highly customized, large-scale problems in simulation, image processing, or data reduction. Virtually no one is using parallelism for fitting models within established software systems, especially in the interactive context, even though a majority of papers published in observational astronomy result from exactly this form of analysis. \section{ISIS, S-Lang, PVM, and SLIRP} To exploit this opportunity we've extended ISIS, the Interactive Spectral Interpretation System (Houck 2002), with a dynamically importable module that provides scriptable access to the Parallel Virtual Machine (Geist et al 1994). PVM was selected (e.g. over MPI) for its robust fault tolerance in a networked environment. ISIS, in brief, was originally conceived as a tool for analyzing Chandra grating spectra, but quickly grew into a general-purpose analysis system. It provides a superset of the XSpec models and, by embedding the S-Lang interpreter, a powerful scripting environment complete with fast array-based mathematical capabilities rivaling commercial packages such as MatLab or IDL. Custom user models may be loaded into ISIS as either scripts \footnote{Usually in S-Lang, but Python may also be used by simply importing the PySL module.} or compiled code, without any recompilation of ISIS itself; because of the fast array manipulation native to S-Lang, scripted models suffer no needless performance penalties, while the SLIRP code generator (Noble 2003) can render the use of compiled C, C++, and FORTRAN models a nearly instantaneous, turnkey process. \section{Parallel Modeling} Using the PVM module we've parallelized a number of the numerical modeling tasks in which astronomers engage daily, and summarize them here as a series of case studies. Many of the scientific results stemming from these efforts are already appearing elsewhere in the literature. \subsection{Kerr Disk Line} Relativistic Kerr disk models are computationally expensive. Historically, implementors have opted to use precomputed tables to gain speed at the expense of limiting flexibility in searching parameter space. However, by recognizing that contributions from individual radii may be computed independently we've parallelized the model to avoid this tradeoff.
To gauge the performance benefits \footnote{A more complete and rigorous analysis will be presented in a future journal paper.} we tested the sequential execution of a single model evaluation, using a small, faked test dataset, on our fastest CPU (a 2 GHz AMD Opteron), yielding a median runtime of 33.86 seconds. Farming the same computation out to 14 CPUs on our network reduced the median runtime to 8.16s, yielding a speedup of 4.15. While 30\% efficiency seems unimpressive at first glance, this result actually represents 67\% of the peak speedup of 6.16 predicted by Amdahl's Law (5.5 of the 33.86 seconds runtime on 1 CPU was not parallelizable in the current implementation), on CPUs of mixed speeds and during normal working hours. Reducing the model evaluation time to 8 seconds brings it into the realm of interactive use, with the result that fits requiring 3-4 hours to converge (on ``real'' datasets such as the long XMM-Newton observation of MCG--6-30-15 by Fabian) may now be done in less than 1 hour. The model evaluation is initiated in ISIS through the S-Lang hook function \begin{verbatim} public define pkerr_fit (lo, hi, par) { variable klo, khi; (klo, khi) = _A(lo, hi); return par[0] * reverse ( master (klo, khi, par)); } \end{verbatim} where {\tt lo} and {\tt hi} are arrays (of roughly 800 elements) representing the left and right edges of each bin within the model grid, and {\tt par} is a 10 element array of the Kerr model parameters. Use of the PVM module is hidden within the {\tt master} call (which partitions the disk radii computation into slave tasks), allowing ISIS to remain unaware that the model has even been parallelized. This is an important point: {\itshape parallel models are installed and later invoked using precisely the same mechanisms employed for sequential models.} \footnote{This also makes it easy for ISIS to employ an MPI module for parallelism, if desired.} For each task the slaves invoke a FORTRAN {\tt kerr} model implementation, by Laura Breneman at the University of Maryland, wrapped by SLIRP as follows: \begin{verbatim} linux% slirp kerr.f Starter make file generated to kerr.mf linux% make -f kerr.mf \end{verbatim} \subsection{Confidence Contours and Error Bars} Error analysis is ripe for exploitation with parallel methods. In the 1D case, an independent search of $\chi^2$ space may be made for each of the {\tt I} model parameters, using {\tt N=I}\ slaves, with each treating one parameter as thawed and {\tt I-1} as fixed. Note that superlinear speedups are possible here, since a slave finding a lower $\chi^2$ value can immediately terminate its {\tt N-1} brethren and restart them with updated parameter values. Parallelism in the 2D case is achieved by a straightforward partition of the parameter value grid into {\tt J} independently-evaluated rectangles, where {\tt J} $>>$ {\tt N} (again, the number of slaves) is typical on our cluster. Our group and collaborators have already published several results utilizing this technique.
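As a schematic illustration of this 2D partitioning (written here in Python with a process pool, rather than the S-Lang/PVM machinery actually employed; the grid bounds and counts are arbitrary):
\begin{verbatim}
# Illustrative sketch only: carve a 2D parameter grid into J
# rectangles and evaluate them independently on N workers, J >> N.
from itertools import product
from multiprocessing import Pool

def eval_rectangle(cell):
    (x_lo, x_hi), (y_lo, y_hi) = cell
    # ... evaluate the chi^2 surface over this sub-rectangle ...
    return 0.0                     # placeholder result

def edges(lo, hi, n):
    w = (hi - lo) / n
    return [(lo + i * w, lo + (i + 1) * w) for i in range(n)]

if __name__ == "__main__":
    cells = list(product(edges(0.0, 1.0, 20), edges(0.0, 1.0, 15)))
    with Pool(processes=14) as pool:        # N = 14, J = 300
        results = pool.map(eval_rectangle, cells)
\end{verbatim}
A real PVM farm adds what this sketch omits: the fault tolerance that lets rectangles from failed slaves be re-queued to the survivors.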
For example, Allen et al 2004 describes joint X-ray, radio, and $\gamma$-ray fits of SN1006, containing a synchrotron radiation component modeled as \begin{displaymath} \frac{dn}{dk\,dt} = \frac{\sqrt{3}e^{3} B}{hmc^{2}k} \int dp\,N(p)\,R \left (\frac{k}{k_0\gamma^2} \right ) \end{displaymath} \noindent The physics of this integral is not important here; what matters is that the cost of evaluating it over a 2D grid is prohibitive (even though symmetry and precomputed tables have reduced the integral from 3D to 1D), since it must be computed once per spectral bin, hundreds of times per model evaluation, and potentially millions of times per confidence grid. A 170$\times$150 contour grid (of electron spectrum exponential cutoff energy versus magnetic field strength) required 10 days to compute on 20-30 CPUs (the fault tolerance of PVM is critical here), and would scale linearly to a 6-10 month job on a single workstation. \subsection{Temperature Mapping} Temperature mapping is another problem that is straightforward to parallelize and for which we have already published results. For instance, Wise \& Houck 2004 provides a map of heating in the intracluster medium of Perseus, computed from 10,000 spectral extractions and fits on 20+ CPUs in just several hours. \section{Going Forward} It is important to note that in the two previous studies {\itshape the models themselves were not parallelized}, so the usual entry barrier of converting serial codes to parallel does not apply. One consequence is that the community should no longer feel compelled to compute error analyses or temperature maps serially. Another consequence is that the independence between partitions of the data and the computation being performed, which makes the use of sequential models possible in the parallel context, also lurks within other areas of the modeling problem. In principle it should be possible to evaluate an arbitrary sequential model in parallel by partitioning the model grid over which it's evaluated, or by evaluating over each dataset independently (when multiple datasets are fit), or in certain cases even by evaluating non-tied components in parallel. We are implementing these techniques with an eye towards rendering their use as transparent as possible for the non-expert. With simple models or small datasets these measures may not be necessary, but the days of simple models and small datasets are numbered. Reduced datasets have already hit the gigabyte scale, and multi-wavelength analysis such as we describe above is fast becoming the norm. These trends will only accelerate as newer instruments are deployed and the Virtual Observatory is more widely utilized, motivating scientists to tackle more ambitious analysis problems that may have been shunned in the past due to their computational expense. \acknowledgments This work was supported by NASA through the AISRP grant NNG05GC23G and Smithsonian Astrophysical Observatory contract SV3-73016 for the Chandra X-Ray Center.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} In diffusive normal metal / superconductor (DN/S) junctions, the DN acquires induced superconductivity, i.e. Cooper pairs penetrate into the DN. This proximity effect has been studied since the BCS theory was established. The proximity-induced Meissner demagnetization in DN/S junctions was measured experimentally by Oda and Nagano\cite{Oda} and Mota et al.\cite{Mota1}. It has a $T^{-1/2}$ dependence in the dirty limit. The quasiclassical Green's function theory was used earlier to study the Meissner effect in proximity structures. The quasiclassical Green's function theory was developed by Eilenberger~\cite{Eilenberger} and was generalized by Eliashberg~\cite{Eliashberg}, Larkin and Ovchinnikov~\cite{Larkin} in order to study the nonequilibrium state. This theory was applied by Zaikin\cite{Zaikin} and Kieselmann\cite{Kieselmann} to studying the Meissner effect in DN/S junctions. Narikiyo and Fukuyama~\cite{Narikiyo} calculated the Meissner screening length in a semi-infinite system containing an Anderson impurity. Higashitani and Nagai studied the Meissner effect in the clean limit~\cite{Higashitani}. Belzig et al.~\cite{Bel1,Bel2} have considered more realistic systems by assuming a perfectly transparent N/S interface. Up to now the boundary conditions derived by Kupriyanov and Lukichev (KL) \cite{KL} were widely used to study the proximity effect in DN/S structures. More general boundary conditions were derived by Nazarov \cite{Nazarov2} based on the Keldysh-Nambu Green's function formalism \cite{Zaitsev} within the framework of the Landauer-B\"{u}ttiker scattering formalism. The merit of this boundary condition is that the BTK theory\cite{BTK} is reproduced in the ballistic limit, while in the diffusive limit with a low transmissivity of the interface the KL boundary condition is reproduced. Although almost all previous papers on the Meissner effect in mesoscopic NS junctions are based either on the KL boundary conditions or on the BTK model, in actual junctions the transparency of the junction is not always small and the impurity scattering in the DN cannot be neglected. Tanaka et al.\cite{TGK} and Yokoyama et al.\cite{Yoko} calculated the tunneling conductance by using Nazarov's boundary condition. It is well known that in $d$-wave superconductors midgap Andreev resonant states (MARS) are formed at the interface of a $d$-wave superconductor. The MARS crucially influence various physical quantities \cite{TK}. One of the authors (Y.T.) recently generalized the boundary condition of the Keldysh-Nambu Green's function formalism to unconventional superconductor junctions~\cite{TNGK,pwave}. It is revealed that in DN/$d$-wave superconductor junctions the proximity effect and the MARS strongly compete with each other~\cite{TNGK}, while they coexist in DN/triplet superconductor junctions. The newly obtained boundary conditions expressed in the Keldysh-Nambu Green's function are useful for the calculation of various physical quantities. A timely problem is to study theoretically the Meissner effect in DN/$d$-wave S junctions using the new boundary conditions \cite{TNGK}. In the present paper, we calculate the susceptibility of the DN layer in DN/$d$-wave S junctions for various junction parameters such as the height of the insulating barrier at the interface and the angle between the normal to the interface and the crystal axis of a $d$-wave superconductor. \par The organization of the paper is as follows.
In section 2, we provide the derivation of the expression for the susceptibility of the DN. In section 3, the results of the calculation are presented for various types of junction. In section 4, the summary of the obtained results is given. In the present paper we set $c=k_B=\hbar=1$. \section{Formulation} In this section, we introduce the model and the formalism. We consider a junction consisting of vacuum (VAC) and superconducting reservoirs connected by a quasi-one-dimensional diffusive conductor (DN) with a length $L$ much larger than the mean free path. We assume that the interface between the DN conductor and the S electrode at $x=L$ has a resistance $R_{b}$, the DN/VAC interface at $x=0$ is specular, and we apply the generalized boundary conditions of Tanaka \cite{TNGK} to treat the interface between DN and S. A weak external magnetic field $H$ is applied in the $z$-direction (see Fig. 1). The vector potential can be chosen to have only the $y$ component, which depends on $x$. \begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=18.0cm,clip]{figa.eps}} \end{center} \par \caption{Schematic illustration of the model.} \end{figure} We describe the insulating barrier between DN and S by a $\delta$-function (i.e., $U(x)=H\delta(x-L)$), which provides the transparency of the junction $T_{m}=4\cos ^{2}\phi /(4\cos ^{2}\phi +Z^{2})$, where $Z=2H/v_{F}$ is a dimensionless constant, $\phi $ is the injection angle of a quasiparticle measured from the interface normal to the junction and $v_{F}$ is the Fermi velocity. In the following, we solve the Usadel equations \cite{Usadel} using the standard $\theta$-parameterization. The parameter $\theta (x)$ is a measure of the proximity effect in the DN and obeys the following equation \begin{equation} D\frac{\partial ^{2}}{\partial x^{2}}\theta (x)-2\omega_n\sin [\theta (x)]=0, \label{Usa1} \end{equation} where $D$ and $\omega_n$ denote the diffusion constant and the Matsubara frequency, respectively. The boundary condition for $\theta(x)$ at the DN/S interface is given in Ref.~\cite{TNGK}. The interface resistance $R_{b}$ is given by \begin{equation} R_{b}=R_{0} \frac{2} {\int_{-\pi/2}^{\pi/2} d\phi T(\phi)\cos\phi} \end{equation} with $ T(\phi)=4\cos ^{2}\phi /(4\cos ^{2}\phi +Z^{2})$. Here $R_{0}$ is the Sharvin resistance, $R_{0}^{-1}=e^{2}k_{F}^2S_c/(4\pi^{2})$, where $k_{F}$ is the Fermi wave number and $S_c$ is the constriction area. The current distribution is given by \begin{equation} j(x) = - 8\pi e^2 N\left( 0 \right)DT\sum\limits_{\omega _n > 0} {\sin ^2 \theta \left( x \right)} A\left( x \right), \end{equation} where $A(x)$, $N(0)$ and $T$ denote the vector potential, the density of states at the Fermi energy and the temperature of the system, respectively. The Maxwell equation reads \begin{equation} \frac{{d^2 }}{{dx^2 }}A\left( x \right) = - 4\pi j\left( x \right). \end{equation} The boundary conditions for $A(x)$ are given by \begin{eqnarray} \frac{d}{{dx}}A\left( 0 \right) = H, \qquad A\left( L \right) = 0, \end{eqnarray} where we have neglected the penetration of magnetic fields into the superconductor by assuming a small penetration depth in S. Finally we obtain the expression for the susceptibility, \begin{equation} - 4\pi \chi = 1 + \frac{{A\left( 0 \right)}}{{HL}}. \end{equation}
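As a minimal numerical sketch (not the code used for the results below), the angular average defining $R_{b}$ can be evaluated directly; the grid size and $Z$ values are illustrative:
\begin{verbatim}
# Sketch: interface resistance R_b/R_0 from the angular average of
# T(phi) = 4 cos^2(phi) / (4 cos^2(phi) + Z^2).
import numpy as np

def transparency(phi, Z):
    c2 = 4.0 * np.cos(phi) ** 2
    return c2 / (c2 + Z ** 2)

def Rb_over_R0(Z, n=20001):
    phi = np.linspace(-np.pi / 2, np.pi / 2, n)
    dphi = phi[1] - phi[0]
    integral = np.sum(transparency(phi, Z) * np.cos(phi)) * dphi
    return 2.0 / integral

for Z in (0.0, 10.0):        # the high- and low-transparency cases below
    print(Z, Rb_over_R0(Z))  # Z = 0 gives R_b/R_0 = 1 exactly
\end{verbatim}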
The $d$-wave pair potentials in directional space are given by $\Delta_{\pm} = \Delta(T)\cos2(\phi \mp \alpha)$, where $\Delta(T)$ is the magnitude of the pair potential at a given temperature $T$ and $\alpha$ denotes the angle between the normal to the interface and the crystal axis of the $d$-wave superconductor. \section{Results} In the following, we focus on the magnitude of the diamagnetic susceptibility $\chi$ induced by the proximity effect. Figs. 2 and 3 show the susceptibility for $Z=10$ and $Z=0$ respectively, where $K =16\pi e^2 N\left( 0 \right)D^2$. For $\alpha=0$, the temperature dependences of $-4\pi\chi$ are not much different. For $\alpha=0.125\pi$, the magnitude of $\chi$ for $Z=10$ is much more strongly suppressed than that for $Z=0$. At the same time, we find that the magnitude of $\chi$ decreases with increasing $\alpha$. We note that in the case of $\alpha=0.25\pi$ the susceptibility completely vanishes (i.e., $-4\pi\chi=0$). This is because the proximity effect is absent in diffusive metals due to angular averaging\cite{TNGK}. The absence of the proximity effect is a significant feature specific to junctions containing unconventional superconductors. \begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=16.0cm,clip]{fig7.eps}} \end{center} \par \caption{Susceptibility for low-transparency junctions with $Z=10$.} \end{figure} \begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=16.0cm,clip]{fig8.eps}} \end{center} \par \caption{ Susceptibility for high-transparency junctions with $Z=0$.} \end{figure} We also plot the $\alpha$ dependences of the susceptibility at $T/T_C=0.01$ and $T/T_C=0.1$ in Fig. 4. For all cases, $\chi$ is a decreasing function of $\alpha$. At $T/T_C=0.01$, the magnitude of $\chi$ for $Z=10$ rapidly decreases with the increase of $\alpha$. The results imply that the MARS suppress the proximity effect in low-transparency junctions at low temperatures. \begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=16.0cm,clip]{fig9.eps}} \end{center} \par \caption{ $\alpha$ dependences of the susceptibility at $T/T_C=0.01$ (upper panel) and $T/T_C=0.1$ (lower panel).} \end{figure} \section{Conclusions} In the present paper, we have calculated the Meissner effect induced by the proximity effect in the DN region of DN/$d$-wave superconductor junctions. We have solved the Usadel equation under a general boundary condition \cite{TNGK} in which the formation of the MARS is fully taken into account~\cite{TK}. The magnitude of $\chi$ decreases with the increase of $\alpha$ up to $0.25\pi$. At $\alpha=0.25\pi$, where all quasiparticles feel the MARS, $\chi$ becomes zero. It might be interesting to check experimentally such an anomalous proximity effect in the DN. Another future problem is a similar calculation of the induced Meissner effect with a $p$-wave triplet superconductor instead of a $d$-wave one, since dramatic new phenomena have recently been predicted in DN/triplet junctions \cite{pwave}. The authors appreciate useful and fruitful discussions with Yu. Nazarov and H. Itoh. This work was supported by the Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Corporation (JST). The computational aspect of this work has been performed at the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo and the Computer Center.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Several studies of bar instability in stellar disks have been carried out using N-body spherical halos (\citet{Sel81}; \citet{atha87}; \citet{Deb00}; \citet{Pat00}; \citet{May04}).\\ \citet{Cu99} were the first to emphasise the role of both the geometry and the dynamical state of a live dark matter (DM) halo in enhancing the bar formation. Progressive efforts to improve models of the halo have been made in recent years (\citet{Ma01}; \citet{athami02}), taking also into account the information coming from the cosmological hierarchical clustering scenario of structure formation about the density distribution and concentration of DM halos. In the meanwhile, the ever-growing computing power available to the community has made it possible to start simulations of formation and evolution of galaxies in a fully cosmological context. The first works devoted to deepening our understanding of disk galaxies in such a scenario (\citet{Gov02}; \citet{Aba03}) have shown that it is very difficult to obtain pure disk galaxies, mainly because of the high angular momentum loss of the gaseous component. Even with a careful choice of the hosting DM halo, the simulated galaxies appear to have over-massive bulges compared to their disks. In a recent paper, \citet{Spri04} claim to have overcome most of these problems; however, the bar instability has not yet been analysed in a cosmological framework. Furthermore, the high CPU cost of such simulations does not yet allow one to explore the role of several parameters, mostly related to the phenomenological treatment of the star formation rate and feedbacks, on the morphologies of the generated galaxies (\citet{MaCu03}; \citet{Ma03}).\\ In this work we present the first attempt to analyse the growth of bar instability in a fully consistent cosmological scenario. We embed a pure stellar disk inside a cosmological halo selected in a suitable slice of Universe and follow its evolution inside a cosmological framework. We want to explore how the bar instability behaves and what role such a scenario plays. In particular we want to address, besides the role played by the disk-to-halo mass ratio, that of the dynamical state of such a halo as given by its substructure and infall, or more generally by its evolution. Our model cannot be viewed as a general, {}``all-purpose{}'' galaxy evolution model, since the \emph{gradual} formation and growth of the stellar disk is a fundamental component of the galaxy evolution itself. However our approach allows us to vary parameters like the disk-to-halo mass ratio and the disk temperature, as given by the {\it Q} parameter, and to analyse the growth of the bar instability and its dependence on such parameters, for the first time in a self-consistent cosmological framework. We analyse further the influence of the cosmological environment by comparing these results with those in an isolated scenario with the same halo.\\ The plan of the paper is the following: Sections 2 and 3 describe technical details, in particular the recipe for the initial $disk+halo$ system and our framework, focusing on the cosmological evolution and on the properties of the halo. In Section 4 we present the whole set of our disk+halo simulations; in Section 5 we point out our results in the cosmological context and the comparison with isolated runs. Section 6 contains our discussion and Section 7 our conclusions. In the Appendix we analyse the robustness of our results, checking for particle resolution and softening length effects.
\section{Numerical method} Our galaxy model consists of a truncated exponential disk \citep{Cu99}, self-consistently embedded in a suitable DM halo extracted from a cosmological simulation. To select the DM halo, we perform a low-resolution (128\( ^{3} \) particles) simulation of a {}``concordance{}'' \( \Lambda \)CDM cosmological model: \( \Omega _{m} \)=0.3, \( \Omega _{\Lambda } \)=0.7, $\sigma_8 = 0.9$, \( h \)=0.7, where \( \Omega _{m} \) is the total matter density of the Universe, \( \Omega _{\Lambda } \) the cosmological constant, $\sigma_8$ the normalisation of the power spectrum, and $ h $ the value of the Hubble constant in units of $100$ km\, s$^{-1}$\, Mpc$^{-1}$. The box size of our simulation is $ 25 h^{-1}$ Mpc, which provides an adequate cosmological tidal field while avoiding boundary effects on our disk. The initial redshift is 20. We employ the public parallel N-body treecode GADGET \citep{Spri01}. Our initial condition code has been adapted from the setup code of ART (\citet{Kra97}; \citet{Kra99}; courtesy of A. Klypin).\\ From this simulation we identify the DM halos at {\it z}=0 in the mass \footnote{In the following, we will refer to the mass as the virial mass, i.e. that enclosed in a sphere with overdensity $ \delta = \rho /\rho _{crit}=178\cdot \Omega _{m}^{0.44}$ \citep{Nav00}.} range 0.5- 5\( \cdot \)10\( ^{11} h ^{-1} \) M\( _{\odot },\) with a standard friends-of-friends algorithm. We discard the halos belonging to or lying near overdense regions (see Sect. 3). Then we follow the simulation back and discard those which suffer significant mergers after a redshift of \( \sim \)5. In this way we select one suitable DM halo with a mass M\( \sim \)10\( ^{11} \)\( h^{-1} \) M\( _{\odot } \) (at {\it z}=0). We resample it with the multi-mass technique described in \citet{Kly01}. The particles of the DM halo, and those belonging to a sphere with a radius $4 h^{-1}$ Mpc, are followed back to their Lagrangian positions and resampled to an equivalent resolution of 1024 \(^{3} \) particles. The total number of DM particles in the high resolution region is $1216512$, which corresponds to a DM mass resolution of \( 1.21\cdot 10^{6} h^{-1}\)M\( _{\odot } \). The needed high frequency power is added without modifying the low-frequency Fourier phases of the CDM power spectrum in our low resolution run. The high resolution zone is surrounded by three shells with lower and lower resolution, the lowest one including all the remaining (not resampled) particles among the initial 128\( ^{3} \) set.\\ The size of the initial Lagrangian region is large enough to resolve with high resolution not only the DM halo, but also its accreting sub-halos. The high-resolution DM halo is followed to the redshift {\it z}=0. We checked that \emph{no} lower resolution particles (intruders) are ever present at a radius lower than \( \sim \) 2 \( h^{-1} \)Mpc from its centre, defined as the position of the particle with the minimum gravitational energy.\\ Our approach allows us to account for the cosmological tidal field acting on the DM halo and to accurately follow the evolution of the selected halo in a self-consistent way.\\ We carried out two sets of simulations, embedding the galactic disk in the halo at the redshifts $z=2$ and $z=1$ respectively. The first choice corresponds to 10.24 Gyr down to $z=0$ in our chosen cosmology, the second one to 7.71 Gyr.\\ Details of our model disk are presented elsewhere \citep[e.g.][]{Cu99}. Here we summarise the main features of the disk.
The spatial distribution of the star particles follows the exponential surface density law: \( \rho _{stars}=\rho _{0}\exp (-r/r_{0}) \) where \( r_{0} \) is the disk scale length, \( r_{0}=4h^{-1} \)kpc, and \( \rho _{0} \) is the central surface density. The disk is truncated at five scale lengths, with a radius \( R_{disk}=20h^{-1} \)kpc. To obtain each disk particle's position according to the assumed density distribution, we used the rejection method \citep{Pre86}. The vertical coordinate is extracted from a Gaussian distribution with a standard deviation equal to 1\% of the disk radius. Circular velocities are assigned analytically to disk particles accounting for the global (disk+cosmological halo) potential, $\Phi$. The radial velocity dispersion ${\sigma}_R$ is assigned through a Toomre parameter {\it Q}. {\it Q} is initially constant at all disk radii and is defined as ${Q}= {{ {\sigma}_R \,\kappa } \over{ 3.36 \,G \,\Sigma }}$, where $\kappa$ is the epicyclic frequency and ${\Sigma}$ the surface density of the disk. According to the isothermal sheet approximation, the ratio of radial to vertical dispersion is fixed and constant through the disk; moreover, the azimuthal dispersion is linked to the radial dispersion via the epicyclic approximation \citep{hern93}. The final velocity distributions are Gaussian, with the dispersions given above.\\ Assigning a constant initial {\it Q}, we can easily classify our disks on the basis of the initial temperature. We explore two values of {\it Q}: 1.5, which corresponds to a {\it warm} disk, and 0.5, to a {\it cold} disk. The average {\it Q} value of stars in the Milky Way is estimated between 1 and 3 \citep{bintre87}; however, the evolution of such a parameter starting from high {\it z} is not known. \\ Our galaxy model is very simplified. Neither gas nor star formation is introduced, since we aim to focus on the \emph{gravitational} effect of the halo on the disk and to have hints on the \emph{gravitational} feedback of the disk itself on the halo. Moreover, our technique is such that the CPU cost of one simulation, while large, is still much lower than the cost of a galaxy formation simulation like that of \citet{Aba03}, even if our force and mass resolution are comparable. Thus our work could give insights into the self-consistent galaxy formation scenario.\\ In the following we summarise the main steps of our approach:\\ i) the halo is identified at redshift $z=0$ ;\\ ii) its particles are tracked back to the selected redshift (i.e. $z=1$ and $z=2$), and the minimum of their potential well is calculated;\\ iii) a \emph{sphere} of radius R\( _{sphere} \)=3R\( _{disk} \) is extracted from the high resolution simulation; its bulk velocity and the position of its centre are recorded. R\( _{sphere} \) is chosen to ease the comparison of our results with previous numerical work on disk stability, e.g. \citet{Cu99} and \citet{Ma01}.
Note that \( R_{sphere} \geq R_{vir} \) and \( M_{halo}^{vir} \geq M_{halo}^{sphere} \);\\ iv) the vector angular momentum \( \overrightarrow{J} \) and the gravitational potential \( \Phi \) are calculated inside \( R_{sphere} \);\\ v) the disk, in gravitational equilibrium with the potential \( \Phi \) and rotating in a plane perpendicular to \( \overrightarrow{J} \), is generated;\\ vi) we embed the disk in the high resolution simulation, at the chosen redshift, with its centre of mass in the minimum of the potential well of the DM halo;\\ vii) the bulk velocity of the halo is added to the star particles.\\ The cosmological simulation is then evolved, in comoving coordinates, to the final redshift, $z=0$. \section{The DM halo} After selecting the halo and resampling the corresponding Lagrangian region at higher resolution, we run the DM-only simulation to extract the halo properties in the absence of any embedded stellar disk. The mass of our halo at $z=0$, \(1.03\cdot 10^{11}h^{-1} \) M\( _{\odot } \), corresponds to a radius \( R_{vir} = 94.7h^{-1}\)kpc, which encloses 84720 halo particles. The nearest DM halo \footnote{ Halos have been identified using the friends-of-friends algorithm with a linking length $l = 0.15$ mean interparticle distances and more than 8 particles.} more massive than $10^{10} h^{-1} M_\odot$ is $\sim 1900 h^{-1}$ kpc away from the centre of our halo; the nearest less massive one, with a mass of $4.6\cdot 10^7 h^{-1} M_{\odot}$, is $\sim 215 h^{-1}$ kpc away. Moreover, the density contrast, $\delta$, decreases monotonically with radius, and $\delta$ falls below unity at $\sim ~550h^{-1}, ~450h^{-1}, ~350 h^{-1}$ physical kpc away from the centre of our halo at $z=0, z=1$, and $z=2$ respectively. Therefore, we conclude that the selected halo is living in an under-dense environment.
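The group identification quoted above relies on a standard friends-of-friends linkage; a minimal sketch of the algorithm (illustrative only, not the actual code used, and ignoring the periodic boundaries of the real box):
\begin{verbatim}
# Sketch: friends-of-friends grouping with linking length b times
# the mean interparticle separation (b = 0.15 in the text); groups
# below min_members are discarded.
import numpy as np
from scipy.spatial import cKDTree

def fof_groups(pos, box_size, b=0.15, min_members=8):
    n = len(pos)
    link = b * box_size / n ** (1.0 / 3.0)
    pairs = cKDTree(pos).query_pairs(link)
    parent = list(range(n))
    def find(i):                     # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= min_members]
\end{verbatim}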
The accretion history of our halo has been calculated as follows. Starting from redshift $z_1=0$, we identified DM halos using the public halo finder SKID \footnote{http://www-hpcc.astro.washington.edu/tools/skid.html} \citep{Sta01} at the redshift $z_2=z_1+dz_1$, corresponding to our previous simulation output. We then define as {\it progenitor} of our halo a SKID group at the redshift $z_2$ if at least a fraction $f=30$\% of its particles come from the halo at the redshift $z_1$. We also identify as {\it accreting field particles} all the DM particles not belonging to any SKID group but belonging to the halo. We then iterate the procedure, using the simulation output corresponding to the redshift $z_3=z_2+dz_2$; the progenitors are now all the groups which have at least a fraction $f$ of particles coming from progenitors at $z_2$ {\it or} from the accreting field particles, and so on for the earlier redshifts. We check that the qualitative behaviour of the accretion history does not depend on the value of $f$ (we also tested $f$=20\% and $f$=50\%) or on the parameters used in SKID (we use a typical object size $\tau = 5 h^{-1}$kpc, but the effects of 3$h^{-1}$\,kpc and $6h^{-1}$\,kpc have also been explored). From Fig. \ref{accrhist}, we can note that the halo suffers its last major merger (i.e. a merger between two progenitors whose masses have a ratio not larger than 3) at $z=9$. After $z \sim 5$, the most important contribution to its mass comes from accreting field particles. This contribution declines after $z \sim 2$, becoming less and less important. At $z \sim 0.9$, the total accreting mass is smaller than the mass of the larger sub-halo. Thus we conclude that our halo suffers no significant merger during the time it hosts our stellar disk, nor immediately before.\\ The properties of the selected halo at three relevant redshifts are listed in Table \ref{halotable}. Its density profile is well fitted by an NFW form (\citet{Nav96}; \citet{Nav97}) at $z \leq 2$. The concentration, $C_{vir}$ \footnote{ We note however that $C_{NFW}$ is defined against $R_{200}$, the radius enclosing a sphere with overdensity equal to 200 times the critical density of the Universe, and not against $R_{vir}$ as here; therefore, in our cosmological model, it is always $C_{NFW}<C_{vir}$. At $z=0$ our halo has $C_{NFW} \sim 14$.}, here defined as $R_{vir}/R_s$, takes a high value, 18.1, confirming that this halo does ``form'' at quite high redshift (see e.g. \citet{Wec02} for a discussion about the link between concentration and assembly history of the halo). The dimensionless spin parameter of the halo is defined as $\lambda = { J \over \sqrt{2} MVR }$ \citep{bul01}, where $J$ is the angular momentum inside a sphere of radius $R$ and $V$ is the halo circular velocity, $V^2=GM/ R$. Its values in Table \ref{halotable} are near the average ones for our cosmological model \citep[$\lambda = 0.035$;][]{Mal02}. \section{Disk simulations} We performed seven simulations of the disk+halo system, as described below (Sect. 4.1). By comparing the results of such a set of 7 simulations (Sect. 5) with the DM-only run, we disentangle the effect of the stellar disk on the halo evolution in the cosmological framework. Several simulations of the isolated disk+halo system are also run, to disentangle the effect of the cosmological environment (Sect. 4.2 and 5.2).
We used 56000 star particles to describe our disk; the (Plummer-equivalent) softening length, the same for DM and star particles, is 0.5$\,h^{-1}\,$kpc in comoving coordinates \footnote{Note that, since the disk is modelled in physical coordinates and embedded in the cosmological halo at redshifts $z=2$ and $z=1$, its thickness is larger than the value of the Plummer softening we use.}. We used a time-step criterion based on the local dynamical time (criterion {}``3{}'' of the GADGET code), which provides $2-6 \times 10^4$ time-steps from $z=2$ to $z=0$ (except one case, simulation c5 of Table \ref{cosmsimtable}, which needs only $\sim 7000$ time steps). The most CPU-expensive of our simulations needed $\sim 5000$ CPU hours to be completed on the SP4 computer (CINECA computing center). \subsection{Cosmological cases} \begin{table*} \caption{ Simulations: initial values.} \label{cosmsimtable} \begin{tabular}{c c c c c c c c c} \hline\hline $N$ & $Q$ & $M_{disk}$ & $ {\it z} $ & $M_{DM}$ & $R_{DM}$ & $ \alpha \, r_m$ & $\frac{v_m}{(\alpha G M_{disk})^{1/2}}$ & halo \\ \hline c1 & 0.5 & 1 & 2 & 0.64 & 0.64 & 1.9 & 0.67 & ${}$ \\ c2 & 0.5 & 0.33 & 2 & 0.64 & 1.94 & 1.0 & 1.08 & ${}$ \\ c3 & 0.5 & 0.1 & 2 & 0.64 & 6.4 & 0.9 & 1.68 & ${}$ \\ c4 & 1.5 & 0.33 & 2 & 0.64 & 1.94 & 1.0 & 1.08 & ${}$ \\ c5 & 1.5 & 0.1 & 2 & 0.64 & 6.4 & 0.9 & 1.68 & ${}$ \\ c6 & 0.5 & 0.33 & 1 & 0.67 & 2.0 & 1.05 & 1.05 & ${}$\\ c7 & 0.5 & 0.1 & 1 & 0.67 & 6.7 & 1.0 & 1.6 & ${}$ \\ i1 & 1.5 & 0.33 & ${}$ & 0.64 & 1.94 & 1.0 & 1.08 & cosm\\ i2 & 1.5 & 0.1 & ${}$ & 0.64 & 6.4 & 0.9 & 1.68 & cosm\\ i3 & 1.5 & 0.33 & ${}$ & 0.64 & 1.94 & 1.0 & 1.08 & cosm/frozen disk\\ i4 & 1.5 & 0.33 & ${}$ & 0.95 & 2.87 & 0.85 & 1.5 & NFW \\ i5 & 1.5 & 0.1 & ${}$ & 0.95 & 9.5 & 1.27 & 1.25 & NFW \\ \hline \end{tabular} \\ I col: simulation number and simulation type (c: cosmological simulations, i: isolated simulations) \\ II col: initial {\it Q} value of the disk\\ III col: mass of the disk in $5.9\times 10^{10}\, M_\odot$\\ IV col: initial redshift (for the cosmological cases)\\ V col: initial DM mass inside the disk radius\\ VI col: initial halo-to-disk mass ratio inside the disk radius\\ VII and VIII cols: \citet{Efs82} parameters, where $\alpha={r_0}^{-1}$, $v_m$ is the maximum rotational velocity, and $r_m$ the corresponding radius.\\ IX col: type of halo used (for the isolated cases) \end{table*} \begin{table*} \caption{ Simulations: final results} \label{cosmsimtable_fin} \begin{tabular}{c c c c c c c c } \hline\hline $N$ & $M_{DM}$ & $R_{DM}$ & $S_m$ & $Q_t$ & $a_{max}$ & bulge & bars in bars\\ \hline c1 & 0.79 & 0.8 & 0.42 & 0.38 & 7 & y & n\\ c2 & 0.77 & 2.39 & 0.33 & 0.44 & 8 & y & n\\ c3 & 0.73 & 7.41 & 0.8 & 0.07 & 3.8 & n & y\\ c4 & 0.78 & 2.40 & 0.48 & 0.37 & 5 & weak & n\\ c5 & 0.73 & 7.41 & 0.70 & 0.08 & 6.5 & n & y\\ c6 & 0.79 & 2.43 & 0.35 & 0.42 & 6.8 & weak & n\\ c7 & 0.77 & 7.73 & 0.58 & 0.16 & 5.0 & n & y\\ i1 & 0.73 & 2.21 & 0.25 & 0.4 & 9.5 & y & n \\ i2 & 0.51 & 5.1 & 0.68 & 0.1 & 8 & n & y \\ i3 & 0.7 & 2.12 & 0.3 & 0.39 & 10 & y & n \\
i4 & 1.0 & 3.03 & 0.33 & 0.42 & 6 & y & n \\ i5 & 0.95 & 9.5 & 0 & 0 & 0 & n & no bar \\ \hline \end{tabular} \\ I col: simulation number and simulation type\\ II col: DM mass inside the disk radius in $5.9\times 10^{10}\, M_\odot$\\ III col: halo-to-disk mass ratio inside the disk radius\\ IV col: maximum bar strength at $z=0$; strong bars require $S_m\le 0.6$ \citep{Ma01}\\ V col: bar strength evaluated according to \citet{Comb81}; a stronger bar corresponds to higher values of $Q_t$\\ VI col: major axis (physical kpc) corresponding to the maximum bar strength\\ VII col: morphology of the inner region of the disk\\ VIII col: peculiar features inside the disk \end{table*} The main parameters and the initial properties of this set of simulations are listed in Table \ref{cosmsimtable}.\\ A global stability criterion for bar instability in a disk galaxy is the one analysed by \citet{Efs82}. In that paper the parameters $\alpha \, r_m$ and $v_m/{(\alpha M G)}^{1/2}$ (where $v_m$ is the maximum value of the disk rotational curve, $r_m$ the corresponding radius, ${\alpha} = {{r_0} ^{-1}}$ and $M$ is the disk mass) were defined. \citet{Efs82} stated the criterion ${v_m/{{(\alpha M G)}^{1/2}}} \geq {1.1} $ over the range $0.1 \leq {\alpha \, r_m} \leq 1.3$ for a disk model to be stable against bar formation. The values of these parameters are reported in Table \ref{cosmsimtable}.\\ Simulations c1, c2 and c3 in Table \ref{cosmsimtable} refer to a {\it cold} disk ($Q=0.5$). In simulation c1, at the final time (i.e. $z=0$) the baryon fraction inside $R_{vir}$, $f_{b}=M_{disk}/M_{disk+DM} \sim 0.34$, is 44\% less than its initial value, 0.53. The final baryon fraction of simulation c2 is $\simeq 0.16$, compared with its initial value, 0.28. Simulation c3 provides $f_{b}\simeq 0.05$ at $z=0$, 50\% less than its initial value. Simulations c4 and c5 provide the same final baryon fractions as the corresponding simulations with the lower Toomre parameter.\\ Simulations c6 and c7, which explore the role of the initial redshift on the bar instability, provide nearly the same final values of the baryon fraction as the corresponding simulations c2 and c3. Neither the Toomre parameter nor the initial redshift affects the evolution of this ratio, which is driven by the mass of the stellar disk. While the baryon fraction of simulation c1 is too high to be consistent with the cosmological value 0.166 \citep{Ett03}, all the other simulations give baryon fractions in the allowed range. We however emphasise that the aim of the current work is not to build a realistic galaxy model, but to study the effect of different halo-to-disk mass ratios on the onset of the bar instability. We verify that the inclusion of the disk does not result in significant changes in the accretion history of the DM halo.\\ \subsection{Isolated cases} We also performed several simulations of the isolated disk+halo system, using the same halo as extracted from our cosmological simulations at $z=2$ (Sect. 5.2 and Appendix). By comparing the results of this set of simulations with the previous ones we aim to disentangle the effect of the large scale cosmological structure and of the cosmological expansion on the system evolution. Moreover, such results are directly comparable both with our previous works (\citet{Cu99}; \citet{Ma01}) and with those in the literature \citep{atha87}. The initial and final values for these simulations are listed in Tables \ref{cosmsimtable} and \ref{cosmsimtable_fin}.
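As a cross-check, the \citet{Efs82} criterion of Sect. 4.1 can be applied directly to the $(\alpha\,r_m$, $v_m/(\alpha G M_{disk})^{1/2})$ pairs of Table \ref{cosmsimtable}; the following minimal sketch simply transcribes those values:
\begin{verbatim}
# Sketch: Efstathiou et al. (1982) global criterion; stability if
# v_m/(alpha*M*G)^(1/2) >= 1.1 within 0.1 <= alpha*r_m <= 1.3.
# Pairs (alpha*r_m, v_m/(alpha*G*M_disk)^(1/2)) from Table 2.
table2 = {
    "c1": (1.9, 0.67), "c2": (1.0, 1.08), "c3": (0.9, 1.68),
    "c4": (1.0, 1.08), "c5": (0.9, 1.68), "c6": (1.05, 1.05),
    "c7": (1.0, 1.6),  "i4": (0.85, 1.5), "i5": (1.27, 1.25),
}

def verdict(alpha_rm, ratio):
    if not 0.1 <= alpha_rm <= 1.3:
        return "outside calibrated range"
    return "stable" if ratio >= 1.1 else "bar unstable"

for name, (alpha_rm, ratio) in table2.items():
    print(name, verdict(alpha_rm, ratio))
\end{verbatim}
In particular, the criterion predicts stability for i5, consistent with the absence of a bar reported in Sect. 5.2, while the bars that nevertheless develop in the cosmological runs c3 and c5 anticipate the role of the halo evolution discussed below.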
We stress that our isolated halo is produced by a non-dissipative collapse in a cosmological scenario. As a consequence its mass distribution is not spherically symmetric. Moreover, such a halo is anisotropic and endowed with a spin parameter and substructure. Therefore it is different from the standard isolated halos used in the literature to study bar instabilities, since it keeps a relic cosmological signature. When the halo is extracted from its cosmological environment, the large scale structure, the continuing matter infall and the expansion of the Universe no longer influence its evolution. Such a halo {\it cannot} be in gravitational equilibrium, because its evolution is not yet complete at either $z=2$ or $z=1$, as shown in Fig. \ref{accrhist}. For this reason, the results presented below, concerning the behaviour of the disk embedded in such an ``isolated'' halo, have to be compared with similar cases in which non-equilibrium DM halos are used, as e.g. in \citet{Cu99}; \citet{Ma01}. After subtracting the CM velocity and embedding the disk, as described in Sect. 2, items i-v, we integrated the system in {\it physical} coordinates (the effect of the cosmological expansion is therefore ruled out in these models). A further difference is that the softening length is now in physical units. We have at least 10000 time steps from the initial time to $t=10.24$ Gyr, corresponding to redshift 0.\\ Finally, in order to disentangle the effect of the geometry and of the spin of an isolated halo, we also performed two simulations using a Navarro, Frenk and White (NFW) halo having the same virial radius and mass as our cosmological one. The initial and final values of these two simulations are listed in the two last lines of Tables \ref{cosmsimtable} and \ref{cosmsimtable_fin}. \\ \section{Results} In this section we present the evolution of the isodensity contours of the different cosmological simulations. From these contours we evaluate the final bar strengths, which are reported in Table \ref{cosmsimtable_fin}. The spatial resolution of the maps is always 0.5$h^{-1}$ {\it physical} kpc and the box size is 40 times the spatial resolution. Contours are computed at 11 fixed levels, ranging from $2\times 10^{-4}$ to $0.015$, in terms of the fraction of stars per resolution element with respect to the total number density of stars in the map. Following \citet{Cu99}, we define, as a measure of the bar strength, the maximum value of the axial ratio, $S_m=b/a$ (Table \ref{cosmsimtable_fin}): a strong bar corresponds to $S_m\leq0.6$ or to an ellipticity, $\epsilon=(1-b/a)$, larger than 0.4. \subsection{Morphologies of the stellar disk in the cosmological framework} Figs. \ref{dens1}, \ref{dens2} and \ref{dens3} show the evolution of the isodensity contours of simulations c1, c2 and c3. More massive {\it cold} disks suffer a stronger lopsided instability ($m=1$) from the beginning of their evolution, which later degenerates into the $m=2$, i.e. bar, instability. The less massive {\it cold} disks, i.e. the DM-dominated cases, show a weaker $m=1$ instability, so the bar instability develops earlier than in the corresponding more massive cases and the disk attempts to re-arrange before the end of the simulation. \begin{figure*} \centering \includegraphics[width=10cm]{f3.eps} \caption{ Evolution of isodensity contours of simulation c1 at 11 fixed levels (see Sect. 5).
The size of all the frames is $20 h^{-1}$ physical kpc, here and in all the following figures in which isodensity contours are shown.} \label{dens1} \end{figure*} \begin{figure*} \centering \includegraphics[width=10cm]{f4.eps} \caption{Evolution of isodensity contours of simulation c2, as described in Fig. \ref{dens1}.} \label{dens2} \end{figure*} \begin{figure*} \centering \includegraphics[width=10cm]{f5.eps} \caption{Evolution of isodensity contours of simulation c3, as described in Fig. \ref{dens1}.} \label{dens3} \end{figure*} Figs. \ref{densxz} and \ref{densyz} compare the face-on, side-on and edge-on isodensity contours of simulations c1 and c2 at $z=0$. We point out that our {\it cold} intermediate mass case shows a {\it peanut shape} in the side-on view and {\it bulge-like} contours in the edge-on view. Therefore, in this case, a bulge could be mis-identified due to the bar feature. However, in the analogous {\it warm} case (Fig.\ref{densz0}) this feature does not arise. On the other hand, our more massive {\it cold} disk shows edge-on isodensity contours with a less defined inner bulge and rather thick disk-like contours in the outer regions. Its side-on view corresponds to a boxy image without an extreme peanut feature.\\ Therefore the halo-to-disk ratio has a significant influence on the stellar disk at $z=0$.\\ The {\it Q} parameter does not influence the final ($z=0$) morphologies of our less massive disks, which always show disk-like shapes (Fig.\ref{densz00}). This suggests regarding the {\it cold} intermediate mass case as a peculiar one as far as the peanut shape is concerned. Such a feature has been recognised by \citet{Comb81} as caused by vertical orbital resonances. Fig. \ref{dens4} shows the isodensity contours of simulation c4. In this simulation the higher value of the {\it Q} parameter stabilises the disk against the local Jeans instability and the bar appears later than in the corresponding {\it cold} case (simulation c2).\\ \begin{figure*} \centering \includegraphics[width=10cm]{f6.eps} \caption{Face-on, side-on and edge-on views of the isodensity contours of simulation c1 at $z=0$.} \label{densxz} \end{figure*} \begin{figure*} \centering \includegraphics[width=10cm]{f7.eps} \caption{Face-on, side-on and edge-on views of the isodensity contours of simulation c2 at $z=0$.} \label{densyz} \end{figure*} \begin{figure*} \centering \includegraphics[width=10cm]{f8.eps} \caption{Face-on, side-on and edge-on isodensity contours at $z=0$ of simulation c4 in Table \ref{cosmsimtable}.} \label{densz0} \end{figure*} \begin{figure*} \centering \includegraphics[width=10cm]{f9a.eps} \includegraphics[width=10cm]{f9b.eps} \caption{Face-on, side-on and edge-on isodensity contours at $z=0$ of simulation c3 (top panels) and c5 (bottom panels) in Table \ref{cosmsimtable}.} \label{densz00} \end{figure*} Therefore {\it warmer} disks are more stable against the lopsided instability than the corresponding {\it cold} cases. Inside {\it warmer} and less massive disks, bars in bars, namely bar features at different isodensity levels nested with twisting major axes, are also seen. \begin{figure*} \centering \includegraphics[width=10cm]{f10.eps} \caption{Evolution of isodensity contours of simulation c4, as described in Fig. \ref{dens1}.} \label{dens4} \end{figure*} \begin{figure*} \centering \includegraphics[width=10cm]{f11.eps} \caption{Face-on, side-on and edge-on isodensity contours of simulation c6 at $z=0$.} \label{dens6} \end{figure*} Isodensity contours of simulation c6 (Fig.
\ref{dens6}) are quite similar to those of the corresponding simulation c2 (Fig. \ref{densyz}), which however starts at $z=2$. Morphologies of both simulations c6 and c7 show thinner disks than simulations c4 and c5, given their shorter evolutionary time ($\approx 7.7 $ Gyr instead of $\approx 10.24 $ Gyr). \begin{figure*} \centering \includegraphics[width=10cm]{f12.eps} \caption{Isodensity contours, in particular face-on, side-on and edge-on views of the inner bar of simulation c7 at $z=0$.} \label{dens7} \end{figure*} \subsubsection{The bar strength} A variety of quantitative parameters have been suggested to evaluate the strength of the bar (see \citet{BuBlo01} for a review). Firstly, we quantify the growth of the bar instability by studying the time evolution of the ellipticity of our isodensity contours as a function of their major axis, $a$. The strength of the bar depends on the density contrast accounted for, and it varies with the distance from the centre; different choices can change its value but not the trend outlined in Table \ref{cosmsimtable_fin}.\\ Fig. \ref{ell1} shows that in simulation c1 the strength of the bar increases with time. The length of the bar depends on the redshift too: it grows until $z=0.5$, and then shrinks to $z=0$.\\ By comparing Fig. \ref{ell2} and Fig. \ref{ell4}, which show the ellipticity profiles of simulations c2 and c4 respectively, \begin{figure*} \centering \includegraphics[width=12cm]{f13.eps} \caption{Ellipticity as a function of the major axis, a (in physical kpc), of simulation c1 at different redshifts: $z=0$ continuous line, $z=0.25$ long-dashed line, $z=0.5$ long and short dashed line, $z=0.75$ dot-dashed line, $z=1$ short dashed line, $z=1.25$ dotted line. } \label{ell1} \end{figure*} \begin{figure*} \centering \includegraphics[width=12cm]{f14.eps} \caption{Ellipticity as a function of the major axis, a, of simulation c2 at different redshifts; symbols are as in Fig.\ref{ell1}} \label{ell2} \end{figure*} \begin{figure*} \centering \includegraphics[width=12cm]{f15.eps} \caption{Ellipticity as a function of the major axis, a, for simulation c4 at different redshifts; symbols are as in Fig.\ref{ell1}} \label{ell4} \end{figure*} we point out that a greater {\it Q}, in intermediate mass disks, directly reflects on the bar strength: a stronger local gravitational instability, corresponding to a lower {\it Q} value, triggers a stronger bar (Table \ref{cosmsimtable_fin}). For the less massive disks, {\it Q} has little influence on the bar strength. In these cases, the local Jeans instability has a small impact on the bar formation and evolution, which is dominated instead by the dynamics of the DM halo.\\ The embedding redshift does not have a major impact on the bar strength. Its most important effect is the change of the bar {\it length}, which can be connected with the larger time span of simulation c2 (or c3) with respect to simulation c6 (or c7). Therefore, the halo evolution between $z=2$ and $z=1$ does not seriously affect the disk instability, at least for the {\it cold} disk cases.\\ \citet{Comb81} have defined the bar strength at radius $R$ by using the parameter $Q_t= F_T^{max}(R)/\langle F_R(R)\rangle$, where $F_T^{max}=[{\partial \Phi(R,\theta)}/{\partial \theta}]_{max}$ is the maximum amplitude of the tangential force at radius $R$ and $\langle F_R(R)\rangle=R({\partial \Phi_0}/{\partial r})$ is the mean axisymmetric radial force, at the same $R$, derived from the $m=0$ component of the gravitational potential.
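A minimal sketch of how $Q_t$ can be estimated, given the force components sampled on a polar grid (illustrative only; the results below rely on the method of \citet{BuBlo01}, not on this sketch):
\begin{verbatim}
# Sketch: bar-strength parameter Q_t(R) = max_theta |F_T(R,theta)|
# divided by the azimuthally averaged radial force <F_R(R)>.
# F_T, F_R are assumed precomputed on (n_R, n_theta) polar grids.
import numpy as np

def Q_t_profile(F_T, F_R):
    F_T_max = np.max(np.abs(F_T), axis=1)    # max tangential amplitude
    F_R_mean = np.abs(np.mean(F_R, axis=1))  # mean (m=0) radial force
    return F_T_max / F_R_mean

# The bar strength is then the peak of Q_t over radius:
# strength = Q_t_profile(F_T, F_R).max()
\end{verbatim}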
In practice, we evaluated the components of the gravitational force on a suitable two-dimensional grid using the method described by \citet{BuBlo01}. However, information provided by such an approach could be affected by spiral arm torques and by some asymmetry in the bar itself \citep{BuBlo01}. Nevertheless, we succeeded in monitoring the behaviour of such a parameter for simulations c2, c4 and c6 (Fig. \ref{Qg}). The {\it{cold}} cases end up with almost the same value of $Q_t$ even if their evolution starts from different redshifts. The {\it{warmer}} case, instead, maintains a smaller value of the bar strength throughout the evolution, in agreement with the results obtained by using the ellipticity parameter. Table \ref{cosmsimtable_fin} shows that the final values (i.e. at $z=0$) of the bar strength evaluated with both these methods are consistent. \begin{figure*} \centering \includegraphics[width=12cm]{f16.eps} \caption{Evolution of bar strength after $z=1$ evaluated using the gravitational torque (see text) for the simulations c2 (full line), c4 (dotted line), and c6 (dashed line) } \label{Qg} \end{figure*} According to the classification of \citet{BuBlo01}, we assign class 1 to our less massive barred galaxies if their evolution starts from $z=2$ (i.e. simulations c3 and c5), class 2 if they evolve from $z=1$ (i.e. simulation c7), and class 4 to all the other ones (i.e. simulations c1, c2, c4 and c6). \subsection{Comparisons with isolated cases} In order to investigate the role of the cosmological framework on the bar instability, we perform isolated simulations using the same halo and the same disk-to-halo mass ratios as in our cosmological setting (Sect. 4.2). We adopt $Q=1.5$ as the stability parameter of the disk. Our results show that the less massive disks do not present important differences as far as the bar feature is concerned: the bar strength is the same and the same bar-within-bar features arise. In Fig. \ref{profcomp} we compare the halo radial density profiles of simulations c5 and i2. The density of the halo evolving in isolation initially becomes steeper, then it gradually flattens in the centre. In the outer regions, where the support of the cosmological environment is now lacking, the halo is slowly losing matter toward bigger scales and the profile is steadily steepening. On the other hand, the halo evolving in the cosmological environment continues to accrete mass and small substructures from larger scales. Such accretion is still significant up to redshift $z \approx 0.5$ at least (Fig. \ref{accrhist}). Even if the dynamical evolution of the halo is different in cosmological and isolated simulations, the bar in the disk does form and evolve in a similar way. Thus we hypothesise that the common features of the two numerical experiments, namely the dynamical evolution and the anisotropy of the mass distribution, are the main engine for the bar instability. \begin{figure*} \centering \includegraphics[width=12cm]{f17.eps} \caption{ Radial density profiles of the DM halo in simulations c5 (dashed lines) and i2 (solid lines) at redshifts $z=0, 0.5, 1.0, 1.5$ from top to bottom for simulation c5 and at the equivalent evolutionary times for simulation i2. The three lower pairs of profiles have been divided by $10^2, 10^4, 10^6$ for clarity. We also show an NFW density profile having a concentration parameter $c \approx 23$ (dotted line), obtained as a two-parameter best fit of the density profile of simulation c5 at $z=0$.
Length units are in physical kpc.} \label{profcomp} \end{figure*} The large-scale cosmological environment becomes a second-order effect in the less massive disks. However, the material accreting onto the halo, which is cut off when the halo is segregated as an isolated system, plays a crucial role in the degree of the disk instability if the disk is not completely DM dominated. We conclude that the use of isolated halos in gravitational equilibrium for the study of the bar instability can give misleading results.\\ Taking into account our previous works in such an isolated, non-cosmological framework (\citet{Cu99}; \citet{Ma01}), we conclude that live {\it unrelaxed} halos correspond to the most ``realistic'' approach available to simplify the picture. Even if the caveat outlined above cannot be forgotten, the dynamical state of the halo, as pointed out in our works for the first time, plays a fundamental role in triggering and fuelling such an instability.\\ In order to disentangle the role of the halo's cosmological features, like the prolate geometry and the spin, on the instability, and to test the resolution effect, we produced an isolated halo with the same virial mass, radius and number of particles as our cosmological halo at $z=0$, but with an isotropic NFW radial density profile. The procedure is described by \citet{hern93} (a minimal sketch is given below). We used a rejection technique to sample the density profile and we then assigned a velocity to each particle following a local Maxwellian velocity dispersion. We checked that after 7 Gyr of evolution the radial density profile of the halo does not change, except for the ``evaporation'' of some particles dwelling in its outskirts. We then embedded a disk having the same mass, radius and {\it Q} as in our simulations c5 and c3. These two simulations are labelled in Table \ref{cosmsimtable} and Table \ref{cosmsimtable_fin} as i5 and i4. According to the classical theory (Sect. 4.1), in simulation i5 the bar instability would be inhibited. We successfully reproduced this result with our live NFW halo (Fig. \ref{NFWhalo}). Therefore the bar instability in simulation c5 is a genuine effect of the cosmological evolution and there is no evidence for a role of numerical noise. Moreover we note that in simulation i5 the reaction of the DM halo to the disk immersion has {\it not} triggered a long-lived bar instability (see Fig. \ref{NFWhalo}). \begin{figure*} \centering \includegraphics[width=10cm]{f21c.eps} \includegraphics[width=10cm]{f21b.eps} \includegraphics[width=10cm]{f21a.eps} \caption{Evolution of isodensity contours of the simulation i5 at different evolutionary times: from top to bottom, $t=5\,$Gyr, $t=7.5\,$Gyr and $t=10\,$Gyr; xy, yz and xz projections from left to right (see text (Sect. 7) for more details).} \label{NFWhalo} \end{figure*} \section{Discussion} In this work we investigate the issue of the bar instability in stellar exponential disks embedded in a DM halo self-consistently evolving in a cosmological context. We aim to disentangle the effect of a few well-defined disk parameters on this instability. We also run isolated simulations using the same cosmological halo to analyse the effect of the whole cosmological framework on the results. This paper is a re-visitation, in such a cosmological scenario, of the work by \citet{Cu99}. To compare our results with that paper we use the $R_{DM}$ ratio in Table \ref{cosmsimtable}. Critical threshold values against $m=1$ and $m=2$ instability for this ratio have been calibrated also by \citet{atha87}.
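As an aside, the \citet{hern93} initial-condition recipe promised above (rejection sampling of the NFW radial profile, followed by a local Maxwellian velocity assignment from the isotropic Jeans equation) can be sketched as follows; units are $G = r_s = 1$, and the truncation, random seed and all names are ours, for illustration only:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_nfw_radii(n, c=23.0):
    """Rejection-sample x = r/r_s from p(x) ~ x**2 rho(x) ~ x/(1+x)**2, x in (0, c]."""
    out = np.empty(0)
    while out.size < n:
        x = rng.uniform(0.0, c, size=2 * n)
        u = rng.uniform(0.0, 0.25, size=2 * n)     # 0.25 bounds x/(1+x)**2
        out = np.concatenate([out, x[u < x / (1.0 + x) ** 2]])
    return out[:n]

def isotropic_positions(r):
    mu = rng.uniform(-1.0, 1.0, r.size)            # cos(theta)
    ph = rng.uniform(0.0, 2.0 * np.pi, r.size)
    s = np.sqrt(1.0 - mu**2)
    return r[:, None] * np.column_stack([s * np.cos(ph), s * np.sin(ph), mu])

def maxwellian_velocities(r, c=23.0, mtot=1.0):
    """sigma^2(r) from the isotropic Jeans equation, then 3D Gaussian draws."""
    mass = lambda x: np.log(1.0 + x) - x / (1.0 + x)   # NFW cumulative mass
    rho = lambda x: 1.0 / (x * (1.0 + x) ** 2)
    xg = np.geomspace(1e-3, 20.0 * c, 4096)
    f = rho(xg) * mass(xg) / xg**2
    df = 0.5 * (f[1:] + f[:-1]) * np.diff(xg)
    outer = np.concatenate([np.cumsum(df[::-1])[::-1], [0.0]])  # int_x^xmax f dx'
    sig2 = np.interp(r, xg, outer / rho(xg)) * mtot / mass(c)
    return rng.normal(scale=np.sqrt(sig2)[:, None], size=(r.size, 3))

r = sample_nfw_radii(100000)
pos, vel = isotropic_positions(r), maxwellian_velocities(r)
\end{verbatim}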
\citet{atha87} indicate a value around $0.81$ of the $R_{DM}$ ratio to inhibit the lopsided instability (i.e. $m=1$) and around $2.2$ to suppress the $m=2$ swing amplification instability. Even if these values are derived in a very simple framework, i.e. an isolated, spherical and analytical DM halo, they are widely used in the literature (\citet{bot03}; \citet{Elm03}), therefore we will refer to this parameter to analyse our initial conditions, in addition to the Efstathiou et al. parameter introduced in Sect. 4.1. \\ Looking at Table \ref{cosmsimtable}, we point out that simulations c1, c2, c4 and c6, which develop strong final bars (Table \ref{cosmsimtable_fin}), are in the instability region for both these criteria. In particular in simulation c1, which is below the threshold of the lopsided instability too \citep{atha87}, the signatures of such an instability are clearly shown in the first phases of its evolution (Fig. \ref{dens1}). On the other hand, simulations c3, c5 and c7 are stable according to both the criteria above. Nevertheless, a weaker bar appears and lasts until the end of such simulations. Therefore the classical parameters are not good markers of the onset of the bar instability. In particular, when the self-gravity of the disk is negligible, i.e. the disk is DM dominated, the halo structure generated by the cosmology plays a crucial role in triggering such an instability. \\ Our findings agree with the results of \citet{May04} in the isolated framework. They found that stellar systems with disk-to-halo mass ratios of 0.1 become bar unstable, regardless of the halo concentration and the {\it Q} value, inside halos built up with suitable structural parameters derived from $\Lambda$-CDM cosmology, like their circular velocity at $R_{vir}$, $V_{vir}$, the NFW density profile and the spin parameter (0.06 and 0.1). We point out that \citet{May04} do not take into account cosmological evolution for their halos. With the same disk-to-halo mass ratio \citet{atha02} found that such an instability is totally inhibited inside isotropic, non-rotating halos with different density profiles \citep[eq. 1 of][]{atha02}, in agreement with the result of our simulation i5 (Sect. 7). This last result, together with those of simulations performed with different numbers of disk particles and with different softening lengths (Sect. 7), suggests that the development of long-living bars seen in our simulations is a genuine physical effect and not a numerical artifact.\\ Bar instability in the DM dominated cases is strongly affected by the halo models. Moreover, structural details of the halo, related to the cosmological framework, drive morphological features of the stellar disk. \citet{May04} indeed found a central bulge after 7 Gyr which does not appear in our corresponding case. However such a bulge shows up in our intermediate self-gravitating case (simulation i1 in Table \ref{cosmsimtable}). This feature is also emphasised in the works by \citet{athami02} and \citet{atha03} for a disk-to-halo mass ratio 0.2, using the same halo presented in \citet{atha02} with the higher halo concentration.\\ Our results here are in good qualitative agreement with those by \citet{Cu99} concerning their simulations 3, 4, 7 and 8, with the same disk-to-halo mass ratio as in simulation c1 of Table \ref{cosmsimtable}, and also with their simulations 5 and 6, which correspond to a disk-to-halo mass ratio 0.2. However, we remark that their simulations 3 and 4 correspond to a {\it relaxed} halo, whereas 5, 6, 7 and 8 to an {\it unrelaxed} halo.
In particular the $R_{DM}$ initial values of their simulations 5 and 6 are respectively above and very near the 2.2 threshold value of bar instability; nevertheless, the bar lasts until the end of their simulations ($\simeq\,1.5$\,Gyr). Their simulations 1 and 2, which correspond to a {\it relaxed} dynamical state of a halo with disk-to-halo mass ratio 0.2, show however a very different behaviour as far as the bar instability is concerned: the bar forms initially but then degenerates into a dense nucleus. Thus we argue that an {\it unrelaxed} dynamical state for isolated halo systems is more suitable to mimic a realistic ``cosmological'' halo, characterised by evolution, substructure and in-fall. This finding is important, since the vast majority of the work on the bar instability assumes a \emph{gravitationally stable} halo. \section{Conclusions} In this work we present the first attempt to analyse the growth of the bar instability in a fully consistent cosmological framework. We investigate such an issue in stellar disks embedded in a DM halo self-consistently evolving in a cosmological context. We aim to disentangle the effect of a few well-defined disk parameters on this instability. We also run isolated simulations using the same cosmological halo to analyse the effect of the cosmological framework. Our results show that:\begin{itemize} \item{ Stellar disks of different properties, i.e. mass and {\it Q} parameter, embedded in the same halo and evolving in a fully consistent cosmological scenario, develop long-living bars lasting down to redshift 0.} \item{ The classical criteria to account for bar instability cannot be validated in a cosmological framework, where a bar always develops due to the halo evolution.} \item{ The strength of the bar at $z=0$ depends only weakly on the {\it Q} parameter, for a given disk mass. However, for the same disk-to-halo mass ratio, colder disks show stronger and longer bars. Thus the less massive {\it warm} disks entail the weakest bars; moreover, bar-within-bar is a common feature in their face-on morphology.} \item{ Simulations performed embedding different disks in the same halo, extracted at $z=2$ from the cosmological framework, show that the effects of the large scale structures are negligible in the less massive, DM dominated disks.} \end{itemize} Moreover, by comparing the results in this work with our previous paper \citep{Cu99}, we point out that live {\it unrelaxed} halos are the most suitable approach to mimic cosmological halos and to analyse the bar instability in the less massive disks.\\ The mass anisotropy and the dynamical evolution of the DM halo have a crucial effect in enhancing and fuelling the bar instability, also in cases where {\it ad hoc} halo models provided stability predictions \citep[e.g.][]{atha03}. The large-scale effects, such as the continuous matter infall onto the halo and the infall of substructures during the whole time-span of the simulation, influence the bar strength and the details of its structure.\\ {\bf Acknowledgements} Simulations have been performed on the CINECA IBM SP4 computer (Bo, Italy), thanks to the INAF-CINECA grants cnato43a/inato003 ``Evolution of disk galaxies in cosmological contexts'', and on the Linux PC Cluster of the Osservatorio Astronomico di Torino. We wish to thank T. Abel, S. Bonometto, A. Burkert, E. D'Onghia, F. Governato, A. Klypin \& V. Springel for useful discussions.
\section{Introduction} In 1970 I was in England, where my wife and I stayed for five months with my parents in Essex. It was largely holiday, as we were on our way back to Australia after two years in Boston, where I had been introduced to the six-vertex models and the Bethe ansatz by Elliott Lieb. However, I did visit Cyril Domb's group at King's College, London, and it was there that I first interacted with Tony Guttmann, who was also visiting the department: he was an invaluable aid to navigating the labyrinthine corridors and staircases that linked the department's quarters in Surrey Street with the main part of the College. Tony's natural enthusiasm for statistical mechanics must have been infectious, for it was at this time that I realised that the transfer matrices of the six-vertex model commuted - a vital first step in the subsequent solution of the eight-vertex model. This led to the solution of a number of other two-dimensional lattice models. One that has proved particularly challenging is the chiral Potts model. Here I wish to discuss some of the insights that led to the recent derivation of its order parameters. The chiral Potts model is a two-dimensional classical lattice model in statistical mechanics, where spins live on sites of a lattice and each spin takes $N$ values $0,1, \ldots, N-1$, and adjacent spins interact with Boltzmann weight functions $W, \overline{W}$. We consider only the case when the model is ``solvable'', by which we mean that $W, \overline{W}$ satisfy the star-triangle (``Yang-Baxter'') relations \cite{BPAuY88}. The free energy of the infinite lattice was first obtained in 1988 by using the invariance properties of the free energy and its derivatives.\cite{RJB88} Then in 1990 the functional transfer matrix relations of Bazhanov and Stroganov \cite{BazStrog90} were used to calculate the free energy more explicitly as a double integral.\cite{BBP90, RJB90, RJB91} The model has a critical temperature, below which the system exhibits ferromagnetic order. The next step was to calculate the order parameters ${\cal M}_1, \ldots , {\cal M}_{N-1}$ (defined below). These depend on a constant $k$ which decreases from one to zero as the temperature increases from zero to criticality. In 1989 Albertini {\it et al} \cite{AMPT89} made the elegant conjecture, based on the available series expansions, that \begin{equation} \label{conj} {\cal M}_r \; = \; k^{r(N-r)/N^2} \; \; , \; \; 0 \leq r \leq N \;\; . \end{equation} It might have been expected that a proof of such a simple formula would not have been long in coming, but in fact it proved to be a remarkably difficult problem. Order parameters (spontaneous magnetizations) are notoriously more difficult to calculate than free energies. For the Ising model (to which the chiral Potts model reduces when $N=2$), the free energy was calculated by Onsager in 1944 \cite{Onsager44}, but it was five years later, at a conference in Florence, that he announced his result for the spontaneous magnetization, and not till 1952 that the first published proof was given by Yang\cite{Yang52,Onsager71}. Similarly, the free energy of the eight-vertex model was calculated in 1971.\cite{Baxter71} The spontaneous magnetization and polarization were conjectured in 1973 and 1974, respectively\cite{BarberBax73, BaxKelland74}, but it was not till 1982 that a proof of the first of these conjectures was published\cite{book82}. A proof of the second had to wait until 1993\cite{JMN93}! By then three separate methods had been used.
The Onsager-Yang calculation was based on the particular free-fermion/spinor/pfaffian/Clifford algebra structure of the Ising model\cite{MPW63}. As far as the author is aware, this has never been extended to the other models: it would be very significant if it could be. The eight-vertex and subsequent hard-hexagon calculations were made using the corner transfer matrix method, which had been discovered in 1976\cite{Baxter76}. This worked readily for the magnetization (a single-site correlation), but not for the polarization (a single-edge correlation). This problem was remedied by the ``broken rapidity line'' technique discovered by Jimbo {\it et al} \cite{JMN93}. For all the two-dimensional solvable models, the Boltzmann weight functions $W, \overline{W}$ depend on parameters $p$ and $q$. These parameters are known as {\em rapidities} and are associated with lines (the dotted lines of Figure \ref{sqlattice}) that run through the midpoints of the edges of the lattice. In general these are complex numbers, or sets of related complex numbers. In all of the models we have mentioned, with the notable exception of the $N > 2$ chiral Potts model, these parameters can be chosen so that $W, \overline{W}$ depend only on the {\em rapidity difference} (spectral parameter) $p - q$. This property seems to be an essential element in the corner transfer matrix method: the star-triangle relation ensures that the corner transfer matrices factor, but the difference property is then needed to show that the factors commute with one another and are exponentials in the rapidities. The difference property is {\em not} possessed by the $N > 2$ chiral Potts model and one is unable to proceed. At first the author thought this would prove to be merely a technical complication and embarked on a low-temperature numerical calculation\cite{Baxter93} in the hope this would reveal the kind of simplifications that happen with the other models. This hope was not realised. I then looked at the technique of Jimbo {\it et al} and in 1998 applied it to the chiral Potts model. One could write down functional relations satisfied by the generalized order parameter ratio function $G_{pq}(r)$, and for $N=2$ these were sufficient (together with an assumed but very plausible analyticity property) to solve the problem. However, for $N > 2$ there was still a difficulty. Then $p$, $q$ are points on an algebraic curve of genus $> 1$ and there is no obvious uniformizing substitution. The functional relations themselves do not define $G_{pq}(r)$: one needs some additional analyticity information, and that seems hard to come by. The calculation of the free energy of the chiral Potts model \cite{RJB90, RJB91, RJB03} proceeds in two stages. First one considers a related ``$\tau_2 (t_q)$'' model.\cite{RJB04} This is intimately connected with the superintegrable case of the chiral Potts model.\cite{RJB89} It is much simpler than the chiral Potts model in that its Boltzmann weights depend on the horizontal rapidity $q$ only via a single parameter $t_q$, and are linear in $t_q$. Its row-to-row transfer matrix is the product of two chiral Potts transfer matrices, one with horizontal rapidity $q$, the other with a related rapidity $r = V R q$ defined by eqn. (\ref{autos}) of section 2. For a finite lattice, the partition function $Z$ of the $\tau_2 (t_q)$ model is therefore a polynomial in $t_q$.
The free energy is the logarithm of $Z^{1/M}$, where $M$ is the number of sites of the lattice, evaluated in the thermodynamic limit when the lattice becomes infinitely big. This limiting function of course may have singularities in the complex $t_q$ plane. {\it A priori}, one might expect it to have $N$ branch cuts, each running through one of the $N$ roots of unity. However, one can argue that in fact it only has one such cut. As a result the free energy (i.e. the logarithm of the maximum eigenvalue of the transfer matrix) can be calculated by a Wiener-Hopf factorization. The second stage is to factor this free energy to obtain that of the chiral Potts model. It was not until 2004 that I realised that: (1) If one takes $p$, $q$ to be related by eqn. (\ref{spcase}) below, then $G_{pq}(r)$ can be expressed in terms of partition functions that involve $p, q$ only via the Boltzmann weights of the $\tau_2 (t_{p'})$ model, with $p' = R^{-1} p$. (2) It is {\em not} necessary to obtain $G_{pq}(r)$ for arbitrary $p$ and $q$. To verify the conjecture (\ref{conj}) it is sufficient to obtain it under the restriction (\ref{spcase}). I indicate the working in the following sections: a fuller account is given in Ref. \cite{RJB05b}. The calculation of $G_{pq}(r)$ for general $p$, $q$ remains an unsolved problem: still interesting, but not necessary for the derivation of the order parameters ${\cal M}_r$. \section{Chiral Potts model} We use the notation of \cite{BPAuY88, BBP90, RJB98}. Let $k, k'$ be two real variables in the range $(0,1)$, satisfying \begin{equation} k^2 + {k'}^2 = 1 \;\; . \end{equation} Consider four parameters $x_p, y_p, \mu_p, t_p$ satisfying the relations \begin{equation} \label{xymu} k x_p^N = 1-k'/\mu_p^N \;\; , \;\; k y_p^N = 1-k'\mu_p^N \;\; , \;\; t_p = x_p y_p \;\; . \end{equation} Let $p$ denote the set $\{x_p, y_p, \mu_p, t_p \}$. Similarly, let $q$ denote the set $\{x_q, y_q, \mu_q, t_q \}$. We call $p$ and $q$ ``rapidity'' variables. Each has one free parameter and is a point on an algebraic curve. Define Boltzmann weight functions $W_{pq}(n), \overline{W} _{pq}(n)$ by \addtocounter{equation}{1} \setcounter{storeeqn}{\value{equation}} \setcounter{equation}{0} \renewcommand{\theequation}{\arabic{storeeqn}\alph{equation}} \begin{eqnarray} \label{WWba} W_{pq}(n) & = & (\mu_p/\mu_q)^n \prod_{j=1}^n \frac{y_q - \omega^j x_p} {y_p - \omega^j x_q} \;\; , \\ \label{WWbb} \overline{W}_{pq}(n) & = & (\mu_p \mu_q)^n \prod_{j=1}^n \frac{\omega x_p - \omega^j x_q} {y_q - \omega^j y_p} \;\; , \end{eqnarray} where \begin{displaymath} \omega \; = \; {\rm e}^{2\pi \i/N} \;\; . \end{displaymath} They satisfy the periodicity conditions \begin{displaymath} W_{pq}(n + N) = W_{pq}(n) \;\; , \;\; \overline{W}_{pq}(n + N) = \overline{W}_{pq}(n) \;\; .
\end{displaymath} \setcounter{equation}{\value{storeeqn}} \renewcommand{\theequation}{\arabic{equation}} \setlength{\unitlength}{1pt} \begin{figure}[hbt] \begin{picture}(420,260) (0,0) \multiput(30,15)(5,0){73}{.} \multiput(30,75)(5,0){32}{\bf .} \multiput(31,75)(5,0){32}{\bf .} \multiput(202,75)(5,0){35}{\bf .} \multiput(203,75)(5,0){35}{\bf .} \multiput(30,135)(5,0){73}{.} \multiput(30,195)(5,0){73}{.} \put (190,72) {\line(0,1) {8}} \put (200,72) {\line(0,1) {8}} \thicklines \put (69,72) {\large $< $} \put (70,72) {\large $< $} \put (71,72) {\large $< $} \put (308,12) {\large $< $} \put (309,12) {\large $< $} \put (310,12) {\large $< $} \put (308,72) {\large $< $} \put (309,72) {\large $< $} \put (310,72) {\large $< $} \put (308,132) {\large $< $} \put (309,132) {\large $< $} \put (310,132) {\large $< $} \put (308,192) {\large $< $} \put (309,192) {\large $< $} \put (310,192) {\large $< $} \put (42,230) {\large $\wedge$} \put (42,229) {\large $\wedge$} \put (42,228) {\large $\wedge$} \put (102,230) {\large $\wedge$} \put (102,229) {\large $\wedge$} \put (102,228) {\large $\wedge$} \put (162,230) {\large $\wedge$} \put (162,229) {\large $\wedge$} \put (162,228) {\large $\wedge$} \put (222,230) {\large $\wedge$} \put (222,229) {\large $\wedge$} \put (222,228) {\large $\wedge$} \put (282,230) {\large $\wedge$} \put (282,229) {\large $\wedge$} \put (282,228) {\large $\wedge$} \put (342,230) {\large $\wedge$} \put (342,229) {\large $\wedge$} \put (342,228) {\large $\wedge$} \thinlines \put (176,102) {{\Large \it a}} \put (320,60) {{\Large \it q}} \put (83,60) {{\Large \it p}} \put (380,-2) {{\Large \it h}} \put (380,118) {{\Large \it h}} \put (380,178) {{\Large \it h}} \put (195,105) {\circle{7}} \put (16,45) {\line(1,-1) {60}} \put (16,165) {\line(1,-1) {180}} \put (76,225) {\line(1,-1) {117}} \put (198,103) {\line(1,-1) {117}} \put (196,225) {\line(1,-1) {180}} \put (316,225) {\line(1,-1) {60}} \put (16,165) {\line(1,1) {60}} \put (16,45) {\line(1,1) {180}} \put (76,-15) {\line(1,1) {117}} \put (198,107) {\line(1,1) {118}} \put (196,-15) {\line(1,1) {180}} \put (316,-15) {\line(1,1) {60}} \put (75,105) {\circle*{7}} \put (315,105) {\circle*{7}} \put (75,-15) {\circle*{7}} \put (195,-15) {\circle*{7}} \put (315,-15) {\circle*{7}} \put (15,45) {\circle*{7}} \put (135,45) {\circle*{7}} \put (255,45) {\circle*{7}} \put (375,45) {\circle*{7}} \put (15,165) {\circle*{7}} \put (135,165) {\circle*{7}} \put (255,165) {\circle*{7}} \put (375,165) {\circle*{7}} \put (75,225) {\circle*{7}} \put (195,225) {\circle*{7}} \put (315,225) {\circle*{7}} \put (42,-40) {{\Large \it v}} \put (102,-40) {{\Large \it v}} \put (162,-40) {{\Large \it v}} \put (222,-40) {{\Large \it v}} \put (282,-40) {{\Large \it v}} \put (342,-40) {{\Large \it v}} \multiput(45,-25)(0,5){52}{.} \multiput(105,-25)(0,5){52}{.} \multiput(165,-25)(0,5){52}{.} \multiput(225,-25)(0,5){52}{.} \multiput(285,-25)(0,5){52}{.} \multiput(345,-25)(0,5){52}{.} \end{picture} \vspace{1.5cm} \caption{\footnotesize The square lattice (solid lines, drawn diagonally), and the associated rapidity lines (broken or dotted).} \label{sqlattice} \end{figure} Now consider the square lattice $\cal L$, drawn diagonally as in Figure \ref{sqlattice}, with a total of $M$ sites. On each site $i$ place a spin $\sigma_i$, which can take any one of the $N$ values $0, 1, \ldots, N-1$. The solid lines in Figure \ref{sqlattice} are the edges of $\cal L$. 
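For readers who wish to experiment numerically, the following short sketch (ours; Python, $N=3$) evaluates the weights (\ref{WWba}) and (\ref{WWbb}) at points satisfying (\ref{xymu}) and checks the periodicity conditions just stated; the particular values of $k'$ and $\mu$ are arbitrary choices of ours:

\begin{verbatim}
import numpy as np

N = 3
omega = np.exp(2j * np.pi / N)
kp = 0.3                        # k'
k = np.sqrt(1.0 - kp**2)

def curve_point(mu):
    """A point (x, y, mu, t) satisfying (xymu), with principal N-th roots."""
    x = ((1.0 - kp / mu**N) / k) ** (1.0 / N)
    y = ((1.0 - kp * mu**N) / k) ** (1.0 / N)
    return x, y, mu, x * y

def W(p, q, n):
    xp, yp, mup, _ = p; xq, yq, muq, _ = q
    r = (mup / muq) ** n
    for j in range(1, n + 1):
        r *= (yq - omega**j * xp) / (yp - omega**j * xq)
    return r

def Wbar(p, q, n):
    xp, yp, mup, _ = p; xq, yq, muq, _ = q
    r = (mup * muq) ** n
    for j in range(1, n + 1):
        r *= (omega * xp - omega**j * xq) / (yq - omega**j * yp)
    return r

p = curve_point(1.4 + 0.2j)
q = curve_point(0.9 - 0.5j)
for n in range(N):              # periodicity holds only on the curve (xymu)
    assert abs(W(p, q, n + N) - W(p, q, n)) < 1e-10
    assert abs(Wbar(p, q, n + N) - Wbar(p, q, n)) < 1e-10
\end{verbatim}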
Through each such edge there pass two dotted or broken lines - a vertical line denoted $v$ and a horizontal line denoted $h$ (or $p$ or $q$). These $v, h, p, q$ are rapidity variables, as defined above. We refer to each dotted line as a ``rapidity line''. With each SW - NE edge $(i,j)$ (with $i$ below $j$) associate an edge weight $W_{vh}(\sigma_i - \sigma_j)$. Similarly, with each NW - SE edge $(j,k)$ ($j$ below $k$), associate an edge weight $\overline{W}_{vh}(\sigma_j - \sigma_k)$. (Replace $h$ by $p$ or $q$ for the broken left and right half-lines.) Then the partition function is \begin{equation} \label{defZ} Z \; = \; \sum_{\sigma} \, \prod W_{vh}(\sigma_i - \sigma_j) \prod \overline{W}_{vh}(\sigma_j - \sigma_k) \;\; , \end{equation} the products being over all edges of each type, and the sum over all $N^M$ values of the $M$ spins. We expect the partition function per site \begin{displaymath} \kappa \; = \; Z^{1/M} \end{displaymath} to tend to a unique limit as the lattice becomes large in both directions. Let $a$ be a spin on a site near the centre of the lattice, as in the figure, and $r$ be any integer. Then the thermodynamic average of $\omega^{r a}$ is \begin{equation} \label{avfa} \tilde{F}_{pq}(r) \; = \; \langle \omega^{r a} \rangle \; = \; Z^{-1} \, \sum_{\sigma} \, \omega^{r a} \prod W_{vh}(\sigma_i - \sigma_j) \prod \overline{W}_{vh}(\sigma_j - \sigma_k) \;\; . \end{equation} We expect this to also tend to a limit as the lattice becomes large. We could allow each vertical (horizontal) rapidity line $\alpha$ ($\beta$) to have a different rapidity $v_{\alpha}$ ($h_{\beta}$). If an edge of $\cal L$ lies on lines with rapidities $v_{\alpha}$, $h_{\beta}$, then the Boltzmann weight function of that edge is to be taken as $W_{vh}(n)$ or $\overline{W}_{vh}(n)$, with $v = v_{\alpha}$ and $h = h_{\beta}$. The weight functions $W_{pq}(n)$, $\overline{W}_{pq}(n)$ satisfy the star-triangle relation.\cite{BPAuY88} For this reason we are free to move the rapidity lines around in the plane, in particular to interchange two vertical or two horizontal rapidity lines.\cite{RJB78} So long as no rapidity line crosses the site with spin $a$ while making such rearrangements, the average $\langle \omega^{r a} \rangle$ is {\em unchanged} by the rearrangement.\footnote{Subject to boundary conditions: here we are primarily interested in the infinite lattice, where we expect the boundary conditions to have no effect on the rearrangements we consider.} All of the $v, h$ rapidity lines shown in Figure \ref{sqlattice} are ``full'', in the sense that they extend without break from one boundary to another. We can move any such line away from the central site to infinity, where we do not expect it to contribute to $\langle \omega^{r a} \rangle$. Hence in the infinite lattice limit $\tilde{F}_{pq}(r) = \langle \omega^{r a} \rangle$ must be {\em independent} of {\em all} the full-line $v$ and $h$ rapidities. The horizontal rapidity line immediately below $a$ has different rapidity variables $p$, $q$ on the left and the right of the break below $a$. This means that we cannot use the star-triangle relation to move it away from $a$. It follows that $\tilde{F}_{pq}(r)$ will in general depend on $p$ and $q$, as well as on the ``universal'' constants $k$ or $k'$. We are particularly interested in the case when $q = p$.
Then the $p,q$ line is not broken, it can be removed to infinity, so \begin{equation} \label{defMr} {\cal M}_r \; = \; \tilde{F}_{pp}(r) \; = \; \langle \omega^{r a} \rangle \; = \; {\rm independent \; \; of }\; \; p \;\; . \end{equation} These are the desired order parameters of the chiral Potts model, studied by Albertini {\it et al}. By using this ``broken rapidity line'' approach, I was finally able to verify their conjecture (\ref{conj}) in 2005\cite{RJB05a,RJB05b}. Here I shall present some of the observations that enabled me to do this. \subsection*{Automorphisms} There are various automorphisms that change $x_p, y_p, \mu_p, t_p$ while leaving the relations (\ref{xymu}) still satisfied. Four that we shall use are $R, S, M, V$, defined by: \begin{eqnarray} \label{autos} \{x_{Rp}, y_{Rp}, \mu_{Rp}, t_{Rp} \} & = & \{ y_p,\omega x_p, 1/\mu_p, \omega t_p \} \;\; , \nonumber \\ \{x_{Sp}, y_{Sp}, \mu_{Sp}, t_{Sp} \} & = & \{ 1/y_p, 1/x_p, \omega^{-1/2} y_p /(x_p \mu_p), 1/t_p \} \;\; , \\ \{x_{Mp}, y_{Mp}, \mu_{Mp}, t_{Mp} \} & = & \{ x_p, y_p, \omega \mu_p, t_p \} \;\; , \nonumber \\ \{x_{Vp}, y_{Vp}, \mu_{Vp}, t_{Vp} \} & = & \{ x_p, \omega y_p, \mu_p, \omega t_p \} \;\; . \nonumber \end{eqnarray} \subsection*{The central sheet $\cal D$ and its neighbours.} We shall find it natural, at least for the special case discussed below, to regard $t_p$ as the independent variable, and $x_p, y_p, \mu_p$ to be defined in terms of it by (\ref{xymu}). They are not single-valued functions of $t_p$: to make them single-valued we must introduce $N$ branch cuts $B_0, B_1, \ldots, B_{N-1}$ in the complex $t_p$-plane as indicated in Figure \ref{brcuts}. They are about the points $1, \omega, \ldots, \omega^{N-1}$, respectively. \setlength{\unitlength}{1pt} \begin{figure}[hbt] \begin{picture}(420,260) (0,0) \put (50,125) {\line(1,0) {350}} \put (225,0) {\line(0,1) {250}} \put (325,125) {\circle*{9}} \put (175,208) {\circle*{9}} \put (175,42) {\circle*{9}} \put (315,100) {\Large 1} \put (185,214) {\Large $\omega$} \put (184,32) {\Large $\omega^2$} \put (358,100) {{\Large {$B_0$}}} \put (135,219) {{\Large {$B_1$}}} \put (134,22) {{\Large {$B_2$}}} \put (305,10) {{\Large {$t_p$-plane}}} \thicklines \put (295,124) {\line(1,0) {60}} \put (295,125) {\line(1,0) {60}} \put (295,126) {\line(1,0) {60}} \put (160,16) {\line(3,5) {30}} \put (160,17) {\line(3,5) {30}} \put (160,18) {\line(3,5) {30}} \put (160,234) {\line(3,-5) {30}} \put (160,233) {\line(3,-5) {30}} \put (160,232) {\line(3,-5) {30}} \thinlines \ \end{picture} \vspace{1.5cm} \caption{The cut $t_p$-plane for $N=3$.} \label{brcuts} \end{figure} Since the Boltzmann weights are rational functions of $x_p, y_p$, we expect $G_{pq}(r)$, considered as a function of $t_p$ or $t_q$, to also have these $N$ branch cuts. Given $t_p$ in the cut plane of Figure \ref{brcuts}, choose $\mu_p^N$ to be outside the unit circle. Then $x_p$ must lie in one of $N$ disjoint regions centred on the points $1, \omega, \ldots , \omega^{N-1}$. Choose it to be in the region centred on $1$. We then say that $p$ lies in the domain $\cal D$. When this is so (and $t_p$ is not close to a branch cut), then in the limit $k' \rightarrow 0$, $\mu_p^N = O(1/k')$ and $x_p \rightarrow 1$. The domain $\cal D$ has $N$ neighbours ${\cal D}_0, \ldots, {\cal D}_{N-1}$, corresponding to $t_p$ crossing the $N$ branch cuts $B_0, \ldots, B_{N-1}$, respectively.
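These maps and branch choices are easy to check numerically. The following sketch (ours, for $N=3$) verifies that $R, S, M, V$ of (\ref{autos}) map the curve (\ref{xymu}) to itself, and that $V^{-1}R$, the map that will reappear below as the branch-crossing automorphism $A_0$, is an involution leaving $t_p$ unchanged:

\begin{verbatim}
import numpy as np

N = 3
omega = np.exp(2j * np.pi / N)
kp = 0.2                                  # k'
k = np.sqrt(1.0 - kp**2)

def on_curve(p, tol=1e-10):
    x, y, mu, t = p
    return (abs(k * x**N - 1 + kp / mu**N) < tol and
            abs(k * y**N - 1 + kp * mu**N) < tol and
            abs(t - x * y) < tol)

# The automorphisms R, S, M, V of eq. (autos).
R = lambda p: (p[1], omega * p[0], 1 / p[2], omega * p[3])
S = lambda p: (1 / p[1], 1 / p[0], omega**(-0.5) * p[1] / (p[0] * p[2]), 1 / p[3])
M = lambda p: (p[0], p[1], omega * p[2], p[3])
V = lambda p: (p[0], omega * p[1], p[2], omega * p[3])

mu0 = 1.3 + 0.4j
x0 = ((1 - kp / mu0**N) / k) ** (1 / N)   # principal roots: one point in D
y0 = ((1 - kp * mu0**N) / k) ** (1 / N)
p = (x0, y0, mu0, x0 * y0)

assert on_curve(p)
for f in (R, S, M, V):
    assert on_curve(f(p))                 # the curve is preserved

A0 = lambda p: V(V(R(p)))                 # V^{-1} R = V^{N-1} R for N = 3
q = A0(p)
assert abs(q[3] - p[3]) < 1e-12           # t_p is unchanged by the crossing
assert all(abs(a - b) < 1e-10 for a, b in zip(A0(q), p))   # involution: A0^2 = 1
\end{verbatim}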
The automorphism that takes $\cal D$ to ${\cal D}_i$, while leaving $t_p$ unchanged, is \begin{equation} \label{defAi} A_i \; = \; V^{i-1} R V^{N-i} \;\; . \end{equation} The mappings $A_i$ are involutions: $A_i^2 = 1$. \section{Functional relations} We define the ratio function \begin{equation} \label{defGpq} G_{pq}(r) \; = \; \tilde{F}_{pq}(r) /\tilde{F}_{pq}(r-1) \;\; . \end{equation} The functions $\tilde{F}_{pq}(r)$, $G_{pq}(r)$ satisfy two reflection symmetry relations. Also, although we cannot move the break in the $(p,q)$ rapidity line away from the spin $a$, we can rotate its parts about $a$ and then cross them over. As we show in \cite{RJB98} and \cite{RJB05b}, this leads to functional relations for $G_{pq}(r)$: \begin{eqnarray} \label{functrl} G_{Rp,Rq}(r) & = & 1/G_{pq}(N-r+1) \;\; , \nonumber \\ G_{p,q}(r) & = & 1/G_{RSq,RSp}(N-r+1) \;\; , \nonumber \\ G_{pq}(r) & = & G_{Rq, R^{-1} p}(r) \;\; , \\ G_{pq}(r) & = & \frac{x_q \mu_q - \omega^r x_p \mu_p } {y_p \mu_q - \omega^{\, r-1} y_q \mu_p} \; G_{R^{-1}q, R p}(r) \;\; , \nonumber \\ G_{Mp,q}(r) & = & G_{p,M^{-1} q}(r) = G_{pq}(r+1) \;\; , \nonumber \\ \prod_{r=1}^N G_{pq}(r) & = & 1 \;\; . \nonumber \end{eqnarray} Also, {from} (\ref{defMr}), \begin{equation} \label{calcMr} {\cal M}_r \; = \; G_{pp}(1) \cdots G_{pp}(r) \;\; . \end{equation} For the case when $N=2$ we regain the Ising model. As is shown in \cite{RJB98}, there is then a uniformizing substitution such that $x_p, y_p, \mu_p, t_p$ are all single-valued meromorphic functions of a variable $u_p$, and $W_{pq}(n), \overline{W}_{pq}(n)$ and hence $G_{pq}(r)$ depend on $u_p$, $u_q$ only via their difference $u_q - u_p$. In fact all quantities are Jacobi elliptic functions of $u_p, u_q$ with modulus $k$. One can argue (based on low-temperature series expansions) that $G_{pq}(r)$ is analytic and non-zero in a particular vertical strip in the complex $u_q - u_p$ plane. The relations (\ref{functrl}) then define $G_{pq}(r)$. They can be solved by Fourier transforms and one readily obtains the famous Onsager result \begin{equation} {\cal M}_1 \; = \; (1-{k'}^2)^{1/8} \;\; . \end{equation} For $N > 2$ the problem is much more difficult. There then appears to be no uniformizing substitution and $G_{pq}(r)$ lives on a many-sheeted Riemann surface obtainable from $\cal D$ by repeated crossings of the branch cuts. One can argue from the physical cases (when the Boltzmann weights are real and positive) that $G_{pq}(r)$ should be analytic and non-zero when $p, q$ both lie in $\cal D$, but the relations (\ref{functrl}) only relate these sheets to a small sub-set of all possible sheets. There seems to be a basic lack of information. \section{Solvable special case: $q = V p$} The author spent much time mulling over this problem, then towards the end of 2004 he realised that the case \begin{equation} \label{spcase} q \; = \; Vp \end{equation} may be much simpler to handle, and still be sufficient to obtain the order parameters ${\cal M}_r$. The reason it is simpler is that one can rotate the left-half line $p$ anti-clockwise below $a$ until it lies immediately below the half-line $q$, as in Fig. 5 of \cite{RJB05b}. One has to reverse the direction of the arrow, which means the rapidity is not $p$ but $p' = R^{-1}p$.
\setlength{\unitlength}{1pt} \begin{figure}[hbt] \begin{picture}(420,260) (0,0) \multiput(15,75)(5,0){74}{\bf .} \multiput(16,75)(5,0){74}{\bf .} \multiput(15,135)(5,0){74}{\bf .} \multiput(16,135)(5,0){74}{\bf .} \thicklines \put (308,72) {\large $< $} \put (309,72) {\large $< $} \put (310,72) {\large $< $} \put (308,132) {\large $< $} \put (309,132) {\large $< $} \put (310,132) {\large $< $} \put (42,170) {\large $\wedge$} \put (42,169) {\large $\wedge$} \put (42,168) {\large $\wedge$} \put (102,170) {\large $\wedge$} \put (102,169) {\large $\wedge$} \put (102,168) {\large $\wedge$} \put (162,170) {\large $\wedge$} \put (162,169) {\large $\wedge$} \put (162,168) {\large $\wedge$} \put (222,170) {\large $\wedge$} \put (222,169) {\large $\wedge$} \put (222,168) {\large $\wedge$} \put (282,170) {\large $\wedge$} \put (282,169) {\large $\wedge$} \put (282,168) {\large $\wedge$} \put (342,170) {\large $\wedge$} \put (342,169) {\large $\wedge$} \put (342,168) {\large $\wedge$} \thinlines \put (-5,166) {{\Large \it a}} \put (-5,36) {{\Large \it a}} \put (360,83) {{\Large $p' = R^{-1} p$}} \put (393,132) {{\Large $q$ }} \put (121,31) {{\Large \it b}} \put (241,31) {{\Large \it c}} \put (118,161) {{\Large \it e}} \put (238,161) {{\Large \it d}} \put (191,90) {{\Large \it g}} \put (195,105) {\circle*{7}} \put (18,163) {\line(1,-1) {115}} \put (138,163) {\line(1,-1) {115}} \put (258,163) {\line(1,-1) {115}} \put (18,47) {\line(1,1) {115}} \put (138,47) {\line(1,1) {115}} \put (258,47) {\line(1,1) {115}} \put (75,105) {\circle*{7}} \put (315,105) {\circle*{7}} \put (15,45) {\circle{7}} \put (135,45) {\circle{7}} \put (255,45) {\circle{7}} \put (375,45) {\circle{7}} \put (15,165) {\circle{7}} \put (135,165) {\circle{7}} \put (255,165) {\circle{7}} \put (375,165) {\circle{7}} \put (42,10) {{\Large \it v}} \put (102,10) {{\Large \it v}} \put (162,10) {{\Large \it v}} \put (222,10) {{\Large \it v}} \put (282,10) {{\Large \it v}} \put (342,10) {{\Large \it v}} \multiput(45,35)(0,5){28}{.} \multiput(105,35)(0,5){28}{.} \multiput(165,35)(0,5){28}{.} \multiput(225,35)(0,5){28}{.} \multiput(285,35)(0,5){28}{.} \multiput(345,35)(0,5){28}{.} \end{picture} \vspace{1.5cm} \caption{\footnotesize The lattice after rotating the half-line $p$ to a position immediately below $q$.} \label{dblerow} \end{figure} The result is that $p$ enters the sums in (\ref{defZ}), (\ref{avfa}) only via the weights of the edges shown in Figure \ref{dblerow}. The left-hand spins are the same - the spin $a$. The right-hand spins are set to the boundary value of zero. Further, we can sum over the spins between lines $p'$ and $q$. For instance, summing over the spin $g$ gives a contribution \begin{displaymath} U(b,c,d,e) \; = \; \sum_g W_{v p'} (b-g) \overline{W}_{v p'} (c-g) W_{v q} (g-d) \overline{W}_{v q} (g-e) \;\; . \end{displaymath} If $a, \sigma_1, \ldots , \sigma_L$ are the spins on the lowest row of Figure \ref{dblerow}, and $a, \sigma'_1, \ldots , \sigma'_L$ are those in the upper, then the combined weight of the edges shown in Figure \ref{dblerow} is \begin{equation} \label{rowprod} \prod_{i=1}^L U(\sigma_{i-1},\sigma_i, \sigma'_i,\sigma'_{i-1}) \;\; . \end{equation} Now $q = VRp'$, which from (\ref{autos}) means that \begin{equation} x_q = y_{p'} \;\; , \;\; y_q = \omega^2 x_{p'} \;\; , \;\; \mu_q = 1/\mu_{p'} \;\; . \end{equation} This is the equation (3.13) of \cite{BBP90}, the $q,r$ therein being our $p', q$ and $k, \ell$ having the values $0, 2$. 
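In the same exploratory spirit, the star sum $U(b,c,d,e)$ above is straightforward to evaluate numerically. The following sketch (ours, $N=3$; the parameter values are arbitrary) builds a point $p'$ on the curve, forms $q = VRp'$, and computes $U$ directly from the definition; one can then probe, for instance, which spin differences make $U$ vanish, or how $U$ varies with $t_{p'}$:

\begin{verbatim}
import numpy as np

N = 3
omega = np.exp(2j * np.pi / N)
kp = 0.25                                  # k'
k = np.sqrt(1.0 - kp**2)

def point(mu):
    """(x, y, mu) satisfying the curve relations (xymu), principal roots."""
    x = ((1.0 - kp / mu**N) / k) ** (1.0 / N)
    y = ((1.0 - kp * mu**N) / k) ** (1.0 / N)
    return x, y, mu

def W(p, q, n):
    (xp, yp, mup), (xq, yq, muq) = p, q
    n = n % N                              # weights are periodic mod N
    out = (mup / muq) ** n
    for j in range(1, n + 1):
        out *= (yq - omega**j * xp) / (yp - omega**j * xq)
    return out

def Wbar(p, q, n):
    (xp, yp, mup), (xq, yq, muq) = p, q
    n = n % N
    out = (mup * muq) ** n
    for j in range(1, n + 1):
        out *= (omega * xp - omega**j * xq) / (yq - omega**j * yp)
    return out

def U(v, pp, q, b, c, d, e):
    """U(b,c,d,e) = sum_g W_{vp'}(b-g) Wbar_{vp'}(c-g) W_{vq}(g-d) Wbar_{vq}(g-e)."""
    return sum(W(v, pp, b - g) * Wbar(v, pp, c - g) *
               W(v, q, g - d) * Wbar(v, q, g - e) for g in range(N))

pp = point(1.2 + 0.3j)                     # p'
q = (pp[1], omega**2 * pp[0], 1.0 / pp[2]) # q = V R p'
v = point(0.8 - 0.6j)
print(U(v, pp, q, 0, 1, 1, 0))
\end{verbatim}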
From (3.17) of \cite{BBP90}, $U(b,c,d,e)$ vanishes unless $0 \leq {\rm mod}(b-e,N) \leq 1$ and $0 \leq {\rm mod}(c-d,N) \leq 1$. It follows that the spins in the upper row are either equal to the corresponding spins in the lower row, or just one less than them. From (2.29) and (3.39) of \cite{BBP90}, it follows that to within ``gauge factors'' (i.e. factors that cancel out of eqn. \ref{rowprod}) $U(b,c,d,e)$ depends on $p$ very simply: it is {\em linear} in $t_p$. In fact, these Boltzmann weights $U(b,c,d,e)$ are those of the $\tau_2(t_{p'})$ model\cite{BBP90,RJB90,RJB91} mentioned earlier. Just as this model plays a central role in the calculation of the chiral Potts free energy, so it naturally enters this calculation of the order parameters. In the low-temperature limit, when $k' \rightarrow 0$, $\mu_p, \mu_q \sim O({k'}^{-1/N})$, $x_p, x_q \rightarrow 1$, we can verify that the dominant contribution to the sums in (\ref{defZ}), (\ref{avfa}) comes from the case when $\sigma_1, \ldots, \sigma_{L}, \sigma'_1, \ldots, \sigma'_{L} $ are all zero. Also, to within factors that cancel out of (\ref{rowprod}) and (\ref{avfa}), \begin{equation} U(b,c,c,b) = 1 - \omega t_{p'} = 1 - t_p \;\; . \end{equation} It follows that the RHS of (\ref{avfa}), and therefore of (\ref{defGpq}), is a ratio of two polynomials in $t_p$, each of degree $L$, and each equal to $(1-t_p)^L$ in the limit $k' \rightarrow 0$. By continuity (keeping $L$ finite), for small values of $k'$ their $L$ zeros must be close to one. Provided this remains true (which we believe it does) when we take the limit $L \rightarrow \infty$, we expect $G_{p,Vp}(r)$ to be an analytic and non-zero function of $t_p$, except in some region near $t_p = 1$. As $k'$ becomes small, this region must shrink down to the point $t_p = 1$. Similarly, if we rotate the half line $p$ in Figure \ref{sqlattice} clockwise above $a$, we can move it to be immediately above $q$, with $p$ replaced by $Rp$, as in Fig. 6 of \cite{RJB05b}. The $p', q$ of Figure \ref{dblerow} herein are now replaced by $q, Rp$. This corresponds to equation (3.13) of \cite{BBP90} with the $q,r$ therein replaced by $q, Rp$. From (\ref{spcase}) it follows that $k, \ell$ in \cite{BBP90} now have the values $-1, N+1$. The combined star weights $U$ are now those of the $\tau_{N}(t_p)$ model. They are polynomials in $t_p$ of degree $N-1$, except for terms which contribute a factor $x_p^{\epsilon(r)}$ to the contribution of (\ref{rowprod}) to $G_{p,Vp}(r)$, where \begin{equation} \epsilon(r) \; = \; 1 - N \delta_{r,0} \;\; , \end{equation} the $\delta$ function being interpreted modulo $N$, so $\epsilon(0) = \epsilon(N) = 1-N$. When $k' \rightarrow 0$ these polynomials are $(1-\omega t_p) (1-\omega^2 t_p) \cdots (1-\omega^{N-1} t_p)$. In the large-$L$ limit, with $k'$ not too large, we therefore expect $x_p^{-\epsilon(r)} G_{p,Vp}(r)$ to have singularities near $t_p = \omega, \ldots, \omega^{N-1}$, but {\em not} near $t_p = 1$. If we define \begin{equation} \label{greln} g(p;r) \; = \; G_{p,Vp}(r) \;\; , \end{equation} then this implies that the function $ x_p^{-\epsilon(r)} g(p;r)$ does {\em not} have $B_0$ as a branch cut. This is in agreement with the fourth and sixth functional relations in (\ref{functrl}). If we set $q = Vp$ therein we obtain \begin{equation} \label{frln4} x_p^{-\epsilon(r)} g(p;r) \; = \; y_p^{-\epsilon(r)} g(V^{-1} R p;r) \;\; , \end{equation} using $V^{-1}R = R^{-1} V$. Here we have used the fourth relation for $r \neq 0$ and the sixth to then determine the behaviour for $r=0$.
(For $r=0$ the fourth relation merely gives $0 = 0$.) {From} (\ref{defAi}) the automorphism $V^{-1}R$ is the automorphism $A_0$ that takes $p$ across the branch cut $B_0$, returning $t_p$ to its original value, while interchanging $x_p$ with $y_p$. Thus (\ref{frln4}) states that $ x_p^{-\epsilon(r)} g(p;r)$ is the same on both sides of the cut, i.e. it does not have the cut $B_0$. These are the key analyticity properties that we need to calculate $g(p;r)$ and ${\cal M}_r$. We do this in \cite{RJB05b,RJB05a}, but this meeting is in honour of Tony Guttmann, an expert in series expansion methods, so it seems appropriate to describe here the series expansion checks I made (for $N=3$) when I first began to suspect these properties. \section{Consequences of this analyticity} The above observations imply that $g(p;r)$, considered as a function of $t_p$, does {\em not} have the branch cuts of Figure \ref{brcuts}, except for the branch cut on the positive real axis. This means that $g(p;r)$ is unchanged by allowing $t_p$ to cross any of the branch cuts $B_1, \ldots ,B_{N-1}$ and then returning it to its original value, i.e. it satisfies the $N-1$ symmetry relations: \begin{equation} \label{autosA} g(p;r) \; = \; g(A_i \, p;r) \; \; \; {\rm for } \; \; i = 1, \ldots ,N-1 \;\; , \end{equation} $A_i$ being the automorphism (\ref{defAi}). For $N = 3$, this can be checked using the series expansions obtained in \cite{RJB98b}. We use the hyperelliptic parametrisation introduced in \cite{RJB90b,RJB93a,RJB93b}. We define parameters $x, z_p, w_p$ related to one another and to $t_p$ by \begin{equation} \label{defx} (k'/k)^2 = 27 x \prod_{ n =1}^{\infty} \left( \frac{1-x^{3n}}{1-x^n} \right)^{12} \;\; , \end{equation} \begin{equation} \label{eq4.5} w = \prod_{n=1}^{\infty} \frac{(1-x^{2n-1} z/w) (1-x^{2n-1} w/z) (1-x^{6n-5} zw) (1-x^{6n-1} z^{-1} w^{-1})} {(1-x^{2n-2} z/w) (1-x^{2n} w/z) (1-x^{6n-2} zw) (1-x^{6n-4} z^{-1} w^{-1})} \end{equation} (writing $z_p, w_p$ here simply as $z, w$), and \begin{equation} \label{eq27} t_p = \omega \frac{f(\omega z_p)}{f(\omega^2 z_p)} = \frac{f(-\omega /w_p)}{f(-\omega^2/w_p)} = \omega^2 \frac{f(-\omega w_p/z_p)}{f(-\omega^2 w_p/z_p)} \;\; , \end{equation} where $f(z)$ is the function \begin{equation} f(z) \; = \; \prod_{n=1}^{\infty} (1-x^{n-1}z ) (1-x^n/z) \;\; . \end{equation} Note that $x$, like $k'$, is a constant (not a rapidity variable) and is small at low temperatures. We develop expansions in powers of $x$. For $p$ in $\cal D$, the parameters $z_p, w_p$ are of order unity, so to leading order $w_p = z_p +1$, $x_p = 1$, $y_p = (\omega - \omega^2 z_p)/(1- \omega^2 z_p)$. The automorphisms $R,S, V$ transform $z_p, w_p$ to \begin{displaymath} z_{R p} = x z_p \;\; , \;\; z_{Sp} = 1/(x z_p) \;\; , \;\; z_{V p} = -1/w_p \end{displaymath} \begin{equation} w_{R p} = z_p/w_p \;\; , \;\; w_{Sp} = 1/(x w_p) \;\; , \;\; w_{V p} = z_p/w_p \;\; , \end{equation} so from (\ref{defAi}), if $p_i = A_i p$ then \begin{eqnarray} z_{p_0} = -1/(x w_p) , & z_{p_1} = -x w_p/z_p , & z_{p_2} = z_p \nonumber \\ w_{p_0} = -1/(x z_p) , & w_{p_1} = w_p , & w_{p_2} = x z_p/w_p \;\; . \end{eqnarray} If we write $ g(p;r)$ more explicitly as $g(z_p,w_p;r)$, then the relations (\ref{autosA}) become \addtocounter{equation}{1} \setcounter{storeeqn}{\value{equation}} \setcounter{equation}{0} \renewcommand{\theequation}{\arabic{storeeqn}\alph{equation}} \begin{eqnarray} \label{eq1} g(z_p,w_p;r) & = & g(-x w_p/z_p,w_p;r) \\ \label{eq2} g(z_p,w_p;r) & = & g(z_p,x z_p/w_p;r) \;\; .
\end{eqnarray} \setcounter{equation}{\value{storeeqn}} \renewcommand{\theequation}{\arabic{equation}} Using (\ref{defZ}), (\ref{avfa}), we can write (\ref{defGpq}) as \begin{equation} G_{pq}(r) \; = \; \sum_{j=0}^2 \omega^{jr } F_{pq}(j) \left/ \sum_{j=0}^2 \omega^{j(r-1) } F_{pq}(j) \right. \;\; , \end{equation} where $F_{pq}(j)$ is the probability that spin $a$ has value $j$. We use the series expansions (39) - (52) of \cite{RJB98b} for $F_{pq}(1)/F_{pq}(0)$ and $F_{pq}(2)/F_{pq}(0)$ in terms of \begin{equation} \alpha = z_q/z_p \;\; , \;\; \beta = w_q/w_p \;\; . \end{equation} Since $q = Vp$, $ z_{q} = -1/w_p$, $ w_{q} = z_p/w_p$ and we find from (39) of \cite{RJB98b} that $u = -\omega \, w_p/z_p$. (We choose the cube root for $u$ so as to ensure that $F_{pq}(i)/F_{pq}(0)$ is real when $y_p = y_q = 0$, which is when $z_p = \omega^2$, $w_p = -\omega$: we then regain the physically interesting $q = p$ case of eqn. \ref{defMr}.) For $p, q$ in $\cal D$, the parameters $z_p,w_p,z_q,w_q, \alpha, \beta$ are all of order unity, so we can then use the expansion (48) of \cite{RJB98b} to obtain \begin{displaymath} F_{pq}(1)/F_{pq}(0) = \omega^2 \psi_1(z_p) \; = \; \omega^2 \psi_2(-w_p) \;\; , \end{displaymath} \begin{equation} \label{F12} F_{pq}(2)/F_{pq}(0) = \omega \psi_2(z_p) \; = \; \omega \psi_1(-w_p) \;\; , \end{equation} where \begin{displaymath} \psi_1(z) = - (z+1) x + (z+1)^3x^2/z - (z^3+6 z^2+ 16 z +16 +4 z^{-1}+z^{-2}) x^3 \end{displaymath} \begin{displaymath} + (z^4+11 z^3+ 41 z^2+85 z +81+25 z^{-1} + 7 z^{-2}+z^{-3})x^4 + O(x^5) \;\; , \end{displaymath} and \begin{displaymath} \psi_2(z) = z x - (2 z+1 +z^{-1}) x^2 - (z^2- 8 z -2 - 3 z^{-1}-z^{-2}) x^3 \end{displaymath} \begin{displaymath} - ( 2 z^3 - 5z^2+31 z+6 +14 z^{-1} + 5 z^{-2}+z^{-3}) x^4 + O(x^5) \;\; . \end{displaymath} The automorphism (\ref{eq1}) interchanges $\cal D$ with ${\cal D}_1$. To leading order in $x$, the mid-point is when $z_p = \i \, x^{1/2}, w_p = 1$. This is on the boundary of the domain $\cal D$, in which the series (48) of \cite{RJB98b} was obtained, so the series is not necessarily convergent at this point. Nevertheless, if we take $z_p = O(x^{1/2})$ in the above two series, we find the terms originally of order $x^j$ become of order not larger than $x^{(j+1)/2}$. Extrapolating, this suggests that the series do still converge at the midpoint, so we can use them to check whether the symmetry is satisfied. The first check occurs at order $x^{3/2}$, where both series contain a term \begin{displaymath} \pm \, (x z_p - x^2 w_p/z_p) \end{displaymath} (using the fact that to leading order $w_p = 1$ at the midpoint). This is indeed symmetric under $z_p \rightarrow -x w_p/z_p$. If we subtract this term from the series (using the expansion of $w_p$ in terms of $z_p$), we can then check the behaviour at order $x^2$, and similarly then at order $x^{5/2}$. All three checks are satisfied by both series. The perceptive reader will remark that (\ref{F12}) allows us to work with $w_p$ instead of $z_p$. Since $w_p$ is unchanged by $A_1$, the symmetry appears obvious. Indeed it is, but only because a quite remarkable event occurred in deriving these series, namely the $z$ series contains no powers of $z+1$ as denominators, and the $w$ series contains no powers of $w-1$. If one expands $w$ in terms of $z$ (or $z$ in terms of $w$), then one does find such terms. It is their absence from (\ref{F12}) that makes the series obviously convergent near $w = 1$ or $z = -1$.
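The order-$x^{3/2}$ check just described is easily mechanised. A small {\tt sympy} sketch (ours) substitutes the midpoint scaling $z_p = s\,x^{1/2}$ (with $w_p = 1$ to leading order) into the two series above and verifies the symmetry under $s \rightarrow -1/s$, which is the map $z_p \rightarrow -x w_p/z_p$ of (\ref{eq1}):

\begin{verbatim}
import sympy as sp

x, z, s, y = sp.symbols('x z s y')   # y stands for x^(1/2)

psi1 = (-(z + 1)*x + (z + 1)**3*x**2/z
        - (z**3 + 6*z**2 + 16*z + 16 + 4/z + z**-2)*x**3
        + (z**4 + 11*z**3 + 41*z**2 + 85*z + 81 + 25/z + 7*z**-2 + z**-3)*x**4)

psi2 = (z*x - (2*z + 1 + 1/z)*x**2
        - (z**2 - 8*z - 2 - 3/z - z**-2)*x**3
        - (2*z**3 - 5*z**2 + 31*z + 6 + 14/z + 5*z**-2 + z**-3)*x**4)

for psi, sign in ((psi1, -1), (psi2, +1)):
    # midpoint scaling z_p = s x^(1/2), w_p = 1: substitute z = s*y, x = y**2
    e = sp.expand(psi.subs(z, s*y).subs(x, y**2))
    c32 = e.coeff(y, 3)                               # order x^(3/2) terms
    assert sp.simplify(c32 - sign*(s - 1/s)) == 0     # i.e. +-(x z - x^2/z)
    assert sp.simplify(c32.subs(s, -1/s) - c32) == 0  # invariant under A_1
\end{verbatim}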
I have presented the argument in terms of $z_p$ to make it clear that one does indeed have three non-trivial checks on the symmetry to the available order of the series expansion. Similarly, (\ref{eq2}) interchanges $\cal D$ with ${\cal D}_2$, with mid-point $z_p = -1, w_p = \i \, x^{1/2}$. If one now works with $w_p$ as the variable, one can verify to the same three orders the symmetry $w_p \rightarrow x z_p/w_p$. So our series provide no less than six checks on the symmetries (\ref{eq1}), (\ref{eq2}). When I first observed this, I could see the resemblance to the properties of the free energy of the $\tau_2(t_q)$ model. One such property is that $\tau_2(t_q) \tau_2(\omega t_q) \cdots \tau_2(\omega^{N-1}t_q)$ is a rational function of $x_q^N$, so I looked at the series for \begin{eqnarray} \label{defL} {\cal L}(p;r) & = & \prod_{j\, = 0}^{N-1} g(V^j \, p;r) \nonumber \\ & = & g(z_p,w_p;r) \, g(-1/w_p,z_p/w_p;r) \, g(-w_p/z_p,-1/z_p;r) \;\; . \end{eqnarray} Choosing an arbitrary value for $z_p$ and working to 30 digits of accuracy, I soon found that the series (known to order $x^4$) fitted with the simple formulae \begin{equation} \label{Lconj} {\cal L}(p;0) = 1/x_p^2 \;\; , \;\; {\cal L}(p;1) = k^{1/3} x_p \;\; , \;\; {\cal L}(p;2) = k^{-1/3} x_p \;\; . \end{equation} All this strongly suggested that I was on the right track. It did not take long to justify my observations for general $N$. For instance, if $g(p;r)$ only has the branch cut $B_0$, and $x_p^{-\epsilon(r)} g(p;r)$ does not have that cut, then $x_p^{-\epsilon(r)} {\cal L}(p;r)$ does not have the cut $B_0$. But this function is unchanged by $p \rightarrow Vp$, which rotates the $t_p$ plane through an angle $2 \pi/N$. Hence it cannot have any of the cuts $B_0, B_1, \ldots, B_{N-1}$. We do not expect any other singularities (e.g. poles) for $p$ in $\cal D$, so the function is analytic in the entire $t_p$ plane. It is bounded (the Boltzmann weights $W, \overline{W}$ remain finite and non-zero as $y_p \rightarrow \infty$, the ratio $\mu_p/y_p$ remaining finite), so from Liouville's theorem it is a constant (independent of $p$ but dependent on $r$). We can relate these constants to the desired order parameters ${\cal M}_r$ in two ways, and then use these relations to calculate the ${\cal M}_r$. When $y_p = y_q = 0$ and $x_p = k^{1/N}$, our special case $q = Vp$ intersects with the physically interesting case $q = p$, so from (\ref{defMr}), \begin{equation} x_p^{-\epsilon(r)} {\cal L}(p;r) \; = \; k^{-\epsilon(r)/N} \, ({\cal M}_r/ {\cal M}_{r-1})^N \;\; . \end{equation} When $y_p = y_q = \infty$ ($\mu_p/y_p$ remaining finite) and $x_p = k^{-1/N}$ we find not $q = p$ but $q = M^{-1}p$, which is related to $q = p$ by the fifth of the functional relations (\ref{functrl}), giving \begin{equation} x_p^{-\epsilon(r)} {\cal L}(p;r) \; = \; k^{\epsilon(r)/N} \, ({\cal M}_{r+1}/ {\cal M}_{r})^N \;\; . \end{equation} The left-hand sides of these last two equations, being constants, are the same in both equations. We can therefore equate the two right-hand sides, for $r = 1, \ldots, N-1$. Using the fact that ${\cal M}_0 = {\cal M}_N = 1$, we can solve for ${\cal M}_1, \ldots, {\cal M}_{N-1}$ to obtain \begin{equation} {\cal M}_r \; = \; k^{r(N-r)/N^2} \; \; {\rm for \; \;} r = 0, \ldots , N \;\; , \end{equation} which verifies the conjecture (\ref{conj}) of Albertini {\it et al} \cite{AMPT89}. For $N=3$ these results do of course agree with my original conjectures (\ref{Lconj}).
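As an elementary consistency check (ours, not part of the original argument), setting $N=2$ in this formula gives \begin{displaymath} {\cal M}_1 \; = \; k^{1\cdot(2-1)/2^2} \; = \; k^{1/4} \; = \; (1-{k'}^2)^{1/8} \;\; , \end{displaymath} using $k^2 = 1 - {k'}^2$, which is precisely the Onsager result for the Ising model quoted in section 3.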
In \cite{RJB05b} I also show that one can calculate $G_{p,Vp}(r) = g(p;r)$ by a Wiener-Hopf factorization, giving \begin{equation} \label{gS} g(p;r) \; = \; k^{(N+1-2r)/N^2} \, {\cal S}_p^{\, \epsilon (r)} \end{equation} for $r = 1, \ldots , N$, where \begin{equation} \label{defS} \log {\cal S}_p \; = \; - \frac{2}{N^2} \log k + \frac {1}{2 N \pi } \, \int_0^{2 \pi} \frac{k' {\rm e}^{\i\theta}}{1-k' {\rm e}^{\i\theta}} \, \log [\Delta(\theta) - t_p] \, {\rm d}\theta \;\; , \end{equation} and \begin{equation} \Delta( \theta ) \; = \; [(1-2k' \cos \theta + {k'}^2 )/k^2]^{1/N} \;\; . \end{equation} (This function ${\cal S}_p$ should not be confused with the automorphism $S$ defined in (\ref{autos}).) As is implied by the above equations, ${\cal S}_p$ satisfies the product relation \begin{equation} {\cal S}_p {\cal S}_{Vp} \cdots {\cal S}_{V^{N-1} p} \; = \; k^{-1/N} x_p \;\; . \end{equation} Also, if one sets $q = Vp$ in the second of the relations (\ref{functrl}), uses the identity $R S = M V R S V$ and the fifth relation, one obtains $ g(p;r) g(RSVp;N-r) = 1$, from which we can deduce the symmetry \begin{equation} {\cal S}_p \, {\cal S}_{RSVp} = k^{-2/N^2} \;\; . \end{equation} For $N=3$ the automorphism $p \rightarrow RSVp$ takes $z_p, w_p$ to $-w_p, -z_p$, so this relation can then be written \begin{equation} {\cal S}(z_p,w_p ) {\cal S}(-w_p,-z_p) \; = \; k^{-2/9} \;\; . \end{equation} \section{Another interesting case: $q = V^2 p$} We now have the solution for $G_{pq}(r)$ for $q=p$ and for $q= Vp$. This suggests looking at one more case: $q= V^2 p$, where $y_q = \omega^2 y_p$. Similarly to section 5, we set $g_2(p;r) = G_{pq}(r)$ and \begin{displaymath} L_2(p;r) = \prod_{j=0}^{N-1} g_2(V^j p;r) \;\; . \end{displaymath} For $N = 3$ we have used the series expansions of \cite{RJB98b} to obtain for this case \begin{equation} F_{pq}(1) = \omega \phi(w_p) \;\; , \;\; F_{pq}(2) = \omega^2 \phi(1/w_p) \;\; , \end{equation} where \begin{displaymath} \phi(w) = (w - 1)x - (2 w^2 - 2 w + 1)x^2/w + (2 w^3 + 6 w^2 - 6 w + 1) x^3/w \end{displaymath} \begin{equation} - (2 w^4 + 8 w^3 + 24 w^2 - 22 w + 5) x^4/w + O(x^5) \;\; . \end{equation} As in the previous case, the coefficients are Laurent polynomials in $w$. There is no sign of any singularity near $w_p=1$, $t_p = \omega$, so this suggests that $G_{pq}(r)$, considered as a function of $t_p$, does not have the branch cut $B_1$. Indeed, this is a consequence of the third functional relation (\ref{functrl}). Setting $q = V^2 p$ therein, we obtain \begin{displaymath} g_2(p;r) \; = \; g_2(A_1 p;r) \;\; , \end{displaymath} which tells us that $g_2(p;r)$ is unchanged by taking $t_p$ across the branch cut $B_1$ and returning it to its original value. This means that the cut $B_1$ is unnecessary. However, $g_2(p;r)$ does appear to have the other two cuts $B_0$ and $B_2$. To the available four terms in the series expansion we found \begin{displaymath} L_2(p;1) \; = \; x_p^2 \;\; , \end{displaymath} and \begin{equation} L_2(p;0) \; = \; k^{-1/3} x_p^{-1} h(z_p,w_p)^3 \;\; , \;\; L_2(p;2) \; = \; k^{1/3} x_p^{-1} h(z_p,w_p)^{-3} \;\; , \end{equation} where \begin{displaymath} h(z,w) = 1 + (x^2 - 6 x^3 + 35 x^4) (w/z^2 + zw - z/w^2 + 3) \end{displaymath} \begin{equation} + \; x^4 (w^2/z^4 + z^2/w^4 + z^2 w^2 - 3) +O(x^5) \;\; . \end{equation} The result for $ L_2(p;1)$ looks encouraging, and indeed to the four available terms in the series expansion we also find \begin{equation} \label{g2p1} g_2(p;1) = k^{2/9} \, {\cal S}_p \, {\cal S}_{Vp} \;\; .
\end{equation} The results for $ L_2(p;0)$ and $ L_2(p;2)$ are not so encouraging, and I have failed to find any obvious result for these or for $g_2(p;0)$, $g_2(p;2)$. In \cite{RJB05b} I conjecture that for general $N$ the functions $G_{p,V^i p}(r)$ have a simple form as a product of $\cal S$ functions provided $i=0, \ldots, N-1$ and $r = 1, \ldots, N-i$. For other values of $i, r$ they remain a puzzle. (Except when $i=1$ and $r = N$: this case can be deduced from the sixth relation of eqn.~(\ref{functrl}).) If (\ref{g2p1}) is correct, then we have some information on the function $L_{pq}(r)$ of eqn.~56 of \cite{RJB98}. From this and the first equation of (\ref{functrl}), \begin{equation} L_{pq}(r) = G_{pq}(r) G_{Rq,Rp}(r) = G_{pq}(r)/G_{qp}(N-r+1) \;\; . \end{equation} Setting $q= Vp$ and using (\ref{greln}), we obtain \begin{equation} L_{pq}(r) = g(p;r)/g_2(Vp;N-r+1) \;\; . \end{equation} Taking $r=0$, it follows from (\ref{gS}) and (\ref{g2p1}) that \begin{equation} L_{pq}(0) = k^{-4/9} /({\cal S}_p^2 \, {\cal S}_{Vp} \, {\cal S}_{V^2 p} ) = k^{-1/9}/(x_p{\cal S}_p) \;\; . \end{equation} The function $L_{pq}$, for arbitrary $p, q$, was introduced in \cite{RJB98} partly because its square is a rational function of $x_p, y_p$, $\mu_p$, $x_q, y_q$, $\mu_q$ when $N=2$, so the hope was that it might be similarly simple for all $N$. We see that this cannot be so: ${\cal S}_p$ is {\em not} such a function.

\section{Summary} I have outlined the recent derivation of the order parameters of the solvable chiral Potts model, a derivation that verifies a long-standing and elegant conjecture.\cite{AMPT89} As with all calculations on solvable models satisfying the star-triangle relations, the trick is to generalize the model to the point where one has a function, here $G_{pq}(r)$, to calculate, rather than a constant, since one can then obtain relations and properties that define this function. On the other hand, this is an example where it pays {\em not} to over-generalize: we can handle the particular function $G_{p,Vp}(r)$, and this is sufficient for the purpose of obtaining the order parameters. The general $G_{pq}(r)$ continues to defy calculation. Series expansion methods can provide a valuable check on such derivations, which are by their nature believable but hard to make fully mathematically rigorous. One naturally tries to present the argument in as logical a manner as possible, but this is usually {\em not} the manner in which it was originally developed. Here I have indicated the points in the calculation where I found the available checks both reassuring and encouraging.
\section{Introduction and Model} Understanding how the processing of information in neural media is influenced by the biophysical processes that take place at the synaptic level is an open question. In particular, the effect of synaptic dynamics and noise on complex functions such as associative memory is not yet well understood. In relation to this, it has been reported that short term synaptic plasticity plays a main role in the ability of some systems to exhibit switching between stored memories.\cite{jtorresNC} The same behavior ensues if one assumes dynamics of the neuron threshold to fire.\cite{dhornPRA} In both cases, the origin of the switching mechanism seems to be a sort of fatigue of the postsynaptic neuron under repeated presynaptic stimulation. This destabilizes the current attractor, which may result in a transition to a new attractor. It would be interesting to put this in a more general perspective concerning the role of noise in associative memory tasks. With this aim, we present in this paper a \emph{stochastic neural automaton} that involves two independent competing dynamics, one for neurons and the other for synapses.

Consider $N$ (binary) neuron variables, $s_{i}=\pm 1$, any two of them linked by synapses of intensity $w_{ij}$; $i,j=1,\ldots ,N$. The interest is on the configurations $\mathbf{S}\equiv \{s_{i}\}$ and $\mathbf{W}\equiv \{w_{ij}\}$. In order to have a well--defined reference, we assume that interactions are determined by the Hopfield \textit{energy} function. Furthermore, consistent with the observation that memory is a global dynamic phenomenon, we take the model dynamics to be determined at each time step by a single pattern, say $\mu$. Consequently, $H(\mathbf{S},\mathbf{W};t)=- \frac{1}{2}\sum_{i}\sum_{j\neq i}w_{ij}^{\mu }s_{i}s_{j}$ with $\mu =\mu(t)$, where we assume the Hebbian learning rule $w_{ij}^{\mu }=\frac{k}{N}\xi _{i}^{\mu }\xi_{j}^{\mu }$. Here, $\xi _{i}^{\mu }=\pm 1$ are the variables that characterize pattern $\mu$, one out of the $P$ \emph{memorized} ones, and $k$ is a proportionality constant. Therefore, each configuration $\mathbf{W}$ is unambiguously associated with a single $\mu$, and we write $\mathbf{W}\equiv \mu $ in the following. The above may be formulated by stating that the probability of any configuration $(\mathbf{S},\mu )$ evolves in discrete time according to \begin{equation} P_{t+1}({\mathbf S},\mu )=\sum_{{\mathbf S^{\prime }}}\sum_{\mu^{\prime }} T[ (\mathbf{S},\mu) | (\mathbf{S^{\prime}},\mu^{\prime}) ] P_{t}(\mathbf{S^{\prime }},\mu ^{\prime }), \label{discrete_master_equation} \end{equation} where $T[ ( \mathbf{S},\mu ) | ( \mathbf{S^{\prime }},\mu^{\prime }) ]$ represents the probability of jumping from $(\mathbf{S^{\prime }},\mu ^{\prime })$ to $(\mathbf{S},\mu )$. We explicitly consider here the case in which \begin{equation} T[ (\mathbf{S},\mu) | ( \mathbf{S^{\prime }},\mu^{\prime}) ] = T_{0}^{\mu ^{\prime }}[ \mathbf{S} | \mathbf{S^{\prime }} ] \times T_{1}^{\mathbf{S}}[ \mu | \mu ^{\prime} ] \label{probability_of_jumping} \end{equation} with $T_{0}^{\mu^{\prime }}[ \mathbf{S} | \mathbf{S^{\prime}} ]$ corresponding to \textit{Little dynamics}, i.e., parallel updating, so that $T_{0}^{\mu^{\prime }}[ \mathbf{S} | \mathbf{S^{\prime }} ]= \prod_{i=1}^{N}t_{0}^{\mu^{\prime }}[s^{\prime },i]$.
Furthermore, $t_{0}^{\mu^{\prime }}[s^{\prime },i] \equiv \Psi [ \beta _{0}\Delta H^{\mu^{\prime }}( s_{i}^{\prime } \rightarrow s_{i}= \pm s_{i}^{\prime}) ]$, where $\Psi(X)$ is an arbitrary function, except that it is taken to satisfy \textit{detailed balance} (see Ref.\cite{jmarroBOOK} for a discussion), $\beta _{0}$ is an (inverse) temperature parameter, and $\Delta H$ denotes the \textit{energy} change brought about by the indicated transition. For changes in the synapses, we take $T_{1}^{\mathbf{S}}[ \mu | \mu ^{\prime } ]=\Psi[ \beta _{1} \Delta H^{\mathbf{S}}(\mu ^{\prime }\rightarrow \mu) ]$. We also take $\sum_{\mathbf{S}}\sum_{\mu }T[ (\mathbf{S},\mu ) |( \mathbf{S^{\prime }},\mu ^{\prime })] =1$ for any $(\mathbf{S^{\prime }},\mu ^{\prime })$. After some algebra, one has that $\Delta H^{\mu ^{\prime }}(s_{i}^{\prime } \rightarrow s_{i}=\pm s_{i}^{\prime})= -k\xi _{i}^{\mu ^{\prime }}( s_{i}-s_{i}^{\prime }) (m^{\prime \mu ^{\prime }}-s_{i}^{\prime }\xi _{i}^{\mu ^{\prime }}/N)$ and $\Delta H^{\mathbf{S}}(\mu ^{\prime }\rightarrow \mu )=-\frac{1}{2}kN[ ( m^{\mu }) ^{2}-( m^{\mu ^{\prime }}) ^{2}]$, where $m^{\mu }(\mathbf{S})\equiv m^{\mu }$ is the overlap between the current state $\mathbf{S}$ and pattern $\mu$. The factor $N$ in $\Delta H^{\mathbf{S}}$ appears because we assume \emph{global} energy variations (i.e., at each step an attempt is made to change all the synapses in the configuration) instead of the energy variation per site in $\Delta H^{\mu^{\prime }}$.

This model differs in essential ways from apparently similar proposals, e.g., \cite{jtorresPRL,jtorresJPA,acoolenPRB}. First, because it assumes the same time scale for changes in both $\mathbf{S}$ and $\mu$. On the other hand, the choice here for $T[ ( \mathbf{S},\mu ) | ( \mathbf{S^{\prime }},\mu ^{\prime }) ]$ amounts to driving the neuron activity and the synaptic intensities at different temperatures, $\beta _{0}^{-1}\equiv T_{0}$ and $\beta _{1}^{-1}\equiv T_{1}$, respectively. The case of our model with a single pattern is equivalent to the equilibrium Hopfield model with $P=1$; for more than one pattern, however, new nonequilibrium steady states ensue. This is ultimately due to the fact that $T[ ( \mathbf{S},\mu ) |( \mathbf{S^{\prime }},\mu ^{\prime })]$ does not satisfy detailed balance.\cite{jmarroBOOK} In principle, one may estimate from (\ref{discrete_master_equation}) how any observable $F(\mathbf{S},\mu )$ evolves in time. The result is an equation $\langle F\rangle _{t+1}=f_{t}({\bar{K}},F),$ where ${\bar{K}}$ is the set of control parameters and $\langle \cdots \rangle $ denotes the statistical average with $P(\mathbf{S},\mu )$. \cite{jmarroBOOK} Alternatively, one may be directly concerned with the time evolution of the probability distribution written in terms of the overlaps $ \mathbf{m}\equiv \{ m^{\nu };\nu =1,\ldots ,P\}$. One has that $\Pi _{t+1}(\mathbf{m},\mu )=\sum_{S}\delta \lbrack \mathbf{m}-\mathbf{m(S)}]P_{t+1}(\mathbf{S},\mu )$ satisfies \begin{equation} \Pi _{t+1}(\mathbf{m},\mu )=\int d\mathbf{m^{\prime }}\sum_{\mu ^{\prime }} \bar{T}[ ( \mathbf{m},\mu ) |( \mathbf{m^{\prime }},\mu ^{\prime }) ] \,\Pi _{t}(\mathbf{m^{\prime }},\mu ^{\prime }). \label{effective_master_equation} \end{equation} \begin{figure}[h!] \psfig{file=NEUCOM_fig1.eps,width=7.5cm} \caption{{\small Phase diagram showing three different phases. ($F$) {\it Ferromagnetic}, for $T_{0}<T_{0}^{3}(T_{1})$, with $\mathbf{m}\neq 0$ and $ j=0$. The system has \textit{static} associative memory.
($P$) {\it Paramagnetic}, for $T_{0}>T_{0}^{1}(T_{1})$, with $\mathbf{m}=0$ and $j=0$, without any kind of associative memory. ($O$) {\it Oscillatory}, for $T_{0}^{3}(T_{1})<T_{0}<T_{0}^{1}(T_{1})$, with $\mathbf{m}=0$, $j \neq 0$ and \textit{dynamic} associative memory, e.g., there are jumps between patterns that are either uncorrelated ($O(II)$) or time-correlated ($O(I)$), as explained in the main text. The transition between $O(I)$ and $O(II)$ is discontinuous. Here, $N=16384$ and $P=3$ spatially correlated patterns with an average overlap of 20\% between any two of them. } } \end{figure} This amounts to reducing the degrees of freedom, from a number of order $2^{N}+1$ in $(\mathbf{S},\mu )$ to $P+1$ in $(\mathbf{m},\mu ).$ Dealing with this sort of coarse--grained master equation requires an explicit expression for $\bar{T}[ ( \mathbf{m},\mu ) |( \mathbf{m^{\prime }},\mu ^{\prime }) ] $ which we take as \cite{acoolenDYN} $\bar{\Psi}[ \beta _{1}\Delta H^{\mathbf{m}}( \mu ^{\prime }\rightarrow \mu ) ] \mathcal{K}\int d \mathbf{q}\exp [ N\Phi ( \beta _{0},\mathbf{m},\mathbf{m^{\prime }},\mathbf{q},\mu ^{\prime }) ]$. Here, $\mathcal{K}$ is a constant, and $\mathbf{q}$ is the conjugated momentum of $\mathbf{m}$. Hence, $\mu $ and $\mathbf{m}$ evolve separately in time. Changes in $\mu $, given ${\mathbf{m}}$, are controlled by $\bar{\Psi}[ \beta _{1}\Delta H^{\mathbf{m} }( \mu ^{\prime }\rightarrow \mu ) ]$, while ${\mathbf{m}}$ evolves according to the term $\int d\mathbf{q}\exp [ N\Phi ( \beta _{0}, \mathbf{m},\mathbf{m^{\prime }},\mathbf{q},\mu ^{\prime }) ]$ with a fixed $\mu^{\prime }$. A justification of this equation and a detailed study of its consequences will be reported elsewhere.\cite{jcortesTOBE}

\section{Simulations} Here we report on some preliminary results of a Monte Carlo study of this model which reveals an intriguing situation. Different regimes are shown in Figure $1$ depending on the values of the temperatures $T_{0}$ and $T_{1}$. To distinguish between them, we introduce the overlap (${\bf m}$) and the total number of jumps ($j$); three regimes occur that are close to the ones reported in \cite{jtorresNC}. There is an oscillatory phase, which is illustrated in Figure $2$. The system in this case has associative memory, as in the Hopfield model. Here, however, this is a dynamic process, in the sense that the system, trapped in an attractor corresponding to a pattern, is able to jump to the other stored patterns. Because the probability of jumping depends on the neural activity, this mechanism is, in general, a complex process. \begin{figure}[h!] \psfig{file=NEUCOM_fig2.eps,width=7.5cm} \caption{{\small Activity of neurons versus time for $N=100$ neurons and $P=4$ patterns. Here, $T_{0}=0.9T_{0}^{c}$ and $T_{1}=1.69T_{1}^{c}$, where $T_{0}^{c}$ and $T_{1}^{c}$ are the corresponding critical values of temperatures.} } \end{figure} One might argue that these jumps are a finite size effect; this does not seem to be the case, however. Similar jumping phenomena, apparently independent of the size of the system,\cite{mmunozEURLET} have already been described in kinetic Ising--like models in which disorder is homogeneous in space and varies with time, and mean--field solutions also exhibit these phenomena. Some finite--size effects are evident, however; the synaptic temperature, for instance, scales with size. In fact, we obtain $T_{1}^{c}/N=0.0431 \pm 0.0001$ for $N=1024,1600$ and $4096$; consequently, we redefine $\beta _{1}\equiv \beta _{1}N$ from now on.
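To make the two competing dynamics concrete, the following minimal Monte Carlo sketch implements one time step of the model, choosing the Glauber form $\Psi(X)=1/(1+{\rm e}^{X})$ (any function satisfying detailed balance would do) and a simple random-proposal scheme for the pattern jumps; the parameter values are purely illustrative and this is not the production code used for the figures.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, P, k = 1024, 3, 1.0
T0, T1 = 0.5, 1.0                      # neuron / synapse temperatures
xi = rng.choice([-1, 1], size=(P, N))  # stored patterns xi[mu, i]
s, mu = xi[0].copy(), 0                # initial configuration (S, mu)

def step(s, mu):
    m = xi @ s / N                     # overlaps m^nu
    # Little dynamics: parallel updates against the current pattern mu;
    # dH for flipping s_i follows from the single-pattern energy above
    dH = 2.0 * k * xi[mu] * s * (m[mu] - xi[mu] * s / N)
    p_flip = 1.0 / (1.0 + np.exp(np.clip(dH / T0, -60, 60)))
    s = np.where(rng.random(N) < p_flip, -s, s)
    # pattern dynamics: attempt mu -> nu with the *global* energy change
    # dH = -(k N / 2) (m_nu^2 - m_mu^2), at temperature T1
    m = xi @ s / N
    nu = int(rng.integers(P))
    dHp = -0.5 * k * N * (m[nu] ** 2 - m[mu] ** 2)
    if rng.random() < 1.0 / (1.0 + np.exp(np.clip(dHp / T1, -60, 60))):
        mu = nu
    return s, mu

for t in range(1000):
    s, mu = step(s, mu)    # counting changes of mu gives the jump number j
\end{verbatim}
Note that, after the rescaling $\beta_1 \equiv \beta_1 N$ just introduced, the synaptic temperature used in such a sketch must be scaled accordingly.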
A series of our computer experiments concerned $N=65536$ and $ P=6.$ In order to study the oscillatory phase in detail, it turned out to be convenient to look at time correlations. Therefore, we used correlated patterns, namely, there was an average overlap of 20\% between any two of the stored patterns. The goal was to detect non--trivial correlations between jumps, so we computed the time $\tau _{\nu \gamma }$ the system \textit{remains} in pattern $\nu $ before jumping to pattern $\gamma ;$ $ \sum_{\gamma =1}^{P}\tau _{\nu \gamma }=\tau _{\nu }$ is the total time the system stays in pattern $\nu .$ This reveals the existence of two different kinds of oscillatory behavior. One is such that $\tau _{\nu \gamma }\simeq \tau ,$ independent of $\nu $ and $\gamma .$ That is, the system stays for the same time at each pattern, so that jumping behaves as a completely random process, without any time correlation. This is denoted by \textit{O(II)} in Figure $1$. Even more interesting is phase \textit{O(I)}. As the probability of jumping between patterns is activity dependent, lowering $T_{0}$ leads to non--trivial time correlations, namely, $\tau _{\nu \gamma }$ depends on both $\nu $ and $ \gamma .$ We also observe that $\tau _{\nu \gamma }$ differs from $\tau _{\gamma \nu }$. This peculiar behavior suggests that some spatio-temporal information may be coded in phase \textit{O(I)}. \begin{figure}[h!] \psfig{file=NEUCOM_fig3.eps,width=7cm,angle=270} \caption{{\small Probability distribution for the time the system stays in a pattern before jumping to another one in the phase \textit{O(II)}. \underline{Inset:} Tail for large events. }} \end{figure} In order to understand further these two different jumping mechanisms, we simulated for $T_{0}=\{0.1,0.5,1.1\}$, in units of $T_{0}^{c},$ at fixed $T_{1}=1.36T_{1}^{c}.$ The resulting probability distribution of the time $\tau _{\nu }$ averaged over $\nu ,$ $ P(\tau ),$ is shown in Figure $3$. The data are well fitted by $P(\tau )=A\exp (-B\tau ^{2})\tau ^{2}.$ This predicts that $\langle \tau ^{2}\rangle ^{2}=\frac{9}{ 64}\pi ^{2}\langle \tau \rangle ^{4}$, which compared with our simulations gives relative errors of $e(\%)=\{3.3,3.8,11.2\}$ for $T_{0}=\{0.1,0.5,1.1\}$, respectively. The error increases with $T_{0}$ because the overlaps then tend to become too small and jumps are, consequently, not so well-defined. Also interesting is the average time $\tau$ before jumping, because a divergence of $\langle \tau \rangle $ would indicate that the overlap is stable and no jumps occur. The {\it trial} distribution above gives $\langle \tau \rangle =A/(2B^{2}).$ Therefore, $B$, which also enters the probability normalization as $A=4( B^{3}/\pi ) ^{1/2}$, indicates whether there are jumps $(B\neq 0)$ or not $(B=0)$; $B$ measures the jumping frequency. It is also worth studying the tails of the distributions for large events. This is illustrated in Figure $3$ ({\em Inset}). One may argue that, as in Ref.\cite{phurtadoTOBE} for somewhat related phenomena, this tail is due to the superposition of many exponentials, each corresponding to a well--defined type of jumping event. We are presently studying this possibility in detail. We acknowledge P.I. Hurtado for very useful comments and financial support from MCyT-FEDER, project BFM2001-2841, and J.J.T.'s \textit{Ram\'{o}n y Cajal} contract.
\section{Introduction} Cataclysmic variables are close binary systems composed of a white dwarf that accretes matter from a red dwarf or subgiant star via an accretion disk, if the magnetic field of the primary is negligible. Classical novae are eruptive cataclysmic variables with only one high amplitude outburst observed. The spectra of recent novae may show, depending on their evolutionary epoch after the eruption, a complex superposition of the accretion disk spectrum, permitted and forbidden shell lines, and eventually signatures of the secondary star (see \citet{Jon31} for the spectral evolution of RR Pic before the eruption). The irradiation of the secondary by the shell ionizing source may induce additional line emission \citep*{Per02}, though the shielding of the white dwarf by the accretion disk should be relevant once accretion has been reestablished. Novae and some nova-like systems usually present intense HeII emission lines. The ratio HeII/H$\beta$ is often much smaller for quiescent dwarf novae than it is in nova remnants and nova-likes. The UX UMa type nova-likes present a variable intensity of HeII 4686, as in IX Vel, where the HeII and CIII/NIII lines are present on some nights and absent on others \citep{Hes90}. It has been suggested in the past that the HeII line is not produced by viscous heating in the accretion disk, but is a recombination line produced by photoionization in a region illuminated by the boundary-layer ionizing photons \citep{Wil80}.

RR Pic is a cataclysmic variable classified as a classical nova, with its eruption recorded in 1925. \citet*{van66} observed periodic variations in its light curve, and also observed eclipses which did not occur in all conjunction phases. \citet{Vog75} determined RR Pic's orbital period from photometric observations as being 0.1450255(2) d. \citet{War86} presented RR Pic light curves from 1972 to 1984, showing an intense brightness modulation and active flickering. The presence of shallow and irregular eclipses is mentioned by \citet{War87}. Spectroscopy of the nova shell filaments was performed by \citet*{Wil79} as part of their study of the physical conditions in nova ejecta. They also estimated the separation between the two main knots in the shell of RR Pic, first observed by van den Boos and Finsen (cited in \citet{Jon31}). The observation of the shell knots at different epochs reveals their average expansion velocity. The most recent imaging of RR Pic showing the shell features and dimensions was obtained by \citet*{Gil98}. They measured a 30'' separation between two knots at opposite sides of the shell, so we can take this value as an approximation of the current apparent dimension of the shell. Fast photometric variability was first detected by \citet{War76} with a period of about 30 s; the variability was confirmed \citep{War81} with periodicities at 20 s and 40 s, with a more persistent period of 32 s. \citet*{Fri98} performed a wavelet transform study of the flickering of some cataclysmic variables, including RR Pic. Such a study shows that RR Pic has intense fast photometric activity when compared to other novae. This nova is also suspected of being an intermediate polar \citep{Kub84}, on the basis of the detection of a coherent brightness modulation of about 15 s in the U, B and V bands. However, \citet*{Hae85} could not confirm the existence of this period using a large photometric database. RR Pic's orbital period is 0.14502545(7) d, as calculated by \citet{Kub84}.
\citet*{Hae91} performed time resolved spectroscopy of the H$\alpha$ line, presenting the first measurements of the line profile variations with the orbital phase. \citet{Sch03} also studied the H$\alpha$ and HeI line profiles. In this work we propose to locate and quantify the HeI, HeII and Balmer line sources, as well as to constrain the stellar masses in this system. The observations and data reduction are detailed in section 2. The radial velocity study, mass constraints and Doppler tomography are shown in section 3. A discussion of the results is presented in section 4. Finally, our conclusions are outlined in section 5.

\section{Observations} The RR Pic spectrophotometric observations were made from 2001 to 2003. The observations were performed with the 1.60 m Perkin-Elmer telescope at LNA - Brazil and with the 1.52 m ESO telescope at La Silla - Chile. In both cases we used Cassegrain spectrographs with a spectral resolution of about 2 \AA. The first observations were aimed at the H$\alpha$ line. The later ones were intended to cover the H$\beta$, HeII and H$\gamma$ spectral lines. For more details see the journal of observations below. \placetable{tbl-1} The slit position angle was chosen to include a comparison star in the slit and also to be at an angle that avoids the bright shell knots (see \citet*{Gil98} for a shell image). The slit width was set to include a fraction of about 2/3 of the star's seeing disk, so a negligible part of the shell emission is included in our spectra. In general, the contribution of the shell emission over the stellar profile is well subtracted by interpolating the local background along the spatial direction. The observations were bracketed by arc lamp exposures on a regular basis to allow a good wavelength calibration of the spectra. The interval between consecutive lamp exposures was estimated considering the spectrograph mechanical flexure maps, aiming to minimize their effects on the derived velocities. The dispersion solutions were interpolated in airmass for each target exposure. Differential spectrophotometry was performed, using the integrated flux from the slit comparison star. In order to perform the absolute flux calibration of all the spectra, tertiary spectrophotometric standard stars were also observed \citep{Ham94}. Wide slit observations of the slit comparison star were made to correct our spectra for slit losses and for differential atmospheric dispersion effects. All the data reduction was made using standard IRAF\footnote[1]{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} reduction procedures. The images were bias subtracted and flatfield corrected. Then the spectra were extracted using an optimal extraction algorithm \citep{Hor86}, and calibrated in wavelength and flux. The spectra in the red range were also corrected for telluric absorption effects in the vicinity of the H$\alpha$ line using a scaled telluric absorption template with the same spectral resolution.

\section{Results} \subsection{Spectral Features} We present in figure 1\notetoeditor{figure 1 must be placed as a double column figure} the average spectra in the red region, covering the H$\alpha$ and HeI 6678 lines, and in the blue region, covering the H$\beta$, H$\gamma$ and HeII 4686 lines. The Balmer and HeI lines seem double-peaked, while the HeII line appears single-peaked with extended wings.
Only the most intense lines can be used for Doppler tomography. We can see that near HeII there is a blend of CIII and NIII lines. Unfortunately, the blending between them is too severe and prevents the Doppler mapping of these lines. No absorption lines from the secondary could be detected. \placefigure{fig1} \subsection{Line Profile and Radial Velocity Study} We can see in figure 1 that the HeII line is blended with the CIII/NIII complex, but this blending does not strongly compromise the HeII blue wing profile, so we can simply limit the maximum velocity used in the HeII radial velocity study to avoid the nearby lines. The maximum velocity was fixed at 1000 km s$^{-1}$. Using the orbital period proposed by \citet{Kub84} and our spectroscopic conjunction phase (see fig. 4), the spectra were binned in phase boxes. From the phase diagrams it could be noticed that the lines do not present a large oscillation around the rest wavelength, indicating a low radial velocity semi-amplitude for the primary. As we go to the line wings we can see that the oscillation around the rest wavelength becomes more noticeable. No emission is found above 1200 km s$^{-1}$. The H$\alpha$ line profile does not show a clear single peaked profile at any orbital phase (fig. 2)\notetoeditor{figure 2 must be placed as a single column figure}. The H$\alpha$ line presents a more intense emission near $\phi$ = 0. The HeI line is also more intense near phase $\phi$ = 0, has a more structured shape compared to the H$\alpha$ line, and also shows emission at larger velocities. For H$\beta$, H$\gamma$ and HeII, an increase of the line intensity is found near phase 0.6. The phase sampling of our data was verified, confirming that this intensity increase could not be caused by irregular phase coverage. \placefigure{fig2} \placefigure{fig3} The HeI and HeII (fig. 3)\notetoeditor{figure 3 must be placed as a single column figure} phase diagrams have different shapes, but it is important to recall that they were observed at different epochs. The H$\alpha$ phase diagram is slightly different from those of H$\beta$ and H$\gamma$, and they were also derived from data taken at different dates. The H$\beta$ line data were used to estimate the primary's radial velocity semi-amplitude $K_1$. This line was chosen because it is one of the most intense lines in our spectra, because it is not blended, and because there are more independent observations in the blue dataset than in the red one. The diagnostic diagram (fig. 4)\notetoeditor{figure 4 must be placed as a single column figure} is built using radial velocities derived by convolving the line profiles with a double Gaussian mask (\citet*{Sch80}, hereafter SY). A mask with 50 km s$^{-1}$ (FWHM) Gaussians was applied. Different values of the Gaussians' half-separation were used in order to sample different projected velocities in the line profile. The Gaussian half-separation velocity ``$|V|$'' appears in the diagram as the horizontal axis. For each half-separation a radial velocity curve is obtained and fitted with a periodic function, and the parameters of this fit are given on the y axis of the panels in the diagnostic diagram. The phase scale is given by the spectroscopic inferior conjunction of the secondary, i.e. the timing of the positive-to-negative crossing of the line wing radial velocity curves. The ``best'' value of $K_1$ found in the line wings is the one at minimum RMS, on a plateau in the $K_1$ curve.
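For concreteness, a possible implementation of the double Gaussian convolution is sketched below. This is an assumed reconstruction of the method, not our reduction code; the mask FWHM is the 50 km s$^{-1}$ value quoted above, while the velocity grid and root-search interval are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

FWHM = 50.0                       # Gaussian FWHM in km/s
SIG = FWHM / 2.3548

def wing_velocity(vel, flux, half_sep):
    """vel: velocity grid (km/s) about the rest wavelength."""
    def conv(v0):
        g = (np.exp(-0.5 * ((vel - (v0 - half_sep)) / SIG) ** 2)
             - np.exp(-0.5 * ((vel - (v0 + half_sep)) / SIG) ** 2))
        return np.trapz(g * flux, vel)
    # the wing radial velocity is the zero of the antisymmetric
    # convolution near the line centre
    return brentq(conv, -500.0, 500.0)

def rv_curve(phi, gamma, K1, phi0):
    # positive-to-negative crossing of the wing velocities at phi0
    return gamma - K1 * np.sin(2.0 * np.pi * (phi - phi0))
\end{verbatim}
One wing velocity is measured per spectrum and per half-separation $|V|$; fitting {\tt rv\_curve} to each set yields one ($\gamma$, $K_1$, $\phi_0$) triplet per $|V|$, i.e. one column of the diagnostic diagram.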
In order to estimate $K_1$, we averaged the $K_1$ values for velocities ranging from 466 km s$^{-1}$ to 605 km s$^{-1}$, obtaining $K_1$ = 37(1) km s$^{-1}$. A systemic velocity $\gamma$ = 1.8(2) km s$^{-1}$ and the spectroscopic secondary conjunction phase $\phi_0$ = 2452295.7744(3) HJD were estimated using the same range in ``$|V|$''. One can see that the diagnostic diagram is well behaved: for such a large velocity range in the line wings we have a small value of the systemic velocity and also a small zero phase variation. The problem of estimating the white dwarf orbital velocity using emission lines in CVs is a classical issue. It is, of course, desirable that the measurement of $K_1$ be performed at the highest possible velocity in the wings. We simply expect the high velocity gradient regions in the disk to suffer from fewer anisotropies. A plateau in the diagnostic curve is also expected if there are no anisotropies. Choosing $K_1$ from a steep diagnostic curve is much more complicated. Possibly, different authors with different S/N ratios in the line wings will find different $K_1$ values. For instance, if it is a rising $K_1$ versus $|V|$ diagnostic curve, the observer who has the best S/N will derive the largest $K_1$ values. The uncertainty in $K_1$ is derived from the dispersion between $K_1$ values in the diagnostic diagram plateau. The sinusoid amplitude fitting uncertainty of individual $K_1$ points is of the same order (2 km s$^{-1}$). However, both are just formal errors. The uncertainty in $K_1$ propagates to the derived mass ranges, so these ranges are just formal as well. They do not include the possibility that the measured $K_1$ velocities in CVs are not the white dwarf orbital velocities. We have cross-checked our H$\beta$ wing velocities with H$\alpha$ measurements using the same convolution masks, and comparable results were found. Again, H$\beta$ and H$\alpha$ were measured at different epochs. In the next sections the $K_1$, $\gamma$ and phase reference values found here will be adopted. \placefigure{fig4} Another way of obtaining the primary's radial velocity semi-amplitude is from a Doppler tomogram centered at the system's center of mass. Circular isophotes with increasing radius (or velocity modulus) were fitted to the H$\beta$ tomogram constructed as described above. The inner isophotes will follow the bright features in the Doppler map, but the outer ones will tend to trace the intrinsic high velocity emission in the disk. In the first panel of figure 4 the continuous line represents the $K_1$ values obtained from the displacement of each isophote center from the origin along the y-axis. It is found that the value of $K_1$ obtained from this method is larger than that obtained from the double Gaussian mask convolution for $|V| <$ 700 km s$^{-1}$. The two become marginally compatible only at $|V| >$ 700 km s$^{-1}$, on the SY plateau. This difference could be explained if we consider that in the SY method we have sampled the emission from regions with different intrinsic velocities that yield the same projected velocity inside the Gaussian mask. In the next steps we will use the value of $K_1$ obtained from the double Gaussian method. The SY method was preferred to the tomogram isophote fitting in the particular case of RR Pic because $K_1$ is much smaller than the tomogram FWHM resolution.
The presence of asymmetries in the brightness distribution at high velocities may impact the determination of the isophote centers and, consequently, produce a relatively large systematic effect on $K_1$. A radial velocity study of RR Pic was also performed by \citet{Sch03}, using the H$\alpha$ and HeI 6678 spectral lines. These authors estimated a primary radial velocity semi-amplitude of $K_1 \sim$ 170 km s$^{-1}$. This value could not be confirmed by our measurements of the H$\beta$ line wings. \citet{Sch03} have measured $K_1$ at wing velocities of $\sim$ 800 km s$^{-1}$. There is almost no signal at such velocities. This can be verified by inspecting their tomograms and phase maps. In addition, both their radial velocity analysis and Doppler tomography were computed using a rather small number of independent phases (19 spectra). \subsection{Constraints on Stellar Masses} Considering the geometry of the system, the presence of shallow eclipses, a maximum disk radius \citep{Pac77} and the volume radius of the secondary \citep{Egg83}, we find a possible inclination range $60 \degr < i < 80 \degr$. This result is similar to that obtained by \citet{Hor85} considering the eclipse of the central region of the disk. \placefigure{fig5} In figure 5\notetoeditor{figure 5 must be placed as a single column figure} we present the mass diagram for RR Pic. A lower limit of 0.2 M$_\sun$ for the primary mass is derived from the H$\beta$ line FWZI. The upper limit for its mass was fixed at 1.4 M$_\sun$. In addition, the secondary mass must be roughly equal to or lower than the primary's for a stable accretion regime. The secondary also needs to have a mass below the limit for a main sequence secondary filling its Roche lobe \citep{Pat84}. Using broad limits on M$_1$ ($0.2 < M_1 < 1.4~M_\sun$), in addition to the previously derived value K$_1$ = 37 km s$^{-1}$ and inclination range ($60\degr < i < 80\degr$), we found from the mass function that $0.09 < q < 0.2$. A secondary radial velocity semi-amplitude $K_2$ ranging from 200 km s$^{-1}$ to 400 km s$^{-1}$ is also derived. A bootstrapping simulation \citep{Horb86} was performed aiming to confirm the mass intervals given by the mass diagram. An inclination range of 60$\degr$ to 80$\degr$ with a flat error distribution and a $K_1$ value of 37(1) km s$^{-1}$ were used in this simulation. The results of this simulation are shown in figure 5. One can see from this same figure that the secondary must have a mass smaller than 0.15 M$_\sun$. This indicates a low mass ratio ($q = M_2 / M_1$) for the system. Calculating the secondary's mean density from equation 1 \citep{War95}, we find $\bar\rho$ = 8.8 g cm$^{-3}$. From this result one finds that the spectral type of the secondary should be near M5 if it is a main sequence star \citep{All00}. \begin{equation} \bar \rho_2=107 P_{orb}^{-2}(h)~~ g~cm^{-3} \end{equation} The fact that the secondary's mass is much smaller than that of a main sequence star filling its Roche lobe suggests that the secondary may have evolved from the main sequence. One can also constrain the white dwarf mass considering the fact that RR Pic had a moderately fast nova outburst. The white dwarf mass is probably greater than 0.6 M$_\sun$, as expected from classical nova outburst models \citep{Sta89}. In addition, the white dwarf mass is probably not too close to the Chandrasekhar limit, since no recurrent nova outbursts were observed in RR Pic over the last 80 years \citep{Web87}.
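A sketch of such a bootstrapping-style simulation is given below. It draws $K_1$ and $i$ from the stated distributions and, for trial primary masses, inverts the standard mass function $P K_1^3/(2\pi G) = (M_2 \sin i)^3/(M_1+M_2)^2$; the flat trial distribution in $M_1$ is an assumption made here for illustration, and the original simulation may differ in detail.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

G = 1.327e11              # G * Msun in km^3 s^-2
P = 0.14502545 * 86400.0  # orbital period in seconds
rng = np.random.default_rng(1)

def m2_from_mass_function(K1, inc, M1):
    f = P * K1**3 / (2.0 * np.pi * G)        # mass function, Msun
    sin3 = np.sin(np.radians(inc)) ** 3
    g = lambda M2: M2**3 * sin3 / (M1 + M2) ** 2 - f
    return brentq(g, 1e-4, 2.0)

K1 = rng.normal(37.0, 1.0, 10000)
inc = rng.uniform(60.0, 80.0, 10000)
M1 = rng.uniform(0.2, 1.4, 10000)
M2 = np.array([m2_from_mass_function(a, b, c)
               for a, b, c in zip(K1, inc, M1)])
q, K2 = M2 / M1, K1 * M1 / M2                # K2 = K1 / q
print(np.percentile(M2, 97.7), np.percentile(q, [2.3, 97.7]))
\end{verbatim}
For $M_1$ = 1 M$_\sun$, $i$ = 70$\degr$ and $K_1$ = 37 km s$^{-1}$ this gives $M_2 \approx 0.1$ M$_\sun$, consistent with the values adopted below.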
In the next steps we will use the masses given by the center of the most probable mass region in the mass diagram: $M_1$ = 1 M$_\sun$ and $M_2$ = 0.1 M$_\sun$. \subsection{Doppler Tomography} The Doppler tomography method was applied to the brightest emission lines, obtaining Doppler maps with origin at the system's center of mass. Before interpreting the Doppler tomograms, it is important to notice that the H$\alpha$ and HeI observations and the H$\beta$, H$\gamma$ and HeII data were taken at different epochs. This fact implies that tomograms from each dataset should be compared only with tomograms from the same dataset. The positions of the secondary and the binary center of mass, as well as the primary and the $L_1$ Lagrange point, were plotted for a 90$\degr$ orbital inclination. These points will be displaced in their y-values for a different orbital inclination. The self-consistency of the Doppler maps could be verified by comparing the tomogram projections with the observed spectra at the corresponding orbital phases. The projections show good agreement with their equivalent line profiles, both in flux and in line profile shape. The H$\alpha$ tomogram (fig. 6)\notetoeditor{figure 6 must be placed as a single column figure} presents a ring shape, as expected for an accretion disk reconstruction, where there is a low velocity limit at the outer disk radius and an observational high velocity limit at the inner disk. The H$\alpha$ tomogram presents its most intense emission in the $(-V_x, +V_y)$ and $(-V_x, -V_y)$ quadrants. In contrast, the H$\beta$ and H$\gamma$ Doppler tomograms also present the signature of a disk, but with a weaker emission in the $(+V_x, -V_y)$ quadrant. When compared to the H$\alpha$ map, an emission deficit in the $(+V_x, +V_y)$ quadrant is verified in the upper Balmer transitions. \placefigure{fig6} \placefigure{fig7} The HeI Doppler tomogram (fig. 7)\notetoeditor{figure 7 must be placed as a single column figure} presents a ring shape, but it shows the inner ring radius at velocities greater than those found in the H$\alpha$ tomogram, suggesting that the HeI emission is enhanced in the inner disk region. An emission enhancement in the lower part of the tomogram is also seen. The HeI Doppler tomogram is noisier than the other tomograms because the HeI 6678 line is much fainter than the other ones. The HeII 4686 Doppler tomogram (fig. 7) presents a distinct behavior when compared to the Balmer and HeI tomograms, showing emission at very low velocities. This low velocity emission can be explained by the presence of a wind coming from the accretion disk, or by emission from stationary material inside the Roche lobe. This emission could also be associated with gas spilling over the disk. The line production mechanism for such a vertically extended gas distribution may be recombination, as the gas is easily irradiated by the inner disk region and the boundary layer. Following this interpretation, the region of enhanced emission in the $-V_x$ region of the HeII tomogram could be associated with the hot spot, where the stream hits the accretion disk. An enhanced emission in this same quadrant can also be seen in the H$\beta$ tomogram. \subsection{Accretion Disk's Radial Emissivity Profiles} The disk radial emissivity profiles obtained from the H$\alpha$, H$\beta$, H$\gamma$, HeI 6678 and HeII 4686 spectral lines are presented in figure 8\notetoeditor{figure 8 must be placed as a double column figure}. The Doppler maps previously discussed are centered on the system's center of mass.
By including the primary's radial velocity, one may shift the Doppler tomograms to be centered on the white dwarf. The radial disk emissivity profile is estimated from these tomograms by calculating the mode of the intensity over concentric rings centered at the origin. The mode was chosen as the statistical estimator because it allows us to obtain the emissivity of the disk while disregarding large emission anisotropies. Using a primary mass of 1 M$_\sun$, the Doppler tomograms were converted from velocity space to position space using a Keplerian velocity law. The radial emissivity profiles are corrected for reddening considering the color excess $E(B-V)=0.02$ \citep*{Bru94} and R = 3.1. The radial emissivity profile slopes obtained are -1.5 for H$\alpha$ and H$\beta$, -1.7 for H$\gamma$ and HeII, and -1.9 for HeI; the error of these values is about 0.1. \placefigure{fig8} The HeII radial emissivity profile shows two subsets of points with different behaviors and slopes. The discontinuity between these two subsets is at about 500 km s$^{-1}$, so this difference cannot be attributed to a blending effect with the CIII/NIII complex, since blending effects should only appear at velocities greater than 1000 km s$^{-1}$. From figure 8 it can be seen that the emissivity slopes for the H$\alpha$ and H$\beta$ lines are similar, and that the profile seems steeper for the HeI and HeII emission. Note, however, that if there is a wind contribution in HeII it will also be present in the radial emissivity profile, so this disk emission profile may be contaminated by a wind component. To convert the Doppler tomograms to position space we assumed a Keplerian velocity law, which is not necessarily valid, possibly introducing additional errors in the radial emissivity curves. The emission power law index depends on the mass of the primary in the sense that the power law becomes steeper with increasing mass. Hence, no absolute value of the power law index can be given. However, as the change of the power law index with the primary's mass must be the same for all lines, the fact that one line emission is more centrally concentrated than another is independent of the primary's mass.

\section{Discussion} To date, there is no well-established photometric ephemeris for RR Pic in the literature. While the presence of a grazing eclipse seems to be confirmed, there is no published O-C diagram for these eclipses. As far as we understand, the phasing adopted by \citet{Sch05} is based on a single feature in the \citet{War86} light curves, which was interpreted as the grazing eclipse. If correct, this ephemeris would result in a spectroscopic phase offset of 0.17. Since RR Pic presents high amplitude flickering, no fiducial phasing from grazing eclipses can be firmly established without combining several eclipse light curves and analyzing their phase residuals. Therefore, a discussion about the presence of a spectroscopic phase shift (as observed in several CVs) awaits a better definition of the eclipse ephemeris. The secondary's mass obtained from the mass diagram is significantly smaller than the limit given by a main sequence star filling its Roche lobe. One interpretation that directly arises from this result is that the secondary could be more evolved than a main sequence star. No absorption features from the secondary that could support such a hypothesis are found in our spectra.
\citet*{Har05} performed K-band infrared spectroscopy of RR Pic and did not detect any obvious absorption feature from the secondary either, so the secondary's spectral type remains elusive. From the radial emission profiles in fig. 8 it is possible to verify that the HeI emission is more centrally concentrated than the Balmer and HeII emission. We can draw a preliminary comparison between the RR Pic radial emissivity profiles and other estimates in the literature. The slopes of the RR Pic radial emission profiles are shallower than those of other systems. \citet*{Dia03} found a radial emissivity profile slope of -2.1 for the H$\alpha$ line and -2.4 for the HeI 6678 line in V841 Oph. \citet*{Dia99} found -2.3 for the H$\beta$ line and -2.9 for HeII in V347 Pup. In this study we obtained -1.5 for the H$\alpha$ and H$\beta$ lines, -1.7 for the H$\gamma$ and HeII 4686 lines and -1.9 for the HeI 6678 line; the errors are about 0.1. From these values we conclude that the line emission is less concentrated in RR Pic (assuming a 1 M$_\sun$ white dwarf) than it is in V841 Oph and V347 Pup. Since the power law index increases as the white dwarf mass decreases, in order to reach the H$\beta$ power law index found in V347 Pup, RR Pic would have to harbour a white dwarf of approximately 0.3 solar masses, which seems extremely unlikely. The discontinuity found in the HeII emission profile may be regarded as evidence of non-Keplerian motion or wind emission in the HeII line source regions. One may ask why we do not observe the diffuse emission in the Balmer tomograms as observed in HeII. The diffuse component is also present in the Balmer lines, but for these lines the disk emission may be more intense than the diffuse component. In the HeII case, the diffuse emission, produced by recombination, may be dominant when compared to the emission originating in the accretion disk.

\section{Conclusions} A radial velocity study of the RR Pic system was performed using extensive spectrophotometric observations. A value for the primary's radial velocity semi-amplitude of 37(1) km s$^{-1}$, considerably smaller than the value of about 170 km s$^{-1}$ given by \citet{Sch03}, was found. This small primary radial velocity implies a secondary star mass below 0.16 M$_\sun$. This mass estimate is approximately half of the limiting mass of a main sequence star filling its Roche lobe, which points to an evolved companion star. The primary's mass could not be constrained due to the absence of the secondary's photospheric lines. From the fact that RR Pic presents shallow eclipses, the system's orbital inclination was constrained to an interval between 60$\degr$ and 80$\degr$. The mass ratio $q$ could be constrained to the wide interval between 0.09 and 0.2. The H$\alpha$ and HeI Doppler images of RR Pic show a clear ring signature. Furthermore, the H$\beta$ and H$\gamma$ Doppler maps present ring shaped structures, while the HeII map shows an enhanced emission at low velocities, indicating that this high ionization line is produced in a velocity field that is different from that of the disk. Radial emissivity profiles were obtained from the tomograms, indicating a more concentrated emission for the H$\gamma$ line than for H$\alpha$ and H$\beta$, and a more concentrated emission from HeI when compared to HeII. In addition, the RR Pic disk may present less radially concentrated emissivity profiles when compared to other novae and nova-likes.
However, the emission distribution in other quiescent disks should be derived in order to explore its correlation with other properties of the binary system. \acknowledgments This work is based on data obtained at the LNA/CNPq and La Silla/ESO observatories. F.M.A.R. is grateful for support from FAPESP fellowship 01/07078-8. MD acknowledges support from CNPq under grant \#301029.
\section{Introduction} Ultracompact X-ray binaries (UCXBs) are very tight, interacting binaries ($\approx 10^{10}$ cm orbit) with periods $\le 80$ min. They contain neutron stars (NSs) or black holes accreting from a low-mass ($\le 0.1 {\rm M}_{\odot}$) degenerate companion. UCXBs are a sub-class of the low-mass X-ray binaries (LMXBs). It has been known for a long time that bright LMXBs are overabundant in globular clusters (GCs) compared to the Galactic field (Katz 1975, Clark 1975). This led to the suggestion that LMXBs, and consequently UCXBs, are preferentially formed in the dense environment of the GC cores through various stellar interactions, such as direct collisions between a NS and a red giant (Verbunt 1987, Davies et al.~1992, Ivanova et al.~2005), tidal capture of a main sequence star by a NS (Bailyn \& Grindlay 1987), or exchange interactions between NSs and primordial binaries (Rasio et al.~2000). It has been suggested that most of the 13 bright LMXBs in GCs might be UCXBs (Bildsten \& Deloye 2004, Ivanova et al.\ 2005). However, only two have been confirmed in GCs to date. These are 4U\,1820-30 in NGC\,6624 ($P_{\rm{orb}} = 11.4$ min, Stella et al.~1987) and 4U\,1850-087 in NGC\,6712 ($P_{\rm{orb}} = 20.6$ min, Homer et al.~1996). Two other GC sources have been suggested as particularly strong UCXB candidates (Homer 2003). One of these candidates is CXO\,J212958.1+121002 in M\,15, also known as M\,15 X-2 (White \& Angelini 2001). M\,15 is the only Galactic GC known to harbour {\it two} bright LMXBs. A single source, 4U\,2127+119, was detected in early X-ray studies and identified with the optical counterpart AC\,211 (Auri{\`e}re et al. 1984, Charles et al. 1986). {\it Chandra} observations later resolved 4U\,2127+119 into {\it two} X-ray sources (White \& Angelini 2001). One of these was the previously known LMXB AC\,211. The second source, CXO\,J212958.1+121002 or M\,15 X-2, is actually 2.5 times brighter than AC\,211 in X-rays. Based on {\it Hubble Space Telescope (HST)} data from Guhathakurta et al.\ (1996), White \& Angelini (2001) identified a blue $U = 18.6$ mag star as the optical counterpart to the second source (star 590 in De\,Marchi \& Paresce 1994). However, the orbital period of M\,15 X-2 has so far not been determined. Here, we present FUV data of M\,15 X-2 taken with the {\it HST} that allow us to classify the source as a UCXB. In Sect.~\ref{data} we describe the data and their reduction. We present the analysis of the photometry and determine the period of M\,15 X-2 in Sect.~\ref{analysis}. We summarize our results and conclusions in Sect.~\ref{summary}.

\section{Observations and data reduction} \label{data} M\,15 was observed with the Advanced Camera for Surveys (ACS) on board the {\it HST} in September and October 2003, and October to December 2004. Images were taken using the far-ultraviolet (FUV) filters F140LP, F150LP, and F165LP in the Solar Blind Channel (SBC), and the near-UV (NUV) F220W filter in the High Resolution Channel (HRC). For our variability study (see Sect.~\ref{analysis}) only relative magnitudes are needed. These were derived directly from the individual flatfielded images.
Aperture photometry was carried out using {\tt daophot} (Stetson 1991) running under {\tt IRAF} \footnote{{\tt IRAF} (Image Reduction and Analysis Facility) is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} using an aperture radius of 4 pixels and a sky annulus of 50 to 60 pixels. For this purpose, we only used the 90 exposures taken in SBC/F140LP, since these data provide the longest time coverage (from October 14 to December 5, 2004). Eighty of these images had exposure times of 300 sec, four were exposed for 140 sec and six for 40 sec. One orbit of observation yielded 9 data points. In order to determine the time-averaged spectral energy distribution (SED) of our counterpart, we also carried out {\it absolute} photometry on combined and geometrically corrected images for each filter. These master images were created using {\tt multidrizzle} running under {\tt PyRAF}. For our aperture photometry, we used an aperture radius of 5 pixels for all SBC FUV data, a smaller radius of 4 pixels in the HRC NUV data, and a sky annulus of 5 to 7 pixels. The smaller aperture in the NUV was chosen to avoid the effects of severe crowding. In the FUV, aperture corrections were determined via curves of growth constructed from isolated stars in our master images. For the NUV data, we used the encircled energy fractions published by Sirianni et al. (2005). In order to derive a reliable SED, we ideally want mean fluxes in non-overlapping wavelength windows. However, the ACS/SBC/F140LP bandpass fully includes F150LP, and F150LP in turn includes F165LP. We therefore created two artificial narrow-band filters that are defined as the differences between the actual filters, i.e.\ we define F140N = F140LP - F150LP and F150N = F150LP - F165LP. Thus the count rate of a source in F140N, for example, is simply obtained by subtracting its count rate in F150LP from that in F140LP. Fig.~\ref{spect} (top panel) shows the resulting throughput curves for the artificial filters. As can be seen, they barely overlap and are thus ideal for characterizing the UV SED of our source. In order to convert count rates into STMAGs, we used the {\tt synphot} package running under {\tt IRAF}. Full details on our analysis procedure will be provided in a forthcoming publication that will present our entire FUV/NUV data set for M\,15.

\section{Analysis and Discussion} \label{analysis} \subsection{Source Identification} We identified the FUV counterparts to AC\,211 and to M\,15 X-2 on the F140LP images, using the Guhathakurta et al.~(1996) {\it HST} images and the positions provided by White \& Angelini (2001) as a reference guide. Fig.~\ref{XB2} shows a close-up of the FUV (SBC/F140LP) and NUV (HRC/F220W) master images, centred on AC\,211 and M\,15 X-2. The {\it Chandra} positions for these sources are also indicated, after shifting them by $\approx 1\farcs2$~south in order to optimally align the X-ray and FUV positions of AC\,211. The centre of the FUV counterpart to M\,15 X-2 is slightly offset ($0\farcs165$ south) from the {\it Chandra} position, but still well within the internal $0\farcs25$ {\it Chandra} error radius for this source (White \& Angelini 2001). Our offset is consistent with the $0\farcs13$ offset that White \& Angelini (2001) found between their {\it Chandra} positions and the optical counterpart from De\,Marchi \& Paresce (1994).
Both AC\,211 and M\,15 X-2 are clearly detected as strong FUV and NUV sources in our ACS images. \subsection{Time-Series Analysis} Fig.~\ref{lightcurve} shows the mean-subtracted light curve for all observing epochs. Low-amplitude variations with a peak-to-peak amplitude $> 0.1$ mag can be seen. We searched for a periodic signal by carrying out $\chi^{2}$ fits for a grid of trial frequencies. The resulting periodogram is shown in Fig.~\ref{power}. The best fit yields a $\chi_{\nu}^{2}=1.31$ and suggests a period of $22.5806$ min. In order to test the coherence of this period, we carried out Monte Carlo simulations. Briefly, we created 10000 fake data sets with the same time sampling, periodic signal and noise characteristics as our real data, but with a random phase offset assigned to each of the six observing epochs. Thus the phase was fully coherent within each epoch, but fully randomized between them. We then again carried out a sequence of $\chi^{2}$\ fits for each data set. Next, we fixed the period at the best global value and fitted each epoch separately with both phase and amplitude as free parameters. We then defined a {\it phase coherence index} (PCI, see e.g.\ Haswell et al.\ 1997), which in our case is simply the $\chi^{2}$ of the phase estimates for the individual epochs, with the phase of the global fit as the reference value. {\it We find that only 0.9\% of the fake data with randomized phases have a PCI as good as the real data.} We can therefore reject the null hypothesis that the periodic signal loses coherence completely over time-scales comparable to our inter-epoch spacing. The latter is $\approx 11$ days, corresponding to $\approx 700$ cycles. We can view this as a constraint on the period derivative, i.e. $\dot{P}$ must be small enough so that less than one cycle is lost over $N \simeq 700$ cycles, i.e.\ $\dot{P} \lesssim N^{-1}$. The quality factor of the 22.58~min signal must therefore be $Q = \dot{P}^{-1} \gtrsim 700$. By contrast, mHz QPOs tend to have $Q \approx 10$ (e.g. Chakrabarty et al.\ 2001, Boroson et al.\ 2000). As a further check, we carried out Monte Carlo simulations in which the input sinusoids were coherent, i.e. the phase was fixed at the same value for each epoch. This produced a PCI distribution that was consistent with the PCI of the real data set. Thus the 22.58~min signal is consistent with being fully coherent over the entire $\simeq 3300$\ cycles spanned by our observations. All of these tests support the orbital nature of this signal. We also used our Monte Carlo simulations to estimate the statistical error on the parameters of the observed signal. For this purpose, we again used coherent input signals and conservatively used error bars scaled so as to yield $\chi_{\nu}^{2} =1$ for the fit to the real data. We then used the standard deviation of the periods and amplitudes found for the fake data sets to estimate the errors on the measured parameters. The final results were $P = 22.5806 \pm 0.0002$ min for the orbital period and $a = 0.062 \pm 0.004$ mag for the semi-amplitude. This yields an ephemeris for the time of maximum light \begin{equation} T_{max}\rm{(BJD)} = 2453308.88743(16) + 0.01568096(13) \times E, \end{equation} where the numbers in brackets give the errors on the last two digits. A sine wave with M\,15\,X-2's period, amplitude and fixed phase is overplotted on the lightcurves in Fig.~\ref{lightcurve}. This visually confirms that there is no sign of a loss of coherence over the entire observational time span of 3312 cycles. 
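The core of the period search described above can be sketched as follows: for each trial frequency a sinusoid is fitted by weighted linear least squares, and the $\chi^{2}$ of the fit defines the periodogram (the implementation details here are assumed, not taken from our actual code).
\begin{verbatim}
import numpy as np

def chi2_periodogram(t, y, dy, freqs):
    """t in days, freqs in cycles/day; returns chi^2 per frequency."""
    chi2 = np.empty(len(freqs))
    for j, f in enumerate(freqs):
        w = 2.0 * np.pi * f * t
        A = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
        coef, *_ = np.linalg.lstsq(A / dy[:, None], y / dy, rcond=None)
        chi2[j] = np.sum(((y - A @ coef) / dy) ** 2)
    return chi2

# freqs = np.arange(30.0, 130.0, 1e-3)   # trial grid, cycles/day
# best  = freqs[np.argmin(chi2_periodogram(t, y, dy, freqs))]
\end{verbatim}
The Monte Carlo coherence test then amounts to re-running this fit on fake data sets with the epoch phases either randomized or held fixed, and comparing the resulting PCI distributions with the real one.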
We conclude that the periodic signal is almost certainly an orbital modulation. \footnote{We note that the average time resolution of our data is $\approx 5.5$ min, which corresponds to a Nyquist frequency of $\nu_{Ny} \approx 130~{\rm d}^{-1}$. A period of $\approx 7.4$ min above $\nu_{Ny}$ could be reflected to yield our observed signal of 22.6 min. However, such short orbital periods are extremely unlikely (see e.g. Deloye \& Bildsten 2003, Homer 2003).} \subsection{Continuum Spectral Energy Distribution} The SED of M\,15 X-2 is shown in Fig.~\ref{spect} (top panel). Each point is plotted at the average wavelength of the corresponding filter. As expected for compact, interacting binaries, the SED of M\,15 X-2 rises towards the blue. However, it is also worth noting that there seems to be an excess of flux in F150N, i.e.\ around 1550~\AA. This excess flux can be caused by additional C\,IV $\lambda = 1550$~\AA\ and/or He\,II $\lambda = 1640$~\AA\ line emission. We therefore fit a power law to the two bracketing data points only. The best fit is found for a power-law index of $-2.0 \pm 0.2$. This power-law spectrum is overplotted in Fig.~\ref{spect}. The depression at $\approx 2200$~\AA\ is due to the well-known reddening feature at that wavelength. \subsection{Evidence for Line Emission} \label{ev_line} The excess flux in F150N seems to indicate line emission due to C\,IV at~$\lambda = 1550$~\AA\ and/or He\,II $\lambda = 1640$~\AA\ (the latter would also contribute to F165LP). However, such a peak might also be caused simply by a turnover in an otherwise smooth continuum. We have therefore carried out synthetic photometry for blackbodies (BBs) with temperatures $100000~{\rm K} \ge T_{eff} \ge 10000~{\rm K}$. Fig.~\ref{spect} (bottom panel) shows the resulting BB sequence in the F140N-F150N vs F150N-F165LP colour-colour diagram (CC diagram). Note that we reddened all synthetic photometry by M\,15's $E(B-V)=0.1$ mag (Harris 1996). As expected, the BBs are located on a sequence going from blue (for hot sources) to red colours (for cool sources). The cross on the sequence marks $T_{eff} = 20200$ K, which is the temperature of a BB peaking at $\lambda = 1550$~\AA, in the centre of the F150N filter. The observed location of M\,15 X-2 in the CC diagram is also marked and is clearly distinct from the blackbody sequence. The reason is simply that the F150N filter is much narrower than the peaks of the BB distributions. Thus, the latter cannot cause a strong excess in this filter alone. We caution that true stellar spectra {\it can} have turnovers sharper than suggested by a comparison with BB spectra, as the example of AC\,211 shows (Downes et al.\ 1996). The location of a power-law $F_{\lambda} \propto \lambda^{-2.0}$ spectrum is also marked in the CC diagram. In order to check how strong a line would be needed to account for the observed flux excess, we have also carried out synthetic photometry of power law spectra (with index -2.0) {\it and} an emission line at C\,IV $\lambda = 1550$~\AA\ with equivalent widths (EW) of 10~\AA\ to 50~\AA. As can be seen, M\,15 X-2 is located close to the power-law + C\,IV 1550~\AA\ sequence, but slightly above it, which suggests additional He\,II emission. We then carried out synthetic photometry of power law spectra with both a C\,IV $\lambda = 1550$~\AA\ line ($\rm{EW} \simeq 30$ \AA) {\it and} a He\,II $\lambda = 1640$~\AA\ emission line with EWs of 10~\AA\ to 60~\AA.
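This kind of synthetic photometry can be sketched as follows; idealized top-hat bandpasses with assumed edges stand in for the real throughput curves (which in our analysis come from {\tt synphot}), so the resulting colours are only indicative.
\begin{verbatim}
import numpy as np

lam = np.arange(1250.0, 2000.0, 1.0)           # wavelength, Angstrom

def gauss_line(lam0, ew, fwhm=8.0):
    """Emission line of given EW relative to a unit continuum."""
    sig = fwhm / 2.3548
    return (ew / (sig * np.sqrt(2.0 * np.pi))
            * np.exp(-0.5 * ((lam - lam0) / sig) ** 2))

cont = (lam / 1500.0) ** -2.0                  # power-law continuum
flux = cont * (1.0 + gauss_line(1550.0, 30.0) + gauss_line(1640.0, 30.0))

def stmag(band):                               # STMAG from mean F_lambda
    lo, hi = band
    m = (lam >= lo) & (lam < hi)
    return -2.5 * np.log10(np.trapz(flux[m], lam[m]) / (hi - lo)) - 21.10

c1 = stmag((1370.0, 1520.0)) - stmag((1520.0, 1590.0))  # F140N - F150N
c2 = stmag((1520.0, 1590.0)) - stmag((1590.0, 1800.0))  # F150N - F165LP
print(c1, c2)
\end{verbatim}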
We conclude that the SED of M\,15 X-2 can be described by a power law $F_{\lambda} \propto \lambda^{-2.0}$ with additional C\,IV 1550~\AA\ and He\,II 1640~\AA\ emission lines with $\rm{EW} \simeq 30$~\AA\ each. However, given that there are three free parameters in this model, a perfect match to just three colours is of course guaranteed. Spectroscopy will be needed to confirm the spectral shape and the existence of line emission. \section{Discussion} \label{summary} Knowledge of the orbital period of M\,15 X-2 allows us to derive a better-defined picture of this ultracompact system. Eggleton (1983) showed that for small mass ratios $0.01 \le q = M_{2}/M_{1} \le 1$ the mean density $\rho$ of the Roche lobe-filling companion becomes a function of $P_{orb}$ mainly, i.e.\ $P_{orb} \times \rho^{1/2} \simeq 0.438$ (with $P_{orb}$ in days and $\rho$ in $\rm{g}\ \rm{cm}^{-3}$). For an orbital period of 22.6 min, this gives $\rho = 786~\rm{g}~\rm{cm}^{-3}$, which is consistent with the mean density of a low-mass white dwarf (WD). We can then use the mass-radius relationships published by Deloye \& Bildsten (2003, their Fig.~4) to constrain the minimum mass of the donor star to $0.02~\rm{M}_\odot \le M_{2,min} \le 0.03~\rm{M}_\odot$ and its minimum radius to $0.03~\rm{R}_\odot \le r_{2,min} \le 0.04~\rm{R}_\odot$, depending on composition. These lower limits correspond to low-temperature donors. Using Kepler's third law and assuming a system mass dominated by a $1.4~\rm{M}_\odot$ NS, we estimate the binary separation to be $\approx 2.1 \times 10^{10}$ cm. A blackbody of $T_{eff} \approx 32000$ K has a blue spectral slope most similar to the one we fitted to M\,15 X-2 (see Fig.~\ref{spect}). Placed at M\,15's distance, such a blackbody would need a radius of $r_{bb} \approx 6.5 \times 10^{9}$ cm, or $0.1\ \rm{R}_\odot$, to match the flux we measured for M\,15 X-2. This is larger than expected for the radius $r_{2}$ of the degenerate companion, but comparable to the circularization radius of $r_{circ} \approx 0.2\ \rm{R}_\odot$ of the accretion disk. We therefore conclude that the FUV light comes from the accretion disk rather than from the WD donor. This is consistent with the UCXB model of Arons \& King (1993), in which the orbital modulation is caused by the irradiation of the WD donor. Using their Eq.~15 we estimate the inclination angle of the system to be $i \approx 34^{\circ}$. This relatively face-on inclination is consistent with the absence of eclipses. We note that no modulation can be seen in X-rays (Hannikainen et al.\ 2005). M\,15 X-2's X-ray luminosity was found to be $L_{X} \approx 1.4 \times 10^{36}\ \rm{erg}\ \rm{s}^{-1}$ (White \& Angelini 2001, Hannikainen et al.\ 2005). Assuming a 10 km radius and a mass of $1.4\ \rm{M}_\odot$ for the NS, this requires $\dot{M} > L_X R_\star / G M_\star \approx 10^{-10}\ \rm{M}_\odot\ \rm{yr}^{-1}$. This can be compared to the accretion rate expected from conservative mass transfer driven by angular momentum loss via gravitational radiation in an UCXB \begin{equation} \dot{M}_{gr} = 1.27 \times 10^{-8} \, q^{2} \, M_\star^{8/3} \, P_{orb}^{-8/3}(\rm{h}) \, (1+q)^{-1/3} \, (5/6 + n/2 -q)^{-1}. \label{mdot} \end{equation} Taking our minimum donor mass and the corresponding mass-radius index $n \simeq -0.1$ (e.g. Deloye \& Bildsten 2003), we derive a lower limit of $\dot{M}_{gr} \approx 4 \times 10^{-10}\ \rm{M}_\odot\ \rm{yr}^{-1}$. This suggests that the observed X-ray emission can be powered by gravitational radiation-driven mass transfer.
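As a cross-check, the order-of-magnitude estimates of this section can be reproduced with a few lines (constants in cgs; the Eggleton coefficient is applied with $P_{orb}$ in days, which is consistent with the numbers quoted above):

```python
import numpy as np

G, Msun = 6.674e-8, 1.989e33          # cgs
P = 22.5806 * 60.0                    # orbital period [s]

# Eggleton (1983): P_orb * rho^(1/2) ~ 0.438, P in days, rho in g cm^-3
rho = (0.438 / (P / 86400.0)) ** 2
print(f"mean donor density  rho ~ {rho:.0f} g cm^-3")      # ~780

# Kepler's third law, system mass dominated by a 1.4 Msun neutron star
M = 1.4 * Msun
a = (G * M * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)
print(f"binary separation   a   ~ {a:.2e} cm")             # ~2.1e10

# minimum accretion rate to power L_X ~ 1.4e36 erg/s onto a 10 km NS
Lx, Rns = 1.4e36, 1.0e6
mdot = Lx * Rns / (G * M) * 3.156e7 / Msun
print(f"required  Mdot_min      ~ {mdot:.1e} Msun/yr")     # ~1e-10
```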
We conclude that M\,15 X-2 can be classified as a UCXB, only the third confirmed such system in a GC. Our results are consistent with the idea that many GC LMXBs are indeed UCXBs. \acknowledgments We thank Tom Maccarone, Tom Marsh, Geoff Daniell and Chris Deloye for valuable discussions. This work was supported by NASA through grant GO-9792 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555.
\section{Introduction} \label{sec:intro} In systems with strong electron-phonon (e-ph) interaction the carriers lose mobility, ultimately acquiring polaronic character. A polaron is a state in which the phonon and electron degrees of freedom are strongly entangled, and the presence of an electron is associated with a finite lattice distortion, which in turn binds the electron, leading to the so-called self-trapping effect. Polarons also tend to form bound pairs, called bipolarons; of course, the presence of Coulomb repulsion destabilizes bipolarons in favor of a pure polaronic state at finite densities \cite{HewsonVarenna,KastellaniVarenna}. Typical signatures of polarons are seen in multi-peaked photoemission spectra \cite{EgamiVarenna} and in transport measurements, where an activated behavior with a characteristic energy given by the polaronic binding energy is observed. The polaronic peak found in mid-infrared measurements of the optical conductivity \cite{CalvaniVarenna} may not only reveal the polaronic binding energy \cite{Fehske-Akw} but also other subtle polaronic transitions at very low energy \cite{frat1}. Another, less classical, indication of polaron formation comes from the analysis of the lattice displacements associated with the excess charge, as obtained from the distribution of distances between atoms \cite{EgamiVarenna}. A joint analysis of both the spectral properties \cite{Fehske-Akw} and the local lattice distortions can disentangle various kinds of behavior in polaronic systems. In fact, we notice that, neglecting the repulsive interaction, a gapped pair state can be formed even without a significant associated polarization; in this case bipolarons are expected to be relatively mobile particles. The aim of this work is to provide a thorough analysis of polaronic spectral properties in both the single- and multi-polaron cases. The backbone of our presentation is DMFT, which is introduced and briefly discussed in general terms in section \ref{sec:DMFT}. In the single-polaron case we review the exact solution of the Holstein model \cite{sumi,depolarone} (section \ref{sec:HolsteinAndHtJ}) and present some new results for the Holstein $t-J$ model, comparing our results with those of Mishchenko \cite{MishchenkoVarenna}. At large polaronic densities we use both Exact Diagonalization (ED) and Quantum Monte Carlo (QMC) techniques to solve the DMFT equations for the Holstein model, at $T=0$ (section \ref{sec:HHFED}) and $T>0$ (section \ref{sec:HHFQMC}) respectively. In this case we compare the spinless and spinful fermion cases and discuss in detail the role of the adiabatic ratio. In this way the properties of a pure polaronic state can be disentangled from those of a bipolaronic state. We also compare the numerical solutions with approximate analytic schemes based on the Born-Oppenheimer approximation (\ref{sec:HHFEDad}) and on the Lang-Firsov canonical transformation (\ref{sec:HHFEDantiad}), in the adiabatic and antiadiabatic regimes respectively. \section{The DMFT method} \label{sec:DMFT} \subsection{Introduction} \label{sec:naive} Dynamical Mean Field Theory (DMFT) is a non-perturbative technique originally developed as the exact solution of an interacting electron problem on an infinite-dimensional lattice \cite{BrandtMielsch,Muller-Hartmann}. A comprehensive review can be found in ref. \cite{DMFTreview}; here we sketch some key points needed to understand the developments presented in the following sections.
Let us consider a general tight-binding problem on a lattice with coordination number $z$ \begin{equation} \label{hamiltonian0} H = -\frac{t}{\sqrt{z}}\sum_{\langle ij \rangle \sigma} c^\dagger_i c_j + \sum_i V[c_i,c^\dagger_i] \end{equation} where the $c_i$ are fermionic annihilation operators acting on site $i$ (spin index omitted for simplicity) and $V$ is a local (on-site) potential. The hopping $t$ is scaled so that the limit $z\rightarrow \infty$ gives a non-trivial result. Mean-field theory turns out to be exact in infinite dimensions, i.e. when the number of nearest neighbors diverges; therefore we can replace the effect of hopping onto neighboring sites by a {\it cavity field} $\eta$ $$ \eta_i= \frac{t}{\sqrt{z}}\sum^{(i)}_j c_j $$ with the sum running over the nearest neighbors of site $i$. In terms of the internal fields $\eta$, the Hamiltonian can be written formally as a sum of single-site operators, \begin{equation} H = -\sum_i \eta^\dagger_i c_i-\sum_i c^\dagger_i\eta_i + \sum_i V[c_i,c^\dagger_i]. \end{equation} For fermions the $\eta$ obey anticommutation relations; in the $z\rightarrow\infty$ limit $$ \left \{ \eta_i, \eta^\dagger_j\right \} = t^2\delta_{i,j}, $$ while $\left \{ \eta_i, c^\dagger_j\right \} = t/\sqrt{z}$ when $j$ is a nearest neighbor of $i$, and zero otherwise. On a more formal ground it can be demonstrated that the cavity field is a Gaussian Grassmann field, which is therefore determined solely by its correlation function \cite{DMFTreview}. \subsection{Single impurity action} \label{sec:ImpurityAction} The previous arguments can be developed more formally using a path-integral formalism. In analogy with classical mean-field theory \cite{DMFTreview}, the fermions on all sites but one (namely $0$) are integrated out, leading to a single-site partition function \begin{equation} \label{Zpart} Z = \int \Pi_i \mathcal{D} \psi^{\dagger} \mathcal{D} \psi \exp(-S) \end{equation} where now $\psi$ and $\psi^\dagger$ are Grassmann anticommuting eigenvalues of the destruction/creation operators of site $0$ \cite{Negele}. The action $S$ is given by \begin{equation}\label{Simpurity} S = -\int_0^\beta d\tau \int_0^\beta d\tau' \psi^\dagger(\tau) \mathcal{G}^{-1}_0(\tau-\tau') \psi(\tau') + \int_0^\beta d\tau V[\psi(\tau),\psi^\dagger(\tau)] \end{equation} where the correlator $\mathcal{G}^{-1}_0(\tau-\tau')$ is \begin{equation} \label{selfcons0} \mathcal{G}^{-1}_0(\tau-\tau')= -\partial_\tau \delta(\tau-\tau')-\langle\eta_i(\tau)\eta^\dagger_i(\tau')\rangle \end{equation} which is independent of $i$ due to translation invariance. The action (\ref{Simpurity}) depends parametrically on the environment through the correlator $\mathcal{G}_0$. (\ref{Simpurity}) is indeed the action of a {\it single} impurity embedded in a medium whose properties are related to the original lattice via the self-consistency equation (\ref{selfcons0}). To be more concrete, let us consider an infinite-coordination Bethe lattice of half-bandwidth $D$, for which the correlator of $\eta$ is proportional to the local Green's function $G_{j,j}$ \cite{DMFTreview} $$ \langle\eta_i(\tau)\eta^\dagger_i(\tau')\rangle=\frac{D^2}{4z}\sum_{j} G_{j,j}. $$ Eq. (\ref{selfcons0}) then reads $\mathcal{G}^{-1}_0(\tau)=-\partial_\tau-(D^2/4) G(\tau)$, or in the frequency domain \begin{equation} \label{selfcons2} \mathcal{G}^{-1}_0(i\omega_n)=i\omega_n-(D^2/4) G(i\omega_n). \end{equation} Eqs. (\ref{Zpart}) and (\ref{Simpurity}), together with the self-consistency condition (\ref{selfcons2}), form a closed set of mean-field equations.
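To make the iterative character of these equations concrete, the following minimal sketch iterates the Bethe-lattice self-consistency on the real axis for the trivial case $V=0$, where the "impurity solver" reduces to $G=\mathcal{G}_0$. The parameters and broadening are assumed values; the fixed point is the semicircular density of states.

```python
import numpy as np

D = 1.0                                     # half-bandwidth of the Bethe lattice
w = np.linspace(-1.5, 1.5, 3001) + 1e-2j    # real frequencies, small broadening

G = np.zeros_like(w)
for _ in range(2000):
    G0_inv = w - 0.25 * D**2 * G            # self-consistency, Eq. (selfcons2)
    G = 1.0 / G0_inv                        # trivial "impurity solver" for V = 0

rho = -G.imag / np.pi                       # semicircular DOS, rho(0) = 2/(pi D)
print(rho[w.size // 2] * np.pi * D / 2.0)   # ~1, up to the broadening
```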
\subsection{Single impurity Hamiltonian} \label{sec:ImpurityHamiltonian} A Hamiltonian formalism can also be developed, introducing suitably defined fermion fields that reproduce the required Gaussian cavity field. Eq. (\ref{Simpurity}) can be obtained by integrating out the auxiliary fermionic fields $c_k$ of an Anderson Impurity Model (AIM) \begin{equation} \label{AndersonImpurity} H_{AIM} = \sum_k V_k (f^\dagger c_k+c^\dagger_k f)+\sum_k E_k c^\dagger_k c_k+ V[f,f^\dagger] \end{equation} where the levels $E_k$ and the hybridization constants $V_k$ must be chosen to give the appropriate cavity field. In the Bethe lattice case \begin{equation} \label{eq:self-consAIM} \frac{D^2}{4}G(i\omega_n) = \sum_k \frac{V_k^2}{i\omega_n - E_k}, \end{equation} where on the l.h.s. we read the local {\it lattice} propagator. Eq. (\ref{eq:self-consAIM}) becomes the self-consistency condition which determines the appropriate AIM parameters $E_k$ and $V_k$. \section{Holstein model in infinite dimensions} \label{sec:Holstein} The Holstein molecular crystal model is the paradigmatic model for small polarons. Its Hamiltonian reads \begin{eqnarray} \label{eq:themodel} H &=& -t\sum_{\langle i,j\rangle,\sigma} (c^{\dagger}_{i,\sigma} c_{j,\sigma} + h.c. ) - g\sum_{i,\sigma} (n_{i,\sigma}-\frac{1}{2})(a_i +a^{\dagger}_i) + \nonumber\\ &+& \omega_0 \sum_i a^{\dagger}_i a_i, \end{eqnarray} where $c_{i,\sigma}$ ($c^{\dagger}_{i,\sigma}$) and $a_i$ ($a^{\dagger}_i$) are, respectively, destruction (creation) operators for fermions and for local vibrations of frequency $\omega_0$ on site $i$, $n_{i,\sigma}=c^{\dagger}_{i,\sigma}c_{i,\sigma}$ is the electron density per spin, $t$ is the hopping amplitude, and $g$ is the electron-phonon coupling. In the half-filled case we always fix the chemical potential to the particle-hole symmetric value, which fixes the density per spin to $n = 1/2$. In the spinless case there is no sum over $\sigma$. As parameters of the model we choose the e-ph coupling constant $\lambda = 2g^2/\omega_0 D$, where $D$ is the half-bandwidth of our infinite-coordination Bethe lattice, and the adiabatic ratio $\ad = \omega_0/D$. In the single-electron case the spin index is inessential; moreover, the mean density is zero and the e-ph interaction in Eq. (\ref{eq:themodel}) is replaced by $- g\sum_{i} n_{i}(a_i +a^{\dagger}_i)$. The partition function (\ref{Zpart}) is now defined as \begin{equation} \label{Zholstein} Z = \int \mathcal{D} x(\tau) \int \mathcal{D} \psi^{\dagger} \mathcal{D} \psi \exp(-S) \end{equation} where, using units in which the spring constant $K=M\omega_0^2=1$, $x(\tau)=\sqrt{\omega_0/2}\,(a(\tau)+a^\dagger(\tau))$. The single-impurity action $S$ associated with the lattice Hamiltonian (\ref{eq:themodel}) on an infinite-coordination lattice reads in this case: \begin{eqnarray} \label{Sel} S &=& -\int_0^\beta d\tau \int_0^\beta d\tau' \sum_{\sigma} \psi_{\sigma}^\dagger(\tau) \mathcal{G}^{-1}_0(\tau-\tau') \psi_{\sigma}(\tau') + \\ \label{Sph} &+&\frac{1}{2} \int_0^\beta d \tau \left( \frac{\dot{x}^2(\tau)}{\omega_0^2} + x^2(\tau) \right) + \\ \label{Selph} &+&\sqrt{\lambda} \int_0^\beta d\tau x(\tau) \left(n(\tau) - 1 \right), \end{eqnarray} where $n(\tau)=\sum_{\sigma} \psi_{\sigma}^\dagger(\tau) \psi_{\sigma}(\tau)$. \subsection{DMFT-QMC method} \label{sec:DMFT-QMC} An efficient method to solve Eqs. (\ref{Sel}-\ref{Selph}) at finite temperature is QMC in the Blankenbecler-Scalapino-Sugar (BSS) approach \cite{BSS}.
This method works well at not too low temperatures and naturally yields electronic and bosonic correlation functions, as well as the probability distributions associated with the phonon fields. The method is not affected by the negative sign problem, and its main limitation arises in the adiabatic regime ($\gamma \ll 1$), where the phonon becomes heavy, making it more difficult to sample the available phase space correctly. In the BSS scheme the fermions are integrated out, and the phonon coordinates $x(\tau)$ are discretized in $L$ imaginary-time slices of width $\Delta \tau = \beta/L$ and then sampled by QMC. $L$ has to be chosen large enough to reduce as much as possible $\Delta\tau$, which controls the Trotter discretization error. To keep $\Delta \tau$ below $1/8$ we used $32$ slices, except for the lowest temperature ($\beta=8$), for which we used $L=64$. \subsection{DMFT-ED method} \label{sec:DMFT-ED} The Anderson Impurity Model for the Holstein model reads \begin{eqnarray} \label{eq:Anderson_Holstein} H_{AIM} &=& -\sum_{k,\sigma} V_k (c^{\dagger}_{k,\sigma} f_{\sigma} + h.c)+ \sum_{k,\sigma} E_k c^{\dagger}_{k,\sigma} c_{k,\sigma} -\nonumber\\ &-&g \sum_\sigma \left ( f^\dagger_\sigma f_\sigma - \frac{1}{2}\right) (a +a^{\dagger}) + \omega_0 a^{\dagger} a. \end{eqnarray} We solve (\ref{eq:Anderson_Holstein}) by means of ED, truncating the sums in the first two terms of Eq. (\ref{eq:Anderson_Holstein}) to a small number of terms $N_b$, so that the Hilbert space is small enough to use, e.g., the Lanczos algorithm to compute the $T=0$ Green's function. For the phonon degrees of freedom considered here, the infinite phonon Hilbert space also has to be truncated, allowing for a maximum number of excited phonons $N_{ph}$. In all the calculations presented here the convergence with respect to both truncations has been checked. The value of $N_{ph}$ has to be chosen with special care in the adiabatic regime and at strong coupling, where phonon excitations are energetically convenient. As far as the discretization of the bath is concerned, the convergence of thermodynamic averages and Matsubara-frequency properties is exponentially fast, and $N_b \sim 8-9$ is enough to obtain converged results. The method also offers the advantage of a direct evaluation of real-frequency spectral properties, such as the electron and phonon spectral functions. The main limitation is that these quantities reflect the discrete nature of our system. In practice, the spectra are formed by collections of $\delta$-functions, which limits our frequency resolution and makes the method better suited to gaining knowledge of the main features of the spectra, rather than of the fine details. \subsection{Quantities of interest} \label{sec:quantities} We mainly characterize the electronic and phononic properties by considering, respectively, the electron density of states $\rho(\omega) = -\frac{1}{\pi} {\rm Im}\, G(\omega)$ and the phonon probability distribution function (PDF) $P(x) = \left \langle\phi_0 |x\rangle \langle x| \phi_0\right \rangle$, where $|x\rangle$ is a phonon coordinate eigenstate and $|\phi_0\rangle$ is the ground-state vector. At finite electron density a lattice polarization is reflected in the presence of two peaks in $P(x)$, corresponding to opposite polarizations of occupied and unoccupied sites (bimodal behavior) \cite{Millis-adiab}. In the single-electron case the polarization of the lattice site where the electron sits is a marker of the polaronic crossover.
Also in this case a bimodal behavior can be observed in the adiabatic regime but, generally speaking, we have a definite polarization when the phonon fluctuations are smaller than the average polarization due to the presence of the electron. In this way a {\it qualitative} difference is identified between the polarized and unpolarized regimes, which allows for an unambiguous way to draw a crossover line, as opposed to estimates based on smoothly varying quantities such as the average lattice fluctuations or the electron kinetic energy. A metal-insulator transition (MIT) can be probed by the low-energy behavior of $\rho$. This is a consequence of the infinite-dimensional limit, where the self-energy is momentum-independent. Thus the vanishing of the low-energy quasi-particle spectral weight coincides with the divergence of the effective mass, which determines the MIT. \section{Single electron in Holstein and Holstein t-J models} \label{sec:HolsteinAndHtJ} \subsection{$T=0$ continued fraction for a single polaron} \label{sec:singlepolaron} The single-electron problem in the Holstein, $t-J$, and $t-J$-Holstein models can be solved semi-analytically, taking advantage of peculiar features of the zero-density case. We briefly describe here the formalism at $T=0$; a generalization to a thermalized lattice can be found in \cite{depolarone}. We use the AIM formalism of Eq. (\ref{eq:Anderson_Holstein}). For a single electron the Green's function is purely retarded, so the retarded impurity propagator can be defined as \begin{equation} \label{Gfreq} G(\omega)=\langle0|f\frac{1}{\omega+i\delta-H}f^\dagger|0\rangle \end{equation} which has the correct prescription $\delta>0$ for the convergence of the time integrals. The vacuum energy is defined here to be zero. To proceed further one needs to introduce the generalized matrix elements \begin{equation} \label{matrix-elements} G_{n,m}=\langle0|\frac{a^n}{\sqrt{n!}}f \frac{1}{\omega+i\delta-H}f^\dagger \frac{(a^\dagger)^m}{\sqrt{m!}}|0\rangle \end{equation} so that the element $G_{0,0}$ is the Green's function. Let us separate the impurity Hamiltonian of Eq. (\ref{eq:Anderson_Holstein}) into $H_0$ and $H_I$, where $H_I$ is the local interaction term and $H_0$ the remainder. A useful operator identity for the resolvent is \begin{equation} \label{Risolv} \frac{1}{z-H}=\frac{1}{z-H_0}+ \frac{1}{z-H_0} H_I \frac{1}{z-H}. \end{equation} The diagonal matrix element of this operator on the impurity zero-phonon state $f^\dagger |0\rangle$ is the Green's function of Eq. (\ref{Gfreq}). In the subspace of zero-electron $p$-phonon states $|0,p\rangle=(a^\dagger)^p/\sqrt{p!}|0\rangle$ one can write \begin{equation} H_I=\sum_p f^\dagger |0,p\rangle\langle 0,p|f (a+a^\dagger) \end{equation} leading to the recursion formula for the $G_{n,m}$'s \begin{equation} \label{Eqric} G_{n,m}=G_{0n} \delta_{n,m} -g\sum_p G_{0n} X_{n,p} G_{p,m} \end{equation} where $G_{0n}=G_0(\omega-n\omega_0)$ is the diagonal element of the free resolvent and the $X_{n,p}$ are the phonon displacement matrix elements $X_{n,p}= \sqrt{p+1} \delta_{n,p+1}+ \sqrt{p} \delta_{n,p-1}$. One immediately recognizes that, due to the particular form of the matrix ${\bf X}$, ${\bf G}^{-1}$ is a tridiagonal matrix, so that the solution of the problem reduces to the inversion of a tridiagonal matrix. Following the lines of Ref. \cite{Viswanath-Muller} one can express the diagonal element of the ${\bf G}$ matrix in terms of the diagonal and off-diagonal elements of ${\bf G}^{-1}$.
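Numerically, the tridiagonal structure means that the local propagator can be evaluated by truncating the phonon ladder at a maximum level and sweeping the recursion from the bottom up. The following is a minimal sketch of the procedure whose closed form is given below, with assumed parameters, a plain iteration without mixing (assumed to converge at these moderate couplings), and a simple grid shift for the frequency arguments:

```python
import numpy as np

D, g, w0, Nph = 1.0, 0.4, 0.2, 40      # assumed parameters; lambda = 2g^2/(w0*D)
dw, eta = 1e-3, 1e-2
w = np.arange(-2.0 - Nph * w0, 2.0, dw) + 1j * eta  # wide enough for w - Nph*w0

def bethe(z):
    """Free local Green's function of the infinite-coordination Bethe lattice."""
    s = np.sqrt(z * z - D * D)
    s = np.where(s.imag < 0.0, -s, s)   # retarded branch: Im G < 0
    return 2.0 * (z - s) / D**2

def shifted(f, n):
    """f evaluated at w - n*w0, padding the left edge with its boundary value."""
    k = int(round(n * w0 / dw))
    out = np.full_like(f, f[0])
    out[k:] = f[: f.size - k]
    return out

G0inv = w.copy()                        # initial guess: free atom, G0^{-1} = w
for _ in range(60):
    Sigma = np.zeros_like(w)
    for n in range(Nph, 0, -1):         # continued fraction, evaluated bottom-up
        Sigma = n * g**2 / (shifted(G0inv, n) - Sigma)
    G = bethe(w - Sigma)                # lattice G with the local self-energy
    G0inv = w - 0.25 * D**2 * G         # self-consistency, Eq. (selfcons2)
rho = -G.imag / np.pi                   # single-polaron spectral function
```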
The local propagator (the $0,0$ element of ${\bf G}$) is obtained in terms of a Continued Fraction Expansion (CFE), as a functional of the ``bare'' propagator $G_0$: \begin{equation} \label{CF} G(\omega)={1 \over\displaystyle G_0^{-1}(\omega)- {\strut g^2 \over\displaystyle G_0^{-1}(\omega-\omega_0)- {\strut 2g^2 \over\displaystyle G_0^{-1}(\omega-2\omega_0)- {\strut 3g^2 \over\displaystyle G_0^{-1}(\omega-3\omega_0)-...}}}} \end{equation} Due to the impurity analogy, this is also the local propagator of the original lattice problem, provided that the self-consistency condition (Eq. (\ref{selfcons2}) for real frequencies) is fulfilled. As a consequence the lattice self-energy $\Sigma$ is immediately obtained from $G=1/(G^{-1}_0-\Sigma)$. An example of the spectral function that can be obtained with this formalism is also presented in this volume (see \cite{FehskeBronoldAlvermannVarenna}, fig. 4). \subsection{Holstein $t-J$ model in infinite dimensions.} \label{sec:HtJ} To derive the Holstein-$t-J$ Hamiltonian for a single hole we sketch the treatment of Ref. \cite{HtJ}. First the $t-J$ Hamiltonian is mapped by a canonical transformation onto a ferromagnetic one. Then we introduce fermionic {\it hole} operators $h$ and bosonic operators $b$ describing {\it spin defects on the antiferromagnetic ground state}. As a further step a Holstein-Primakoff transformation is performed, which introduces spin waves \cite{martinez}; we then adopt the linear spin-wave scheme and use explicitly the infinite-coordination limit to get: \begin{eqnarray} H&=&\frac{t}{2\sqrt{z}}\sum_{\langle ij \rangle \sigma} \left(h_j^\dagger h_i a_j + {\rm h.c.}\right) -g \sum_i \left[h_i^\dagger h_i - \langle h_i^\dagger h_i \rangle \right] (a_i+a_i^\dagger) +\omega_0\sum_i a_i^\dagger a_i\nonumber\\ &&+\frac{J}{4z}\sum_{\langle ij \rangle} \left[b_i^\dagger b_i+b_j^\dagger b_j\right] +\frac{J}{2} \sum_i h_i^\dagger h_i. \label{hamilhatjz} \end{eqnarray} The first term of Eq. (\ref{hamilhatjz}) describes the hopping of one hole on the antiferromagnetic background, which is accompanied by the creation (destruction) of a spin defect which breaks (restores) $2z$ magnetic bonds with individual energy $J/4z$. In addition we have the usual local e-ph interaction which couples {\rm the hole density} to the local phonon. The last term in Eq. (\ref{hamilhatjz}) can be absorbed in the definition of the hole chemical potential which, for the single-hole case considered here, has to be set at the bottom of the hole band. The single-hole Green's function is given by the resolvent of Eq. (\ref{Gfreq}), in which the spinless fermions $f$ are now replaced by the hole operators $h$. Following the same path as in the previous calculation and using the results of Ref. \cite{strack} for the $t-J$ model, we can write the local hole propagator $G(\omega)$ in terms of the sum of a hopping and a phonon contribution to the self-energy \cite{HtJ}: \begin{equation} \label{eqG} G(\omega) = \frac{1}{\omega - \Sigma_{\rm hop}(\omega)-\Sigma_{\rm el-ph}(\omega)}, \end{equation} where the hopping contribution describes the dynamics of the hole through the antiferromagnetic background, \begin{equation} \Sigma_{\rm hop}(\omega) = \frac{t^2}{4} G(\omega-J/2), \label{sigmat} \end{equation} while the e-ph self-energy takes into account the on-site multiple scattering with phonons and is formally the same as in Eq. (\ref{CF}) after identifying \begin{equation} G^{-1}_0(\omega)=\omega-\frac{t^2}{4} G(\omega-J/2).
\label{gt} \end{equation} Both $\Sigma_{\rm hop}(\omega)$ and $\Sigma_{\rm el-ph}(\omega)$ are expressed as functionals of the {\em total} Green's function $G(\omega)$, leading to a self-consistent interplay between the spin and the e-ph interaction. Eqs. (\ref{eqG}), (\ref{sigmat}) and (\ref{gt}) represent a closed self-consistent system which can be solved numerically by iteration to obtain the explicit {\em exact} expression of the local Green's function $G(\omega)$ \cite{HtJ}. The formal scheme looks quite similar to the single-electron solution of the Holstein model \cite{depolarone}. However, due to the antiferromagnetic background, the physical interpretation is quite different. Due to the orthogonality of the initial and final antiferromagnetic backgrounds, the non-local component of the Green's function in the Holstein-$t-J$ model is strictly zero even for $J=0$, $G_{ij}(\omega)=G(\omega)\delta_{ij}$ \cite{strack}, whereas for the pure Holstein model $G_{i \neq j}(\omega)$ is finite and provides information about the non-local dynamics: $G({\bf k},\omega)=1/[\omega -\epsilon_{\bf k}-\Sigma(\omega)]$. In addition, the magnetic ordering has important consequences also for the local Green's function $G_{ii}(\omega)$. In the pure Holstein model, for instance, $G_{ii}(\omega)$ takes into account any generic dynamics which occurs back and forth from a given site, whereas in the Holstein-$t$-$J$ model the electron must follow a retraceable path in order to restore the antiferromagnetic background \cite{strack}. A Bethe-like dynamics is thus enforced by the magnetic ordering, regardless of the actual real-space lattice. The object made up of the hole plus the local modification of the spin configuration due to the presence of the hole is the ``spin polaron''. \subsection{Results} \label{sec:resultsHtJ} The local physical properties of one hole in the infinite-dimensional Holstein-$t-J$ model have been extensively investigated in Ref. \cite{HtJ}. In Fig.~\ref{fig:DOS_HtJ} we report the evolution of the local spectral function $\rho(\omega)=-(1/\pi)\mbox{Im}G(\omega)$ as a function of the spin exchange $J/t$ and of the e-ph interaction $\lambda$ for $\omega_0/t=0.1$. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.33,angle=270]{DOS_HtJ.eps} \end{center} \caption{Hole spectral function in the $t-J$-Holstein model for $\gamma=0.1$ as a function of $J/t$ and $\lambda$. Left panel: $J/t=0.4$ and (from bottom to top) $\lambda=0.0,0.05,0.15,0.25,0.35,0.45,0.55$. Right panel: $\lambda=0.25$ and (from bottom to top) $J/t=0.0,0.1,0.5,1.0,1.5,2.0$.} \label{fig:DOS_HtJ} \end{figure} In the limits $J=0$ and $\lambda=0$ we recover the results of Refs. \cite{depolarone} and \cite{strack} respectively. They are qualitatively different: for $J/t=0$ and finite $\lambda$ the spectral density is the same as in the pure Holstein model (see e.g. fig. 5 of \cite{FehskeBronoldAlvermannVarenna} in the present volume), whereas for $\lambda=0$ and finite $J/t$ the spectrum consists of magnetic peaks spaced as $(J/t)^{2/3}$ (for small $J/t$). Switching on both the magnetic interaction and the e-ph interaction at the same time gives rise to an interesting interplay between these degrees of freedom.
This can be seen, for instance, in the evolution of the spectral function as a function of $\lambda$: increasing the e-ph interaction not only gives rise to additional phonon peaks which superimpose on the magnetic ones, but it also changes the nature of the magnetic polaron, from a large one (corresponding to $(J/t)^{2/3}$-spaced peaks) to a more localized one (corresponding to $(J/t)$-spaced peaks). A similar behavior appears as a function of $J/t$: here for large $J/t$ the gross structure of the spectral function is described by equally ($J$) spaced magnetic structures, with fine features determined by the e-ph interaction. Note that the latter is represented by phononic peaks spaced by $\omega_0$, as in the antiadiabatic limit, although here $\gamma$ is just $\gamma=0.1$: the antiadiabatic regime is intrinsically enforced by the reduction of the kinetic energy due to the magnetic polaron trapping. In any case, within DMFT we lose the hole band dispersion. As a consequence we were not able to reproduce, even qualitatively, the interchange of magnetic and polaronic peaks observed on increasing the e-ph coupling at finite dimensionality \cite{MishchenkoVarenna}. However, within our localized solution we recover many of the qualitative features of both the magnetic and the lattice polaron crossovers \cite{HtJ}. Another interesting quantity which points out the interplay between the magnetic and the e-ph interaction is the PDF $P(x)$. Notice that, due to the localized nature of the DMFT solution, $P(x)$ gives the lattice distortion associated with the localization center. In the left panel of Fig. \ref{fig:HtJ-px} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.33,angle=270]{PX_HtJ.eps} \end{center} \caption{Phonon PDF in the $t-J$-Holstein model for $\omega_0/t=0.5$. Left panel: $J/t=0.4$ and (from left to right) $\lambda=0.0,0.05, 0.25, 0.5, 0.75, 1.0, 1.25$. Right panel: $\lambda=0.5$ and, from left to right, $J/t=2.0, 1.0, 0.4, 0.05$.} \label{fig:HtJ-px} \end{figure} we show the $P(x)$ for $J/t=0.4$ as a function of $\lambda$. Here the lattice polaron formation is characterized by the value $\lambda_{pol}$ at which the broadening of the PDF is maximal. For larger $\lambda$ the PDF recovers a Gaussian form, due to the lattice fluctuations around the new minima at finite distortion. The magnetic polaron formation is also pointed out by the analysis of $P(x)$ (right panel of Fig. \ref{fig:HtJ-px}): the magnetic trapping favors the lattice one, further reducing the anomalous lattice fluctuations towards the Gaussian ones. \section{Half-filled Holstein model: spinless vs spinful cases at $T=0$} \label{sec:HHFED} \subsection{Adiabatic regime} \label{sec:HHFEDad} As a starting point of the Born-Oppenheimer (BO) procedure, we briefly describe the adiabatic limit, in which $\ad \rightarrow 0$ with $\lambda$ kept fixed. This limit has been thoroughly studied in Ref. \cite{Millis-adiab}, which we briefly summarize here. When $\omega_0\rightarrow0$ at finite $\lambda$ the kinetic term (\ref{Sph}) forces the phonon path $x(\tau)$ to be $\tau$-independent, $x(\tau)\equiv x$. The phonons become classical and the interaction term reads $\sqrt{\lambda}\, x \int_0^\beta d\tau\, (n(\tau)-1)$.
The Gaussian integrals in (\ref{Zholstein}), with the action given by (\ref{Sel}-\ref{Selph}), can then be computed analytically, leading to \begin{equation} Z = \int d x \exp(-\beta V(x)) \end{equation} where the adiabatic potential $V(x)$ is \cite{NoteKF2,BrandtMielsch,ChungFreericks} \begin{equation} V(x)=\frac{1}{2} x^2-\frac{\sqrt{\lambda}}{2} |x|-\frac{s}{\beta}\sum_n \log \left ( \frac{G^{-1}_0(i\omega_n)+\sqrt{\lambda} x}{i\omega_n+\sqrt{\lambda} x}\right ) \label{eq:E-adiab} \end{equation} where $s$ is the spin degeneracy. In Eq. (\ref{eq:E-adiab}) we have found it useful to separate the contribution in the absence of hybridization (the first two terms) from the remainder (the last term). Through the adiabatic potential we compute the phonon PDF as \begin{equation} \label{defPXadiab} P(x) = \frac{\exp(-\beta V(x))}{Z}. \end{equation} Taking advantage of the Gaussian nature of the fermions we get for the local propagator \begin{equation} \label{Gadiab} G(\omega) = \int d x P(x) \frac{1}{G^{-1}_0(\omega)-\sqrt{\lambda} x}, \end{equation} which defines the self-consistency condition through Eq. (\ref{selfcons2}). The self-consistency condition (\ref{selfcons2}), together with Eqs. (\ref{Gadiab}), (\ref{defPXadiab}) and (\ref{eq:E-adiab}), completely solves the problem. Notice also the correspondence between the spinless and the spinful case upon rescaling $\lambda$ to $\lambda/2$ in the latter \cite{Millis-adiab}. For the Bethe lattice (at zero temperature) it can be shown that the potential (\ref{eq:E-adiab}) becomes double-welled above a critical value $\lambda_{pol}=3\pi/(8 s)$. A MIT occurs at a larger coupling, $\lambda_{MIT} = 1.328/s$ \cite{Millis-adiab}. The BO procedure proceeds by quantizing the adiabatic potential after adding the phonon kinetic-energy contribution. Introducing the scaled variable $u=gx/\sqrt{s}$, the BO phononic Hamiltonian reads \begin{equation} \label{eq:BO} H_{BO}=-\frac{\ad}{2s}\frac{d^2}{d u^2}+s V(u), \end{equation} where $V(u)$ is given by Eq. (\ref{eq:E-adiab}). Notice that the spinful BO Hamiltonian maps onto twice the spinless one upon rescaling \begin{eqnarray} \label{scaling} \lambda/2 &\rightarrow& \lambda \nonumber \\ 2\ad &\rightarrow& \ad. \end{eqnarray} While the phonon properties are immediately obtained at this stage from the solution of the one-dimensional anharmonic system with Hamiltonian Eq. (\ref{eq:BO}), the electronic properties must account non-trivially for the tunneling of the phonon coordinates. The simplest way to describe electrons coupled to a tunneling system is to map the latter onto a two-level system. In our model this can be accomplished by changing the phonon basis (operators $a$) from that of the harmonic oscillator to that defined by the solution of (\ref{eq:BO}).
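As an illustration of this step, the following minimal sketch diagonalizes $H_{BO}$ by finite differences on a grid. For simplicity an assumed quartic double well stands in for the self-consistent adiabatic potential of Eq. (\ref{eq:E-adiab}), and the tunneling scale is identified with half the splitting of the lowest doublet, a standard proxy for the $\Delta$ defined below:

```python
import numpy as np

gamma, s = 0.2, 2                      # assumed adiabatic ratio; spin degeneracy
u = np.linspace(-3.0, 3.0, 1200)
du = u[1] - u[0]

# assumed quartic double well standing in for the self-consistent V(u)
V = 0.25 * (u**2 - 1.0)**2

# H_BO = -(gamma/2s) d^2/du^2 + s V(u), discretized by finite differences
kin = (gamma / (2.0 * s)) / du**2
H = np.diag(2.0 * kin + s * V) \
    - kin * (np.eye(u.size, k=1) + np.eye(u.size, k=-1))
E, psi = np.linalg.eigh(H)

Delta = (E[1] - E[0]) / 2.0            # proxy for the tunneling frequency
print(f"lowest doublet splitting: 2*Delta = {E[1] - E[0]:.4e}")
```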
Projecting out all the states but the lowest two ($\vert +\rangle,\vert -\rangle$), we then get the following two-state projected model (TSPM): \begin{eqnarray} \label{eq:TSPM-definition} H&=&-\frac{2}{s}\sum_\sigma\epsilon\left(f_\sigma^+f_\sigma-\frac{1}{2}\right)\sigma_z -\Delta\sigma_x +\sum_{k,\sigma}E_k c^\dagger_{k,\sigma}c_{k,\sigma}+\nonumber\\ &+&\sum_{k,\sigma} V_k \left( f_\sigma^\dagger c_{k,\sigma} + c^\dagger_{k,\sigma} f_\sigma\right), \end{eqnarray} where $\sigma_x$ and $\sigma_z$ are Pauli matrices in the space spanned by $\vert + \rangle$ and $\vert - \rangle$ and the quantities $\epsilon$ and $\Delta$ are given by \begin{eqnarray} \label{eq:TSPM-parameters} \epsilon &=& g\frac{s}{2} \langle+\vert a+a^\dagger\vert -\rangle\\ \Delta &=& \frac{\omega_0}{2}\left(\langle +\vert a^\dagger a\vert +\rangle - \langle -\vert a^\dagger a\vert -\rangle\right). \end{eqnarray} The latter quantity, $\Delta$, is the tunneling frequency between the two phononic states. A similar model has been introduced in Ref. \cite{pata} to study the strong-coupling limit of the Holstein model. The TSPM reproduces the DMFT of the Holstein model exactly in two limits: the weak-coupling and the adiabatic limit. In the former the projection of the phonon space is irrelevant, and the TSPM reproduces the perturbation expansion developed (in the limit of infinite bandwidth) in Ref. \cite{Engelsberg}. The adiabatic limit is instead recovered as $\Delta\rightarrow 0$: no phonon tunneling occurs and the model can be solved exactly by CPA, recovering the solution of Ref. \cite{Millis-adiab}. To analytically span from the strong- ($V_\kvec \rightarrow 0$) to the weak- ($g\rightarrow 0$) coupling regimes of the equivalent impurity Hamiltonian it is useful to devise an Iterated Born-Oppenheimer Coherent Potential Approximation (IBOCPA) scheme. Starting from the Green's function for $V_k=0$, (\ref{eq:GatBOCPAspinless}) for the spinless case and (\ref{eq:GBOCPAspinful}) for the spinful one, we notice that in both cases $G_a(\o)$ can be written as a sum of two contributions, $G_a(\o)=(1/2)(G_{a,+}(\o)+G_{a,-}(\o))$, where ($\pm$) labels a phonon state \cite{PCarta}. Then we write the propagator in the presence of hybridization as \begin{equation} \label{eq:GBOCPA} G(\o)=\frac{1}{2}\sum_{\s=\pm}\frac{1}{G^{-1}_{a,\s}(\o)-\frac{D^2}{4}G(\o)} \end{equation} where the hopping of the electron from the impurity to the bath is described by the cavity-field correlator $D^2/4\, G(\o)$ (\ref{selfcons2}). The iteration proceeds by substituting the propagator (\ref{eq:GBOCPA}) back into Eq. (\ref{eq:E-adiab}), giving a new BO potential and new TSPM parameters (Eq. (\ref{eq:TSPM-parameters})) and finally a new Green's function, and continues until convergence is reached \cite{NotaIBOCPA}. In (\ref{eq:GBOCPA}) the $G^{-1}_{a,\s}(\o)$ are obtained from the solution of the atomic ($V_k=0$) limit of the TSPM in the spinless and spinful cases. The spinless atomic Green's function is easily found to be \begin{equation} \label{eq:GatBOCPAspinless} G_a(\o) = \frac{1}{2}\sum_{\s=\pm} \left( \frac{\epsilon^2}{\l^2}\frac{1}{\o+2\l\s} + \frac{\Delta^2}{\l^2}\frac{1}{\o}\right). \end{equation} $G_a(\omega)$ has a pole at $\omega=0$ induced by phonon tunneling, whose weight vanishes as $\Delta\rightarrow 0$, accompanied by two resonances at $\pm 2\l$. The zero-energy peak is due to transitions in which both the charge and the phonon ``spin'' change, while the side peaks arise from charge transfer at frozen phonon ``spin''.
In this sense the side peaks are adiabatic features which survive when the phonon tunneling $\Delta\rightarrow 0$. The spinful atomic Green's function is \begin{equation} \label{eq:GBOCPAspinful} G_a(\o)=\frac{1}{2}\sum_{\s=\pm}\frac{1}{2\l}\left( \frac{\l-\Delta}{\o+\s(\l+\Delta)}+\frac{\l+\Delta}{\o+\s(\l-\Delta)}\right). \end{equation} The most striking difference from the spinless case (\ref{eq:GatBOCPAspinless}) is the absence of the zero-frequency pole. In the spinful case the tunneling $\Delta$ of the phonon is always associated with a finite-energy transition, and it only splits the finite-frequency poles associated with the transitions from the singly occupied to the empty or doubly occupied states. IBOCPA assumes that the tunneling states of the phonon in the adiabatic potential remain unaltered during a hybridization event. As in standard CPA a band is associated with each local level, but our IBOCPA gives a Fermi-liquid solution in the spinless case for every value of the coupling, as opposed to the case of the Hubbard model. The low-energy band arising from the zero-energy pole in the zero-hybridization limit is indeed {\it coherent}. This can easily be seen from the self-energy: when $\omega\rightarrow 0$ the spinless propagator defined by (\ref{eq:GatBOCPAspinless}) and (\ref{eq:GBOCPA}) is dominated by the zero-energy pole of $G_a$ (\ref{eq:GatBOCPAspinless}), and consequently the self-energy obtained through (\ref{eq:GBOCPA}) is purely real. Conversely, in the spinful case a MIT due to local pair formation occurs, within the CPA approximation, at a critical value of $\lambda$. Finally we emphasize that we recover the adiabatic solution of Ref. \cite{Millis-adiab} as $\ad \rightarrow 0$ at finite $\lambda$, where $\Delta\rightarrow 0$: in this limit the IBOCPA is exact and gives the Green's function of Ref. \cite{Millis-adiab}. However, the IBOCPA procedure is certainly affected by serious problems on approaching the MIT in the spinful case. A more careful treatment of the low-energy part of the Green's function has been performed in this case in Refs. \cite{pata,Bulla,HewsonVarenna}, where a MIT scenario similar to that of the half-filled Hubbard model has been observed, i.e., a quasi-particle peak that shrinks to zero width on approaching a critical value of $\lambda$. In the spinless case, instead, a resonance is present at zero energy within a CPA approach. It is not associated with a Kondo effect but rather with phonon tunneling, which drives charge fluctuations. On the other hand, a Kondo-like behavior can be ascribed to the bipolaron or pair formation. For a discussion of the limitations of the IBOCPA approach see also Ref. \cite{PCarta}. \subsection{Antiadiabatic regime} \label{sec:HHFEDantiad} While in the adiabatic limit the phonon displacement becomes a classical variable and we are left with an electronic model which depends parametrically on it, in the opposite limit ($\ad \gg 1$) the roles are exchanged, and we have a parametrically fixed electronic charge on a given site. In this regime the most reasonable starting point is the Lang-Firsov (LF) canonical transformation \cite{Lang-Firsov,JuliusVarenna} $S=\exp (T)$. The generator of the transformation reads \begin{equation} \label{eq:LangFirsov} T = -\alpha \sum_\sigma (f^\dagger_\sigma f_\sigma-\frac{1}{2})(a^\dagger-a), \end{equation} which introduces the parameter $\alpha=g/\omega_0$, the relevant e-ph coupling parameter in the anti-adiabatic regime \cite{storia,depolarone}.
The canonical transformation diagonalizes the impurity Hamiltonian in the absence of hybridization by eliminating the e-ph interaction part. In the spinful case the phonon energy term of (\ref{eq:Anderson_Holstein}) gives rise to the well-known instantaneous bipolaronic attraction \cite{JuliusVarenna}. The hybridization term of (\ref{eq:Anderson_Holstein}) is modified by acquiring an exponential factor in the phonon coordinates, leading to \begin{eqnarray} \label{eq:AndersonLF} e^T H e^{-T} = -\sum_{k,\sigma} e^{\alpha (a^\dagger-a)} V_k (c^{\dagger}_{k,\sigma} f_{\sigma} + h.c.) + \sum_{k,\sigma} E_k c^{\dagger}_{k,\sigma} c_{k,\sigma}-\nonumber \\ -2\frac{g^2}{\omega_0} (s-1) n_\uparrow n_\downarrow - \frac{g^2}{\omega_0} \sum_\sigma (\frac{1}{2}- n_\sigma) + \omega_0 a^{\dagger} a, \end{eqnarray} where $n_{\sigma}=f^{\dagger}_{\sigma} f_{\sigma}$. Notice that in the anti-adiabatic limit $\ad \rightarrow \infty$, if $\lambda$ is kept constant, $\alpha$ vanishes. In this case spinless electrons are not renormalized, while spinful electrons are described by an attractive Hubbard model with $|U|/D=\lambda$. If we want to proceed with analytical methods, the hybridization term must be treated in an approximate way. Assuming that in the anti-adiabatic limit the impurity density is constant during the fast motion of the phonon, we average the phonon factor over the displaced-phonon ground state. This is the so-called Holstein Lang-Firsov Approximation (HLFA), which should not be confused with the exact canonical transformation (\ref{eq:LangFirsov}). The HLFA gives rise to an exponential renormalization of the hybridization constants, each $V_k$ being replaced by $V_k\exp(-\alpha^2/2)$. Such a replacement implies the well-known exponential renormalization of the bandwidth, $D\exp(-\alpha^2)$. To get the {\it electron} Green's function $G(\omega)$, the explicit action of the LF transformation has to be taken into account in both the creation and destruction operators appearing in the definition of the Green's function. Following Refs. \cite{Ranninger_spectral} and \cite{Alexandrov-Ranninger}, we obtain in both the spinless and spinful cases \begin{equation} \label{eq:GreenLF} G(\omega)=e^{-\alpha^2}G_p(\omega) +\frac{1}{2}\sum_{n\ne0}e^{-\alpha^2} \frac{\alpha^{2|n|}}{|n|!} G_p(\omega-n\omega_0), \end{equation} where $G_p(\omega)$ is the Green's function of an impurity (with a negative-$U$ interaction in the spinful case) with an exponentially reduced hybridization to a bath of conduction electrons. The self-consistency condition can be written explicitly in the spinless case, due to the lack of interaction terms on the impurity: \begin{equation} \label{eq:Gp_spinless} G_p(\omega) = \frac{1}{\omega-e^{-\alpha^2}\frac{t^2}{4}G(\omega)}, \end{equation} where $G(\o)$ is the local Green's function of the lattice. In the spinful case a Lang-Firsov Coherent Potential Approximation (LFCPA) can be devised for the resulting HLFA attractive Hubbard model with $U=-2g^2/\omega_0$, giving \begin{equation} \label{eq:Gp_spinful} G_p(\omega) = \frac{1}{2} \left ( \frac{1}{\omega-e^{-\alpha^2}\frac{t^2}{4}G(\omega)-U/2}+ \right. \left. \frac{1}{\omega-e^{-\alpha^2}\frac{t^2}{4}G(\omega)+U/2} \right ). \end{equation} Notice that the theory developed here for the Holstein impurity model differs from that developed directly for the lattice model \cite{Ranninger_spectral}.
In that case an equation identical to (\ref{eq:GreenLF}) is recovered for a band of free electrons, therefore giving a low-energy {\it coherent} polaronic band in the spinless case. It is however easy to show that this form of the spectral function is not compatible with a $k$-independent self-energy. The self-consistency condition (\ref{eq:Gp_spinless}) gives rise to a non-zero damping at the Fermi level even in the spinless case. However, when $\ad$ becomes larger $\alpha$ gets smaller, reproducing the anti-adiabatic coherent behavior at low energy in the spinless case and the negative-$U$ behavior in the spinful case. In the anti-adiabatic regime the HLFA gives an estimate of the MIT, \begin{equation} \label{eq:MIT_LF} \lambda_{MIT}=|U/D|_{MIT}\exp(-\alpha^2) \end{equation} where $|U/D|_{MIT} \simeq 2.94$ is the MIT value for the negative-$U$ Hubbard model \cite{Uc1Uc2neg}. The pairing MIT occurs at smaller $\lambda$ as the adiabatic regime is approached (see Fig. \ref{fig:PD}). The phonon PDF can easily be derived within the LF approach. Since the local electron densities are parametric variables in the anti-adiabatic limit, we find \begin{equation} P(x)=\sum_l w_l P_0(x-x_l) \end{equation} where $w_l$ is the probability of having an occupancy $n_l$, $x_l$ the corresponding displacement, and $P_0(x)$ the ground-state PDF of a harmonic oscillator. $P_0(x-x_l)$ is then the conditional probability of having a displacement $x$ given a definite occupation $n_l$. In the spinless case $n_l=0,1$ with equal probability, giving \begin{equation} \label{eq:PXLFspinless} P(x)=\frac{1}{2} \left ( P_0(x-x_0)+ P_0(x+x_0) \right) \end{equation} where $x_0=\sqrt{\lambda}/2$. A definite polarization can be associated with the ground state if the PDF becomes bimodal. By requiring $d^2 P(x)/d x^2\vert_{x=0} > 0$, which simply means that $x=0$ turns from a maximum into a minimum, we get the usual anti-adiabatic condition for the existence of a polaronic state, i.e., $\alpha^2>1$ (see fig. \ref{fig:PD}). In the spinful case $n_l=0,1,2$ and \begin{equation} \label{eq:PXLFspinful} P(x)= n_d (P_0(x-2x_0)+P_0(x+2x_0))+(1-2n_d)P_0(x) \end{equation} where $n_d= \langle n_\uparrow n_\downarrow\rangle$ is the site double occupancy. It is worth noting that in the insulating state $n_d\simeq1/2$, and the zero-displacement PDF associated with singly occupied sites is depleted. The existence of a definite polarization is now associated with a bipolaronic state. The condition under which (\ref{eq:PXLFspinful}) becomes bimodal is \begin{equation} \exp(-2\alpha^2)(4\alpha^2-1) \ge \frac{1-2n_d}{2n_d}. \end{equation} An estimate for the bipolaronic transition can be obtained by taking $n_d = 1/2$, which gives $\lambda_{pol}=\gamma/2$. The presence of a fraction of singly occupied sites increases the critical value of $\lambda$ (see Fig. \ref{fig:PD}). We notice that the spinful PDF for $n_d = 1/2$ maps onto the spinless one under the {\it same rescaling} (\ref{scaling}) as in the adiabatic regime. \subsection{Results from DMFT-ED} \label{sec:HHFEDresults} The behavior of $P(x)$ and $\rho$ obtained from DMFT-ED, compared with that of the previous approximations, is shown in Figs. \ref{fig:data_adiab} and \ref{fig:data_antiad} for the adiabatic and antiadiabatic regimes respectively. In each diagram the curves have been shifted upward according to the value of the coupling $\lambda$. The values of $\gamma$ and $\lambda$ have been chosen according to the scaling (\ref{scaling}). Let us first discuss the adiabatic regime.
Polaron crossover is seen as a qualitative change in the shape of the phonon PDF, shown in the upper panels of Fig. \ref{fig:data_adiab}, where the BO approximation and the DMFT results are compared. The anharmonicity due to the e-ph interaction increases as the coupling increases, leading first to a non-Gaussian and finally to a bimodal PDF at $\lambda > \lambda_{pol}$. This behavior signals the appearance of static distortions, even if we are neglecting any ordering between them. From Fig. \ref{fig:data_adiab} it is evident that the BO approximation works well in {\it both} the metallic and the polaronic regimes. The reason for the accuracy of the BO procedure in the polaronic regime is that, contrary to its usual implementation at weak e-ph coupling \cite{russianBO}, here we take into account the anharmonicity through Eq. (\ref{eq:BO}) in a non-perturbative way. However, BO does not accurately reproduce the phonon PDF around the polaron crossover. In this case the electron and phonon states are strongly entangled, and cannot be approximated properly by a disentangled BO state. By comparing the spinless and spinful cases in Fig. \ref{fig:data_adiab} we see that the occurrence of the MIT does not much influence the differences between full DMFT and BO, which are in both cases relevant near the polaron crossover. In the lower panels of Fig. \ref{fig:data_adiab} we compare the electronic DOS from ED-DMFT with IBOCPA. The different behavior of the spinless and spinful cases is not so evident here. However, comparing the spinless spectrum with the corresponding spinful one, we see that a quasiparticle peak is present in the former case, while a depletion of the low-energy part is much more evident in the latter. At strong coupling the discretization of the bath inherent in the ED solution of DMFT does not allow us to identify a well-defined quasiparticle peak in the spinless case. A more careful analysis of the quasi-particle spectral weight \cite{Max1} shows however that no pairing MIT occurs in the spinless case. Notice that the IBOCPA seems to be much closer to DMFT-ED in the spinless than in the spinful case, where the CPA approximation for the electronic degrees of freedom is apparently much less adequate. The upper panels of Fig. \ref{fig:data_antiad} show that in the antiadiabatic regime the phonon PDF becomes bimodal only at a very large value of the coupling. The HLFA overestimates polaronicity in the spinless case, while it behaves better in the spinful case. In the lower panels of Fig. \ref{fig:data_antiad} the electron DOS is compared with the results of the HLFA in the antiadiabatic regime. The different behavior of the spinless and spinful cases is marked here by the presence of a pairing MIT in the latter, well before the (bi)polaron crossover. The HLFA correctly catches the gross behavior of the DOS. Notice that at strong coupling the CPA employed to obtain the lower diagrams accurately reproduces both the position and the width of the side bands. The different behavior of the spinless and spinful systems can be easily understood in terms of strong-coupling anti-adiabatic perturbation theory for the original lattice problem \cite{Freericks-strong}, introducing the charge-sector pseudo-spins \cite{Micnas}. In the spinful case, at second order in the hybridization $V_\kvec$, an anisotropic Kondo Hamiltonian can be obtained \cite{Cornaglia-condmat,StJ,PCarta}. The anisotropic Kondo couplings (see also Eq.
(8) of Ref. \cite{Cornaglia}) read \begin{equation} J_{\parallel,\perp} = \frac{8V^2}{D}\sum_m (\pm)^m \frac{e^{-\alpha^2}\alpha^{2m}}{m!(m\ad+\lambda/2)}, \end{equation} where the $+$ ($-$) sign is taken for the $\parallel$ ($\perp$) coupling and we have assumed for simplicity $V=\sum_k V_k$. In the spinless case the processes leading to $J_{\perp}$ do not exist, while the remaining $J_{\parallel}$ is solely associated with charge fluctuations. As opposed to the charge Kondo effect of the spinful case, no Kondo effect is expected for spinless fermions. A strong-coupling estimate of $J_{\parallel}$ gives \begin{equation} J_{\parallel}\simeq \frac{8V^2}{D\lambda} \end{equation} and $J_{\perp}/J_{\parallel} \propto \exp(-2\alpha^2)$, which means an exponential suppression of the superconducting with respect to the charge correlations at strong coupling, due to retardation effects \cite{Cornaglia-condmat,StJ}. In the anti-adiabatic limit $\ad\rightarrow \infty$, instead, the Kondo couplings become isotropic. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4,angle=270]{data_adiab.eps} \end{center} \caption{DMFT data in the adiabatic regime: $\ad=0.1$, spinless (panels on the left) and $\ad=0.2$, spinful (panels on the right). The various curves refer to different values of $\lambda$, spanning from $0.1$ to $1.8$ in the spinless case and from $0.05$ to $1.1$ in the spinful case, and are shifted according to the value of $\lambda$. Upper panels show the phonon PDF, lower panels the electronic DOS.} \label{fig:data_adiab} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4,angle=270]{data_antiad.eps} \end{center} \caption{DMFT data in the antiadiabatic regime: $\ad=2.0$, spinless (panels on the left) and $\ad=4.0$, spinful (panels on the right). The various curves refer to different values of $\lambda$, spanning from $0.4$ to $6.5$ in the spinless case and from $0.2$ to $3.0$ in the spinful case, and are shifted according to the value of $\lambda$. Upper panels show the phonon PDF, lower panels the electronic DOS.} \label{fig:data_antiad} \end{figure} The above observations can be summarized in the phase diagrams of Fig. \ref{fig:PD}. The polaron crossover line $\lambda_{pol}$ is strongly $\ad$-dependent in both cases. Above this line a polaronic (bipolaronic) regime is attained in the spinless (spinful) case. In the spinful case we also have a $\lambda_{MIT}$ line, which separates a normal phase from a paired insulating phase \cite{Max1}. For large phonon frequency we can have pairs without bipolaronic behavior, as can be understood by recalling that in the antiadiabatic limit the Holstein model becomes an attractive Hubbard model, where no polarization is associated with the pairing. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.33,angle=270]{PD_T=0.eps} \end{center} \caption{Phase diagrams of the spinless (left) and spinful (right) $T=0$ Holstein model at half filling. Left panel: the bold line is the polaron crossover from the bimodality of $P(x)$, and the dotted line is the anti-adiabatic estimate $\lambda_{pol}=2\gamma$ for the polaron crossover. Right panel: the bold curve is the bipolaronic MIT from the vanishing of the quasi-particle spectral weight $Z$, the thin solid line the polaron crossover, the bold dotted line the anti-adiabatic prediction for the bipolaronic MIT (Eq.
(\ref{eq:MIT_LF})), and the light dotted line the anti-adiabatic estimate $\lambda > \gamma /2$ for the polaron crossover.} \label{fig:PD} \end{figure} \section{Half filled Holstein model: spinless vs spinful cases at $T>0$} \label{sec:HHFQMC} Using the QMC procedure described in section \ref{sec:DMFT-QMC} we are able to study the normal phase at finite temperature. At fairly high temperature the MIT becomes a crossover, and we are faced with the problem of finding a suitable quantity to locate this crossover unambiguously. In analogy with the phonon PDF used to mark the polaron crossover, we can define the distribution of a quantity that locates the pairing crossover. Let us define the distribution of the center of mass $X_c$ (``centroid'') of the phonon path \begin{equation} \label{PXc} P(X_c)= \left \langle \delta(X_c-\frac{1}{\beta}\int_0^\beta x(\tau)d\tau) \right \rangle \end{equation} where the averages are evaluated over the action (\ref{Sel}-\ref{Selph}). In the same formalism the phonon PDF is the distribution of the endpoint $x=x(0)=x(\beta)$. The meaning of the centroid variable $X_c$ has been discussed in \cite{Kleinert} for a single particle in a binding potential. There the variable $X$ represents the fluctuating position of the particle, and $X_c$ its classical position \cite{Kleinert}. For a heavy particle the classical limit holds, so that $P(X)$ and $P(X_c)$ coincide \cite{Kleinert}. The lighter the particle, the broader the wave function, increasing the variance of $P(X)$, while $P(X_c)$ turns out to be essentially determined by the binding range of the potential. Here we use $P(X_c)$ for the many-body problem, and propose that pairing can be associated with a multimodal behavior of $P(X_c)$, which takes place at a given value of the coupling $\lambda_{pair}$ \cite{centroide}. The ability of our estimator to determine the pairing crossover can be understood by inspecting the interaction term (\ref{Selph}). In the adiabatic limit ($\ad \rightarrow 0$) the kinetic term forces the phonon path to be $\tau$-independent and the phonon field becomes classical. In this limit $X_c$ is equal to $X$, and $P(X)$ and $P(X_c)$ obviously coincide. Thus the centroid distribution becomes bimodal when the system is polarized, i.e., $\lambda_{pair}=\lambda_{pol}$, which is exactly what one expects, since a static field can induce pairing only with a finite polarization. Notice that in this sense the bimodality of $P(X_c)$ is a {\it precursor} of the actual pairing MIT, which occurs, at $T=0$ and $\omega_0=0$, at a {\it larger} value of the coupling \cite{Millis-adiab} (see fig. \ref{fig:PD}). On the other hand, in the opposite atomic ($D \rightarrow 0$, $\ad \rightarrow \infty $) limit the electron density becomes a constant of motion. Therefore Eq. (\ref{Selph}) takes the transparent form $\sqrt{\lambda} (n-1) \int_0^\beta d\tau\, x(\tau) = \sqrt{\lambda} (n-1) \beta X_c$, where the electron density is directly coupled to the centroid $X_c$. The average appearing in Eq. (\ref{PXc}) is readily carried out, giving \begin{equation} \label{PXcAtomic} P(X_c) \propto \exp \left[-\beta \left(\frac{X_c^2}{2}-\frac{1}{\beta}\log \left(2 + 2\cosh \left(\beta\sqrt{\lambda}X_c\right)\right)\right) \right] \end{equation} which becomes bimodal for $\lambda>\lambda_{pair}=2T$. This is exactly the scale at which double occupancies start to proliferate in the atomic limit. Therefore the bimodality of $P(X_c)$ correctly signals the onset of pairing also in the antiadiabatic regime.
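This threshold is easily checked numerically from Eq. (\ref{PXcAtomic}); the following minimal sketch (with an assumed temperature) evaluates the centroid distribution just below and just above $\lambda_{pair}=2T$ and counts its local maxima:

```python
import numpy as np

def V_c(X, lam, T):
    """Effective centroid potential from Eq. (PXcAtomic), atomic limit."""
    return X**2 / 2.0 - T * np.log(2.0 + 2.0 * np.cosh(np.sqrt(lam) * X / T))

T = 0.05                                    # assumed temperature
X = np.linspace(-2.0, 2.0, 4001)
for lam in (0.05, 0.15):                    # below and above lambda_pair = 2T = 0.1
    P = np.exp(-V_c(X, lam, T) / T)
    P /= np.trapz(P, X)
    peaks = ((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])).sum()
    print(f"lambda = {lam}: {'bimodal' if peaks > 1 else 'unimodal'}")
```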
In the same limit, it can be proved that the endpoint distribution $P(X)$ has a variance which scales with $1/\sqrt{\Delta \tau}$, and as a consequence no definite polarization may occur. We finally notice that the $D \rightarrow 0$ limit of the adiabatic $P(x)$ \cite{Millis-adiab} coincides with the $P(X_c)$ of Eq. (\ref{PXcAtomic}). Since for $\omega_0=0$ the distributions of $X$ and $X_c$ coincide, we conclude that in the atomic limit $P(X_c)$ is the same for $\omega_0=0$ and $\omega_0=\infty$. This suggests that the pairing crossover may depend on $\omega_0$ more weakly than the polarization one. \subsection{Results from DMFT-QMC} To analyze the evolution of $P(X)$ and $P(X_c)$ at finite $D$ and $\omega_0$ we use DMFT-QMC. The numerically exact results, shown in Fig.\ref{PD}, clearly show that $P(X)$ and $P(X_c)$ tend to coincide in the relatively adiabatic case $\ad = 0.1$, as expected from the previous arguments about the adiabatic limit. The two quantities are clearly different for $\ad = 1$ and $8$. For temperatures smaller than $\omega_0$, the polarization crossover $\lambda_{pol}$ moves to larger values as $\ad $ is increased, while the line ($\lambda_{pair}$) where $P(X_c)$ becomes bimodal is only slightly shifted to larger couplings with increasing $\ad$. This is strongly reminiscent of the behavior of the metal-insulator transition in the Holstein model at $T=0$, whose critical coupling increases slowly with $\ad$ and then saturates to the asymptotic $\ad=\infty$ value\cite{Max1}. The polarization crossover is instead roughly proportional to $\ad$\cite{Max1}. Both at zero and at finite temperature the line where the centroid becomes bimodal does not coincide with the metal-insulator line, but it can be considered a precursor which depends in a very similar way on $\ad$ and on $\lambda$. \begin{figure}[htbp] \begin{center} \includegraphics[width=6.5cm,angle=270]{PD_T.eps} \end{center} \caption{$\lambda_{pair}$ and $\lambda_{pol}$ at $\ad=8$. The dashed line represents the $\ad = 0$ result, where $\lambda_{pair}=\lambda_{pol}$. The dashed arrow indicates the zero-temperature result for the polaron crossover $\lambda_{pol}$ for $\gamma=8$.} \label{PD} \end{figure} Our DMFT results can also be compared with the semi-analytical results for $\ad=0$ \cite{Millis-adiab} (thin dashed line in Fig.\ref{PD}). The $\ad=0.1$ case is in very good agreement with the adiabatic result, and also the $\ad=1$ and $8$ cases, at high temperature, fall on the same curve. The centroid distribution depends weakly on $\ad$, as suggested by the atomic limit. Interestingly, the adiabatic result displays a re-entrance at low temperatures, i.e., a non-monotonic behavior of $\lambda_{pair}$ and $\lambda_{pol}$ as functions of temperature. Although the QMC simulations do not reach sufficiently low temperatures, we find that the re-entrance is present also for $\ad$ different from zero, as indicated by the arrows in Fig.\ref{PD}, which mark the $T=0$ results for the polarization crossover of the Holstein model\cite{Max1}. \section{Conclusions} The normal-state properties of strongly coupled electron-phonon systems can be drawn from the comparison of the electronic spectral properties and the phonon PDF. Qualitative changes in the phonon PDF signal a polaronic crossover, while an electronic MIT can be seen from a gap in the electronic DOS or from the vanishing of the quasi-particle spectral weight. This last transition can be observed only at sufficiently high density, provided the Coulomb repulsion is neglected.
In the limit of low density the carriers show polaronic behavior through the development of a definite polarization in the phonon PDF, while at large densities occupied and empty sites make the phonon PDF bimodal. At the polaron crossover the fluctuations of the phonon coordinates tend to be larger than at any other coupling. In this regime the Born-Oppenheimer approximation is shown to fail even when the phonon frequency is much smaller than the electron bandwidth. For a single hole in the $t-J$ model a further source of localization is the magnetic superexchange energy, which tends to confine the spreading of the spin defect created by the presence of the hole. At non-zero temperature the pairing MIT becomes a crossover. To locate this crossover it is possible to define a suitable distribution associated with the classical position of the phonon, which is coupled to the electron density in the non-adiabatic regime. There are several limitations in the DMFT method used to obtain the present results. The biggest one is the non-dispersive nature of the bosonic excitations. In the $t-J$ case the absence of spin-wave dispersion leads to localization of the single hole. However, local properties seem to be quite well represented by DMFT when compared with those found in finite dimensions \cite{MishchenkoVarenna}. In the half-filled Holstein model case bipolarons do not have coherent motion; therefore the vanishing of the single-particle spectral weight implies a MIT. Several extensions of DMFT, such as Extended-DMFT for the $t-J$ model \cite{EDMFT} or CDMFT for the Hubbard model \cite{CDMFT}, try to overcome such limitations. Work along these lines for the Holstein and Holstein-Hubbard models is currently in progress.
\subsection{Figure caption} \textbf{Fig. 1} - AFM topographies ($1.0$ $\operatorname{\mu m}$ $\times$ $0.5$ $\operatorname{\mu m}$) for: $1.54$ ML (a), $1.57$ ML (b), and $1.64$ ML (c) of InAs coverage. Panel (d) shows the dependence of the number density of small and large QDs on InAs coverage. \textbf{Fig. 2} - Scaled distributions of the experimental island volume for: small QDs (a), large QDs in the range $1.54-1.57$ ML of InAs coverage (b), large QDs in the range $1.59-1.82$ ML of InAs coverage (c). Solid lines in panel (c) show the theoretical scaling function for $i=1,2,3$. Panel (d) shows the average of the experimental distributions of panel (c) compared to the theoretical scaling function for $i=2$. \textbf{Fig. 3} - (a) Total volume $V_{large}^{T}$ of large QDs plotted as a function of InAs coverage. The lowest line indicates the InAs flux ($F_{o}$) above the 2D-3D transition. The volume increase in the range between $1.6-1.8$ ML is accounted for by the effective flux $F$. (b) Derivative terms $\frac{d\rho_{large}}{d\Theta}\langle V_{large}\rangle$ and $\rho_{large}\frac{d\langle V_{large}\rangle}{d\Theta}$ of $V_{large}^{T}$ plotted as a function of InAs coverage. \end{document}
\section{Introduction} The IRC system UY Aur was first reported as a visual double star by Joy \& van Biesbroeck (1944). Later, based on an infrared speckle study, Ghez, Neugebauer \& Matthews (1993) and Leinert et al. (1993) confirmed that UY Aur is a binary system. Recently, Hartigan \& Kenyon (2003) reported the main properties of a sample of subarcsecond binaries in the Taurus-Auriga cloud based on HST spectra. They report UY Aur as a binary system composed of two classical T Tauri stars of spectral types M0 and M2.5 for the primary (UY Aur A) and the secondary (UY Aur B), respectively, separated by 0$\rlap.{''}89$ ($\sim$ 125 AU at 140 pc). The system has been studied in detail in the infrared at $J$, $H$, and $K'$ by Close et al. (1998). Using infrared adaptive optics, Close et al. detected a circumbinary disk of $\sim$ 500 AU radius. In order to reproduce the spectral energy distribution of UY Aur A and B, they include in their models small inner disks around each star. The derived radii of the circumstellar disks are about 10 and 5 AU for components A and B, respectively. The Close et al. images also suggest that both inner disks are being fed by the outer circumbinary disk through thin streamers of material. In the millimeter region, both line emission ($^{13}$CO) and continuum emission at 2.7 and 1.3 mm were reported by Duvert et al. (1998). They have imaged the emission from the circumbinary disk in the $^{13}$CO $J=1 \rightarrow 0$ and $J=2 \rightarrow 1$ transitions. Their spectral line observations agree well with the infrared adaptive optics circumbinary disk reported by Close et al., not only in position but also in extent. Regarding the suggested small circumstellar disks, Duvert et al. proposed that the 2.7 and 1.3 mm continuum emission can be attributed to partially resolved circumstellar disks around each star, with some possible contribution of free-free radiation. In this work we report the first detection of centimetric emission at the position of UY Aur. We conclude that our 3.6 cm continuum detection is associated with the UY Aur system and we discuss its possible origin. \section{Observations} \begin{figure}[!t]\centering \begin{center} \includegraphics[width=18pc]{f1.ps} \caption{CLEANed natural-weight map at 3.6 cm. The map shows the two sources detected in the UY Aur region. The peak of Source 1 is located very close to the UY Aur position, and this source is suggested to be associated with the binary system. Contours are $-2, 2, 4, 6$ times 16 $\mu$Jy beam$^{-1}$.} \end{center} \end{figure} \begin{table}[!b] \begin{center} \small \caption{Source Parameters} \begin{tabular}{lccc} \hline \\[-1ex] Source & $\alpha$(2000) & $\delta$(2000) & $S_{3.6cm}$ \\ &h m s & $^{\circ}$ $'$ $''$ & [mJy] \\[2ex] \hline \\[-2ex] 1 & 04 51 47.37 & 30 47 13.3 & 0.12$\pm$0.03 \\[0.7ex] 2 & 04 51 51.93 & 30 47 00.4 & 0.11$\pm$0.03 \\[2ex] \hline \end{tabular} \end{center} \hspace{0.1cm} {{\scshape Note}.$-$ Absolute position errors are $\sim 0{\rlap.}{''}2$.} \end{table} Our 3.6 cm observations were made with the Very Large Array (VLA) of the NRAO\footnote{The National Radio Astronomy Observatory is operated by Associated Universities Inc. under cooperative agreement with the National Science Foundation} on 2002 October 9th. The array was in the C configuration, giving an angular resolution of $\sim 2\rlap{.}{''}3$; a total on-source integration time of $\sim 51$ minutes was obtained. The amplitude and phase calibrators were 0137+331 and 0443+346, respectively.
The bootstrapped flux density for 0443+346 was 0.615 $\pm 0.001$ Jy. The data reduction was performed using the Astronomical Image Processing System (AIPS) software of the NRAO. We have followed standard VLA procedures for editing, calibrating and imaging. Figure 1 shows a natural-weight CLEANed map of the UY Aur region. In this map two radio sources were detected at a 6-$\sigma$ level. We will refer to these sources as Sources 1 and 2. The peak of Source 1 is located very close to the UY Aur position. Flux densities and source positions were obtained using the AIPS IMFIT procedure. The apparent elongation of Source 1 is not real but is an effect of the beam deconvolution; indeed, the position angle of both Source 1 and the beam is the same: P.A.$= 177 ^\circ$. Besides, the small structures present in Source 1 (Fig. 2) are not reliable, since they are just at the 2$\sigma$ level above the rms noise. Thus, since neither Source 1 nor Source 2 is spatially resolved, we have only determined integrated flux densities and source positions (see Table 1) from a 2D-Gaussian fit to each source. \section{Discussion} \begin{figure*}[!t] \begin{center} \includegraphics[width=27pc,height=37pc,angle=-90]{f2.ps} \caption{This map shows an enlarged image of Fig. 1 around UY Aur. Crosses indicate the location of the UY Aur components corrected for precession and proper motion. The inset shows the least-squares fit of the spectral index of the cm and mm emission. Contours as in Fig. 1.} \end{center} \end{figure*} Both continuum and line emission are associated with the UY Aur system. Based on their continuum emission at 1.3 and 2.7 mm, Duvert et al. (1998) found an unusual spectral index of 1.6$\pm$0.2, which is lower than the one expected for thermal dust emission, $\sim$2.5 (Dutrey et al. 1996). Duvert et al. suggest that this low spectral index is the result of a combination of two independent sources: one emitting free-free radiation (showing a flat spectrum) and a second source showing normal dust emission (with a steep positive index). They conclude that the free-free source should be located nearly coincident with or slightly north-east of the primary and propose centimetric observations to confirm this hypothesis. In this work, we have observed the UY Aur system as part of a radio survey of IRCs. We detected two 3.6 cm continuum sources, Sources 1 and 2 (see Fig. 1), at a 6$-\sigma$ level in the UY Aur region. On one hand, Source 2 does not have a known counterpart at any other wavelength. The {\em a priori} probability of finding a 3.6 cm source with a flux density of $\geq$0.11 mJy in a region of 2$'$ by 2$'$ is $\sim$ 0.16 (Windhorst et al. 1993). It is then quite likely that Source 2 is just a background source. On the other hand, Source 1 is located very close to the reported position of UY Aur (Herbig \& Bell 1988, hereafter HBC). We therefore propose that our centimeter source is related to the binary system. In order to confirm that our detected emission is associated with it, we have precessed the position of UY Aur and corrected it for proper motion according to Jones \& Herbig (1979). The resulting position for UY Aur coincides with that of our Source 1 to within $0\rlap.{''}34$, which according to Duvert et al. (1998) is less than the 1-2$\sigma$ uncertainty in the optical position and proper motion. Since the main component of the binary (UY Aur A) is the brightest star in the system at optical and infrared wavelengths, we have assumed that the coordinates given in the HBC catalog belong to UY Aur A.
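Positional offsets such as the $0\rlap.{''}34$ quoted above are straightforward to reproduce; the sketch below computes the angular separation between Source 1 (Table 1) and a nominal corrected position of UY Aur, where the corrected coordinates are illustrative placeholders and not the actual Jones \& Herbig (1979) values:
\begin{verbatim}
import math

def to_deg(h_or_d, m, s, ra=True):
    # Sexagesimal to degrees; RA hours carry a factor of 15.
    # Assumes a positive declination, as for UY Aur.
    deg = h_or_d + m / 60.0 + s / 3600.0
    return 15.0 * deg if ra else deg

# Source 1 (Table 1)
ra1, dec1 = to_deg(4, 51, 47.37), to_deg(30, 47, 13.3, ra=False)
# Placeholder precessed/proper-motion-corrected UY Aur position
ra2, dec2 = to_deg(4, 51, 47.39), to_deg(30, 47, 13.0, ra=False)

# Small-angle separation in arcsec (cos(dec) factor on the RA offset)
dra = (ra1 - ra2) * math.cos(math.radians(dec1)) * 3600.0
ddec = (dec1 - dec2) * 3600.0
print("separation = %.2f arcsec" % math.hypot(dra, ddec))
\end{verbatim}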
Once the position of UY Aur A is fixed, we have derived the second component position relative to it (see Fig. 2) by taking a binary separation of $0{\rlap.}{''}894$ and a position angle of $228.8^{\circ}$ (Brandeker, Jayawardhana \& Najita 2003). On the one hand, our centimeter detection coincides with the position of UY Aur to within $0{\rlap.}{''}34$, and on the other hand it is consistent with the peak positions of the 1.3 and 2.7 mm emission reported by Duvert et al. to within $0\rlap.{''}2$. Besides, although our detection is faint, its flux is consistent with the lowest centimetric emission, 0.1 mJy, present in almost all outflow sources (Reipurth et al. 2004). Therefore, we conclude that our detected 3.6 cm emission is associated with the UY Aur binary system. Regarding the spectral index, we have obtained a least-squares fit to the millimeter and centimeter fluxes. The resulting spectral index, $\alpha = 1.66$, is consistent with that reported by Duvert et al. They have suggested that this low value may be a combination of normal dust emission and free-free radiation from a stellar wind or a jet. However, since both emissions follow a power-law distribution (with index $\sim$2 for dust radiation and 0.6 for a stellar wind), it is not possible to sum them and still obtain a single power law over such a wide wavelength range without a significant bend. Thus, a satisfactory fit to the observations as the result of combining two such distributions could not be obtained. Although the 3.6 cm flux may originate in free-free radiation, its low value clearly demonstrates that free-free emission is not contaminating the mm flux and does not explain the mm index. Hence, the fact that the same low index is maintained over a large wavelength range might be fortuitous, or the emission might be entirely due to thermal dust. Duvert et al. (1998) proposed that the millimeter emission originates in the circumstellar region, specifically in the small circumstellar disks, one around each component of the binary. The low spectral index in this region could be explained by the circumstellar flat-disk models of D'Alessio, Calvet \& Hartmann (2001), which show that disks whose spectral index is smaller than 2 are flat, with the observed millimeter emission due to optically thick and cold material. But what about the origin of our centimeter emission? The centimeter emission might originate in a stellar wind or an ionized jet, as Duvert et al. (1998) have suggested. This last possibility might be supported by the kinematic study of Hirth et al. (1994), who deduce the existence of a bipolar, high-velocity flow at a P.A.= 40$^{\circ}$ (P.A.= 220$^{\circ}$) associated with UY Aur. Our 3.6 cm emission may then be related to this outflow. If the 3.6 cm flux is due to free-free emission, from our single-wavelength observation it is not possible to distinguish between emission from a jet and a stellar wind. However, since our radio detection falls along the same power-law distribution obtained from the millimeter observations (see inset of Fig.~2), we cannot discard the possibility that it might be the long-wavelength continuation of the thermal dust emission. Further observations at higher resolution and additional wavelengths are needed. \section{Summary} We report for the first time VLA continuum emission at 3.6 cm associated with the binary IRC system UY Aur. Surprisingly, our centimetric emission closely follows the low spectral index obtained in the millimeter region.
This low index might be explained by a flat, optically thick and cold circumstellar disk, in this case one or both of the two small circumstellar disks. On the other hand, if the 3.6 cm emission is due to free-free radiation, it may be related to the bipolar outflow reported by Hirth et al. (1994). However, from our single observation it is not possible to distinguish between free-free emission from a jet and from a stellar wind. Radio centimeter observations at other wavelengths and/or with higher resolution are required to further clarify the origin of the radio continuum emission. \acknowledgments We thank Luis F. Rodr\'\i guez and Paola D'Alessio for their valuable comments on this work. We acknowledge financial support from DGAPA-PAPIIT and CONACyT-Ciencias B\'asicas. F.P.W. was also supported by the NSF International Researchers Fellowship Program.
\section{Conclusions} \label{sec:conclusions} We have proposed a novel technique (USM-Nets), based on ANNs, to build data-driven surrogate models that approximate the solution of differential equations while accounting for the dependence on both scalar physical parameters and the domain geometry. Our method is non-intrusive, as it does not require the knowledge of the FOM equations, but rather it is trained with samples of precomputed solution snapshots obtained for different parameters and geometries. It is also meshless, since the USM-Net learns the map from point coordinates to the solution. To characterize the geometrical features of the domain at hand, we consider a set of geometrical landmarks defined by the user. Our method is highly flexible, as it does not pose specific requirements on the definition of these landmarks, making it suitable for practical applications and significantly easing its technological or clinical translation. We have then presented an enhanced version of our surrogate modeling method, based on a UC system employed to pre-process the physical coordinates. As shown by our numerical results, using this UC system enhances the generalization accuracy in some cases. We have finally presented two test cases in fluid dynamics. The first is a lid-driven cavity problem with variable geometry and variable Reynolds number; the second one consists in predicting the steady-state pressure and velocity field within a coronary bifurcation, given the patient geometry. In both test cases, despite the noticeable variability of the physical and/or geometrical parameters, USM-Nets were able to approximate the solution with an error of the order of $10^{-2}$, being trained on only a few hundred solution snapshots. \section{Discussion} \label{sec:discussion} We have introduced USM-Nets, a class of deep-learning surrogate models capable of learning the solution manifold of a PDE universally with respect to physical parameters and geometry. The ability of a surrogate model to capture the geometrical variability of the solution of a differential problem is a feature of great interest. Indeed, many applications require considering the solution of a physical problem in different domains. Biomedicine offers several examples in this regard, since each patient presents a different geometry, and in many cases (as in hemodynamics) the geometry itself is the principal determinant of the solution. Examples are given by the blood flow in an aneurysm, in a stenotic artery, or through an artificial valve. Nevertheless, most of the reduced-order/surrogate models available in the literature consider a fixed domain, accounting only for the variability of physical parameters \cite{benner2005dimension,antoulas2000survey,lassila2014model,benner2015survey,peherstorfer2015dynamic}. A few models rely on parametrized shape models that guarantee correspondence between points coming from different shapes, enabling the construction of projection-based models. As a matter of fact, representing a solution manifold in variable geometries is an arduous task. There are two main difficulties in this regard: (1) how to encode the properties of the geometry at hand and (2) how to construct a discrete representation of the solution that is universal with respect to the shape of the domain.
Concerning point (1), USM-Nets only require the definition of a finite set of scalar quantities, called geometrical landmarks, that characterize the salient properties of the geometry at hand. Landmarks make USM-Nets an extremely flexible technique that can address a wide range of real-world applications. There are different approaches to landmark definition, such as those based on the statistical analysis of sampled geometries (e.g., the first coefficients of a proper orthogonal decomposition (POD)) or on the positions of control points. However, approaches combining POD and ANNs \cite{hesthaven2018non,carlberg2019recovering,dal2020data,o2021adaptive} might present difficulties in the construction of the database, as well as limited generalization properties, imposed by both the shape model (which encodes the correspondence between points belonging to different geometries) and the truncation of the expansion. Landmarks could simply be the coordinates of some points that characterize the geometry at hand. This case, specifically, is well suited for clinical applications. Landmarks such as the coordinates of a bifurcation, the position of an inlet, diameters, or areas can be extracted directly from medical images, without the need for segmentation and the generation of computational grids. The great flexibility of USM-Nets lies in the fact that no structural requirements are imposed on the definition of landmarks. Concerning point (2), a key feature of USM-Nets is their mesh-less nature, which frees them from a predetermined triangulation of the domain, overcoming the technical difficulties related to mesh element deformations. The mesh-less nature of USM-Nets is achieved by their architectural design. Unlike many existing surrogate modeling methods, which provide a map from the problem parameters to a set of degrees of freedom associated with a preconstructed parametrization of the solution trial manifold, the output of USM-Nets is the solution itself evaluated at a query point. Indeed, for a fixed parameter vector $\boldsymbol{\mu}_p$ and a fixed geometry $\Omega$, the USM-Net is a function from $\mathbb{R}^{d}$ to $\mathbb{R}^{k}$ that approximates the solution $\mathbf{u}(\cdot; \boldsymbol{\mu}_p, \Omega)$. Hence, instead of passing through a parametrization of the approximate solution, we make the ANN \textit{coincide} with the approximate solution. A further advantage of this architectural design is that USM-Nets encode by construction the spatial correlation (that is, with respect to the input $\mathbf{x}$) and do not need to learn it, thus achieving high accuracy even with lightweight networks. We have then presented an enhanced version of PC-USM-Nets, called UC-USM-Nets, based on a universal coordinate system. Even if it is not straightforward in all practical cases to define a UC system (such as when the domain may vary in topology), the use of a UC system can improve the generalization accuracy of PC-USM-Nets, as shown by the numerical results. A UC system acts at two levels. Firstly, it allows us to partially compensate for the possible non-exhaustiveness of the geometrical landmarks in describing the geometry (see also point (1) of the discussion). In Test Case 2, for example, in the setting with only six landmarks, we can have two geometries $\Omega^1$ and $\Omega^2$ that differ from each other even though they have the same landmarks (${P_g}(\Omega^1) = {P_g}(\Omega^2)$).
In boundary areas far from the landmarks, the PC-USM-Net might fail to satisfy the no-slip condition; the UC-USM-Net, on the other hand, yields a more accurate solution, because the UC system \textit{informs} the model of the position of the boundary in the geometry at hand. Furthermore, regarding point (2), a UC system provides a more effective representation of the solution manifold. In fact, in this case, the FCNN does not receive as input the coordinates $\mathbf{x} \in \Omega$, but rather $\ptRef \in \widehat{\Omega}$, which are more informative of the \textit{role} that each point plays within the specific domain. \section{Introduction} \label{sec:intro} Models and methods in scientific computing and machine learning enable the extraction of relevant knowledge from available data \cite{alber2019integrating}. Data can represent, e.g., physical coefficients, geometrical factors, boundary or initial conditions. A remarkable instance is computational fluid dynamics, especially when addressing problems in aerodynamics, such as the design of vehicles, and in biomedical engineering, such as the patient-specific analysis of blood flow dynamics \cite{anderson1995computational,parolini2005mathematical,formaggia2010cardiovascular,brunton2020machine}. The standard approach to modeling and simulation (Fig.~\ref{fig:approach_comparison}, top) requires the construction of a computational mesh, a partition of the given geometry into simple elements, e.g. tetrahedral or hexahedral cells. In biomedical applications, the construction of the computational mesh requires a preliminary step of segmentation, i.e. the extraction of the boundaries of the computational domain from medical images, such as those derived from magnetic resonance or computerized tomography. Then, discretization methods (like Finite Element and Finite Difference methods \cite{quarteroni2017numerical}) assemble on the elements of the mesh a suitable approximation of the operators associated with the partial differential equations (PDEs). Unfortunately, changes in shape require the re-execution of the entire process, necessitating the reallocation of significant computational resources. \begin{figure*} \centering \includegraphics[width=\textwidth]{img/introduction_scheme_compress.pdf}\\ \caption{Comparison between the standard modeling and simulation approach (top) and the USM-Nets approach (bottom) for a use case of clinical interest, namely the prediction of blood flow and pressure within a coronary bifurcation. The former approach requires a geometric preprocessing phase, which consists in segmenting the patient's clinical images to extract a computational domain $\Omega_h$. A partition of the latter into a set of cells (in the figure, into triangles) constitutes the computational mesh $\mathcal{T}_h$. The numerical solution is then obtained by the discretization of the Navier-Stokes equations on the computational mesh $\mathcal{T}_h$, solved through suitable computer-based algorithms. Our proposed USM-Net approach lightens both the geometric preprocessing and the solution approximation steps. The former consists solely of landmark extraction and the generation of a (mesh-less) cloud of query points. Finally, the solution is obtained by evaluating the neural network as many times as the number of query points. The USM-Net can be trained either from experimentally measured data or from synthetic data generated using the Navier-Stokes solver.
} \label{fig:approach_comparison} \end{figure*} For this reason, some computational approaches, like isogeometric analysis (IGA) and shape models, are designed to avoid the generation of a new computational mesh whenever the geometry changes. IGA achieves this thanks to the use of non-uniform rational B-splines (NURBS) that exactly match the CAD geometries usually adopted in an industrial context~\cite{hughes2005isogeometric}. Similarly, geometrical shape models describe the possible variations of geometry through a limited number of parameters. Among the most widespread shape models we recall free-form deformation (FFD)~\cite{sederberg1986free,lamousin1994nurbs}, radial basis functions (RBF)~\cite{buhmann2000radial}, and statistical shape models \cite{heimann2009statistical}, based e.g. on principal component analysis (PCA) \cite{jolliffe2016principal}. First conceived in computer graphics, FFD is a technique that surrounds the object with a lattice of control points, whose motion drives the deformation of all the points of the object through a polynomial interpolation. RBFs as well define a parametrized map describing the deformations of the geometry from a small number of selected control points. Compared to the FFD approach, the positions of the control points are not constrained to a lattice but are selected by the user from application to application. Finally, PCA-based models describe a collection of similar geometries through weighted modes that describe the principal geometrical variations with respect to a mean shape. These approaches enable the construction of reduced-order models (ROMs) for problems with parameterized geometry, which provide a computationally efficient approximation of the solution for many different choices of the geometrical parameters. In this context, empirical interpolation methods build efficient approximations of differential operators, avoiding computationally costly reassembly. Among the main applications of parametrized ROMs \cite{quarteroni2011certified,carlberg2011efficient,quarteroni2015reduced,hesthaven2016certified,peherstorfer2016data,taira2017modal}, we mention those related to computational fluid dynamics using FFD \cite{samareh2004aerodynamic,lassila2010parametric}, RBF \cite{morris2008cfd,rendall2008unified,manzoni2012model} and PCA \cite{sangalli2009case}. Besides introducing a geometrical error, the critical aspect of shape models is that, when deforming a fixed mesh, the elements may undergo deformations so large that the discretized problem becomes ill-conditioned. This might considerably limit the ability of the method to explore the geometric variability of the problem. To overcome the limitations of mesh-based approaches, mesh-free and particle methods construct a discretization of the geometry formed solely by a collection of points, relaxing the constraints given by the connectivity \cite{li2002meshfree}. These methods bring numerous advantages in terms of managing geometrical accuracy (even with discontinuities), imposing large deformations, adaptive refinement, and code parallelization. However, they generate full-order models (FOMs) with limited computational efficiency. In this paper, we introduce a new surrogate modeling technique based on artificial neural networks (ANNs), which can learn the solution manifold of a given parametrized PDE. Notably, these surrogate models are not tailored to a given domain, but account for the influence of geometry on the solution.
We leverage the capability of ANNs to approximate arbitrarily complex functions with an inexpensive output evaluation; ANNs are indeed \textit{universal} approximators of several families of functions, including continuous functions \cite{cybenko1989approximation} and Sobolev spaces \cite{mhaskar_neural_1996,montanelli2019new}. Based on these results, the surrogate models we propose are, in principle, able to approximate the solution manifold of a given differential problem with arbitrary accuracy and universally with respect to the variation of the domain geometry and of the physical parameters. For this reason, we name them \textit{Universal Solution Manifold Networks} (USM-Nets). We train these networks using subsamples of solution snapshots obtained by varying the physical parameters and the domain geometry (either synthetically generated through a FOM or experimentally collected). Therefore, the accuracy of the predictions of a specific USM-Net directly depends on the richness of the training set used. However, the architecture we propose has the potential to accurately approximate the solution universally with respect to the domain and the physical parameters, which is not the case for methods that are constrained to a predetermined parameterization of the solution manifold. The design of a USM-Net avoids complex geometrical preprocessing (comprising segmentation and construction of the computational mesh; see Fig.~\ref{fig:approach_comparison}). We encode geometrical variability in a few \textit{geometrical landmarks}, a finite number of scalar indicators, inexpensively extracted from the input image. In the most basic case, landmarks are the coordinates of specific reference points where we impose a correspondence between geometries. Landmarks might identify inlets, bifurcations, or other specific structures, and they provide, together with the physical parameters, the input to the ANN. Our approach is modular, thanks to the possibility of arbitrarily defining the loss function. Besides penalizing the misfit with the available data, during the training of the network we can enforce assumptions of regularity (imposing, e.g., a penalization of the weights), the initial or boundary conditions, or the fulfillment of an equation in a strong (differential) form. The latter would represent a generalization of so-called physics-informed neural networks \cite{raissi2019physics,raissi2020hidden,regazzoni2021physics}, avoiding new executions of the training process for each geometry variation. The outline of this paper is as follows. First, in Sec.~\ref{sec:methods} we present the notation used throughout this paper and the proposed methods. Then, in Sec.~\ref{sec:test-cases} we present two test cases and we describe how our proposed methods can be applied to them. In Sec.~\ref{sec:results} we present the numerical results and in Sec.~\ref{sec:conclusions}, finally, we draw our conclusions. \section{Methods} \label{sec:methods} In this section, we first introduce the notation used throughout this paper. Then, we present our proposed USM-Net method. \subsection{Problem setting} We consider a space-dependent physical quantity $\mathbf{u}(\mathbf{x})$, defined on a domain $\Omega \subset \mathbb{R}^d$, where typically $d=2, 3$. For the sake of generality, we denote $\mathbf{u}(\mathbf{x}) \in \mathbb{R}^{k}$ as a vector field, even though in certain cases it could be a scalar field (that is, $k=1$).
For example, $\mathbf{u}(\mathbf{x})$ may correspond to the blood velocity field in a vessel, the displacement field of soft tissue, or the pressure field of air around a body. Very often, the quantity $\mathbf{u}(\mathbf{x})$ depends on a set of physical parameters $\boldsymbol{\mu}_p \in \mathcal{P}$, where $\mathcal{P} \subset \mathbb{R}^{{n_p}}$ is the parameter space. The parameters $\boldsymbol{\mu}_p$ characterize the physical processes that determine $\mathbf{u}$. For example, the velocity of a fluid depends on its viscosity, while the displacement of biological tissue depends on its stiffness moduli and the applied force. Furthermore, the field $\mathbf{u}(\mathbf{x})$ depends on the domain $\Omega$ itself. In many practical applications, we are interested in the family of fields $\mathbf{u}(\mathbf{x})$ obtained by varying the domain $\Omega$ in a given set, denoted by $\mathcal{G}$: \begin{equation*} \mathcal{G} \subset \{ \Omega \subset \mathbb{R}^d, \text{ open and bounded}\}. \end{equation*} To stress the dependence of $\mathbf{u}(\mathbf{x})$ on both $\boldsymbol{\mu}_p$ and $\Omega$, we will henceforth write $\mathbf{u}(\mathbf{x}; \boldsymbol{\mu}_p, \Omega)$. Often, the physical processes that determine $\mathbf{u}(\mathbf{x}; \boldsymbol{\mu}_p, \Omega)$ can be described by a mathematical model assuming the form of a boundary-value problem, that is \begin{equation} \label{eqn:PDE_generic} \left\{ \begin{aligned} \mathcal{L}(\mathbf{y}, \boldsymbol{\mu}_p) = \mathbf{0} \qquad& \text{for $\mathbf{x} \in \Omega$}, \\ \mathcal{B}(\mathbf{y}, \boldsymbol{\mu}_p) = \mathbf{0} \qquad& \text{for $\mathbf{x} \in \partial\Omega$}, \\ \mathbf{u} = \mathcal{U}(\mathbf{y}, \boldsymbol{\mu}_p) \qquad& \text{for $\mathbf{x} \in \Omega$}, \\ \end{aligned} \right. \end{equation} where $\mathbf{y}$ is the state variable; $\mathcal{L}$ and $\mathcal{B}$ are the operators associated with the differential equation and with the boundary conditions, respectively; $\mathcal{U}$ is the observation operator. Since the state $\mathbf{y}$ is instrumental for obtaining $\mathbf{u}$, in what follows we will refer to $\mathbf{u}$, and not to $\mathbf{y}$, as the \textit{solution} of the FOM. If the differential problem \eqref{eqn:PDE_generic} is well-posed, then, given $\boldsymbol{\mu}_p \in \mathcal{P}$ and $\Omega \in \mathcal{G}$, there exists a unique solution $\mathbf{u}(\cdot; \boldsymbol{\mu}_p, \Omega) \colon \Omega \to \mathbb{R}^{k}$. This solution can be numerically approximated through a FOM based e.g. on Finite Differences or Finite Elements \cite{quarteroni2017numerical}. The goal of this paper is to build an emulator that surrogates the solution map $(\boldsymbol{\mu}_p, \Omega) \mapsto \mathbf{u}(\cdot; \boldsymbol{\mu}_p, \Omega)$, that is a model that, for any given value of the physical parameters $\boldsymbol{\mu}_p$ and any given geometry $\Omega$, provides an approximation of the corresponding solution $\mathbf{u}$. \subsection{USM-Net} A USM-Net is an ANN-based model, trained on a data-driven basis, that surrogates the solution map $(\boldsymbol{\mu}_p, \Omega) \mapsto \mathbf{u}(\cdot; \boldsymbol{\mu}_p, \Omega)$ without using any FOM. In the most common scenario, a FOM that describes the physical process is available, and it is used to generate the data needed to train the USM-Net. Still, the training of a USM-Net is also possible for physical problems in which the FOM is unknown, provided that sufficient experimental measurements are available.
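For concreteness, the abstract form \eqref{eqn:PDE_generic} can be instantiated on the fluid dynamics problems considered in Sec.~\ref{sec:test-cases}: for the steady incompressible Navier--Stokes equations in non-dimensional form (the following notation and the Dirichlet datum $\mathbf{g}$ are illustrative), the state is $\mathbf{y} = (\mathbf{v}, p)$, collecting velocity and pressure, and
\begin{equation*}
\mathcal{L}(\mathbf{y}, \boldsymbol{\mu}_p) =
\begin{pmatrix}
(\mathbf{v} \cdot \nabla)\,\mathbf{v} - \frac{1}{\mathrm{Re}}\,\Delta \mathbf{v} + \nabla p \\
\nabla \cdot \mathbf{v}
\end{pmatrix},
\qquad
\mathcal{B}(\mathbf{y}, \boldsymbol{\mu}_p) = \mathbf{v} - \mathbf{g},
\qquad
\mathcal{U}(\mathbf{y}, \boldsymbol{\mu}_p) = \mathbf{y},
\end{equation*}
with $\boldsymbol{\mu}_p = \mathrm{Re}$ the Reynolds number.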
We present two versions of USM-Net: \begin{enumerate} \item {PC-USM-Net}, in which the solution is represented in terms of the \textit{physical coordinates}, that is $\mathbf{x} \in \Omega$ (see Sec.~\ref{sec:methods:PC-USM-Nets}); \item {UC-USM-Net}, in which the solution is instead represented by passing through a system of \textit{universal coordinates}, which will be defined in Sec.~\ref{sec:methods:UC}. \end{enumerate} \subsubsection{PC-USM-Net} \label{sec:methods:PC-USM-Nets} The PC-USM-Net architecture is represented in Fig.~\ref{fig:architecture_merged} (top). It consists of an ANN, typically a fully connected ANN (FCNN), whose input is obtained by stacking three vectors: \begin{enumerate} \item the query point $\mathbf{x}$, that is the coordinates of the point where the solution $\mathbf{u}(\mathbf{x})$ is sought; \item the physical parameters $\boldsymbol{\mu}_p$; \item a set of geometrical landmarks associated with the domain at hand, that is $\boldsymbol{\mu}_g = {P_g}(\Omega)$, which typically represent the coordinates of key points of the domain (such as inlets, bifurcation points, etc.) or geometrical measures (such as diameters, thicknesses, etc.). In Sec.~\ref{sec:methods:landmarks} we will elaborate on possible choices for the function ${P_g}$. \end{enumerate} The output of the FCNN is an approximation of the solution $\mathbf{u}$ at the query point $\mathbf{x}$. More precisely, denoting by $\mathcal{NN}$ the FCNN and by $\mathbf{w}$ its trainable parameters (weights and biases), we have: \begin{equation*} \mathbf{u}(\mathbf{x}; \boldsymbol{\mu}_p, \Omega) \simeq \mathcal{NN}(\mathbf{x}, \boldsymbol{\mu}_p, {P_g}(\Omega); \mathbf{w}). \end{equation*} Hence, $\mathcal{NN}$ features $d + {n_p} + {n_g}$ input neurons and $k$ output neurons. \begin{figure}[ht] \includegraphics[width=\columnwidth]{img/architecture_merged.pdf} \caption{Architecture of a PC-USM-Net (top) and of a UC-USM-Net (bottom).} \label{fig:architecture_merged} \end{figure} \subsubsection{UC-USM-Net} \label{sec:methods:UC} As anticipated, a PC-USM-Net has a universal character with respect to domains, i.e. a single network is used to represent solutions in different geometries. However, a given point with physical coordinate $\mathbf{x}$, given as input to the PC-USM-Net, can play a different \textit{role} in different geometries. For example, a point $\mathbf{x}$ that for one geometry $\Omega^1$ belongs to the boundary of the domain could be internal to the domain for another geometry $\Omega^2$, and could even be external for yet another $\Omega^3$. Therefore, we propose an evolution of the PC-USM-Net aimed at capturing more effectively the correspondence between points among geometries. To achieve this goal, we rely on a \textit{universal coordinates (UC) system} for $\mathcal{G}$. A UC system is a map $\Phi_\geoSpace$ that, to any domain $\Omega$ and to any point $\mathbf{x} \in \Omega$, associates a point $\ptRef \in \widehat{\Omega}$ belonging to a \textit{reference domain} $\widehat{\Omega} \subset \mathbb{R}^{d}$. More precisely, the reference coordinate is obtained as $\ptRef = \Phi_\geoSpace(\mathbf{x},\Omega)$. We require that the application $\Phi_\geoSpace(\cdot, \Omega) \colon \Omega \to \widehat{\Omega}$ be a continuous bijection for any $\Omega \in \mathcal{G}$. Hence, the UC system $\Phi_\geoSpace$ defines a coordinate transformation that maps each domain $\Omega \in \mathcal{G}$ into the reference one $\widehat{\Omega}$. In Sec.~\ref{sec:test-cases} we will present two concrete examples of UC systems.
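For illustration, a minimal TensorFlow sketch of the PC-USM-Net architecture is reported below; it is our own sketch, in which the layer widths coincide with those used in Test Case~2 (Sec.~\ref{sec:results}), while the input dimensions and the activation function are shown only for definiteness:
\begin{verbatim}
import tensorflow as tf

d, n_p, n_g, k = 2, 1, 6, 3  # illustrative dimensions

# PC-USM-Net: an FCNN mapping (x, mu_p, mu_g) -> u(x; mu_p, Omega)
x    = tf.keras.Input(shape=(d,),   name="query_point")
mu_p = tf.keras.Input(shape=(n_p,), name="physical_params")
mu_g = tf.keras.Input(shape=(n_g,), name="geometrical_landmarks")

h = tf.keras.layers.Concatenate()([x, mu_p, mu_g])
for width in (20, 15, 10, 5):  # inner layers as in Test Case 2
    h = tf.keras.layers.Dense(width, activation="tanh")(h)
u = tf.keras.layers.Dense(k, name="solution")(h)

model = tf.keras.Model(inputs=[x, mu_p, mu_g], outputs=u)
\end{verbatim}
Note that the concatenated input has $d + {n_p} + {n_g}$ entries, matching the input layer described above.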
Whenever a UC system is available, it can be used to augment a PC-USM-Net. The enhanced version of a PC-USM-Net, called UC-USM-Net, is obtained by giving as input to $\mathcal{NN}$ the reference coordinate $\ptRef = \Phi_\geoSpace(\mathbf{x}, \Omega)$ instead of the physical one $\mathbf{x} \in \Omega$. More precisely, the surrogate model is defined as \begin{equation} \label{eqn:UC-USM-Net} \mathbf{u}(\mathbf{x}; \boldsymbol{\mu}_p, \Omega) \simeq \mathcal{NN}(\Phi_\geoSpace(\mathbf{x}, \Omega), \boldsymbol{\mu}_p, {P_g}(\Omega); \mathbf{w}). \end{equation} The resulting architecture is represented in Fig.~\ref{fig:architecture_merged} (bottom). We remark that the UC-USM-Net is a generalization of the PC-USM-Net, as the latter can be obtained from the former by setting $\Phi_\geoSpace$ equal to the identity function, that is by setting $\ptRef = \mathbf{x}$. For this reason, from now on we will consider, without loss of generality, the surrogate model of Eq.~\eqref{eqn:UC-USM-Net}. As we will show in the results section, UC-USM-Nets allow us to improve the generalization accuracy of PC-USM-Nets, that is the accuracy of predictions for physical parameters and geometries not included in the training set, by providing geometrical prior knowledge during training. Moreover, we will show that, besides helping the ANN to link together points of different geometries, a UC system might provide details of the geometry that are not captured by the landmarks. We remark that, in practical applications, both the injectivity and the surjectivity requirements of the map $\Phi_\geoSpace(\cdot, \Omega) \colon \Omega \to \widehat{\Omega}$ can be relaxed. \subsection{Geometrical landmarks} \label{sec:methods:landmarks} In order to build a surrogate model that learns an approximation of the solution map, we introduce a low-dimensional description of the geometry. In particular, we construct an operator ${P_g} \colon \mathcal{G} \mapsto \mathbb{R}^{{n_g}}$ that, to any given computational domain $\Omega \in \mathcal{G}$, associates a finite number (say ${n_g}$) of geometrical landmarks $\boldsymbol{\mu}_g = {P_g}(\Omega) \in \mathbb{R}^{{n_g}}$, which provide a compact description of $\Omega$. Depending on the structure of $\mathcal{G}$, different strategies can be followed to define the operator ${P_g}$. \begin{enumerate} \item In case an explicit parametrization of the elements of the space $\mathcal{G}$ is available, we define ${P_g}$ in such a way that the landmarks $\boldsymbol{\mu}_g = {P_g}(\Omega)$ coincide with the geometrical parameters themselves. An example is provided in Test Case~1. \item If such a parameterization is not available (as in many cases of practical interest), a straightforward choice is to take the coordinates of key points in the domain as landmarks. An example is provided in Test Case~2. \item Other more sophisticated techniques can be used to obtain a low-dimensional description of the computational domains. For example, the geometrical landmarks can be defined as the first, most relevant coefficients associated with a POD analysis of a finite subset of $\mathcal{G}$ (shape model). Entering into the details of this or other techniques is beyond the scope of this paper. The method proposed in this paper is indeed general, as it is built on top of the different techniques that can be used to define the map ${P_g}$. \end{enumerate} In general, our method does not require the operator ${P_g}$ to be invertible.
Indeed, ${P_g}$ is invertible only when an explicit parametrization of the space $\mathcal{G}$ is available. In fact, we allow for the case in which two different geometries $\Omega^1 \neq \Omega^2$, both belonging to $\mathcal{G}$, are associated with identical landmarks (i.e. ${P_g}(\Omega^1) = {P_g}(\Omega^2)$). Since the geometrical landmarks characterize the variability of the geometry, a good design of ${P_g}$ requires that the condition ${P_g}(\Omega^1) = {P_g}(\Omega^2)$ imply that $\Omega^1$ and $\Omega^2$ are only \textit{minimally} different. \subsection{Training a USM-Net} To train a USM-Net (that is, either a PC-USM-Net or a UC-USM-Net), we require the output of problem~\eqref{eqn:PDE_generic} for several pairs $(\boldsymbol{\mu}_p, \Omega) \in \mathcal{P} \times \mathcal{G}$. These solutions, called \textit{snapshots}, are typically obtained through the FOM, a high-fidelity numerical solver of Eq.~\eqref{eqn:PDE_generic}, based e.g. on Finite Elements or Finite Differences \cite{quarteroni2017numerical}. Yet, as our method is non-intrusive and does not require any knowledge of equation~\eqref{eqn:PDE_generic}, snapshots can also be derived from a black-box solver, or even from experimental measurements. We consider a collection of $N_{sn}$ snapshots, associated with $\boldsymbol{\mu}_p^{i} \in \mathcal{P}$ and $\Omega^{i} \in \mathcal{G}$, for $i = 1, \dots, N_{sn}$. For any snapshot, then, we consider a number of observations in a set of points belonging to $\Omega^{i}$. In case of high variability of the geometries in $\mathcal{G}$, the resolution of the FOM typically requires the generation of different meshes, without a one-to-one correspondence of the nodes. Therefore, to guarantee generality, we consider the case where each snapshot has a potentially different number of observation points. Specifically, we denote by $\{\ptObs{i}{j}, \, j=1,\dots,\numPoints{i}\}$ the set of observation points associated with the $i$-th snapshot. In conclusion, the training dataset consists of the following set \begin{equation*} \{ \boldsymbol{\mu}_p^{i}, \Omega^{i}, \{ \mathbf{u}(\ptObs{i}{j}; \boldsymbol{\mu}_p^{i}, \Omega^{i}) \}_{j=1}^{\numPoints{i}} \}_{i=1}^{N_{sn}}. \end{equation*} Training the USM-Net requires solving the following minimization problem \begin{equation*} \widehat{\mathbf{w}} = \underset{\mathbf{w}}{\operatorname{argmin}} \; \mathcal{J}(\mathbf{w}). \end{equation*} The loss function $\mathcal{J}$ is given by the misfit between snapshot data and predictions, plus (optionally) a regularization term $\mathcal{R}$: \begin{equation}\label{eqn:loss} \mathcal{J}(\mathbf{w}) = \frac{1}{N_{sn}}\sum_{i=1}^{N_{sn}} \frac{1}{ \numPoints{i}} \sum_{j=1}^{\numPoints{i}} d( \mathbf{u}_j^i, \widetilde{\mathbf{u}}_j^i)+ \mathcal{R}(\mathbf{w}), \end{equation} having defined \begin{equation*} \begin{split} \mathbf{u}_j^i &= \mathbf{u}(\ptObs{i}{j}; \boldsymbol{\mu}_p^{i}, \Omega^{i}), \\ \widetilde{\mathbf{u}}_j^i &= \mathcal{NN}(\Phi_\geoSpace(\ptObs{i}{j}, \Omega^{i}), \boldsymbol{\mu}_p^{i}, {P_g}(\Omega^{i}); \mathbf{w}) \end{split} \end{equation*} and where $d(\cdot, \cdot)$ is a suitable discrepancy metric (typically, $d(\mathbf{u}, \tilde{\mathbf{u}}) = \| \mathbf{u} -\tilde{\mathbf{u}}\|^2$). Standard techniques, such as Tikhonov or LASSO regularization, can be used for the regularization term $\mathcal{R}$.
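As an illustration, with the squared-Euclidean discrepancy and a Tikhonov penalty, the loss \eqref{eqn:loss} can be assembled as in the following sketch (our own; the tensor layout and the broadcasting of the per-snapshot parameters are illustrative choices, assuming a Keras model with inputs $[\ptRef, \boldsymbol{\mu}_p, \boldsymbol{\mu}_g]$ as sketched above):
\begin{verbatim}
import tensorflow as tf

def loss(model, snapshots, alpha_reg=1e-6):
    # snapshots: list of (x_hat, mu_p, mu_g, u_obs) tensors,
    # one entry per snapshot i, with N_i observation points each
    J = 0.0
    for x_hat, mu_p, mu_g, u_obs in snapshots:
        n = tf.shape(x_hat)[0]
        # broadcast the (constant) parameters to all observation points
        mp = tf.repeat(mu_p[None, :], n, axis=0)
        mg = tf.repeat(mu_g[None, :], n, axis=0)
        u_pred = model([x_hat, mp, mg])
        # mean over points of the squared-Euclidean discrepancy
        J += tf.reduce_mean(tf.reduce_sum((u_pred - u_obs) ** 2, axis=-1))
    J /= len(snapshots)
    # Tikhonov regularization R(w) on the trainable weights
    J += alpha_reg * tf.add_n(
        [tf.reduce_sum(w ** 2) for w in model.trainable_weights])
    return J
\end{verbatim}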
Additionally, $\mathcal{R}$ can be augmented by suitable terms informing the USM-Net of the physical knowledge available on the solution (see also Sec.~\ref{sec:methods:gray-box}). An example in this sense will be shown in Sec.~\ref{sec:results}. \subsection{Grey-box USM-Net} \label{sec:methods:gray-box} So far, we have presented USM-Nets as fully non-intrusive (black-box) surrogate modeling techniques. Still, physical knowledge can optionally be embedded into their construction. Indeed, the training process can be augmented by informing the network either of physical constraints (such as conservation principles, symmetry properties, or the positivity of the solution) or of the differential equations and boundary conditions that characterize the problem. We distinguish between \textit{weak imposition} and \textit{strong imposition} of the physical knowledge. \paragraph{Weak imposition.} Prior knowledge on the solution map can be enforced through the regularization term $\mathcal{R}$ of the loss function \eqref{eqn:loss}. $\mathcal{R}$ can include the norm of the residual of the FOM equations and boundary conditions evaluated in a collection of collocation points, as done in the training of Physics-Informed Neural Networks \cite{raissi2019physics}. Similarly, other physical constraints can be rephrased in terms of the minimization of a regularization term $\mathcal{R}$. Thanks to Automatic Differentiation, the inclusion of differential operators in the term $\mathcal{R}$ does not require a severe implementation effort, even if it slows down the training process. Therefore, the user should wisely balance the advantages and disadvantages of introducing such a term. An example of the weak imposition of the boundary conditions is presented in Test Case 1 (Sec.~\ref{sec:test-cases:cavity}). \paragraph{Strong imposition.} Alternatively, we can enforce physical constraints by defining an architecture $\mathcal{NN}$ that satisfies them by construction. We now give a brief list of examples: \begin{enumerate} \item Non-negativity of the solution can be enforced by introducing after the FCNN a further layer that applies an operator with nonnegative output, such as $(\cdot)^2$ or $|\cdot|$. In other terms, we perform a composition between the FCNN and the nonnegative operator. \item Symmetry w.r.t. a given input coordinate can be enforced, e.g., by introducing an input layer that pre-processes the corresponding input through an even function, such as $|\cdot|$. As in the previous point, this corresponds to performing a composition between the even function and the FCNN. \item Dirichlet boundary conditions on a portion of the boundary (i.e. $\mathbf{u}(\mathbf{x}) = \mathbf{u}_D$ on $\Gamma_{D} \subset \partial\Omega$) can be strongly enforced by introducing a multiplicative layer after the FCNN that multiplies the solution by a mask function $\Phi_{\text{BC}}(\mathbf{x}, \Omega)$, such that $\Phi_{\text{BC}} = 0$ on $\Gamma_{D}$ and $\Phi_{\text{BC}} \neq 0$ elsewhere, and sums the datum $\mathbf{u}_D$. \item Solenoidality of the solution ($\nabla \cdot {\mathbf{u}} = 0$), a common requirement in fluid dynamics, can be enforced by interpreting the FCNN output as the flow field potential and introducing an output layer that returns its curl. An example of this technique is described in Sec.~\ref{sec:test-cases:cavity}. \end{enumerate} \subsection{Notes about implementation} From the implementation point of view, a few precautions are needed to make the training of USM-Nets computationally light.
First of all, although the application of the ${P_g}$ map is indicated in Fig.~\ref{fig:architecture_merged} as an integral part of the USM-Net, the landmarks $\boldsymbol{\mu}_g^i = {P_g}(\Omega^i)$ can be pre-computed before the training. Similarly, in case a UC system is employed, the coordinate transformation $\ptRef_j^i = \Phi_\geoSpace(\mathbf{x}_j^i, \Omega^i)$ for any $i$ and $j$ can be performed offline, at a stage prior to training. In this manner, we set up an augmented dataset consisting of: \begin{equation*} \begin{aligned} (&\ptRef_1^1,&& \boldsymbol{\mu}_g^1, && \boldsymbol{\mu}_p^1), \qquad && \mathbf{u}_1^1 \\ (&\ptRef_2^1,&& \boldsymbol{\mu}_g^1, && \boldsymbol{\mu}_p^1), \qquad && \mathbf{u}_2^1 \\ &\vdots && \vdots && \vdots && \vdots \\ (&\ptRef_{\numPoints{1}}^1,&& \boldsymbol{\mu}_g^1, && \boldsymbol{\mu}_p^1), \qquad && \mathbf{u}_{\numPoints{1}}^1 \\ (&\ptRef_1^2,&& \boldsymbol{\mu}_g^2, && \boldsymbol{\mu}_p^2), \qquad && \mathbf{u}_1^2 \\ &\vdots && \vdots && \vdots && \vdots \\ \end{aligned} \end{equation*} and $\mathcal{NN}$ is trained to fit the map from the first three columns to the last one. Once the $\mathcal{NN}$ is trained, it can be used to approximate the solution for unseen parameters and/or geometries. This is the \textit{online} stage, which consists of the following steps: \begin{enumerate} \item Receive $\Omega$ and $\boldsymbol{\mu}_p$. \item Compute $\boldsymbol{\mu}_g = {P_g}(\Omega)$. \item For any $\mathbf{x}_j$ for which the solution is needed, compute $\ptRef_j = \Phi_\geoSpace(\mathbf{x}_j, \Omega)$. \item Evaluate $\mathbf{u}(\mathbf{x}_j; \boldsymbol{\mu}_p, \Omega) \simeq \mathcal{NN}(\ptRef_j, \boldsymbol{\mu}_p, \boldsymbol{\mu}_g; \mathbf{w})$. Typically, this operation can be vectorized to further speed up the execution. \end{enumerate} We recall that both ${P_g}$ and (if used) $\Phi_\geoSpace$ are defined case-by-case, depending on the application. \section{Results} \label{sec:results} In this section, we present the results obtained by applying the methods presented in Sec.~\ref{sec:methods} to the two test cases of Sec.~\ref{sec:test-cases}. These results have been obtained using TensorFlow \cite{tensorflow2015whitepaper} and the optimizers of SciPy \cite{SciPy2020}. \subsection{Test Case 1: lid-driven cavity} We construct a training set of $N_{sn} = 400$ numerical simulations obtained by randomly sampling the height $H$ and the physical parameter $\boldsymbol{\mu}_p = \mathrm{Re}$. Some examples of streamlines obtained from the numerical solutions for different values of the parameters are reported in Fig.~\ref{fig:cavity_comparison_train}. For each geometry, we subsample the solution at $\numPoints{} = 360$ random points. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{img/cavity_comparison_train_compress.pdf}\\ \caption{ Test Case 1: comparison of some numerical solutions that constitute the training set. The dataset is generated by approximating the FOM \eqref{eqn:cavity} for random samples of the values of the physical and geometrical parameters.} \label{fig:cavity_comparison_train} \end{figure*} By varying the size of this dataset, we train several USM-Nets, formed by FCNNs made of 3 inner layers, consisting of 30, 20, and 10 neurons, respectively.
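In the potential-field configurations listed below, the network output is interpreted as a scalar flow potential whose curl yields a divergence-free velocity field, as anticipated in the section on strong imposition. A minimal sketch of this output layer (our own, assuming the 2D stream-function formulation) is:
\begin{verbatim}
import tensorflow as tf

def velocity_from_potential(psi_net, x, mu):
    # psi_net maps (x, mu) to a scalar stream function psi;
    # the velocity u = (d psi/dy, -d psi/dx) is its 2D curl,
    # so that div(u) = 0 holds by construction.
    with tf.GradientTape() as tape:
        tape.watch(x)
        psi = psi_net([x, mu])
    grad = tape.gradient(psi, x)  # columns: (d psi/dx, d psi/dy)
    return tf.stack([grad[:, 1], -grad[:, 0]], axis=-1)
\end{verbatim}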
We consider four configurations: \begin{enumerate} \item velocity-field {PC-USM-Net}: receiving as inputs the two spatial coordinates and the physical and geometrical parameters, and producing as outputs the two velocity components; \item velocity-field {UC-USM-Net}: receiving as inputs the two universal coordinates and the physical and geometrical parameters, and producing as outputs the two velocity components; \item potential-field {PC-USM-Net}: receiving as inputs the two spatial coordinates and the physical and geometrical parameters, and producing as outputs the two velocity components computed from the fluid flow potential; \item potential-field {UC-USM-Net}: receiving as inputs the two universal coordinates and the physical and geometrical parameters, and producing as outputs the two velocity components computed from the fluid flow potential. \end{enumerate} For each configuration, we perform $500$ epochs of the ADAM optimizer \cite{kingma2014adam} followed by $20000$ epochs of the BFGS method \cite{goodfellow2016deep} to ensure convergence of the optimizer. For the case of a training set composed of $N_{sn} = 100$ numerical simulations, we post-process the velocity field to display streamlines. In Fig.~\ref{fig:Cavity_strategy_comparison} we report the streamlines resulting from the different ANN configurations for three test cases extracted from the $40$ numerical simulations that form the test set: they represent the best, the average, and the worst-case scenarios, respectively. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{img/cavity_net_comparison_compress.pdf}\\ \caption{ Test Case 1: comparison of the ANN streamline reconstructions on three numerical solutions from the test set. Test cases are selected to display the range of ANN reconstruction errors in the test set: from minor (best-case scenario: first row) to significant (worst-case scenario: last row).} \label{fig:Cavity_strategy_comparison} \end{figure*} Training the ANN with a loss function composed only of the data misfit term proves insufficient for an accurate reconstruction of the streamlines, especially in low-velocity areas. In Figs.~\ref{fig:Cavity_loss_comparison_model} and \ref{fig:Cavity_loss_comparison_potential}, we show the effect of the loss function components described in Section \ref{sec:test-cases:cavity} on the streamline reconstruction. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{img/cavity_loss_composition_model_compress.pdf}\\ \caption{ Test Case 1: comparison of velocity-field PC-USM-Net streamline reconstructions on three numerical solutions from the test set when varying the definition of the loss function. We consider a loss function composed of the data misfit term (second column), combined with boundary conditions (third column) or with the direction regularization (fourth column). Test cases are selected to display the range of ANN reconstruction errors in the test set: from minor (best-case scenario: first row) to significant (worst-case scenario: last row).} \label{fig:Cavity_loss_comparison_model} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{img/cavity_loss_composition_compress.pdf}\\ \caption{ Test Case 1: comparison of potential-field UC-USM-Net streamline reconstructions on three numerical solutions from the test set when varying the definition of the loss function.
We consider a loss function composed of the data misfit term (second column), combined with boundary conditions (third column) or with the direction regularization (fourth column). Test cases are selected to display the range of ANN reconstruction errors in the test set: from minor (best case scenario: first row) to significant (worst case scenario: last row).}
\label{fig:Cavity_loss_comparison_potential}
\end{figure*}
To assess the generalization error on the test set resulting from different training configurations, we repeat the training using training sets of $25$, $50$, $100$, $200$, and $400$ numerical simulations. The test set comprises $40$ numerical simulations sampled in $10000$ points. We repeat the training with 10 different random initializations of the ANN weights and biases. The average root mean squared errors (RMSE) of the velocity magnitude and direction are reported in Fig.~\ref{fig:Cavity_RMSE}, together with bands indicating the maximum and the minimum values for the different training set sizes and ANN configurations.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{img/cavity_net_RMSE_compress.pdf}\\
\caption{ Test Case 1: RMSE on the velocity magnitude and direction on the test set, made of $40$ numerical simulations sampled in $10000$ points. We compare the four ANN configurations for different training set sizes. The error in the direction is significantly larger than the error in the magnitude, and decreases as the number of simulations in the training set increases. }
\label{fig:Cavity_RMSE}
\end{figure*}
\subsection{Test Case 2: coronary bifurcation}
We consider a training set consisting of $N_{sn} = 500$ different geometries. For each geometry, we take the solution in $\numPoints{} = 1000$ randomly generated points. Then, for both landmark configurations (26 landmarks and 6 landmarks), we train a PC-USM-Net and a UC-USM-Net. We consider a FCNN with 4 inner layers, respectively consisting of 20, 15, 10, and 5 neurons. This architecture has been tuned to minimize the validation error on a set of 100 geometries not included in the training dataset. To train the FCNN weights and biases, we run 200 iterations of the Adam optimizer \cite{kingma2014adam} and, subsequently, 5000 iterations of the BFGS algorithm \cite{goodfellow2016deep}. We perform 10 different training runs for each configuration, starting from different random initializations of the FCNN parameters. Each training run lasts about 45 minutes on a laptop equipped with an Intel Core i7-1165G7 CPU (2.80 GHz). In Fig.~\ref{fig:coro_boxplot_up}, we show boxplots of the errors associated with a testing dataset of 100 geometries, included in neither the training nor the validation dataset. As expected, USM-Nets that are provided with 26 landmarks generate more accurate predictions than those that are aware of only 6 landmarks. However, we notice that USM-Nets based on only 6 landmarks still attain considerable accuracy (relative RMSE of about 3\% on both the velocity and pressure). This figure is compatible with the levels of precision typically required in clinical practice.
\begin{figure*}
\centering
\includegraphics[]{img/coro_boxplot_up.pdf}\\
\caption{ Test Case 2: boxplots of the errors on the test dataset obtained with 26 landmarks and 6 landmarks and with the PC-USM-Net and the UC-USM-Net architecture. The boxplots refer to 10 training runs obtained starting from different random initializations of the ANN weights and biases.
Left: error on the velocity field; right: error on the pressure field. }
\label{fig:coro_boxplot_up}
\end{figure*}
Furthermore, the boxplots show that the use of UC can significantly enhance the performance of the USM-Net. The improvement is more evident for the velocity field than for the pressure field. To highlight the role that a UC system plays in improving the generalization accuracy of USM-Nets, we consider the pair of domains in the testing set that are characterized by the most similar landmarks. More precisely, these domains, which we will call $\Omega^1$ and $\Omega^2$, are such that $\|{P_g}(\Omega^1) - {P_g}(\Omega^2) \| < 10^{-4}$. Since landmarks characterize the domain at some control points, two domains with very similar landmarks may differ significantly away from the control points. This is what happens for the two domains considered, in particular near the upper outflow tract (see Fig.~\ref{fig:coro_geo_variability}, left). On the right side of Fig.~\ref{fig:coro_geo_variability} we show a detail of the velocity field obtained for these two domains with a PC-USM-Net and a UC-USM-Net, in comparison with the reference solution obtained by means of the FOM. We recall that the domain $\Omega$ affects the PC-USM-Net result only through the landmarks $\boldsymbol{\mu}_g$. Therefore, the PC-USM-Net will provide the same solution for two geometries with identical landmarks. As shown by Fig.~\ref{fig:coro_geo_variability}, this entails that the PC-USM-Net is not very effective in capturing the solution near the edge, where the solution is heavily affected by geometric details of the domain not captured by the landmarks. The use of a UC system helps in this regard, as the model receives as inputs the reference coordinates rather than the physical ones. These coordinates directly encode details of the geometry not captured by the landmarks. In particular, the UC system makes the points belonging to the boundaries of the various domains correspond to each other. In this way, UC-USM-Nets are more effective than PC-USM-Nets in capturing the velocity field close to the boundary.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{img/coro_geo_variability.pdf}\\
\caption{Test case 2: two domains ($\Omega^1$ and $\Omega^2$) belonging to the testing set that feature almost identical landmarks, that is ${P_g}(\Omega^1) \simeq {P_g}(\Omega^2)$ (left). On the right, a detail of the velocity field is compared among the FOM solution, PC-USM-Net, and UC-USM-Net surrogates. The color map is intentionally flattened towards low values to highlight velocity variations near the edge.}
\label{fig:coro_geo_variability}
\end{figure*}
In Figs.~\ref{fig:coro_comparison_u} and \ref{fig:coro_comparison_p} we show the velocity and pressure fields predicted by one of the trained UC-USM-Nets on a subset of the test dataset.
\begin{figure*}
\centering
\includegraphics[width = \textwidth]{img/coro_comparison_u.png}\\
\includegraphics[]{img/colorbar_u.pdf}
\caption{Test Case 2: comparison of the velocity magnitude field obtained with the FOM (top figure within each box) and with the UC-USM-Net (bottom figure within each box) in a subset of the test set.}
\label{fig:coro_comparison_u}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width = \textwidth]{img/coro_comparison_p.png} \\
\includegraphics[]{img/colorbar_p.pdf}
\caption{Test Case 2: comparison of the pressure field obtained with the FOM (top figure within each box) and with the UC-USM-Net (bottom figure within each box) in a subset of the test set.}
\label{fig:coro_comparison_p}
\end{figure*}
\section{Test cases}
\label{sec:test-cases}
In this section, we present two test cases and provide details on the implementation choices we followed to apply the methods presented in Sec.~\ref{sec:methods}.
\subsection{Test Case 1: lid-driven cavity}
\label{sec:test-cases:cavity}
Test Case 1 is based on the well-known stationary lid-driven cavity problem (see, e.g., \cite{botella1998benchmark}), for which we consider an extension with variable geometry. The challenge of this test case is to capture the different vortex topologies formed for different Reynolds numbers and different aspect ratios of the geometry. We consider a rectangular domain $\Omega_H = (0, 1) \times (0, H)$, with $H > 0$, and the following PDE (Navier-Stokes equations), where $\Gamma_H^D = \{ (x,y)^T \in \Omega_H \text{ s.t. } y = H \}$ denotes the top edge of the domain and $\mathbb{R}\mathrm{e}$ the Reynolds number:
\begin{equation}
\label{eqn:cavity}
\left\{
\begin{aligned}
&- \frac{1}{\mathbb{R}\mathrm{e}} \Delta \mathbf{v} + ( \mathbf{v} \cdot \nabla) \mathbf{v} + \nabla p = \mathbf{0} && \quad \text{in }\Omega_H, \\
&\nabla \cdot \mathbf{v} = 0 && \quad \text{in } \Omega_H, \\
& \mathbf{v} = (1,0)^T && \quad \text{on } \Gamma_H^D, \\
& \mathbf{v} = \mathbf{0} && \quad \text{on } \partial\Omega_H \setminus \Gamma_H^D. \\
\end{aligned}
\right.
\end{equation}
The unknowns of the problem are the fluid velocity $\mathbf{v}$ and the pressure $p$. The goal of Test Case 1 is to build an emulator that approximates the fluid velocity $\mathbf{v}$, given the geometry $\Omega_H$ and the Reynolds number. More precisely, we consider geometries with height $H$ in the interval $[0.5, 2]$:
\begin{equation*}
\mathcal{G} = \{\Omega_H = (0, 1) \times (0, H) \text{, for } 0.5 \leq H \leq 2\}.
\end{equation*}
The physical parameter consists of $\boldsymbol{\mu}_p = \mathbb{R}\mathrm{e}$ and ranges in the interval $\mathcal{P} = [10^2, 10^4]$.
\subsubsection{Training data generation}
To generate training data, we consider a Taylor-Hood Finite Element approximation of problem \eqref{eqn:cavity}. We employ structured triangular computational meshes with a uniform space resolution of $h = 10^{-2}$. We remark that, as a consequence, Finite Element approximations associated with different domains of $\mathcal{G}$ might feature different numbers of degrees of freedom. To tackle Newton convergence issues for large $\mathbb{R}\mathrm{e}$, we equip the solver with an adaptive continuation ramp with respect to the Dirichlet datum. To explore the set $\mathcal{G} \times \mathcal{P}$, we employ a Monte Carlo approach by independently sampling from a uniform distribution for $H$ and a log-uniform distribution for $\mathbb{R}\mathrm{e}$.
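For illustration, a possible NumPy realization of this sampling strategy (a sketch under the parameter ranges above; variable names are illustrative) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)
N_sn = 400  # number of training simulations

# Height H: uniform distribution on [0.5, 2].
H = rng.uniform(0.5, 2.0, size=N_sn)
# Reynolds number: log-uniform distribution on [1e2, 1e4],
# obtained by exponentiating a uniform sample of log10(Re).
Re = 10.0 ** rng.uniform(2.0, 4.0, size=N_sn)
\end{verbatim}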
After each FOM resolution, we export the velocity $\mathbf{v}(\mathbf{x})$ at a set of points randomly selected within the domain $\Omega_H$.
\subsubsection{Geometrical landmarks}
Test Case 1 has an explicit parametrization of the domains in the set $\mathcal{G}$, the height $H$ being the parameter. Hence, we define $\boldsymbol{\mu}_g = H$ as the unique geometrical landmark, by setting ${P_g}(\Omega_H) := H$.
\subsubsection{UC system}
A straightforward (and also effective) UC system for Test Case 1 consists in mapping each domain $\Omega_H \in \mathcal{G}$ into the unit square $\widehat{\Omega} := (0,1)^2$, through the transformation:
\begin{equation*}
\hat{x} = x, \quad \hat{y} = y / H.
\end{equation*}
More precisely,
\begin{equation*}
\Phi_\geoSpace\left( \begin{pmatrix} x \\ y\end{pmatrix}, \Omega_H \right) := \begin{pmatrix} x \\ y / H\end{pmatrix}.
\end{equation*}
\subsubsection{USM-Net architecture}
We consider two different ANN architectures to build USM-Nets for Test Case 1 (see Fig.~\ref{fig:architecture_cavity}).
\paragraph{Velocity-field architecture}
The first architecture for $\mathcal{NN}$ relies on a FCNN mapping $\mathbf{x}$ (or $\ptRef$), $\boldsymbol{\mu}_p$ and $\boldsymbol{\mu}_g$ into an approximation of $\mathbf{v}(\mathbf{x}; \mathbb{R}\mathrm{e}, \Omega_H)$. To ease the FCNN training, we normalize both input and output data by mapping them into the interval $[-1, 1]$, and we preprocess the Reynolds number through a $\log$ transformation. Overall, the FCNN features 4 input neurons and 2 output neurons.
\paragraph{Potential-field architecture}
As an alternative, we build a FCNN with a single output neuron, interpreted as the fluid flow potential $\psi(\mathbf{x}; \mathbb{R}\mathrm{e}, \Omega_H)$, and we subsequently compute the approximation of the velocity field as:
\begin{equation}
\label{eqn:potential}
\mathbf{v}(\mathbf{x}; \mathbb{R}\mathrm{e}, \Omega_H) = \begin{pmatrix} +\frac{\partial}{\partial y} \psi(\mathbf{x}; \mathbb{R}\mathrm{e}, \Omega_H) \\ -\frac{\partial}{\partial x} \psi(\mathbf{x}; \mathbb{R}\mathrm{e}, \Omega_H) \\ \end{pmatrix}.
\end{equation}
These operations are performed through Automatic Differentiation (AD) of the FCNN output. We remark that we do not need a FOM-based potential $\psi$ as training data: the training is done directly on the velocity data. The operations of \eqref{eqn:potential} indeed represent the last layer of the architecture $\mathcal{NN}$. Inputs and outputs are normalized as for the velocity-field architecture.
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{img/architecture_cavity.pdf}
\caption{Test Case 1: comparison of velocity-field and potential-field architectures.}
\label{fig:architecture_cavity}
\end{figure}
\subsubsection{Loss function}
Since the goal of Test Case 1 is to reconstruct the velocity field with a focus on the vortex structure of the solution, we employ a discrepancy metric $d$ that emphasizes the role of the flow direction at each point in the domain, including those with low flow intensity. Specifically, we define
\begin{equation*}
\begin{split}
d(\mathbf{u}, \tilde{\mathbf{u}}) = \| \mathbf{u} -\tilde{\mathbf{u}}\|^2 + \left\| \frac{\mathbf{u}}{\epsilon + \| \mathbf{u} \| } - \frac{\tilde{\mathbf{u}}}{\epsilon + \| \tilde{\mathbf{u}} \|} \right\|^2
\end{split}
\end{equation*}
where $\epsilon = 10^{-4}$ is a small constant to avoid singularities. The second term drives the USM-Net to accurately match the direction of the velocity.
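For clarity, a minimal TensorFlow sketch of this discrepancy metric is reported below (a sketch under the definitions above; function and variable names are illustrative):
\begin{verbatim}
import tensorflow as tf

EPS = 1e-4  # the constant epsilon used above

def discrepancy(u, u_tilde):
    # First term: squared mismatch of the velocity vectors.
    magnitude = tf.reduce_sum((u - u_tilde) ** 2, axis=-1)
    # Second term: squared mismatch of the regularized flow
    # directions, which remains informative where the flow is slow.
    dir_u = u / (EPS + tf.norm(u, axis=-1, keepdims=True))
    dir_v = u_tilde / (EPS + tf.norm(u_tilde, axis=-1, keepdims=True))
    direction = tf.reduce_sum((dir_u - dir_v) ** 2, axis=-1)
    return magnitude + direction
\end{verbatim}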
Indeed, without this term the flow direction would not be captured well in regions with low flow magnitude, due to their small contribution to the first term of the loss. Moreover, we augment the loss function with the following physics-based regularization term, aimed at enforcing the satisfaction of the Dirichlet boundary conditions:
\begin{equation*}
\begin{array}{l}
\mathcal{R}(\mathbf{w}) = \frac{1}{N_{\text{BC}} \numPoints{\text{BC}}} \sum_{i=1}^{N_{\text{BC}}} \sum_{j=1}^{\numPoints{\text{BC}}} \| \mathbf{u}_{\text{BC},j}^i - \mathbf{v}_{\text{BC}}(\mathbf{x}_{\text{BC},j}^i) \|^2, \\
\mathbf{u}_{\text{BC},j}^i = \mathcal{NN}(\mathbf{x}_{\text{BC},j}^i, \boldsymbol{\mu}_p^{\text{BC}, i}, \boldsymbol{\mu}_g^{\text{BC}, i}; \mathbf{w})
\end{array}
\end{equation*}
where $\boldsymbol{\mu}_p^{\text{BC}, i} \in [10^{2}, 10^{4}]$ and $\boldsymbol{\mu}_g^{\text{BC}, i} \in [0.5, 2]$ are sampled parameter values, $\mathbf{x}_{\text{BC},j}^i$ are points belonging to the boundary, and $\mathbf{v}_{\text{BC}}$ is the Dirichlet datum (see \eqref{eqn:cavity}).
\subsection{Test Case 2: coronary bifurcation}
\label{sec:test-cases:coronary}
As a second test case, we consider the problem of predicting the blood flow and pressure fields within a coronary bifurcation in the presence of stenosis. More precisely, we consider a computational domain $\Omega$, corresponding to the 2D representation of a section of a coronary artery with a bifurcation. We denote by $\Gamma_{\text{in}}$ the portion of the boundary corresponding to the inlet, by $\Gamma_{\text{out}}$ the two outlets and by $\Gamma_{\text{wall}} = \Gamma_{\text{top}} \cup \Gamma_{\text{bottom}} \cup \Gamma_{\text{front}}$ the vessel wall. In this test case, we will consider many different computational domains, each representing a coronary bifurcation in a different virtual patient. An example domain is represented in Fig.~\ref{fig:coro_domain}.
\begin{figure}[ht]
\centering
\includegraphics[width = \columnwidth]{img/coro_domain.pdf}\\
\caption{Test Case 2: example of computational domain and corresponding boundaries.}
\label{fig:coro_domain}
\end{figure}
We consider the following stationary Navier-Stokes model, describing the steady-state fluid flow in the coronary bifurcation:
\begin{equation}
\label{eqn:coronary}
\left\{
\begin{aligned}
&- \nu \Delta \mathbf{v} + ( \mathbf{v} \cdot \nabla) \mathbf{v} + \frac{1}{\rho} \nabla p = \mathbf{0} && \quad \text{in }\Omega, \\
&\operatorname{div} \mathbf{v} = 0 && \quad \text{in } \Omega, \\
& \mathbf{v} = \mathbf{v}_{\text{in}} && \quad \text{on } \Gamma_{\text{in}}, \\
& \mathbf{v} = \mathbf{0} && \quad \text{on } \Gamma_{\text{wall}}, \\
& \nu \frac{\partial \mathbf{v}}{\partial \mathbf{n}} - \frac{1}{\rho} p \, \mathbf{n} = \mathbf{0} && \quad \text{on } \Gamma_{\text{out}}, \\
\end{aligned}
\right.
\end{equation}
where $\nu = \SI{4.72}{\square\milli\meter\per\second}$ is the kinematic viscosity of blood and $\rho = \SI{1060}{\kilogram\per\meter\cubed}$ its density. At the inlet, we consider a parabolic profile with a maximum velocity equal to $\SI{14}{\centi\meter\per\second}$. In Fig.~\ref{fig:coro_example_solution}, we show the numerical solution of \eqref{eqn:coronary} in the example computational domain of Fig.~\ref{fig:coro_domain}.
\subsubsection{Geometrical variability and landmarks}
The aim of Test Case 2 is to build a reduced model predicting the pressure and velocity fields in an arbitrary domain representing a coronary bifurcation.
We synthetically generate a large number of different computational domains corresponding to many virtual patients. To do this, we use splines whose parameters are randomly varied in suitable intervals, defined to reflect the realistic variability observed among patients \cite{chiastra2016coronary}. Notice that the geometries thus obtained may present stenoses of varying severity, located upstream of the bifurcation or in the two downstream branches. A subsample of the geometries obtained following this procedure is displayed in Fig.~\ref{fig:coro_geometries}. Due to the lack of an explicit parameterization of these geometries (a common issue when dealing with domains from real patients), we need to define geometrical landmarks to characterize each geometry. To this end, we use a lightweight procedure that can also be easily adopted in a clinical context. Specifically, we define landmarks as the $y$ coordinates of the vessel wall corresponding to a set of predefined $x$ coordinates. These coordinates contain information regarding the lumen thickness at various levels and the possible presence of stenoses. Note that, in clinical practice, these landmarks can be easily derived directly from medical imaging, without the need to construct a computational mesh. In this test case, we will consider two sets of landmarks containing, respectively, 26 and 6 coordinates (see Fig.~\ref{fig:coro_landmarks}). Clearly, the first set is much more informative than the second one. The aim is to test the robustness of the proposed methods in the case where the landmarks provide a poor description of the geometry and cannot exhaustively capture its variability.
\begin{figure}[ht]
\centering
\includegraphics[width = \columnwidth]{img/coro_landmarks.pdf}\\
\caption{Test Case 2: geometrical landmarks $\boldsymbol{\mu}_g$.}
\label{fig:coro_landmarks}
\end{figure}
\begin{figure*}
\centering
\includegraphics[]{img/coro_example_solution_u.pdf}\\
\includegraphics[]{img/coro_example_solution_p.pdf}\\
\caption{Test Case 2: numerical solution of \eqref{eqn:coronary} on the computational domain of Fig.~\ref{fig:coro_domain}. Top: velocity field; bottom: pressure field.}
\label{fig:coro_example_solution}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{img/coro_geometries.pdf}\\
\caption{Test Case 2: representation of some of the geometries $\Omega \in \mathcal{G}$ included in the training dataset.}
\label{fig:coro_geometries}
\end{figure*}
\subsubsection{UC system}
Unlike in Test Case 1, where the simplicity of the domains in $\mathcal{G}$ made it possible to define a UC system explicitly, in Test Case 2 the construction of a UC system is not an immediate task. We propose to rely on two Laplacian-based fields, which define the inlet-outlet and top-bottom directions, respectively.
More precisely, given a geometry $\Omega \in \mathcal{G}$, we define the scalar fields $\psi_{\mathrm{LR}}\colon\Omega \to \mathbb{R}$ and $\psi_{\mathrm{TD}}\colon\Omega \to \mathbb{R}$ as the solutions of the following differential problems:
\begin{equation}
\label{eqn:UCcoroLR}
\left\{
\begin{aligned}
&- \Delta \psi_{\mathrm{LR}} = 0 && \quad \text{in }\Omega, \\
& \psi_{\mathrm{LR}} = 0 && \quad \text{on } \Gamma_{\text{in}}, \\
& \psi_{\mathrm{LR}} = 1 && \quad \text{on } \Gamma_{\text{out}} \cup \Gamma_{\text{front}}, \\
& \psi_{\mathrm{LR}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}} && \quad \text{on } \Gamma_{\text{top}} \cup \Gamma_{\text{bottom}}, \\
\end{aligned}
\right.
\end{equation}
\begin{equation}
\label{eqn:UCcoroTD}
\left\{
\begin{aligned}
&- \Delta \psi_{\mathrm{TD}} = 0 && \; \text{in }\Omega, \\
& \psi_{\mathrm{TD}} = + \alpha + (1 - \alpha)\frac{x - x_{\min}}{x_{\max} - x_{\min}} && \; \text{on } \Gamma_{\text{top}}, \\
& \psi_{\mathrm{TD}} = -\alpha - (1 - \alpha)\frac{x - x_{\min}}{x_{\max} - x_{\min}} && \; \text{on } \Gamma_{\text{bottom}}, \\
& \frac{\partial \psi_{\mathrm{TD}}}{\partial \mathbf{n}} = 0 && \; \text{on } \Gamma_{\text{in}} \cup \Gamma_{\text{out}} \cup \Gamma_{\text{front}}. \\
\end{aligned}
\right.
\end{equation}
In Fig.~\ref{fig:coro_example_psi} we show the fields $\psi_{\mathrm{LR}}$ and $\psi_{\mathrm{TD}}$ obtained for the domain of Fig.~\ref{fig:coro_domain}. The field $\psi_{\mathrm{LR}}$ bridges the inlet region (i.e. $\Gamma_{\text{in}}$, where $\psi_{\mathrm{LR}} = 0$) with the frontal region of the domain (i.e. $\Gamma_{\text{out}} \cup \Gamma_{\text{front}}$, where $\psi_{\mathrm{LR}} = 1$). Conversely, $\psi_{\mathrm{TD}}$ defines the proximity of each lumen point to the upper wall relative to the lower wall. Setting a constant $\alpha < 1$ allows better differentiation of the $\psi_{\mathrm{TD}}$ field within each branch downstream of the bifurcation. Specifically, we set $\alpha = 0.1$. The UC system is thus defined as:
\begin{equation*}
\ptRef = \Phi_\geoSpace( \mathbf{x}, \Omega ) := \begin{pmatrix} \psi_{\mathrm{LR}}(\mathbf{x}; \Omega) \\ \psi_{\mathrm{TD}}(\mathbf{x}; \Omega) \end{pmatrix}.
\end{equation*}
In Fig.~\ref{fig:coro_UC} we show the reference domain $\widehat{\Omega}$ and the mutual correspondences between the boundaries of the physical and reference domains.
\begin{figure*}
\centering
\includegraphics[]{img/coro_example_psi_LR.pdf}
\includegraphics[]{img/coro_example_psi_TD.pdf}
\caption{Test Case 2: fields $\psi_{\mathrm{LR}}$ and $\psi_{\mathrm{TD}}$ associated with the domain of Fig.~\ref{fig:coro_domain}.}
\label{fig:coro_example_psi}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width = 0.8\textwidth]{img/coro_UC.pdf}
\caption{Test Case 2: representation of the UC system $\Phi_\geoSpace$. Left: physical domain $\Omega$; right: reference domain $\widehat{\Omega}$.}
\label{fig:coro_UC}
\end{figure*}
\subsubsection{Training data generation}
To generate training data, we employ the Finite Element solver described for Test Case 1 (see Sec.~\ref{sec:test-cases:cavity}). For space discretization, we consider triangular computational meshes with a space resolution of approximately $h = \SI{0.2}{\milli\meter}$. The UC coordinates are obtained by solving the differential problems \eqref{eqn:UCcoroLR} and \eqref{eqn:UCcoroTD} with $P1$ Finite Elements on the same computational mesh.
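As an illustration, the field $\psi_{\mathrm{LR}}$ of \eqref{eqn:UCcoroLR} could be computed with a legacy FEniCS-style solver along the following lines. This is only a sketch: the mesh, the boundary markers, the tag values and the $x$ extremes are assumptions, and the solver actually used here may differ.
\begin{verbatim}
from fenics import *

# Assumed given: `mesh` for Omega and a MeshFunction `boundaries`
# marking Gamma_in, Gamma_out U Gamma_front, Gamma_top U Gamma_bottom.
IN, OUT_FRONT, TOP_BOTTOM = 1, 2, 3   # illustrative tag values

V = FunctionSpace(mesh, "P", 1)       # P1 Finite Elements
psi, v = TrialFunction(V), TestFunction(V)

x_min, x_max = 0.0, 50.0              # illustrative x extremes
ramp = Expression("(x[0] - xmin) / (xmax - xmin)",
                  xmin=x_min, xmax=x_max, degree=1)
bcs = [DirichletBC(V, Constant(0.0), boundaries, IN),
       DirichletBC(V, Constant(1.0), boundaries, OUT_FRONT),
       DirichletBC(V, ramp, boundaries, TOP_BOTTOM)]

psi_LR = Function(V)
solve(inner(grad(psi), grad(v)) * dx == Constant(0.0) * v * dx,
      psi_LR, bcs)
# psi_TD of (eqn:UCcoroTD) is obtained analogously, with the ramp
# data on the walls and homogeneous Neumann conditions elsewhere.
\end{verbatim}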
\subsubsection{USM-Net architecture}
In Test Case 2, we employ a FCNN mapping $\mathbf{x}$ or $\ptRef$ (depending on whether a PC-USM-Net or a UC-USM-Net is used), $\boldsymbol{\mu}_p$ and $\boldsymbol{\mu}_g$ into an approximation of $\mathbf{u}(\mathbf{x}; \Omega)$, where the solution $\mathbf{u} = (\mathbf{v}, p)$ is the velocity--pressure pair. Similarly to Test Case 1, we add a normalization layer before the input and after the output layers of the FCNN, to constrain each input and each output to the interval $[-1, 1]$.
\subsubsection{Loss function}
We employ a purely black-box loss function, as defined in \eqref{eqn:loss}, with a quadratic discrepancy metric
$$ d(\mathbf{u}, \tilde{\mathbf{u}}) = \| \mathbf{u} -\tilde{\mathbf{u}}\|^2 $$
and without any regularization terms.
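For reference, a minimal TensorFlow sketch of this black-box loss (names are illustrative; \texttt{model} stands for one of the USM-Nets described above, and \texttt{inputs}/\texttt{targets} for a batch of the augmented dataset) is:
\begin{verbatim}
import tensorflow as tf

def black_box_loss(model, inputs, targets):
    # Mean squared discrepancy between the FOM solution u = (v, p)
    # sampled at the training points and the USM-Net prediction;
    # no regularization terms are added for Test Case 2.
    predictions = model(inputs)
    return tf.reduce_mean(
        tf.reduce_sum((targets - predictions) ** 2, axis=-1))
\end{verbatim}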
\section{Introduction}
\label{sec:intro}
Entity Set Expansion (ESE) aims to expand a set of seed entities (e.g., {“\emph{China}”, “\emph{America}”, “\emph{Japan}”}) into more target entities (e.g., {“\emph{Russia}”, “\emph{Germany}”, ...}) that belong to the same semantic class (i.e., \texttt{Country}) as the seed entities. The ESE task can benefit various downstream NLP and IR applications, such as knowledge graph construction~\citep{shi2021entity}, Web search~\citep{chen2016long}, and question answering~\citep{wang2008automatic}.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{Intro.pdf}
\caption{Examples of hard negative entities in ESE.}
\label{Intro_Figure}
\end{figure}
In recent years, various iterative bootstrapping methods have gradually become the mainstream of ESE research. These methods~\citep{shen2017setexpan, yan2019learning, CaSE} iteratively add the model's most confident candidate entities to the expanded set. A core challenge for these methods is to avoid selecting \emph{hard negative entities} that are semantically ambiguous with the target entities~\citep{jindal2011learning, gupta-manning-2014-improved}. As shown in Figure~\ref{Intro_Figure}, when we want to expand target entities belonging to the class \texttt{US States}, a competitive model is likely to wrongly expand hard negative entities, such as “\emph{San Francisco}” and “\emph{Los Angeles}”. Judged against \texttt{US States}, these hard negative entities clearly do not belong to the target semantic class; at a coarser granularity (i.e., \texttt{Location}), however, they can be regarded as belonging to the same class as the target entities. Furthermore, owing to the iterative nature of the expansion process, a small number of negative entities incorrectly expanded in early iterations will cause errors to accumulate in later iterations, gradually degrading expansion performance. This is the long-standing “semantic drift” problem faced by ESE methods~\citep{curran2007minimising, mcintosh-2010-unsupervised, shi-etal-2014-probabilistic}.
To address the above challenge and problem, we propose to use contrastive learning~\citep{chen2020simple, robinson2020hard, gao2021simcse} to empower the ESE model to better deal with hard negative entities. Contrastive learning is a hot topic in the self-supervised learning field; originally applied to learning visual representations, it has attracted increasing attention from NLP researchers. In general, contrastive learning aims to learn more effective representations by pulling together samples belonging to similar classes and pushing apart samples from different classes~\citep{HadsellCon}. Intuitively, our goal of enabling the ESE model to better distinguish hard negative entities naturally coincides with the motivation of contrastive learning. In our study, contrastive learning provides the ESE model with clearer semantic boundaries and makes entity representations better reflect entity semantics.
Motivated by the above intuition, we propose a novel ESE method that consists of three parts: (1) \emph{Entity representation model}, an entity-level masked language model pre-trained under the entity prediction task we specially design for ESE. We then apply contrastive learning to refine the semantic representation learned by our model, which can be utilized in the later expansion process. (2) \emph{Model selection and ensemble}.
Due to the randomness of training samples in the pre-training process, the model is sensitive to the quality of the training context features. To alleviate this issue, we pre-train multiple models as described in (1), then select and ensemble the top models to avoid the randomness of a single model. (3) \emph{Probabilistic expansion framework}, a novel framework that utilizes the ensemble model obtained in (2) through a window search algorithm and an entity re-ranking algorithm, both based on the probabilistic representation similarity between candidate entities and the entity set. Through this framework, we finally obtain the target entities to be expanded. In summary, our contributions are three-fold:
\begin{itemize}
\item We are the first to apply contrastive learning to ESE, better handling hard negative entities and deriving more effective entity representations in semantic space.
\item We propose a novel ESE framework, ProbExpan{}, which can uniformly represent entities and entity sets in the probability space and utilize the ESE model to expand target entities.
\item We conduct extensive experiments and detailed analysis on three public datasets and achieve state-of-the-art performance. Solid results demonstrate the substantial improvement of our method over previous baseline methods.
\end{itemize}
\section{Related Work}
\label{sec:rw}
\noindent\textbf{Entity Set Expansion.} Recently, many corpus-based ESE methods have gradually become the mainstream paradigm. These corpus-based ESE methods can be divided into two main categories: (1) one-time ranking methods~\citep{mamou-etal-2018-term,CaSE,kushilevitz-etal-2020-two}, which introduce pairwise semantic similarity into set expansion tasks and suffer from the \emph{Entity Intrusion Error} problem, that is, they cannot clearly convey the semantic meaning of entities; (2) iterative pattern-based bootstrapping methods~\citep{Egoset,shen2017setexpan, AuxiliaryExpan}, which aim to bootstrap the seed entity set by iteratively selecting context patterns and ranking expanded entities. However, these methods are usually troubled by the \emph{Semantic Drift} problem, that is, the target semantic class gradually changes as noise arises during the iterations.
\noindent\textbf{Language Representation.} Early representation methods focus on word-level embeddings, such as Word2Vec~\citep{goldberg2014word2vec} and Glove~\citep{pennington2014glove}, which output a single embedding for each word in the vocabulary. Later, researchers designed many context-aware representation methods to better utilize context information, the most prominent being masked language models such as BERT~\citep{devlin-etal-2019-bert}. Notably, CGExpan~\citep{zhang-etal-2020-empower} has utilized BERT's representations to enhance ESE. However, BERT still only performs word-level representation, and CGExpan only uses the pre-trained BERT embeddings without any task-specific training. To obtain better representations in more complex tasks, ERNIE~\citep{zhang-etal-2019-ernie} is designed to learn entity/phrase-level representations. To the best of our knowledge, entity-level representation methods have not yet been applied to ESE.
\noindent\textbf{Contrastive Learning.} Contrastive learning has been widely applied in the self-supervised learning field~\citep{kim-etal-2021-self, qin-etal-2021-erica, wang-etal-2021-cline,li2022past}.
The main motivation of contrastive learning is to attract positive samples and repulse negative samples~\citep{HadsellCon,chen2020simple,khosla2020supervised,gao2021simcse}. Recent work~\citep{robinson2020hard} shows that contrastive representation learning benefits from hard negative samples (those that are difficult to distinguish from positive samples). This idea coincides with the challenge we have observed in existing ESE methods, that is, most expansion models cannot handle hard negative entities well. SynSetExpan~\citep{shen-etal-2020-synsetexpan} enhances the ESE task via another related task, Synonym Discovery. This differs from the idea of contrastive learning, as it hopes to suppress the effect of negative entities by obtaining more positive entities. NEG-FINDER~\citep{mcintosh-2010-unsupervised} is concerned with negative semantic classes, as is our work, but it proposes to perform offline negative discovery and then utilizes the pre-selected negative categories to alleviate the semantic drift of bootstrapping algorithms. Unlike NEG-FINDER, which is a heuristic, untrainable algorithm, our study aims to pre-train, via contrastive learning, a task-specific model with better entity representations and clearer semantic boundaries for ESE.
\section{Methodology}
\label{sec:method}
In this section, we first introduce the entity representation model and the entity prediction task we design for ESE. Specifically, we discuss how we apply contrastive learning to refine entity representations. Then we illustrate the mechanism of model selection and ensemble. Finally, we describe the expansion framework and the algorithms used to expand target entities. The overview of our proposed method is shown in Figure~\ref{Method_Figure}.
\begin{figure*}[]
\centering
\includegraphics[width=1.00\textwidth]{method.pdf}
\caption{Overview of our proposed method. We jointly train the entity representation model to obtain clearer semantic boundaries through our designed entity prediction task and contrastive learning task. Based on multiple pre-trained entity representation models, we utilize the model selection and ensemble mechanism to avoid the randomness brought by a single model. Two simple yet effective algorithms, namely the window-search and entity re-ranking algorithms, are used to search and sort entities to obtain the ideal target entities, according to the similarity of probabilistic representations derived from the ensemble model.}
\label{Method_Figure}
\end{figure*}
\subsection{Entity Representation Model}
\label{sec:methodmlm}
The entity representation model mainly contains an entity-level masked language model, which takes a tokenized sentence with its entity masked as input and outputs a probability distribution describing which entity the masked token can be. The entity representation is defined as the average of the predicted entity distributions over all of its sentences, and the representation of an entity set is defined as the average of the representations of all its entities. Our entity-level masked language model contains an encoder $\boldsymbol{g}$ and a classification head $\boldsymbol{f}$. To be specific, we initialize the encoder $\boldsymbol{g}$ with the pre-trained parameters of $\text{BERT}_\text{BASE}$, so that the grammatical and semantic knowledge learned by BERT from large-scale corpora can be utilized. The classification head $\boldsymbol{f}$ consists of two linear layers with GeLU activation and a softmax layer.
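The paper does not specify an implementation framework; a minimal PyTorch-style sketch of the classification head $\boldsymbol{f}$, with an illustrative entity-vocabulary size and the initialization described next, could read:
\begin{verbatim}
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    # Head f: two linear layers with GeLU activation, followed by a
    # softmax over the entity vocabulary. Sizes are illustrative
    # (768 matches the BERT-base hidden dimension).
    def __init__(self, hidden_size=768, entity_vocab_size=10000):
        super().__init__()
        self.fc1 = nn.Linear(hidden_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, entity_vocab_size)
        self.act = nn.GELU()
        for fc in (self.fc1, self.fc2):
            nn.init.kaiming_uniform_(fc.weight)  # Kaiming uniform weights
            nn.init.zeros_(fc.bias)              # zero biases

    def forward(self, h):  # h: encoder output at the [MASK] position
        return torch.softmax(self.fc2(self.act(self.fc1(h))), dim=-1)
\end{verbatim}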
We set the biases of the classification head $\boldsymbol{f}$ to 0 and initialize its weights from the Kaiming uniform distribution~\citep{He_2015_ICCV}. Concerning the masked entity prediction pre-training task, for every entity in the vocabulary, we replace its span with [MASK] to get a training sample from every sentence in which it appears. During each training epoch, we restrict the number of samples from every entity to the average number of samples over all entities, out of concern for sample imbalance and to facilitate the subsequent ensemble learning. We choose the Label Smoothing loss function~\citep{szegedy2016rethinking} rather than the traditional cross-entropy loss, so that entities sharing similar semantic meaning with the target entity will not be overly suppressed. The prediction loss is defined as:
\begin{equation}
\begin{aligned}
loss_{pred} = - \frac{1}{N} \sum_i^N \sum_j^{V_e} (&\mathds{1}_{j=y_i}(1-\eta)\cdot \log \hat{\mathbf{y}}_i[j] \\&+ \mathds{1}_{j\neq y_i}\eta \cdot \log \hat{\mathbf{y}}_i[j]),
\end{aligned}
\end{equation}
where $N$ is the size of the mini-batch, $V_e$ is the size of the entity vocabulary, $\eta$ is the smoothing factor (the larger $\eta$ is, the smoother the label), $y_i$ is the index of the entity corresponding to the training sample $i$, and $\hat{\mathbf{y}}_i$ is the output of $\boldsymbol{f}$.
\subsection{Contrastive Representation Learning}
\label{sec:methodcl}
We apply contrastive learning to refine the semantic space learned by our model, so that representations of entities from the same semantic class are pulled closer while representations of entities from different semantic classes are pushed apart. To do this, we first generate positive/negative entities for each semantic class from the seed sets and previous expansion results. Note that these previous expansion results come from the last iteration, since our expansion framework is an iterative process. Positive entities $\mathbb{E}_{pos}$ are defined as seed entities or entities that rank higher than a threshold $\text{thr}_{pos}$ in the expanded ranked lists. The entities that lie in a pre-defined interval $(\text{L}_{neg}, \text{U}_{neg})$ of the expanded ranked lists are automatically selected as negative entities $\mathbb{E}_{neg}$:
\begin{equation}
\mathbb{E}_{pos} = \left\{e|e \in \mathbb{E}_{seed}\ \ \text{or rank}(e) < \text{thr}_{pos} \right\},
\end{equation}
\begin{equation}
\label{Equ_Eneg}
\mathbb{E}_{neg} = \left\{e| \text{L}_{neg} < \text{rank}(e) < \text{U}_{neg} \right\},
\end{equation}
where these thresholds are the hyper-parameters for positive/negative entity selection. Additionally, it is worth noting that we determine these thresholds based on a reasonable assumption, i.e., that hard negative entities are ranked close to positive entities during the expansion process. Therefore, in practice we set $\mathbb{E}_{neg}$'s lower bound $\text{L}_{neg}$ slightly larger than the number of positive entities, so as to select hard negative entities. Inspired by~\citep{robinson2020hard}, we design a contrastive learning method for ESE that concentrates on hard negative entities. Specifically, we initialize our models in the same way as discussed above, while attaching an auxiliary projection head $\boldsymbol{p}$ on top of the encoder. The projection head $\boldsymbol{p}$ maps the final hidden embedding of the masked entity into a normalized feature vector $\mathbf{z} \in \mathbb{R}^D$, where $D$ is the output dimension.
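In code, gathering $\mathbb{E}_{pos}$ and $\mathbb{E}_{neg}$ amounts to simple thresholding over the previous iteration's ranked list; a sketch following the definitions above (names are illustrative) is:
\begin{verbatim}
def select_pos_neg(ranked_entities, seed_entities,
                   thr_pos, L_neg, U_neg):
    # ranked_entities: expansion result of the previous iteration,
    # best-ranked first (rank 0). Thresholds follow the definitions
    # of E_pos and E_neg given above.
    E_pos = set(seed_entities)
    E_pos |= {e for rank, e in enumerate(ranked_entities)
              if rank < thr_pos}
    E_neg = {e for rank, e in enumerate(ranked_entities)
             if L_neg < rank < U_neg}
    return E_pos, E_neg
\end{verbatim}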
To calculate the contrastive loss, samples from positive/negative entities are paired up to form positive/negative sample pairs:
\begin{equation}
\mathbb{P}_{pos} = \left\{ (\mathbf{x}, \mathbf{x}')|ent(\mathbf{x}) \in \mathbb{E}_{pos}, ent(\mathbf{x}') \in \mathbb{E}_{pos} \right\},
\end{equation}
\begin{equation}
\mathbb{P}_{neg} = \left\{ (\mathbf{x}, \mathbf{x}')|ent(\mathbf{x})=ent(\mathbf{x}') \in \mathbb{E}_{neg}\right\},
\end{equation}
where $ent(\mathbf{x})$ indicates the entity corresponding to the training sample $\mathbf{x}$. The contrastive loss is then defined as follows:
\begin{equation}
loss_{cl} =- \sum_{i=1}^{2N} \log \frac{S_{i}^{+}}{S_{i}^{+} + S_{i}^{-}},
\end{equation}
\begin{equation}
S_{i}^{+} = e^{\mathbf{z}_i^{\top} \cdot \mathbf{z}_{j(i)} / t},
\end{equation}
\begin{equation}
S_{i}^{-} = \max(\frac{-(2N-2)\cdot \tau^{+}\cdot S_{i}^{+} + \widetilde{S_{i}^{-}}}{1-\tau^{+}}, e^\frac{-1}{t}),
\end{equation}
\begin{equation}
\label{equ_theoritically}
\widetilde{S_{i}^{-}} = \frac{(2N-2)\sum_{k: k \neq i \neq j(i)} e^{(1+\beta)\mathbf{z}_i^{\top} \cdot \mathbf{z}_{k} / t} }{\sum_{k: k \neq i \neq j(i)} e^{\beta \mathbf{z}_i^{\top} \cdot \mathbf{z}_{k} / t}},
\end{equation}
where ${S_{i}^{+}}$/${S_{i}^{-}}$ respectively reflect the similarity between two training samples from the same/different sample pair, $j(i)$ indicates that the training samples corresponding to indexes $i$ and $j$ form a positive/negative sample pair, that is, $(\mathbf{x}_i, \mathbf{x}_j) \in \mathbb{P}_{pos} \cup \mathbb{P}_{neg}$, $N$ is the size of the mini-batch, $\tau^{+}$ is the class-prior probability, which can be estimated from data or treated as a hyper-parameter, $\beta$ is the hyper-parameter controlling the level of concentration on hard negative samples, and $t$ is the temperature scaling factor, which we set to 0.5 in all our experiments. Note that the training process alternates between the prediction loss and the contrastive loss.
\subsection{Model Selection and Ensemble}
\label{sec:methodev}
It is reasonable to hypothesize that a model which has better learned the common semantic meaning of a class will output more consistent representations of seed entities from that class. Under this hypothesis, we design a scoring function to estimate a model's expansion performance on a semantic class:
\begin{equation}
\text{sco}(\theta, cls) = - \frac{\sum_{i}^{M} \sum_{j:i \neq j }^{M} \text{KL}\_\text{Div}(r(e_i), r(e_j))}{M * (M-1)} ,
\end{equation}
\begin{equation}
r(e) = \frac{1}{|\mathbb{S}_{e}|} \sum_{\mathbf{x} \in \mathbb{S}_{e}} \boldsymbol{f}(\boldsymbol{g}(\mathbf{x} | \theta) | \theta) ,
\end{equation}
\begin{equation}
M = |\mathbb{E}_{seed}^{cls}|,
\end{equation}
where $\theta$ denotes the parameters of the model, $\mathbb{E}_{seed}^{cls}$ is the set of seed entities of the class, $\mathbb{S}_{e}$ is the set of all samples of entity $e$, $r(e)$ is the probabilistic representation of entity $e$, and $\text{KL}\_\text{Div}$ is the KL Divergence. The overall score of a model on a dataset is then defined as the geometric mean of the model's scores on all the classes:
\begin{equation}
\widetilde{\text{sco}}(\theta) = - \left\lvert \sqrt[N_{cls}]{\prod_{i}^{N_{cls}} \text{sco}(\theta, cls_i)} \right\rvert.
\label{score_function}
\end{equation}
With this scoring function, we are able to select the top-$k$ models $\Theta_{top}$ from multiple models using only the information in the seed sets of each class.
We ensemble these models as follows:
\begin{equation}
\widetilde{\boldsymbol{f}(\boldsymbol{g}(\mathbf{x}))} = \frac{1}{|\Theta_{top}|} \sum_{\theta \in \Theta_{top}} \boldsymbol{f}(\boldsymbol{g}(\mathbf{x} | \theta)).
\end{equation}
The practical model training process and an analysis of model efficiency are described in Appendix~\ref{Appendix_A}.
\begin{algorithm}[t]
\caption{Window Search}
\label{alg:Window Search}
\begin{flushleft}
\hspace*{0.05in} {\bf Input:} candidate entity list $L$; current set $L_{cur}$; window size $w$; anchor distribution $\mathbf{d} \in \mathbb{R}^{V_e}$; entity representation $\mathbf{r} \in \mathbb{R}^{V_e}$; scaling factor $\alpha$; stage step $\tau$; counter $c$. \\
\hspace*{0.05in} {\bf Output:} target entity $e_{t}$.
\end{flushleft}
\begin{algorithmic}[1]
\State $c \leftarrow 0$;
\State $\text{s}_t \leftarrow -\infty$;
\State $p \leftarrow \frac{1}{V_e}$;
\For{$e$ in $L$}
\If{$c \geq w$}
\State \textbf{break}
\EndIf
\State $\mathbf{r} \leftarrow \frac{1}{|\mathbb{S}_{e}|} \sum_{x \in \mathbb{S}_{e}} \widetilde{\boldsymbol{f}(\boldsymbol{g}(x))}$;
\State $\mathbf{d} \leftarrow [p]^{V_e}$;
\State $\mathbf{d}[index(e)] \leftarrow \mathbf{r}[index(e)]$;
\For{$i = 1$ to $|L_{cur}|$}
\State $\mathbf{d}[index(e_i)] \leftarrow p * \alpha * 2^{-\lfloor\frac{i}{\tau}\rfloor}$;
\EndFor
\State $\mathbf{d} \leftarrow \text{Softmax}(\mathbf{d})$;
\State $\text{s}(e) \leftarrow -\text{KL}\_\text{Div} (\mathbf{r},\mathbf{d})$;
\If{$\text{s}(e) > \text{s}_t$}
\State $e_{t} \leftarrow e$;
\State $\text{s}_t \leftarrow \text{s}(e)$;
\EndIf
\State $c \leftarrow c + 1$;
\EndFor
\State \Return $e_{t}$.
\end{algorithmic}
\end{algorithm}
\begin{table*}[h]
\centering
\scalebox{1.00}{
\begin{tabular}{lccccccccc}
\toprule
\multirow{2}{*} { \textbf{Methods} } & \multicolumn{3}{c} { \textbf{Wiki} } & \multicolumn{3}{c} { \textbf{APR} } & \multicolumn{3}{c} { \textbf{SE2} } \\
\cmidrule(r){2-4} \cmidrule(r){5-7}\cmidrule(r){8-10}
& MAP@10 & MAP@20 & MAP@50 & MAP@10 & MAP@20 & MAP@50 & MAP@10 & MAP@20 & MAP@50 \\
\midrule
Egoset & 0.904 & 0.877 & 0.745 & 0.758 & 0.710 & 0.570 & 0.583 & 0.533 & 0.433 \\
SetExpan & 0.944 & 0.921 & 0.720 & 0.789 & 0.763 & 0.639 & 0.473 & 0.418 & 0.341 \\
SetExpander & 0.499 & 0.439 & 0.321 & 0.287 & 0.208 & 0.120 & 0.520 & 0.475 & 0.397 \\
CaSE & 0.897 & 0.806 & 0.588 & 0.619 & 0.494 & 0.330 & 0.534 & 0.497 & 0.420 \\
CGExpan & \textbf{\underline{0.995}} & \underline{0.978} & 0.902 & \underline{0.992} & \underline{0.990} & 0.955 & 0.601 & 0.543 & 0.438 \\
SynSetExpan & 0.991 & \underline{0.978} & \underline{0.904} & 0.985 & \underline{0.990} & \textbf{\underline{0.960}} & \underline{0.628} & \underline{0.584} & \underline{0.502}\\
\midrule
ProbExpan{} & \textbf{0.995} & 0.982 & 0.926 & 0.993 & 0.990 & 0.934 & \textbf{0.683} & \textbf{0.633} & \textbf{0.541} \\
ProbExpan-CN & \textbf{0.995} & \textbf{0.983} & \textbf{0.929} & \textbf{1.000} & \textbf{0.996} & 0.955 & - & - & - \\
\bottomrule
\end{tabular}
}
\caption{MAP@K(10/20/50) of different methods. The choices of $K$ exactly follow previous works~\citep{zhang-etal-2020-empower,shen-etal-2020-synsetexpan}. All baseline results are taken directly from previously published papers. Note that the class name guidance step in CGExpan is proposed for relatively coarse-grained semantic classes, while the semantic classes of the SE2 dataset are more fine-grained, so this method is not readily applicable to SE2.
We underline the previous state-of-the-art performance on the three datasets for convenient comparison.}
\label{tab:allresult}
\end{table*}
\subsection{Probabilistic Entity Set Expansion}
\label{sec:methodexpan}
Our proposed ProbExpan{} is an iterative framework based on the probabilistic representation of entities and entity sets. At the beginning of the expansion, we initialize the current set $L_{cur}$ as the given seed set. In every expansion step, we first calculate the probabilistic representation of the current set $r(L_{cur})$ with our pre-trained ensemble model:
\begin{equation}
r(L_{cur}) = \frac{1}{|L_{cur}|} \sum_{e\in L_{cur}} \frac{1}{|\mathbb{S}_{e}|} \sum_{\mathbf{x} \in \mathbb{S}_{e}} \widetilde{\boldsymbol{f}(\boldsymbol{g}(\mathbf{x}))}.
\end{equation}
$r(L_{cur})$ is essentially the average of the predicted entity distributions of all entities in the current set, whose dimension is the size of the entity vocabulary. Sorting it and filtering out entities already in the current set gives us a ranked candidate entity list $L$. We then run the window search algorithm (Algorithm~\ref{alg:Window Search}) on $L$ to expand the current set with target entities. The algorithm judges the quality of a candidate entity by the similarity between its representation $\mathbf{r} \in \mathbb{R}^{V_e}$ and the anchor distribution $\mathbf{d} \in \mathbb{R}^{V_e}$ of the current set. Therefore, an entity that is not so prominent (i.e., a long-tail entity) but shares a more similar representation with the current set will still be added to the current set. The anchor distribution $\mathbf{d}$ reflects the entity distribution of the current set, in which seed entities and entities expanded earlier weigh more. Its base value is set to $\frac{1}{V_e}$, the average entity prediction probability. To make the anchor distribution robust to candidate entities, the anchor probability of a candidate entity is set equal to its predicted probability. The anchor probability of each entity in the current set is scaled from $p$, with higher-ranked entities receiving larger scales. Note that the anchor distribution $\mathbf{d}$ is transformed into a probability distribution by Softmax before calculating the $\text{KL}\_\text{Div}$. We increase the window size $w$ with the current set size, since the anchor distribution becomes more concrete as the current set grows:
\begin{equation}
w = w_0 + g * \lfloor \frac{|L_{cur}|}{s} \rfloor,
\end{equation}
where $w_0$ is the initial window size, $g$ is the window growing rate, and $s$ is the window growing step. Once the expanded set reaches the target size $\text{S}_{tgt}$, we stop the expansion and run the entity re-ranking algorithm. In particular, for every entity $e$ in the expanded set, we first calculate its score $\text{s}(e)$ in the same way as in the window search algorithm. A ranked list $L_{rank}$ can be constructed according to these scores. The aggregation score of every expanded entity is then calculated as follows:
\begin{equation}
score(e_i) = \sqrt{\frac{1}{i} * \frac{1}{rank(e_i)}}, \quad i = 1 ... \text{S}_{tgt},
\end{equation}
where $i$ is the expansion order of entity $e_i$ in the expanded set and $rank(e_i)$ is the rank of entity $e_i$ in $L_{rank}$. Sorting the expanded set according to these aggregation scores yields the final expansion results.
\section{Experiments}
\label{sec:exp}
\subsection{Experiment Setup}
\label{sec:ExperimentSetup}
\noindent\textbf{1.
Datasets.} To verify the correctness of our intuition and proposed method, we choose two public datasets widely used in previous work and an additional, recently released, larger and more challenging dataset~\citep{shen-etal-2020-synsetexpan}:
\begin{enumerate}
\item \textbf{Wiki} and \textbf{APR}, which contain 8 and 3 semantic classes, respectively. Each semantic class has 5 seed sets and each seed set has 3 queries, following previous work.
\item \textbf{SE2}, which contains 60 semantic classes and 1200 seed queries. The scale of the dataset makes SE2 more challenging. The datasets used in the experiments are detailed in Appendix~\ref{Appendix_B}.
\end{enumerate}
\noindent\textbf{2. Compared methods.} We compare the following ESE methods in our experiments; the implementation details and hyper-parameter choices are given in Appendix~\ref{Appendix_C}:
\begin{enumerate}
\item \textbf{Egoset}~\citep{Egoset}: A multifaceted set expansion system based on skip-gram features, word2vec embeddings and WikiList.
\item \textbf{SetExpan}~\citep{shen2017setexpan}: A method that iteratively selects context features from the corpus and uses an ensemble mechanism to rank entities.
\item \textbf{SetExpander}~\citep{mamou-etal-2018-term}: A corpus-based model for expanding a seed entity set into a more complete set of entities that belong to the same semantic class.
\item \textbf{CaSE}~\citep{CaSE}: A framework that constructs candidate entities with lexical features and ranks candidates using the similarity of distributed representations.
\item \textbf{CGExpan}~\citep{zhang-etal-2020-empower}: A method that generates the target semantic class name by querying a pre-trained language model and utilizes the generated class names to expand new entities.
\item \textbf{SynSetExpan}~\citep{shen-etal-2020-synsetexpan}: The current state-of-the-art method, which jointly conducts two related tasks and utilizes synonym information to improve the performance of ESE.
\item \textbf{ProbExpan}: Our proposed framework. We first apply contrastive learning to the entity representation model to obtain better entity semantic representations. Then we use model selection and ensemble to avoid the randomness of the pre-training process. Finally, we run two novel algorithms to obtain the expansion results.
\item \textbf{ProbExpan-CN}: Because our proposed entity representation model is end-to-end trainable, we can combine it with the class name guidance step of CGExpan.
\end{enumerate}
\noindent\textbf{3. Evaluation Metrics.} The objective of ESE is to produce a ranked list of entities belonging to the same semantic class. Thus, to evaluate the ranked result, we use the \textbf{Mean Average Precision at different top $K$ positions}: MAP@K $=\frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}_{K}\left(L_{q}, S_{q}\right)$, where $Q$ is the set of all seed queries and, for each query $q$, $\mathrm{AP}_{K}\left(L_{q}, S_{q}\right)$ denotes the traditional average precision at position $K$ given a ranked list of entities $L_q$ and a ground-truth set $S_q$. To ensure fair comparison, our evaluation settings are completely consistent with those of the baseline methods.
\subsection{Experiment Results}
\label{sec:ExperimentResults}
We first report the overall performance, then analyze and explain the experiment results comprehensively.
\noindent\textbf{1. Overall Performance.} Table~\ref{tab:allresult} shows the overall performance of different ESE methods.
We can see that ProbExpan{}, along with its variant, outperforms all baselines, including the current state-of-the-art methods, on the three datasets, which demonstrates the effectiveness of our proposed method. It is also worth noting that Wiki and APR are small and relatively easy, so the baselines leave little room for improvement. Even so, our methods still perform well in comparison.
\begin{table}[h]
\centering
\scalebox{1.00}{
\begin{tabular}{cc}
\hline
\textbf{Semantic Class} & \textbf{MAP@100} \\
\hline
China Provinces & 0.824 - 0.728 = 0.096 \ \ $\uparrow$\\
Companies & 0.969 - 0.950 = 0.019 \ \ $\uparrow$\\
Countries & 0.930 - 0.941 = -0.011 \ \ $\downarrow$ \\
Disease & 0.959 - 0.948 = 0.011 \ \ $\uparrow$\\
Parties & 0.948 - 0.913 = 0.035 \ \ $\uparrow$\\
Sports Leagues & 1.000 - 0.909 = 0.091 \ \ $\uparrow$ \\
TV Channels & 0.888 - 0.875 = 0.013 \ \ $\uparrow$ \\
US States & 0.763 - 0.750 = 0.013 \ \ $\uparrow$ \\
\hline
\textbf{Overall} & \textbf{0.033} \ \ $\uparrow$\\
\hline
\end{tabular}
}
\caption{The improvement (MAP@100) of ProbExpan{} over CGExpan for different classes.}
\label{tab:fineresult}
\end{table}
\noindent\textbf{2. Performance Analysis.} (1) Across datasets, our methods perform stably at a competitive level while existing methods fluctuate markedly. Our model's advantage is especially obvious on SE2, which has more entities and semantic classes. (2) Across semantic classes, Table~\ref{tab:fineresult} shows that ProbExpan{} outperforms previous work on most classes, even under the more challenging MAP@100 metric. (3) As for flexibility and extensibility, the performance improvement of ProbExpan-CN over ProbExpan{} suggests that our proposed method can be readily combined with other methods.
\begin{figure}[tp]
\centering
\subfigure[Wiki Dataset-$\text{L}_{neg}$] { \label{wikil} \includegraphics[height = 0.40 \columnwidth,width=0.46\columnwidth]{Figures/wiki_L.pdf} }
\subfigure[Wiki Dataset-$\text{U}_{neg}$] { \label{wikiu} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/wiki_U.pdf} }
\subfigure[APR Dataset-$\text{L}_{neg}$] { \label{aprl} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/APR_L.pdf} }
\subfigure[APR Dataset-$\text{U}_{neg}$] { \label{apru} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/APR_U.pdf} }
\subfigure[SE2 Dataset-$\text{L}_{neg}$] { \label{se2l} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/SE2_L.pdf} }
\subfigure[SE2 Dataset-$\text{U}_{neg}$] { \label{se2u} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/SE2_U.pdf} }
\caption{Sensitivity analysis of $\text{L}_{neg}$ / $\text{U}_{neg}$ in ProbExpan{}.}
\label{sensitive}
\end{figure}
\begin{table*}[h]
\centering
\scalebox{1.10}{
\begin{tabular}{lcccccc}
\toprule
\multirow{2}{*} { \textbf{Methods} } & \multicolumn{3}{c} { \textbf{Wiki} } & \multicolumn{3}{c} { \textbf{APR} } \\
\cmidrule(r){2-4} \cmidrule(r){5-7}
& MAP@10 & MAP@20 & MAP@50 & MAP@10 & MAP@20 & MAP@50 \\
\midrule
CGExpan-NoCN & 0.968 & 0.945 & 0.859 & 0.909 & 0.902 & 0.787 \\
ProbExpan{}-NoCLEN & 0.983 & 0.974 & 0.910 & 0.990 & 0.977 & 0.898 \\
ProbExpan{}-NoEN & 0.989 & 0.980 & 0.899 & 0.992 & 0.981 & 0.912 \\
ProbExpan{}-NoCL & 0.991 & 0.980 & 0.917 & \textbf{0.993} & 0.984 & 0.910 \\
ProbExpan{} & \textbf{0.995} & \textbf{0.982} & \textbf{0.926} & \textbf{0.993} & \textbf{0.990} & \textbf{0.934} \\
\bottomrule
\end{tabular}
}
\caption{Ablation studies of ProbExpan{} and its variants on two datasets. The results are arranged from top to bottom in order of an increasing number of model components.}
\label{tab:ablationresult}
\end{table*}
\subsection{Parameter Studies}
In Section~\ref{sec:methodcl}, we propose to automatically select negative entities using a pre-defined interval $(\text{L}_{neg}, \text{U}_{neg})$, according to Equation~\ref{Equ_Eneg}. Furthermore, to select truly hard negative entities as accurately as possible, we manually ensure that the value of $\text{L}_{neg}$ is slightly larger than the number of positive entities when we determine the values of these two hyper-parameters. Therefore, it is reasonable to suspect that the values of $\text{L}_{neg}$ and $\text{U}_{neg}$ will affect the hardness of the selected negative entities, thereby affecting the performance of ProbExpan{}. However, we can show both theoretically and empirically that no such parameter sensitivity exists in our proposed framework. \textbf{Theoretically}, even if we set an inappropriately large $\text{L}_{neg}$, it will not cause a drop in the overall performance of ProbExpan{}, because our proposed contrastive loss can adaptively focus on the truly hard entities in a training batch. Negative entities that are more similar to the positive entities receive higher weights when the loss is calculated through Equation~\ref{equ_theoritically}. \textbf{Empirically}, we carry out the parameter studies shown in Figure~\ref{sensitive} to verify the insensitivity of ProbExpan{} to these two hyper-parameters.
Specifically, we fix one of $(\text{L}_{neg}$ and $\text{U}_{neg})$ and change the value of the other, and run the ProbExpan{} on different datasets to test its performance. From Figure~\ref{sensitive}, we can see that the performance of our proposed ProbExpan{} is not very sensitive to their specific values when these two parameters are within a reasonable range, because as $\text{L}_{neg}$ or $\text{U}_{neg}$ changes, the model performance (MAP@K) does not change very significantly. \textbf{To sum up}, the values of $(\text{L}_{neg}$ and $\text{U}_{neg})$ will indeed determine what entities we select as the hard negative entities, but due to the design of other structures and training strategy of our model, their values will not affect the overall performance of the model significantly. \subsection{Ablation Studies} \label{sec:AbaltionStudy} To provide a detailed analysis of how our proposed method works on ESE, we perform a series of ablation experiments to see how each component affects the model's expansion performance. Besides, the ablation results will also provide empirical proofs for our intuitions. Because the full method of CGExpan leverages some fixed patterns well manually designed by researchers(i.e., Hearst patterns~\citep{hearst-1992-automatic}), to ensure ablation studies' fairness, we will compare ProbExpan's variants with CGExpan-NoCN~\citep{zhang-etal-2020-empower}, which mainly consists of a traditional pre-trained language model such as BERT. The ProbExpan's variants include: \begin{enumerate} \item ProbExpan{}-NoCLEN: The ablation of ProbExpan{} without contrastive learning and model selection and ensemble. \item ProbExpan{}-NoEN: The ablation of ProbExpan{} without model selection and ensemble. \item ProbExpan{}-NoCL: The ablation of ProbExpan{} without contrastive learning. \end{enumerate} The results of these methods are shown in Tabel~\ref{tab:ablationresult}. \noindent\textbf{1. Can Entity Representation Model Empower ESE?} From Table~\ref{tab:ablationresult} we can see that ProbExpan{}-NoCLEN has a great improvement compared to CGExpan-NoCN, especially for the MAP@50. The significant improvement of ProbExpan{}-NoCLEN indicates the entity-level masked language model can represent entities better. Besides, it is worth noting that the ProbExpan{}-NoCLEN's results on APR are better than results on Wiki, which is exactly the opposite of CGExpan-NoCN. Because CGExpan-NoCN incorporates the average $\text{BERT}$ representation to select entities and the $\text{BERT}$ is pre-trained on Wikipedia corpus which is similar to the corpus of Wiki dataset in ESE. Therefore, CGExpan-NoCN cannot handle other source corpus, which also reflects that the entity representation model we design is not sensitive to the source corpus and has good generalization performance. \begin{figure} \centering \subfigure[Wiki Dataset] { \label{wiki} \includegraphics[scale=0.25]{wiki.pdf} } \subfigure[APR Dataset] { \label{apr} \includegraphics[scale=0.25]{apr.pdf} } \caption{Correlation analysis of model score and performance on Wiki and APR datasets.} \label{score_map} \end{figure} \noindent\textbf{2. Can Contrastive Learning Divide A Clearer Semantic Boundary?} The comparison between ProbExpan{}-NoEN and ProbExpan{}-NoCLEN shows that contrastive learning effectively refines the entity representation. According to our observation, previous works such as CGExpan already have competitive performance, the most error-prone case is that they face entities that are semantically ambiguous. 
This is also the motivation we choose contrastive learning to handle these hard negative entities. The performance results of Table~\ref{tab:ablationresult} and the case study in Figure~\ref{caseresult} together show that contrastive learning can indeed divide a clearer semantic boundary. \noindent\textbf{3. Can Model Selection And Ensemble Strategy Work?} The results about ensemble method in Table~\ref{tab:ablationresult} show that the model selection and ensemble step we design can bring remarkable improvement. Especially for the ProbExpan{}'s results, we are pleasantly surprised to find that on the basis of ProbExpan{}-NoEN, application of model selection and ensemble strategy can still improve further. In addition, to verify the validity of the Equation~\ref{score_function}, we analyze the correlation between model score and performance. For the convenience of display, we normalize the model score. The positive correlation results presented in Figure~\ref{score_map} show that the Equation~\ref{score_function} can effectively evaluate the model. \begin{figure*}[tp] \centering \includegraphics[width=1.00\textwidth]{Case_Study.pdf} \caption{Results of two seed entity sets with different semantic classes. We mark the wrong entities in red.} \label{caseresult} \end{figure*} \subsection{Case Studies} \label{sec:CaseStudy} We will present different models' representative expansion cases as further verification of our methods' advantages. Figure~\ref{caseresult} shows some expansion results of ProbExpan{}'s variants for several queries from different semantic classes. We see that even though ProbExpan{}-NoCLEN has achieved very good overall performance (as can be seen from Table~\ref{tab:ablationresult}), it still occasionally has difficulty distinguishing some hard negative samples. For example, municipal administrative regions such as “\emph{Wuhan}”, “\emph{Hangzhou}”, and “\emph{Guangzhou}” are likely to have great similarities in context with provincial administrative regions such as “\emph{Shanghai}” and “\emph{Zhejiang}” when training a language model, because they all actually belong to \texttt{Location} entities. Therefore, ProbExpan{}-NoCLEN cannot represent these entities in a more fine-grained manner at the semantic level. As shown in the comparison between ProbExpan{}-NoCLEN and ProbExpan{}-NoEN columns of Figure~\ref{caseresult}, ProbExpan{}-NoEN can recall more entities belonging to the correct target semantic class. So we can know that contrastive learning can divide a tighter and clearer boundary for the target semantic class through by extending the distance between negative and positive samples and shortening the distance between positive samples in the semantic space. From the ProbExpan{}-NoEN column of Figure~\ref{caseresult}, we can see contrastive learning still can not solve some extreme situations. For example, suppose a person does not have any external background knowledge, then when he/she sees “\emph{St Kilda Football Club}”, he/she must be easy to literally classify it as \texttt{Sports Leagues}. Therefore, we design the model selection and ensemble mechanism to get better expanded entities on the basis of ProbExpan{}-NoEN and the mechanism's effectiveness can be reflected from the ProbExpan{} column of Figure~\ref{caseresult}. From the whole Figure~\ref{caseresult} we can know that the effect of ProbExpan{}-NoEN is better than ProbExpan{}-NoCLEN, and ProbExpan{} can be further improved based on ProbExpan{}-NoEN. 
\clearpage \section{Introduction} \label{sec:intro} Entity Set Expansion (ESE) aims to expand from a set of seed entities (e.g., {“\emph{China}”, “\emph{America}”, “\emph{Japan}”}) to new target entities (e.g., {“\emph{Russia}”, “\emph{Germany}”, ...}) that belong to the same semantic class (i.e., \texttt{Country}) as the seed entities. The ESE task can benefit various downstream NLP and IR applications, such as knowledge graph construction~\citep{shi2021entity}, Web search~\citep{chen2016long}, and question answering~\citep{wang2008automatic}. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{Intro.pdf} \caption{Examples of hard negative entities in ESE.} \label{Intro_Figure} \end{figure} In recent years, iterative bootstrapping methods have gradually become the mainstream of ESE research. These methods~\citep{shen2017setexpan, yan2019learning, CaSE} iteratively add the model's most confident candidate entities to the expanded set. A core challenge for these methods is to avoid selecting \emph{hard negative entities} that are semantically ambiguous with the target entities~\citep{jindal2011learning, gupta-manning-2014-improved}. As shown in Figure~\ref{Intro_Figure}, when we want to expand target entities belonging to the class \texttt{US States}, a competitive model is likely to wrongly expand hard negative entities such as “\emph{San Francisco}” and “\emph{Los Angeles}”. Judged against \texttt{US States}, these hard negative entities clearly do not belong to the target semantic class; yet at a coarser granularity (i.e., \texttt{Location}), they can be regarded as belonging to the same class as the target entities.
Furthermore, owing to the iterative nature of the expansion process, a small number of negative entities that are incorrectly expanded in early iterations will cause errors to accumulate in later iterations, gradually degrading expansion performance. This is the long-standing “semantic drift” problem faced by ESE methods~\citep{curran2007minimising, mcintosh-2010-unsupervised, shi-etal-2014-probabilistic}. To address the above challenge and problem, we propose to use contrastive learning~\citep{chen2020simple, robinson2020hard, gao2021simcse} to empower the ESE model to better deal with hard negative entities. Contrastive learning is an active topic in the self-supervised learning field; originally applied to learning visual representations, it has attracted increasing attention from NLP researchers. In general, contrastive learning aims to learn more effective representations by pulling together samples from the same class and pushing apart samples from different classes~\citep{HadsellCon}. Intuitively, our goal of making the ESE model better distinguish hard negative entities coincides naturally with the motivation of contrastive learning. In our study, contrastive learning provides the ESE model with clearer semantic boundaries and makes entity representations more faithful to their semantics. Motivated by the above intuition, we propose a novel ESE method that consists of three parts: (1) \emph{Entity representation model}, an entity-level masked language model pre-trained with the entity prediction task we specially design for ESE. We then apply contrastive learning to refine the semantic representation learned by our model, which is utilized in the later expansion process. (2) \emph{Model selection and ensemble}. Due to the randomness of training samples in the pre-training process, the model is sensitive to the quality of the training context features. To alleviate this issue, we pre-train multiple models as in (1), then select and ensemble the top models to reduce the randomness of a single model. (3) \emph{Probabilistic expansion framework}, a novel framework that utilizes the ensemble model obtained in (2) through a window search algorithm and an entity re-ranking algorithm, both based on the probabilistic representation similarity between a candidate entity and the entity set. Through this framework, we finally obtain the target entities we want to expand. In summary, our contributions are threefold: \begin{itemize} \item We are the first to apply contrastive learning to ESE, so as to better handle hard negative entities and derive more effective entity representations in semantic space. \item We propose a novel ESE framework, ProbExpan{}, which uniformly represents entities and entity sets in probability space and utilizes the ESE model to expand target entities. \item We conduct extensive experiments and detailed analysis on three public datasets and achieve state-of-the-art performance. Solid results demonstrate the substantial improvement of our method over previous baseline methods. \end{itemize} \section{Related Work} \label{sec:rw} \noindent\textbf{Entity Set Expansion.} Recently, corpus-based ESE methods have gradually become the mainstream paradigm.
These corpus-based ESE methods can be divided into two main categories: (1) one-time ranking methods~\citep{mamou-etal-2018-term,CaSE,kushilevitz-etal-2020-two}, which introduce pairwise semantic similarity into set expansion and suffer from the \emph{Entity Intrusion Error} problem, i.e., they cannot clearly convey the semantic meaning of entities; (2) iterative pattern-based bootstrapping methods~\citep{Egoset,shen2017setexpan, AuxiliaryExpan}, which bootstrap the seed entity set by iteratively selecting context patterns and ranking expanded entities. The latter are usually troubled by the \emph{Semantic Drift} problem, i.e., the target semantic class drifts gradually as noise arises during the iterations. \noindent\textbf{Language Representation.} Early representation methods focus on word-level embeddings, such as Word2Vec~\citep{goldberg2014word2vec} and GloVe~\citep{pennington2014glove}, which output a single embedding for each word in the vocabulary. Researchers have since designed many context-aware representation methods to better exploit context information. The outstanding representatives of context-aware representation are the masked language models, exemplified by BERT~\citep{devlin-etal-2019-bert}. CGExpan~\citep{zhang-etal-2020-empower} has utilized BERT representations to enhance ESE, but BERT still only provides word-level representations, and CGExpan only uses the pre-trained BERT embeddings without any task-specific training. To obtain better representations in more complex tasks, ERNIE~\citep{zhang-etal-2019-ernie} is designed to learn entity/phrase-level representations. To the best of our knowledge, entity-level representation methods have not yet been applied to ESE. \noindent\textbf{Contrastive Learning.} Contrastive learning has been widely applied in the self-supervised field~\citep{kim-etal-2021-self, qin-etal-2021-erica, wang-etal-2021-cline,li2022past}. Its main motivation is to attract positive samples and repulse negative samples~\citep{HadsellCon,chen2020simple,khosla2020supervised,gao2021simcse}. Recent work~\citep{robinson2020hard} shows that contrastive representation learning benefits from hard negative samples (i.e., samples that are difficult to distinguish from positive samples). This idea coincides with the challenge we observe in existing ESE methods: most expansion models cannot handle hard negative entities well. SynSetExpan~\citep{shen-etal-2020-synsetexpan} enhances the ESE task via a related task, Synonym Discovery. Unlike contrastive learning, it aims to suppress the effect of negative entities by obtaining more positive entities. NEG-FINDER~\citep{mcintosh-2010-unsupervised} is concerned with negative semantic classes, as is our work, but it performs offline negative discovery and then utilizes the pre-selected negative categories to alleviate the semantic drift of bootstrapping algorithms. Unlike NEG-FINDER, which is a heuristic and untrainable algorithm, our study pre-trains a task-specific model that has better entity representations and clearer semantic boundaries for ESE through contrastive learning. \section{Methodology} \label{sec:method} In this section, we first introduce the entity representation model and the entity prediction task we design for ESE. In particular, we discuss how we apply contrastive learning to refine entity representations.
Then we illustrate the mechanism of model selection and ensemble. Finally, we describe the expansion framework and the algorithms used to expand target entities. An overview of our proposed method is shown in Figure~\ref{Method_Figure}. \begin{figure*}[] \centering \includegraphics[width=1.00\textwidth]{method.pdf} \caption{Overview of our proposed method. We jointly train the entity representation model to obtain clearer semantic boundaries through our designed entity prediction task and contrastive learning task. Based on multiple pre-trained entity representation models, we utilize the model selection and ensemble mechanism to avoid the randomness brought by a single model. Two simple yet effective algorithms, namely the window-search and entity re-ranking algorithms, are used to search and sort entities to obtain the ideal target entities, according to the similarity of probabilistic representations derived from the ensemble model.} \label{Method_Figure} \end{figure*} \subsection{Entity Representation Model} \label{sec:methodmlm} The entity representation model mainly contains an entity-level masked language model, which takes a tokenized sentence with an entity masked as input and outputs a probability distribution describing which entity the masked token can be. The representation of an entity is defined as the average of the predicted entity distributions over all of its sentences, and the representation of an entity set as the average of the representations of all its entities. Our entity-level masked language model contains an encoder $\boldsymbol{g}$ and a classification head $\boldsymbol{f}$. To be specific, we initialize the encoder $\boldsymbol{g}$ with the pre-trained parameters of $\text{BERT}_\text{BASE}$, so that the grammatical and semantic knowledge BERT has learned from large-scale corpora can be utilized. The classification head $\boldsymbol{f}$ consists of two linear layers with GeLU activation and a softmax layer. We set the biases of the classification head $\boldsymbol{f}$ to 0 and initialize its weights from the Kaiming uniform distribution~\citep{He_2015_ICCV}. For the masked entity prediction pre-training task, we replace the span of every entity in the vocabulary with [MASK] in every sentence in which it appears, yielding one training sample per sentence. During each training epoch, we restrict the number of samples from every entity to the average number of samples over all entities, to mitigate sample imbalance and to facilitate the subsequent ensemble learning. We choose the label smoothing loss function~\citep{szegedy2016rethinking} rather than the traditional cross-entropy loss, so that entities sharing similar semantics with the target entity are not overly suppressed. The prediction loss is defined as: \begin{equation} \begin{aligned} loss_{pred} = - \frac{1}{N} \sum_i^N \sum_j^{V_e} (&\mathds{1}_{j=y_i}(1-\eta)\cdot \log \hat{\mathbf{y}}_i[j] \\&+ \mathds{1}_{j\neq y_i}\eta \cdot \log \hat{\mathbf{y}}_i[j]), \end{aligned} \end{equation} where $N$ is the size of the mini-batch, $V_e$ is the size of the entity vocabulary, $\eta$ is the smoothing factor (the larger $\eta$ is, the smoother the labels are), $y_i$ is the index of the entity corresponding to training sample $i$, and $\hat{\mathbf{y}}_i$ is the output of $\boldsymbol{f}$.
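To make this objective concrete, the following is a minimal PyTorch-style sketch of the label-smoothing loss exactly as written in the equation above; the function and tensor names are illustrative assumptions, not our actual training code, and we assume access to the pre-softmax logits of $\boldsymbol{f}$ for numerical stability. Note that, following the equation, every non-target entity receives weight $\eta$ (rather than the more common $\eta/(V_e-1)$).
\begin{verbatim}
import torch
import torch.nn.functional as F

def entity_prediction_loss(logits, targets, eta=0.1):
    # logits:  (N, V_e) raw outputs of the classification head f
    #          (taken before its softmax layer).
    # targets: (N,) indices y_i of the masked entities.
    # eta:     smoothing factor; larger eta smooths the labels more.
    log_probs = F.log_softmax(logits, dim=-1)        # log y_hat_i
    # Smoothed targets: (1 - eta) on the true entity, eta elsewhere,
    # matching the paper's equation (not renormalized).
    smooth = torch.full_like(log_probs, eta)
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eta)
    # Negative weighted log-likelihood, averaged over the mini-batch.
    return -(smooth * log_probs).sum(dim=-1).mean()
\end{verbatim}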
\subsection{Contrastive Representation Learning} \label{sec:methodcl} We apply contrastive learning to refine the semantic space learned by our model, so that representations of entities from the same semantic class are pulled closer while representations of entities from different semantic classes are pushed apart. To do this, we first generate positive/negative entities for each semantic class from the seed sets and previous expansion results. Note that these previous expansion results come from the last iteration, since our expansion framework is an iterative process. Positive entities $\mathbb{E}_{pos}$ are defined as seed entities or entities that rank higher than a threshold $\text{thr}_{pos}$ in the expanded ranked lists. The entities that lie in a pre-defined interval $(\text{L}_{neg}, \text{U}_{neg})$ of the expanded ranked lists are automatically selected as negative entities $\mathbb{E}_{neg}$: \begin{equation} \mathbb{E}_{pos} = \left\{e|e \in \mathbb{E}_{seed}\ \ \text{or rank}(e) < \text{thr}_{pos} \right\}, \end{equation} \begin{equation} \label{Equ_Eneg} \mathbb{E}_{neg} = \left\{e| \text{L}_{neg} < \text{rank}(e) < \text{U}_{neg} \right\}, \end{equation} where these thresholds are hyper-parameters for positive/negative entity selection. Additionally, it is worth noting that we determine these thresholds based on a reasonable assumption, namely that hard negative entities are ranked close to positive entities during the expansion process. Therefore, in practice we set $\mathbb{E}_{neg}$'s lower bound $\text{L}_{neg}$ slightly larger than the number of positive entities when selecting negative entities. Inspired by~\citep{robinson2020hard}, we design a contrastive learning method that can concentrate on hard negative entities for ESE. Specifically, we initialize our models in the same way as discussed above, while attaching an auxiliary projection head $\boldsymbol{p}$ on top of the encoder of our model. The projection head $\boldsymbol{p}$ maps the final hidden embedding of the masked entity into a normalized feature vector $\mathbf{z} \in \mathbb{R}^D$, where $D$ is the output dimension. To calculate the contrastive loss, samples from positive/negative entities are paired up to form positive/negative sample pairs: \begin{equation} \mathbb{P}_{pos} = \left\{ (\mathbf{x}, \mathbf{x}')|ent(\mathbf{x}) \in \mathbb{E}_{pos}, ent(\mathbf{x}') \in \mathbb{E}_{pos} \right\}, \end{equation} \begin{equation} \mathbb{P}_{neg} = \left\{ (\mathbf{x}, \mathbf{x}')|ent(\mathbf{x})=ent(\mathbf{x}') \in \mathbb{E}_{neg}\right\}, \end{equation} where $ent(\mathbf{x})$ indicates the entity corresponding to the training sample $\mathbf{x}$.
The contrastive loss is then defined as follows: \begin{equation} loss_{cl} =- \sum_{i=1}^{2N} \log \frac{S_{i}^{+}}{S_{i}^{+} + S_{i}^{-}}, \end{equation} \begin{equation} S_{i}^{+} = e^{\mathbf{z}_i^{\top} \cdot \mathbf{z}_{j(i)} / t}, \end{equation} \begin{equation} S_{i}^{-} = \max(\frac{-(2N-2)\cdot \tau^{+}\cdot S_{i}^{+} + \widetilde{S_{i}^{-}}}{1-\tau^{+}}, e^\frac{-1}{t}), \end{equation} \begin{equation} \label{equ_theoritically} \widetilde{S_{i}^{-}} = \frac{(2N-2)\sum_{k: k \neq i \neq j(i)} e^{(1+\beta)\mathbf{z}_i^{\top} \cdot \mathbf{z}_{k} / t} }{\sum_{k: k \neq i \neq j(i)} e^{\beta \mathbf{z}_i^{\top} \cdot \mathbf{z}_{k} / t}}, \end{equation} where ${S_{i}^{+}}$/${S_{i}^{-}}$ respectively reflect the similarity between two training samples from the same/different sample pair, $j(i)$ indicates that the training samples with indexes $i$ and $j$ form a positive/negative sample pair, that is, $(\mathbf{x}_i, \mathbf{x}_j) \in \mathbb{P}_{pos} \cup \mathbb{P}_{neg}$, $N$ is the size of the mini-batch, $\tau^{+}$ is the class-prior probability, which can be estimated from data or treated as a hyper-parameter, $\beta$ is the hyper-parameter controlling the level of concentration on hard negative samples, and $t$ is the temperature scaling factor, which we set to 0.5 in all our experiments. Note that the training process alternates between the prediction loss and the contrastive loss. \subsection{Model Selection and Ensemble} \label{sec:methodev} It is reasonable to hypothesize that a model which has learned more of the common semantic meaning of a class will output more consistent representations of the seed entities from that class. Under this hypothesis, we design a scoring function to estimate a model's expansion performance on a semantic class: \begin{equation} \text{sco}(\theta, cls) = - \frac{\sum_{i}^{M} \sum_{j:i \neq j }^{M} \text{KL}\_\text{Div}(r(e_i), r(e_j))}{M (M-1)} , \end{equation} \begin{equation} r(e) = \frac{1}{|\mathbb{S}_{e}|} \sum_{\mathbf{x} \in \mathbb{S}_{e}} \boldsymbol{f}(\boldsymbol{g}(\mathbf{x} | \theta) | \theta) , \end{equation} \begin{equation} M = |\mathbb{E}_{seed}^{cls}|, \end{equation} where $\theta$ denotes the parameters of the model, $\mathbb{E}_{seed}^{cls}$ is the set of seed entities of the class, $\mathbb{S}_{e}$ is the set of all samples of entity $e$, $r(e)$ is the probabilistic representation of entity $e$, and $\text{KL}\_\text{Div}$ is the KL divergence. The overall score of a model on a dataset is then defined as the geometric mean of the model's scores on all the classes: \begin{equation} \widetilde{\text{sco}}(\theta) = - \left\lvert \sqrt[N_{cls}]{\prod_{i}^{N_{cls}} \text{sco}(\theta, cls_i)} \right\rvert. \label{score_function} \end{equation} With this scoring function, we are able to select the top-$k$ models $\Theta_{top}$ from multiple models using only the information of the seed sets of each class. We ensemble these models as follows: \begin{equation} \widetilde{\boldsymbol{f}(\boldsymbol{g}(\mathbf{x}))} = \frac{1}{|\Theta_{top}|} \sum_{\theta \in \Theta_{top}} \boldsymbol{f}(\boldsymbol{g}(\mathbf{x} | \theta)). \end{equation} The practical model training process and an analysis of model efficiency are described in Appendix~\ref{Appendix_A}.
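As a concrete illustration of this scoring function, below is a minimal sketch in the same hypothetical Python style as above; it assumes the probabilistic representations $r(e)$ of a class's $M$ seed entities are already computed and stacked row-wise (since they come from a softmax, all entries are strictly positive).
\begin{verbatim}
import torch
import torch.nn.functional as F

def class_score(seed_reps):
    # seed_reps: (M, V_e) rows are probabilistic representations r(e)
    # of one class's seed entities (averaged predicted distributions).
    m = seed_reps.size(0)
    total = 0.0
    for i in range(m):
        for j in range(m):
            if i != j:
                # KL(r(e_i) || r(e_j)); F.kl_div expects log-probs
                # as its first argument and probs as its second.
                total += F.kl_div(seed_reps[j].log(), seed_reps[i],
                                  reduction="sum")
    return -total / (m * (m - 1))    # sco(theta, cls), at most 0

def overall_score(per_class_scores):
    # per_class_scores: list of floats, one sco(theta, cls) per class.
    # Negated geometric mean of their magnitudes, as in the equation.
    s = torch.tensor(per_class_scores).abs()
    return -s.log().mean().exp()
\end{verbatim}
Models whose seed representations agree (small pairwise KL divergence) receive scores closer to zero and are the ones kept for the ensemble.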
\begin{algorithm}[t] \caption{Window Search} \label{alg:Window Search} \begin{flushleft} \hspace*{0.05in} {\bf Input:} candidate entity list $L$; current set $L_{cur}$; window size $w$; anchor distribution $\mathbf{d} \in \mathbb{R}^{V_e}$; entity representation $\mathbf{r} \in \mathbb{R}^{V_e}$; scaling factor $\alpha$; stage step $\tau$; counter $c$. \\ \hspace*{0.05in} {\bf Output:} target entity $e_{t}$. \end{flushleft} \begin{algorithmic}[1] \State $c \leftarrow 0$; \State $\text{s}_t \leftarrow -\infty$; \State $p \leftarrow \frac{1}{V_e}$; \For{$e$ in $L$} \If{$c \geq w$} \State \textbf{break} \EndIf \State $\mathbf{r} \leftarrow \frac{1}{|\mathbb{S}_{e}|} \sum_{x \in \mathbb{S}_{e}} \widetilde{\boldsymbol{f}(\boldsymbol{g}(x))}$; \State $\mathbf{d} \leftarrow [p]^{V_e}$; \State $\mathbf{d}[index(e)] \leftarrow \mathbf{r}[index(e)]$; \For{$i$ in $|L_{cur}|$} \State $\mathbf{d}[index(e_i)] \leftarrow p * \alpha * 2^{-\lfloor\frac{i}{\tau}\rfloor}$; \EndFor \State $\mathbf{d} \leftarrow \text{Softmax}(\mathbf{d})$; \State $\text{s}(e) \leftarrow -\text{KL}\_\text{Div} (\mathbf{r},\mathbf{d})$; \If{$\text{s}(e) > \text{s}_t$} \State $e_{t} \leftarrow e$; \State $\text{s}_t \leftarrow \text{s}(e)$; \EndIf \State $c \leftarrow c + 1$; \EndFor \State \Return $e_{t}$. \end{algorithmic} \end{algorithm} \begin{table*}[h] \centering \scalebox{1.00}{ \begin{tabular}{lccccccccc} \toprule \multirow{2}{*} { \textbf{Methods} } & \multicolumn{3}{c} { \textbf{Wiki} } & \multicolumn{3}{c} { \textbf{APR} } & \multicolumn{3}{c} { \textbf{SE2} } \\ \cmidrule(r){2-4} \cmidrule(r){5-7}\cmidrule(r){8-10} & MAP@10 & MAP@20 & MAP@50 & MAP@10 & MAP@20 & MAP@50 & MAP@10 & MAP@20 & MAP@50 \\ \midrule Egoset & 0.904 & 0.877 & 0.745 & 0.758 & 0.710 & 0.570 & 0.583 & 0.533 & 0.433 \\ SetExpan & 0.944 & 0.921 & 0.720 & 0.789 & 0.763 & 0.639 & 0.473 & 0.418 & 0.341 \\ SetExpander & 0.499 & 0.439 & 0.321 & 0.287 & 0.208 & 0.120 & 0.520 & 0.475 & 0.397 \\ CaSE & 0.897 & 0.806 & 0.588 & 0.619 & 0.494 & 0.330 & 0.534 & 0.497 & 0.420 \\ CGExpan & \textbf{\underline{0.995}} & \underline{0.978} & 0.902 & \underline{0.992} & \underline{0.990} & 0.955 & 0.601 & 0.543 & 0.438 \\ SynSetExpan & 0.991 & \underline{0.978} & \underline{0.904} & 0.985 & \underline{0.990} & \textbf{\underline{0.960}} & \underline{0.628} & \underline{0.584} & \underline{0.502}\\ \midrule ProbExpan{} & \textbf{0.995} & 0.982 & 0.926 & 0.993 & 0.990 & 0.934 & \textbf{0.683} & \textbf{0.633} & \textbf{0.541} \\ ProbExpan-CN & \textbf{0.995} & \textbf{0.983} & \textbf{0.929} & \textbf{1.000} & \textbf{0.996} & 0.955 & - & - & - \\ \bottomrule \end{tabular} } \caption{MAP@K (K = 10/20/50) of different methods. The choices of $K$ exactly follow previous works~\citep{zhang-etal-2020-empower,shen-etal-2020-synsetexpan}. All baseline results are taken directly from previously published papers. Note that the class name guidance step in CGExpan is designed for relatively coarse-grained semantic classes, while the semantic classes of the SE2 dataset are more fine-grained, so this method is not readily applicable to the SE2 dataset. We underline the previous state-of-the-art performance on the three datasets for convenient comparison.} \label{tab:allresult} \end{table*} \subsection{Probabilistic Entity Set Expansion} \label{sec:methodexpan} Our proposed ProbExpan{} is an iterative framework based on the probabilistic representation of entities and entity sets. At the beginning of the expansion, we initialize the current set $L_{cur}$ as the given seed set.
In every expansion step, we first calculate the probabilistic representation of the current set, $r(L_{cur})$, with our pre-trained ensemble model: \begin{equation} r(L_{cur}) = \frac{1}{|L_{cur}|} \sum_{e\in L_{cur}} \frac{1}{|\mathbb{S}_{e}|} \sum_{\mathbf{x} \in \mathbb{S}_{e}} \widetilde{\boldsymbol{f}(\boldsymbol{g}(\mathbf{x}))}. \end{equation} $r(L_{cur})$ is essentially the average of the predicted entity distributions of all entities in the current set, whose dimension is the size of the entity vocabulary. Sorting it and filtering out entities already in the current set gives us a ranked candidate entity list $L$. The window search algorithm (Algorithm~\ref{alg:Window Search}) is then run on $L$ to expand the target entities of the current set. The algorithm judges the quality of a candidate entity by the similarity between its representation $\mathbf{r} \in \mathbb{R}^{V_e}$ and the anchor distribution $\mathbf{d} \in \mathbb{R}^{V_e}$ of the current set. Therefore, an entity that is not prominent (i.e., a long-tail entity) but shares a representation more similar to the current set will still be expanded into the current set. The anchor distribution $\mathbf{d}$ reflects the entity distribution of the current set, where seed entities and entities expanded earlier weigh more heavily. Its base value is set to $\frac{1}{V_e}$, the average entity prediction probability. To make the anchor distribution robust to candidate entities, the anchor probability of a candidate entity is set to its predicted probability. The anchor probability of each entity in the current set is scaled from $p$, where entities with higher ranks receive larger scales. Note that the anchor distribution $\mathbf{d}$ is transformed into a probability distribution by Softmax before calculating the $\text{KL}\_\text{Div}$. We increase the window size $w$ according to the current set size, since the anchor distribution becomes more concrete as the current set grows larger: \begin{equation} w = w_0 + g * \lfloor \frac{|L_{cur}|}{s} \rfloor, \end{equation} where $w_0$ is the initial window size, $g$ is the window growing rate, and $s$ is the window growing step. Once the expanded set reaches the target size $\text{S}_{tgt}$, we stop the expansion and run the entity re-ranking algorithm. In particular, for every entity $e$ in the expanded set, we first calculate its score $\text{s}(e)$ in the same way as in the window search algorithm. A ranked list $L_{rank}$ can be constructed according to these scores. The aggregation score of every expanded entity is then calculated as follows: \begin{equation} score(e_i) = \sqrt{\frac{1}{i} \cdot \frac{1}{rank(e_i)}}, \quad i = 1 ... \text{S}_{tgt}, \end{equation} where $i$ is the expansion order of entity $e_i$ in the expanded set and $rank(e_i)$ is the rank of entity $e_i$ in $L_{rank}$. Sorting the expanded set according to these aggregation scores gives the final expansion results. \section{Experiments} \label{sec:exp} \subsection{Experiment Setup} \label{sec:ExperimentSetup} \noindent\textbf{1. Datasets.} To verify the correctness of our intuition and proposed method, we choose two public datasets widely used in previous work and an additional, recently released, larger and more challenging dataset~\citep{shen-etal-2020-synsetexpan}: \begin{enumerate} \item \textbf{Wiki} and \textbf{APR}, which contain 8 and 3 semantic classes, respectively. Each semantic class has 5 seed sets and each seed set has 3 queries, following previous work. \item \textbf{SE2}, which contains 60 semantic classes and 1200 seed queries.
The scale of this dataset makes SE2 more challenging. The datasets used in the experiments are detailed in Appendix~\ref{Appendix_B}. \end{enumerate} \noindent\textbf{2. Compared methods.} We compare the following ESE methods in our experiments; the implementation details and hyper-parameter choices are given in Appendix~\ref{Appendix_C}: \begin{enumerate} \item \textbf{Egoset}~\citep{Egoset}: A multifaceted set expansion system based on skip-gram features, word2vec embeddings and WikiList. \item \textbf{SetExpan}~\citep{shen2017setexpan}: A method that iteratively selects context features from the corpus and proposes an ensemble mechanism to rank entities. \item \textbf{SetExpander}~\citep{mamou-etal-2018-term}: A corpus-based model for expanding a seed entity set into a more complete entity set that belongs to the same semantic class. \item \textbf{CaSE}~\citep{CaSE}: A framework that constructs candidate entities with lexical features and ranks candidates using the similarity of distributed representations. \item \textbf{CGExpan}~\citep{zhang-etal-2020-empower}: A method that generates the target semantic class name by querying a pre-trained language model and utilizes the generated class names to expand new entities. \item \textbf{SynSetExpan}~\citep{shen-etal-2020-synsetexpan}: The current state-of-the-art method, which jointly conducts two related tasks and utilizes synonym information to improve the performance of ESE. \item \textbf{ProbExpan}: Our proposed framework, in which we first apply contrastive learning to the entity representation model to obtain better entity semantic representations, then use model selection and ensemble to avoid the randomness of the pre-training process, and finally run two novel algorithms to obtain the expansion results. \item \textbf{ProbExpan-CN}: Because our proposed entity representation model is end-to-end trainable, we can combine it with the class name guidance step of CGExpan. \end{enumerate} \noindent\textbf{3. Evaluation Metrics.} The objective of ESE is to expand a ranked list of entities belonging to the same semantic class. To evaluate the ranked result, we use the \textbf{Mean Average Precision at different top $K$ positions}: MAP@K $=\frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}_{K}\left(L_{q}, S_{q}\right)$, where $Q$ is the set of all seed queries and, for each query $q$, $\mathrm{AP}_{K}\left(L_{q}, S_{q}\right)$ denotes the traditional average precision at position $K$ given a ranked list of entities $L_q$ and a ground-truth set $S_q$. To ensure fairness, our evaluation settings are completely consistent with those of the baseline methods. \subsection{Experiment Results} \label{sec:ExperimentResults} We first report the overall performance, then analyze and explain the experimental results comprehensively. \noindent\textbf{1. Overall Performance.} Table~\ref{tab:allresult} shows the overall performance of different ESE methods. We can see that ProbExpan{} and its variant outperform all baselines, including the current state-of-the-art methods, on all three datasets, which demonstrates the effectiveness of our proposed method. It is also worth noting that Wiki and APR are small and relatively easy, so the baselines leave little room for improvement; even so, our methods still compare favorably.
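All comparisons above use MAP@K as just defined. For clarity, the sketch below shows one common way to compute it; the exact normalization of $\mathrm{AP}_K$ follows standard conventions and may differ in minor details from the official evaluation scripts.
\begin{verbatim}
def ap_at_k(ranked, gt, k):
    # ranked: expanded entity list L_q; gt: ground-truth set S_q.
    hits, prec_sum = 0, 0.0
    for rank, ent in enumerate(ranked[:k], start=1):
        if ent in gt:
            hits += 1
            prec_sum += hits / rank   # precision at this hit
    return prec_sum / max(hits, 1)

def map_at_k(queries, k):
    # queries: iterable of (ranked_list, ground_truth_set) pairs,
    # one per seed query q in Q.
    return sum(ap_at_k(r, g, k) for r, g in queries) / len(queries)
\end{verbatim}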
\begin{table}[h] \centering \scalebox{1.00}{ \begin{tabular}{cc} \hline \textbf{Semantic Class} & \textbf{MAP@100 (ProbExpan $-$ CGExpan)} \\ \hline China Provinces & 0.824 - 0.728 = 0.096 \ \ $\uparrow$\\ Companies & 0.969 - 0.950 = 0.019 \ \ $\uparrow$\\ Countries & 0.930 - 0.941 = -0.011 \ \ $\downarrow$ \\ Disease & 0.959 - 0.948 = 0.011 \ \ $\uparrow$\\ Parties & 0.948 - 0.913 = 0.035 \ \ $\uparrow$\\ Sports Leagues & 1.000 - 0.909 = 0.091 \ \ $\uparrow$ \\ TV Channels & 0.888 - 0.875 = 0.013 \ \ $\uparrow$ \\ US States & 0.763 - 0.750 = 0.013 \ \ $\uparrow$ \\ \hline \textbf{Overall} & \textbf{0.033} \ \ $\uparrow$\\ \hline \end{tabular} } \caption{The improvement (MAP@100) of ProbExpan{} over CGExpan under different semantic classes.} \label{tab:fineresult} \end{table} \noindent\textbf{2. Performance Analysis.} (1) Across datasets, our methods perform stably at a competitive level while existing methods fluctuate considerably. Especially on SE2, which has more entities and semantic classes, our model's advantage is more obvious. (2) Across semantic classes, Table~\ref{tab:fineresult} shows that ProbExpan{} outperforms previous work under most classes, even under the more challenging MAP@100 metric. (3) Regarding flexibility and extensibility, the performance improvement of ProbExpan-CN over ProbExpan{} suggests that our proposed method can be readily combined with other methods. \begin{figure}[tp] \centering \subfigure[Wiki Dataset-$\text{L}_{neg}$] { \label{wikil} \includegraphics[height = 0.40 \columnwidth,width=0.46\columnwidth]{Figures/wiki_L.pdf} } \subfigure[Wiki Dataset-$\text{U}_{neg}$] { \label{wikiu} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/wiki_U.pdf} } \subfigure[APR Dataset-$\text{L}_{neg}$] { \label{aprl} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/APR_L.pdf} } \subfigure[APR Dataset-$\text{U}_{neg}$] { \label{apru} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/APR_U.pdf} } \subfigure[SE2 Dataset-$\text{L}_{neg}$] { \label{se2l} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/SE2_L.pdf} } \subfigure[SE2 Dataset-$\text{U}_{neg}$] { \label{se2u} \includegraphics[height = 0.40 \columnwidth, width=0.46\columnwidth]{Figures/SE2_U.pdf} } \caption{Sensitivity analysis of $\text{L}_{neg}$ / $\text{U}_{neg}$ in ProbExpan{}.} \label{sensitive} \end{figure} \begin{table*}[h] \centering \scalebox{1.10}{ \begin{tabular}{lcccccc} \toprule \multirow{2}{*} { \textbf{Methods} } & \multicolumn{3}{c} { \textbf{Wiki} } & \multicolumn{3}{c} { \textbf{APR} } \\ \cmidrule(r){2-4} \cmidrule(r){5-7} & MAP@10 & MAP@20 & MAP@50 & MAP@10 & MAP@20 & MAP@50 \\ \midrule CGExpan-NoCN & 0.968 & 0.945 & 0.859 & 0.909 & 0.902 & 0.787 \\ ProbExpan{}-NoCLEN & 0.983 & 0.974 & 0.910 & 0.990 & 0.977 & 0.898 \\ ProbExpan{}-NoEN & 0.989 & 0.980 & 0.899 & 0.992 & 0.981 & 0.912 \\ ProbExpan{}-NoCL & 0.991 & 0.980 & 0.917 & \textbf{0.993} & 0.984 & 0.910 \\ ProbExpan{} & \textbf{0.995} & \textbf{0.982} & \textbf{0.926} & \textbf{0.993} & \textbf{0.990} & \textbf{0.934} \\ \bottomrule \end{tabular} } \caption{Ablation studies of ProbExpan{} and its variants on two datasets.
We arrange the results from top to bottom in order of increasing model components.} \label{tab:ablationresult} \end{table*} \subsection{Parameter Studies} In Section~\ref{sec:methodcl}, we propose to automatically select negative entities using a pre-defined interval $(\text{L}_{neg}, \text{U}_{neg})$, according to Equation~\ref{Equ_Eneg}. Furthermore, to select the really hard negative entities as accurately as possible, we manually ensure that the value of $\text{L}_{neg}$ is slightly larger than the number of positive entities when determining these two hyper-parameters. Therefore, it is reasonable to suspect that the values of $\text{L}_{neg}$ and $\text{U}_{neg}$ will affect the hardness of the selected negative entities, thereby affecting the performance of ProbExpan{}. However, we argue both theoretically and empirically that no such sensitivity exists in our proposed framework. \textbf{Theoretically}, even an inappropriately large $\text{L}_{neg}$ will not cause a drop in the overall performance of ProbExpan{}, because our proposed contrastive loss adaptively focuses on the really hard entities in a training batch: negative entities that are more similar to the positive entities receive higher weight when the loss is calculated through Equation~\ref{equ_theoritically}. \textbf{Empirically}, we carry out the parameter studies shown in Figure~\ref{sensitive} to verify the insensitivity of ProbExpan{} to these two hyper-parameters. Specifically, we fix one of $\text{L}_{neg}$ and $\text{U}_{neg}$, change the value of the other, and run ProbExpan{} on different datasets to test its performance. From Figure~\ref{sensitive}, we can see that the performance of ProbExpan{} is not sensitive to the specific values of these two parameters within a reasonable range: as $\text{L}_{neg}$ or $\text{U}_{neg}$ changes, the model performance (MAP@K) does not change significantly. \textbf{To sum up}, the values of $\text{L}_{neg}$ and $\text{U}_{neg}$ do determine which entities we select as hard negative entities, but thanks to the design of the other components and the training strategy of our model, they do not significantly affect the overall performance of the model. \subsection{Ablation Studies} \label{sec:AbaltionStudy} To provide a detailed analysis of how our proposed method works on ESE, we perform a series of ablation experiments to see how each component affects the model's expansion performance. The ablation results also provide empirical support for our intuitions. Because the full CGExpan method leverages fixed patterns carefully hand-crafted by researchers (i.e., Hearst patterns~\citep{hearst-1992-automatic}), to ensure the fairness of the ablation studies we compare ProbExpan's variants with CGExpan-NoCN~\citep{zhang-etal-2020-empower}, which mainly consists of a traditional pre-trained language model such as BERT. ProbExpan's variants include: \begin{enumerate} \item ProbExpan{}-NoCLEN: ProbExpan{} without contrastive learning and without model selection and ensemble. \item ProbExpan{}-NoEN: ProbExpan{} without model selection and ensemble. \item ProbExpan{}-NoCL: ProbExpan{} without contrastive learning. \end{enumerate} The results of these methods are shown in Table~\ref{tab:ablationresult}. \noindent\textbf{1.
Can Entity Representation Model Empower ESE?} From Table~\ref{tab:ablationresult} we can see that ProbExpan{}-NoCLEN shows a large improvement over CGExpan-NoCN, especially on MAP@50. This significant improvement indicates that the entity-level masked language model can represent entities better. Besides, it is worth noting that ProbExpan{}-NoCLEN's results on APR are better than those on Wiki, exactly the opposite of CGExpan-NoCN. This is because CGExpan-NoCN relies on averaged $\text{BERT}$ representations to select entities, and $\text{BERT}$ is pre-trained on the Wikipedia corpus, which is similar to the corpus of the Wiki dataset in ESE; CGExpan-NoCN therefore struggles with corpora from other sources. This in turn shows that the entity representation model we design is not sensitive to the source corpus and generalizes well. \begin{figure} \centering \subfigure[Wiki Dataset] { \label{wiki} \includegraphics[scale=0.25]{wiki.pdf} } \subfigure[APR Dataset] { \label{apr} \includegraphics[scale=0.25]{apr.pdf} } \caption{Correlation analysis of model score and performance on the Wiki and APR datasets.} \label{score_map} \end{figure} \noindent\textbf{2. Can Contrastive Learning Divide A Clearer Semantic Boundary?} The comparison between ProbExpan{}-NoEN and ProbExpan{}-NoCLEN shows that contrastive learning effectively refines the entity representation. According to our observation, previous works such as CGExpan already achieve competitive performance; their most error-prone cases arise when they face semantically ambiguous entities. This is also why we choose contrastive learning to handle these hard negative entities. The performance results in Table~\ref{tab:ablationresult} and the case study in Figure~\ref{caseresult} together show that contrastive learning can indeed divide a clearer semantic boundary. \noindent\textbf{3. Can Model Selection And Ensemble Strategy Work?} The ensemble results in Table~\ref{tab:ablationresult} show that the model selection and ensemble step we design brings a remarkable improvement. In particular, we are pleasantly surprised to find that applying the model selection and ensemble strategy on top of ProbExpan{}-NoEN still improves performance further. In addition, to verify the validity of Equation~\ref{score_function}, we analyze the correlation between model score and performance. For convenience of display, we normalize the model score. The positive correlation presented in Figure~\ref{score_map} shows that Equation~\ref{score_function} can effectively evaluate the model. \begin{figure*}[tp] \centering \includegraphics[width=1.00\textwidth]{Case_Study.pdf} \caption{Results of two seed entity sets with different semantic classes. We mark the wrong entities in red.} \label{caseresult} \end{figure*} \subsection{Case Studies} \label{sec:CaseStudy} We present representative expansion cases of different models as further verification of our methods' advantages. Figure~\ref{caseresult} shows some expansion results of ProbExpan{}'s variants for several queries from different semantic classes. We see that even though ProbExpan{}-NoCLEN achieves very good overall performance (as can be seen from Table~\ref{tab:ablationresult}), it still occasionally has difficulty distinguishing some hard negative samples.
For example, municipal administrative regions such as “\emph{Wuhan}”, “\emph{Hangzhou}”, and “\emph{Guangzhou}” are likely to appear in contexts very similar to those of provincial administrative regions such as “\emph{Shanghai}” and “\emph{Zhejiang}” when training a language model, because they all belong to \texttt{Location} entities. Therefore, ProbExpan{}-NoCLEN cannot represent these entities in a fine-grained manner at the semantic level. As shown in the comparison between the ProbExpan{}-NoCLEN and ProbExpan{}-NoEN columns of Figure~\ref{caseresult}, ProbExpan{}-NoEN can recall more entities belonging to the correct target semantic class. This shows that contrastive learning can draw a tighter and clearer boundary for the target semantic class by extending the distance between negative and positive samples and shortening the distance among positive samples in the semantic space. From the ProbExpan{}-NoEN column of Figure~\ref{caseresult}, we can see that contrastive learning still cannot handle some extreme cases. For example, a person without any external background knowledge who sees “\emph{St Kilda Football Club}” can easily, going by the literal name, classify it as \texttt{Sports Leagues}. Therefore, we design the model selection and ensemble mechanism to obtain better expanded entities on the basis of ProbExpan{}-NoEN; the mechanism's effectiveness is reflected in the ProbExpan{} column of Figure~\ref{caseresult}. Overall, Figure~\ref{caseresult} shows that ProbExpan{}-NoEN performs better than ProbExpan{}-NoCLEN, and that ProbExpan{} improves further on ProbExpan{}-NoEN. Such experimental results are in line with our design expectations. \section{Conclusions} \label{sec:cls} In this paper, we introduce pre-training an entity-level masked language model with an entity prediction task, and we are the first to empower the ESE model to better handle hard negative entities with a contrastive learning task. To utilize our pre-trained entity representation model, we propose ProbExpan{}, a novel probabilistic ESE framework that consists of two simple yet effective algorithms, namely the window-search and entity re-ranking algorithms. In the future, we will further study how to apply our pre-trained ESE model in cross-domain scenarios to better exploit its generalization ability. Combining various domain adaptation methods with our model will be an interesting direction. Moreover, it is also a worthy and promising research direction to study how to automatically measure the hardness of negative entities, so that really hard negative entities can be selected directly. \section{Acknowledgements} This research is supported by the Shenzhen General Research Project (Grant No. JCYJ20190808182805919) and the 173 program (Grant No. 2021-JCJQ-JJ-0029), National Natural Science Foundation of China (Grant No. 6201101015), Beijing Academy of Artificial Intelligence (BAAI), Natural Science Foundation of Guangdong Province (Grant No. 2021A1515012640), the Basic Research Fund of Shenzhen City (Grant No. JCYJ20210324120012033), National Key R\&D Program of China (No. 2021ZD0112905) and Overseas Cooperation Research Fund of Tsinghua Shenzhen International Graduate School (Grant No. HW2021008). \clearpage
\section{Introduction} Unlike convolutional or recurrent neural networks (CNNs or RNNs) \cite{Goodfellow-et-al-2016}, {\it transformers} are models based entirely or partially on {\it attention} mechanisms. They were originally proposed to learn global dependencies for sequence transduction tasks \cite{vaswani2017attention}, and have achieved better performance and training efficiency. Besides their success in language models, transformers have also been widely studied in computer vision tasks. One direction is to replace CNN backbones by transformers: transformers are used to extract features from images, and the features are then processed by various heads to solve various tasks. Among these transformer-based models, ViT \cite{dosovitskiy2020image}, DeiT \cite{touvron2021training} and Swin Transformer \cite{liu2021swin} have achieved high performance in multiple tasks like image classification, object detection, segmentation, etc. The general architecture of the transformer for sequence modeling is composed of an encoder module and a subsequent decoder module. The encoder module is a stack of a few sequential encoder blocks, each containing a {\it self-attention} (SA) layer and a fully connected {\it feed-forward network}, with a residual structure \cite{he2016deep} and a layernorm applied after the summation of the shortcut and the residual. While the feed-forward network consists simply of two fully connected layers, the self-attention layer is computed through a {\it multi-head attention} (MSA) mechanism \cite{vaswani2017attention}, which is more complicated and usually requires more computational resources than the convolution operations used in CNNs. Therefore, pruning methods \cite{zhu2021vision, yang2021nvit, yu2022width} have been proposed to construct efficient vision transformers. However, most of them only consider pruning DeiT on the image classification task. In this paper, we present a pruning method for transformer backbones which is valid on both image classification and object detection tasks. Since our method aims to search for the intrinsic dimensions (i.e., the lowest dimensions that maintain network performance) of transformers, we name it SiDT in the rest of the paper. Although SiDT is inspired by previous pruning methods like Network Slimming \cite{liu2017learning} and Vision Transformer Pruning (VTP) \cite{zhu2021vision}, it has its own merits: \begin{itemize} \item SiDT can prune transformers for not only classification tasks, but also other vision tasks like object detection. \item We have analyzed the computational complexity of the unpruned and the pruned models. \item The models with 20\% or 40\% of the dimensions pruned perform similarly to or even better than the unpruned model. \item SiDT prunes the dimensions of linear embeddings, different from the feature pruning of VTP. \end{itemize} \section{Related Work} \subsection{Vision Transformers} Vision Transformer (ViT) \cite{dosovitskiy2020image} is among the vision models whose backbones are purely transformers. ViT partitions the input image into small patches to mimic the tokens in language transformers. Instead of pixels, these patches are embedded into features of a certain dimension, serving as the input of the attention module. Since its job is to learn representations, ViT includes only the encoder module, i.e., a stack of multi-head self-attention blocks.
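As a rough illustration of this patch-based tokenization (our own simplified sketch, not the reference ViT implementation), the function below cuts an image into non-overlapping patches and embeds each one linearly; the names and the projection layer are illustrative assumptions.
\begin{verbatim}
import torch

def patch_embed(img, p, proj):
    # img:  (B, C, H, W) input image, with H and W divisible by p.
    # proj: torch.nn.Linear(C * p * p, d) patch embedding layer.
    b, c, h, w = img.shape
    x = img.unfold(2, p, p).unfold(3, p, p)   # (B, C, H/p, W/p, p, p)
    # Flatten each p x p patch (with its channels) into one token.
    x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
    return proj(x)                            # (B, num_patches, d)
\end{verbatim}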
Despite ViT's high accuracy on image classification, there are concerns about its quadratic computational complexity in the number of queries $n$. This means the complexity is also quadratic in the input resolution $H\times W$, whereas the convolution operation has linear complexity. ViT has also been restricted to image classification, since pixel-level tasks like segmentation typically need to deal with high-resolution features. A window-based transformer called Swin Transformer \cite{liu2021swin} has therefore been proposed for these more complicated vision tasks. Similar to ViT, Swin provides a series of backbones based purely on transformers, especially transformer encoders. The first advantage of Swin is that it generates hierarchical features, which can be used to solve semantic segmentation and object detection tasks with suitable heads. To obtain features of different resolutions, Swin merges $2\times 2=4$ image patches into 1 patch at the end of each architecture stage. Since the size of the patches is fixed, the image height and width are both halved after merging. The overall transformer architecture is divided into one initial stage without merging and three intermediate stages with merging, and hence it produces features at four resolution levels. Another advantage comes from the window-based multi-head self-attention (W-MSA) with shifting. Compared with the quadratic complexity of MSA, W-MSA achieves linear complexity by computing the attention locally, within a small window of patches. Global information across different windows is then exchanged via shifting the window partitions. \subsection{Dimension Pruning}\label{formulation} The dimension/channel pruning problem of CNNs can be solved by adding group sparsity to the convolutional weights \cite{yuan2006model, wen2016learning}, or formulated as a neural architecture search problem \cite{zoph2018learning,liu2019darts}. Among these approaches, a method called Network Slimming (NetSlim) has been proposed based on learning channel scaling factors \cite{liu2017learning}; it is able to reduce model complexity and computational cost while preserving accuracy. These channel scaling factors are simply defined to be the learnable scale parameters $\gamma$ of the batch normalization layer, and the channels corresponding to low scales are pruned. To learn sparse scales, the $\ell_1$ regularization of these scale parameters is added to the loss during training. After being trained with $\ell_1$ sparsity and having the low-scale channels pruned, the model is further fine-tuned to achieve better performance. Note that the regularization term is not added to the convolutional weights, but directly to the scale parameters, which play a similar role to the architecture parameters in the differentiable neural architecture search context \cite{liu2019darts}. That is why searching for dimensions is indeed dimension pruning. Similar to channel pruning in CNNs, there are also some studies on vision transformer pruning. Inspired by NetSlim, VTP \cite{zhu2021vision} assigns scoring parameters to the features before the linear embedding or projection layers and prunes the dimensions of these features which correspond to low scores. Since the dimensions of the linear layers depend on the dimensions of the input features, the parameters of these layers are also reduced.
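As a rough sketch of this scaling-factor idea (our own illustration under simplified assumptions, not the NetSlim reference code), the $\ell_1$ penalty below would be added to the task loss during training, after which channels whose batch-norm scales fall below a threshold are pruned.
\begin{verbatim}
import torch

def l1_scale_penalty(model, lam=1e-4):
    # Sum of |gamma| over all BatchNorm scale parameters; adding this
    # to the task loss drives unimportant channels toward zero scale.
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return lam * penalty

def channels_to_keep(bn, threshold=1e-2):
    # Indices of channels whose scaling factor survives the threshold;
    # the pruned model keeps only these channels and is then fine-tuned.
    return (bn.weight.abs() > threshold).nonzero().flatten()
\end{verbatim}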
Another pruning method has been proposed in NViT \cite{yang2021nvit}, which is based on the scores of grouped structural parameters. The scores are different from those of VTP as they are computed directly from the weight parameters. NViT also takes into account pruning the number of heads and the latency on hardware. Moreover, it has been pointed out that having the same dimensions across all layers in the conventional transformer design might not be optimal \cite{yang2021nvit}, which inspires the study of automated transformer architecture design. These pruning methods have obtained high pruning ratios with a very small accuracy loss for vision transformers like DeiT \cite{touvron2021training} on image classification tasks. It would be natural to consider pruning Swin or other lightweight transformer backbones for multiple computer vision tasks. WDPruning \cite{yu2022width} is a direct pruning method for Swin on ImageNet classification, without the fine-tuning stage. It also provides an option for depth pruning, and an automatically learned pruning ratio based on learnable thresholds of saliency scores. However, experimental results have shown worse accuracy for the pruned models, as they are not fine-tuned. Inspired by these previous studies, we consider pruning the Swin backbone as dimension search in this paper. Before we specify the details of each stage, we summarize a general framework for searching the dimensions of operations \cite{liu2017learning,zhu2021vision} (see also Fig. \ref{diag}(a)): \begin{itemize} \item Specify the architecture parameters for representing the dimensions of the operations. \item Set up a loss function which involves the architecture parameters and the other learnable parameters. \item Optimize the loss via gradient descent and prune the network based on the values of the architecture parameters. \item Fine-tune the pruned network. \end{itemize} \begin{figure} \centering {\includegraphics[width=1\textwidth]{fig1.pdf}} \caption{(a) The stages of transformer pruning. (b) Assign the scoring matrix $\textbf{\emph{A}} = \mathrm{diag}(\alpha)$ to the output dimensions of multi-head queries, keys and values. } \label{diag} \end{figure} \section{Method} \textbf{Architecture parameters.} For the dimension search of transformers, we still follow the four stages summarized in Section \ref{formulation}. Since the searching, pruning and fine-tuning stages are similar, the key difference is how we set up the architecture parameters. Whereas in CNNs we prune convolution operations, transformers contain a few different types of operations. We therefore discuss in detail the strategies for setting up architecture parameters for MSA, W-MSA and the {\it multilayer perceptron} (MLP) \cite{liu2021swin}. Suppose the batch size is $N=1$ and $\textbf{\emph{X}}\in\mathbb{R}^{d\times H\times W}$ is the input feature map, with $H$ and $W$ the resolution and $d$ the dimension of the feature. Setting $n=H\times W$, we obtain the transformed input feature $\textbf{\emph{X}}\in\mathbb{R}^{n\times d}$.
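To fix ideas before the formal description below, the following minimal PyTorch-style sketch (ours, with illustrative names) shows a single-head SA module whose query, key and value embeddings share a learnable diagonal scoring matrix $\textbf{\emph{A}} = \mathrm{diag}(\alpha)$; the multi-head and windowed variants described next share $\alpha$ across heads and windows in the same way:
\begin{verbatim}
import torch
import torch.nn as nn

class ScoredSelfAttention(nn.Module):
    """Single-head SA with a learnable diagonal scoring matrix A
    shared by the query, key and value embeddings."""
    def __init__(self, d):
        super().__init__()
        self.wq = nn.Linear(d, d, bias=False)
        self.wk = nn.Linear(d, d, bias=False)
        self.wv = nn.Linear(d, d, bias=False)
        self.alpha = nn.Parameter(torch.ones(d))  # diagonal of A
        self.d = d

    def forward(self, x):             # x: (n, d)
        q = self.wq(x) * self.alpha   # Q~ = Q A, with A = diag(alpha)
        k = self.wk(x) * self.alpha
        v = self.wv(x) * self.alpha
        attn = torch.softmax(q @ k.T / self.d ** 0.5, dim=-1)  # (n, n)
        return attn @ v               # (n, d), same shape as x
\end{verbatim}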
For SA \cite{vaswani2017attention}, $\textbf{\emph{X}}$ is linearly embedded into the query $\textbf{\emph{Q}}$, key $\textbf{\emph{K}}$ and value $\textbf{\emph{V}}$ of the same shapes: \begin{align*} \textbf{\emph{Q}} = \textbf{\emph{X}}\textbf{\emph{W}}_Q,\,\, \textbf{\emph{K}} = \textbf{\emph{X}}\textbf{\emph{W}}_K,\,\, \textbf{\emph{V}} = \textbf{\emph{X}}\textbf{\emph{W}}_V, \end{align*} where the embedding matrices $\textbf{\emph{W}}_Q, \textbf{\emph{W}}_K, \textbf{\emph{W}}_V \in \mathbb{R}^{d\times d}$, if the embedding dimensions for the query, key and value are also equal to $d$. Then the attention map $a$ is computed via the softmax function $\sigma$ of the scaled product of the query and the key: \begin{align*} a(\textbf{\emph{Q}}, \textbf{\emph{K}}) = \sigma(\textbf{\emph{Q}}\textbf{\emph{K}}^{T}/\sqrt{d}) \in \mathbb{R}^{n\times n}, \end{align*} and applied to the value to compute the output of SA: \begin{align*} SA(\textbf{\emph{Q}}, \textbf{\emph{K}}, \textbf{\emph{V}}) = \sigma(\textbf{\emph{Q}}\textbf{\emph{K}}^{T}/\sqrt{d})\textbf{\emph{V}} \in \mathbb{R}^{n\times d}. \end{align*} Note that the output of SA has the same shape as the input $\textbf{\emph{X}}$. To set up the architecture parameters, we apply a uniform scoring matrix $\textbf{\emph{A}}$ to $\textbf{\emph{Q}}$, $\textbf{\emph{K}}$ and $\textbf{\emph{V}}$ via matrix multiplication: \begin{align*} \widetilde{\textbf{\emph{Q}}} = \textbf{\emph{Q}}\textbf{\emph{A}},\,\, \widetilde{\textbf{\emph{K}}} = \textbf{\emph{K}}\textbf{\emph{A}},\,\, \widetilde{\textbf{\emph{V}}} = \textbf{\emph{V}}\textbf{\emph{A}}, \end{align*} where $\textbf{\emph{A}} \in\mathbb{R}^{d\times d}$ is a diagonal matrix whose diagonal elements are the architecture parameters $\alpha_i$ for $i=1,2,...,d$. That is to say, we assign a score $\alpha_i$ to the $i$-th dimension of the $d$-dimensional query, and also to the key and value at the same $i$-th dimension. Then we compute the SA module based on the scored query, key and value, and obtain $SA(\widetilde{\textbf{\emph{Q}}}, \widetilde{\textbf{\emph{K}}}, \widetilde{\textbf{\emph{V}}})$. For MSA, we need to compute multiple SA modules and each of them is a {\it head}. Let $h$ be the number of heads. For $j=1,...,h$, we also compute $\textbf{\emph{Q}}_j$, $\textbf{\emph{K}}_j$ and $\textbf{\emph{V}}_j\in \mathbb{R}^{n\times d/h}$ through linear embedding of $\textbf{\emph{X}}$ via $\textbf{\emph{W}}_{Q,j}$, $\textbf{\emph{W}}_{K,j}$ and $\textbf{\emph{W}}_{V,j}\in \mathbb{R}^{d\times d/h}$ like that of SA, and obtain the heads: \begin{align*} \textbf{\emph{H}}_j = SA(\textbf{\emph{Q}}_j, \textbf{\emph{K}}_j, \textbf{\emph{V}}_j)\in \mathbb{R}^{n\times d/h}. \end{align*} With $\textbf{\emph{Q}}$, $\textbf{\emph{K}}$ and $\textbf{\emph{V}}$ the concatenations of $\textbf{\emph{Q}}_j$, $\textbf{\emph{K}}_j$ and $\textbf{\emph{V}}_j$, the output of the MSA module is computed by concatenating the heads and projecting linearly via $\textbf{\emph{W}}_O\in \mathbb{R}^{d\times d}$: \begin{align*} MSA(\textbf{\emph{Q}}, \textbf{\emph{K}}, \textbf{\emph{V}}) = [\textbf{\emph{H}}_1, \textbf{\emph{H}}_2, ..., \textbf{\emph{H}}_h]\textbf{\emph{W}}_O \in \mathbb{R}^{n\times d}.
\end{align*} For MSA we use a scoring matrix $\textbf{\emph{A}}\in \mathbb{R}^{d/h\times d/h}$ with even stronger sharing: it is uniform not only over the query, key and value, but also over all the heads: \begin{align*} \widetilde{\textbf{\emph{Q}}}_j = \textbf{\emph{Q}}_j\textbf{\emph{A}},\,\, \widetilde{\textbf{\emph{K}}}_j = \textbf{\emph{K}}_j\textbf{\emph{A}},\,\, \widetilde{\textbf{\emph{V}}}_j = \textbf{\emph{V}}_j\textbf{\emph{A}}, \end{align*} for $j=1,2,...,h$. Then we compute the new MSA module and obtain $\widetilde{\textbf{\emph{H}}}_j = SA(\widetilde{\textbf{\emph{Q}}}_j, \widetilde{\textbf{\emph{K}}}_j, \widetilde{\textbf{\emph{V}}}_j)$ and: \begin{align*} MSA(\widetilde{\textbf{\emph{Q}}}, \widetilde{\textbf{\emph{K}}}, \widetilde{\textbf{\emph{V}}}) = [\widetilde{\textbf{\emph{H}}}_1, \widetilde{\textbf{\emph{H}}}_2, ..., \widetilde{\textbf{\emph{H}}}_h]\textbf{\emph{W}}_O. \end{align*} For W-MSA, the input features $\textbf{\emph{X}}\in\mathbb{R}^{n\times d}$ are divided into a few windows of size $M\times M$, and MSA is computed locally within these windows. That is to say, we reshape $\textbf{\emph{X}}$ to be a tensor in $\mathbb{R}^{n/M^2 \times M^2 \times d}$, and obtain $\textbf{\emph{Q}}_j$, $\textbf{\emph{K}}_j$ and $\textbf{\emph{V}}_j\in \mathbb{R}^{n/M^2 \times M^2 \times d/h}$ for $j=1,2,...,h$ after the multi-head embedding. Here $\textbf{\emph{Q}}_j$, $\textbf{\emph{K}}_j$ and $\textbf{\emph{V}}_j$ can be viewed as the concatenations of $\textbf{\emph{Q}}_{j,l}$, $\textbf{\emph{K}}_{j,l}$ and $\textbf{\emph{V}}_{j,l}\in\mathbb{R}^{M^2 \times d/h}$ for $l=1,2,...,n/M^2$. For each window, we compute the MSA module and obtain $\textbf{\emph{W}}_{,l} = MSA(\textbf{\emph{Q}}_{,l}, \textbf{\emph{K}}_{,l}, \textbf{\emph{V}}_{,l})\in \mathbb{R}^{M^2\times d}$. Finally, we rearrange the outputs of these windows and obtain: \begin{align*} W\text{-}MSA(\textbf{\emph{Q}}, \textbf{\emph{K}}, \textbf{\emph{V}}) = [\textbf{\emph{W}}_{,1}, \textbf{\emph{W}}_{,2}, ..., \textbf{\emph{W}}_{,n/M^2}] \in \mathbb{R}^{n\times d}. \end{align*} To set up the architecture parameters for W-MSA, again we use a uniform {\it scoring matrix} $\textbf{\emph{A}}\in \mathbb{R}^{d/h\times d/h}$ for the query, key and value, over all the heads and windows: \begin{align*} \widetilde{\textbf{\emph{Q}}}_{j,l} = \textbf{\emph{Q}}_{j,l}\textbf{\emph{A}},\,\, \widetilde{\textbf{\emph{K}}}_{j,l} = \textbf{\emph{K}}_{j,l}\textbf{\emph{A}},\,\, \widetilde{\textbf{\emph{V}}}_{j,l} = \textbf{\emph{V}}_{j,l}\textbf{\emph{A}}. \end{align*} Then we have $\widetilde{\textbf{\emph{W}}}_{,l} = MSA(\widetilde{\textbf{\emph{Q}}}_{,l}, \widetilde{\textbf{\emph{K}}}_{,l}, \widetilde{\textbf{\emph{V}}}_{,l})$ and \begin{align*} W\text{-}MSA(\widetilde{\textbf{\emph{Q}}}, \widetilde{\textbf{\emph{K}}}, \widetilde{\textbf{\emph{V}}}) = [\widetilde{\textbf{\emph{W}}}_{,1}, \widetilde{\textbf{\emph{W}}}_{,2}, ..., \widetilde{\textbf{\emph{W}}}_{,n/M^2}]. \end{align*} The last module to be discussed is the MLP \cite{liu2021swin}, which simply contains two linear layers with an activation. Suppose $\textbf{\emph{X}}\in\mathbb{R}^{n\times d}$ is the input feature and $d_m$ is the dimension of the hidden state. Suppose further that $\textbf{\emph{W}}_1\in\mathbb{R}^{d\times d_m}$ and $\textbf{\emph{W}}_2\in\mathbb{R}^{d_m\times d}$ are the two linear embedding matrices and $\sigma_{MLP}$ is the activation.
Then we have: \begin{align*} MLP(\textbf{\emph{X}}) = \sigma_{MLP}(\textbf{\emph{X}}\textbf{\emph{W}}_1)\textbf{\emph{W}}_2 \in \mathbb{R}^{n\times d}. \end{align*} The scoring matrix $\textbf{\emph{A}}$ is applied immediately after $\textbf{\emph{W}}_1$ through matrix multiplication, giving $\sigma_{MLP}(\textbf{\emph{X}}\textbf{\emph{W}}_1\textbf{\emph{A}})\textbf{\emph{W}}_2$. Here $\textbf{\emph{A}}$ can be viewed as the scores for the dimensions of the hidden state. \textbf{Pruning.} The four-stage pruning procedure is summarized in Fig. \ref{diag}. During the searching stage, the elements in the scoring matrix $\textbf{\emph{A}}$ are regularized by the $\ell_1$ norm as in NetSlim \cite{liu2017learning}, and enter the overall loss: \begin{align*} L=l(\textbf{\emph{X}},\textbf{\emph{T}};\textbf{\emph{W}}) + \gamma\ell_1(\textbf{\emph{A}}), \end{align*} where $l$ is the classification or detection loss, $\ell_1$ is the $\ell_1$ loss, $\textbf{\emph{X}}$, $\textbf{\emph{T}}$ and $\textbf{\emph{A}}$ are the input, target and the architecture parameters, and $\textbf{\emph{W}}$ represents the other learnable parameters. $\gamma$ is a scale hyperparameter specified in the experiments section. The architecture parameters $\textbf{\emph{A}}$ are updated via gradient descent or architecture search algorithms \cite{liu2019darts}, together with the elements of the embedding matrices $\textbf{\emph{W}}$. After the search stage, we rank the diagonal elements of the scoring matrix $\textbf{\emph{A}}$ according to their absolute values. The dimensions of the embedding matrices are pruned if their corresponding scores are ranked low. Suppose the remaining ratio of the dimensions after pruning is $\rho$. Then only the $\rho d$ dimensions with higher scores are left in the pruned matrices. For MSA, we have $\textbf{\emph{W}}_{Q,j}$, $\textbf{\emph{W}}_{K,j}$ and $\textbf{\emph{W}}_{V,j}\in \mathbb{R}^{d\times \rho d/h}$ after pruning, and hence $\textbf{\emph{Q}}_j$, $\textbf{\emph{K}}_j$ and $\textbf{\emph{V}}_j\in \mathbb{R}^{n\times \rho d/h}$. Since we have not pruned the number of queries or keys $n$, the attention map still belongs to $\mathbb{R}^{n\times n}$, and the head $\textbf{\emph{H}}_j \in \mathbb{R}^{n\times \rho d/h}$. This leads to the projection matrix $\textbf{\emph{W}}_O\in \mathbb{R}^{\rho d\times d}$, and the output of the pruned MSA in $\mathbb{R}^{n\times d}$, with the same shape as in the unpruned model. One can easily see that the original unpruned MSA module has $O(4d^2)$ parameters and a computational complexity of $O(4nd^2+2n^2d)$. For the pruned MSA, the number of parameters is reduced to $O(4\rho d^2)$, and the computational complexity is reduced to $O(4\rho nd^2+2\rho n^2d)$. Similarly, the unpruned W-MSA module has $O(4d^2)$ parameters and a computational complexity of $O(4nd^2+2nM^2d)$. For the pruned W-MSA, the number of parameters is reduced to $O(4\rho d^2)$, and the computational complexity is reduced to $O(4\rho nd^2+2\rho nM^2d)$. Finally, the unpruned MLP has $O(2dd_m)$ parameters and a computational complexity of $O(2ndd_m)$. For the pruned MLP, the number of parameters is reduced to $O(2\rho dd_m)$, and the computational complexity is reduced to $O(2\rho ndd_m)$. This is because $\textbf{\emph{W}}_1\in\mathbb{R}^{d\times \rho d_m}$ and $\textbf{\emph{W}}_2\in\mathbb{R}^{\rho d_m\times d}$ after pruning. One should note that our settings of architecture parameters are different from those of VTP \cite{zhu2021vision}.
VTP's scoring matrix $\textbf{\emph{A}}$ is applied directly to the input feature $\textbf{\emph{X}}$, whereas ours is applied to $\textbf{\emph{Q}}$, $\textbf{\emph{K}}$ and $\textbf{\emph{V}}$. In other words, VTP prunes the features but we prune the linear embeddings. As we apply the same matrix $\textbf{\emph{A}}$ to the embedding dimensions of multiple heads, we have only $d/h$ such architecture parameters, making the model easier to train. Moreover, VTP is applied to DeiT on the classification task only, whereas our method prunes Swin Transformer, which serves as a backbone for multiple vision tasks. Finally, we have also provided the complexity analysis of the unpruned and pruned operations, which is missing in previous works. \section{Experiments} We apply SiDT to the Swin Transformer on CIFAR-100 image classification \cite{krizhevsky2009learning}. We prune its tiny version (Swin-T), which has 27.53M parameters and a complexity of 4.49 GFLOPs. The settings of the search stage are similar to those for training the unpruned baseline \footnote{When setting up the architecture parameters, we refer to the code at \url{https://github.com/Cydia2018/ViT-cifar10-pruning}}, with batch size = 256, patch size = 4, window size = 7, embedding dimension = 96, initial learning rate = 0.00025, momentum = 0.9, weight decay = 0.05, epochs = 160, and the sparsity scale $\gamma = 0.0001$ for $\ell_1$ regularization. After searching, we obtain the scores of all the dimensions and rank them according to their absolute values. Next, the dimensions with lower scores are pruned, based on predefined pruning ratios of 20\%, 40\%, 60\% and 80\%. Finally, the pruned model is trained again with a warm start, using the same settings as the search stage. Table \ref{swin_class} shows that the number of parameters and the computational cost can be greatly reduced after pruning while preserving accuracy, relative to the baseline \cite{nested2021}. After pruning 80\% of the dimensions, the accuracy is only around 2\% lower than the recovered baseline. The models with 20\% or 40\% of dimensions pruned achieve accuracy even higher than the baseline model. This can be explained by Swin-T being relatively large for easier datasets like CIFAR, where over-parameterized models can overfit. Additionally, we have also pruned the Swin-T backbone for the COCO object detection task \cite{lin2014microsoft}, following the settings in the Swin paper \cite{liu2021swin}. That is, batch size = 16, initial learning rate = 0.0001, weight decay = 0.05, epochs = 36, and all the other settings of the backbone are the same as for the Swin-T for CIFAR classification discussed above. We use Cascade Mask R-CNN \cite{cai2018cascade} as the detection head, in accordance with that of the Swin-T baseline. Again we follow the steps in Fig. \ref{diag}, and prune the model with pruning ratios of 20\% and 40\%. During the search stage, we also start training from a pretrained Swin-T object detection model. Table \ref{swin_det} indicates that the model with 20\% of the backbone dimensions pruned has similar box mAP and mask mAP performance to the unpruned model. Here mAP means the mean average precision over all categories; the box or mask prefix indicates whether mAP is computed over bounding boxes or masks. Even when 40\% of the backbone dimensions are pruned, the loss in AP is still less than 1.5\%.
This is a reasonable outcome, since the detection task is more complicated than the classification task, and pruning a detection model can therefore lead to a slightly larger accuracy decline. \begin{table} \begin{center} \caption{Prune Swin-T via SiDT on CIFAR-100 classification task. PR = Pruning Ratio. Acc = accuracy. Para. = number of parameters. $\diamond$ This baseline is recovered on our device with one RTX 3090 GPU.} \label{swin_class} \begin{tabular}{l c c c} \toprule \specialrule{0em}{1pt}{1pt} PR & Acc (\%) & Para. (M) & FLOPS (G) \\ \specialrule{0em}{1pt}{1pt} \midrule \specialrule{0em}{1pt}{1pt} 0\% (Baseline \cite{nested2021}) & 78.07 & - & -\\ 0\% (Baseline $\diamond$) & 81.78 & 27.60 & 4.49\\ \specialrule{0em}{1pt}{1pt} \midrule \specialrule{0em}{1pt}{1pt} 20\% SiDT & \textbf{82.75} & 23.28 & 3.53 \\ 40\% SiDT & 82.11 & 17.89 & 2.60 \\ 60\% SiDT & 80.81 & 11.92 & 1.73 \\ 80\% SiDT & 79.35 & \textbf{7.17} & \textbf{0.92} \\ \specialrule{0em}{1pt}{1pt} \bottomrule \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Prune Swin-T backbone via SiDT on COCO object detection task. PR = Pruning Ratio. } \label{swin_det} \begin{tabular}{l c c c c} \toprule \specialrule{0em}{1pt}{1pt} \multirow{2}{*}{\makecell{PR}} & \multicolumn{2}{c}{mAP} & \multicolumn{2}{c}{Para. (M)}\\ & Box & Mask & Total & Backbone \\ \specialrule{0em}{1pt}{1pt} \midrule \specialrule{0em}{1pt}{1pt} 0\% (Baseline \cite{liu2021swin}) & \textbf{50.5} & \textbf{43.7} & 86 & 28\\ \specialrule{0em}{1pt}{1pt} \midrule \specialrule{0em}{1pt}{1pt} 20\% SiDT & 50.4 & \textbf{43.7} & 80 & 22\\ 40\% SiDT & 49.2 & 42.9 & \textbf{74} & \textbf{16}\\ \specialrule{0em}{1pt}{1pt} \bottomrule \end{tabular} \end{center} \end{table} \section{Conclusion} We have developed SiDT, a method for searching for the intrinsic dimensions of transformers, and provided its complexity analysis. Experiments on multiple vision tasks have shown that SiDT can improve the efficiency of vision transformers with little accuracy loss. This method will be applied to more computer vision tasks in future work. \section{Acknowledgements} The work was partially supported by NSF grants DMS-1854434, DMS-1952644, and a Qualcomm Faculty Award. The authors would like to thank Dr. Shuai Zhang and Dr. Jiancheng Lyu for helpful discussions. \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} Improvements in deep learning algorithms and the availability of large annotated datasets have been critical to the gains achieved in computer vision tasks, including segmentation and classification. However, the application of these algorithms remains a challenge for medical imaging, as computer vision algorithms usually require large, well-annotated datasets \cite{rajpurkar2017chexnet}. In medical practice, annotated datasets are expensive to curate \cite{doi:10.1148/radiol.2020192224}, limited by HIPAA/GDPR and other regulations \cite{10.1001/archinte.165.10.1125}, and often focused on a specific medical condition, limiting generalizability. Deep learning approaches that perform classification and segmentation tasks require large annotated datasets. Usually, a deep learning neural network will go over this large dataset multiple times (called epochs) to continuously adjust the weights of the network nodes during the training process. A contrarian approach to this traditional (big-shot) approach, studied by many researchers under the terms few-shot learning (FSL), one-shot learning, and zero-shot learning \cite{peng2015learning}, is to use fewer epochs or fewer images in the neural network training \cite{ravi2016optimization}. In medical imaging, the big-shot approach is tedious, labor-intensive, and thus expensive because radiologists' time is costly \cite{chartrand2017deep}. \begin{figure*}[!bp] \centering \includegraphics[width=0.75 \linewidth]{FSL-triplet.png} \caption{Pneumothorax detection: image triplets} \label{Fig:Triplets} \end{figure*} Few-Shot Learning on label-limited datasets in medical imaging is not new \cite{pmlr-v106-prabhu19a, medela2019few, paul2021discriminative}. In our proposed approach, we work with limited data and inter-annotator differences. We evaluate our approach using the CheXpert \cite{DBLP:journals/corr/abs-1901-07031} chest X-ray dataset. Our approach of blending an FSL algorithm with image triplets, which we call Triplet Few-Shot Learning (TFSL), is practical and novel. An \textbf{\textit{image triplet}} is a set of three images: a false positive (FP) or false negative (FN) generated by the model, a true positive (TP), and a true negative (TN). \section{Related Works} Various deep learning techniques, including Convolutional Neural Networks (CNNs), Long Short-Term Memory networks, fast AnoGAN (f-AnoGAN), and multi-model fusion strategies, have been applied to medical imaging tasks \cite{gao2018fully,du2016overview,DBLP:journals/corr/KamnitsasLNSKMR16,DBLP:journals/corr/abs-1808-01200,SCHLEGL201930}. Regardless of the deep learning technique applied, large annotated datasets are required to train AI algorithms, and the model outputs often include false inferences. FSL has been applied to computer vision detection and segmentation tasks, allowing robust model generation from small datasets, even as extreme as one-shot learning utilizing a single positive/negative image pair \cite{chanda2019face, chen2019selfsupervised, hu2020leveraging, luo2019fewshot}. FSL is a form of meta-learning which allows generalization to new classes on small datasets. FSL has been applied in medical imaging for many tasks. In an ISBI 2019 paper, Medela et al. applied FSL to reduce the need for labeled biological datasets in histopathology \cite{medela2019few}. Ali et al. (2020) applied it for classifying endoscopy images \cite{ali2020additive}.
More recently, FSL has been applied to interactive training during segmentation, similar to our vision of using it in human-in-the-loop situations \cite{feng2021interactive}, to COVID-19 diagnosis from CT scans \cite{chen2021momentum}, and to detecting rare pathologies from fundus images that were collected for a different purpose, such as Diabetic Retinopathy screening \cite{quellec2020automatic}. FSL can perform better than other frameworks, such as transfer learning and Siamese networks, at detecting rare conditions usually represented with few images \cite{quellec2020automatic, feyjie2020semi, kim2017few}. \section{Technical Description} The TFSL algorithm is designed using MarginRankingLoss to reduce the number of false inferences made by the model. The TFSL algorithm is built on the best-performing pre-trained model submitted to the CheXpert competition. The pre-trained model used in the paper for experiments is trained using CheXpert data and is available at \url{https://github.com/gaetandi/cheXpert}. An inference run of this model is used to create a baseline model. \subsection{Triplet Data Creation} The CheXpert dataset is used to train and evaluate the TFSL approach on image triplets. The first image in the image triplet is randomly selected from the failed inference images of the baseline model. Theoretically, the inference failure can be either an FP or an FN. The second image in the image triplet is a TP and the third image is a TN. While collecting the \textit{training image triplets} (Fig \ref{Fig:Triplets}), a checking label is also collected. The checking label is -1 if the first image of the triplet is an FN and 1 if it is an FP. The choice of -1 and 1 as labels is explained in Section \ref{modeling-choices}. We tested sets of 50/100/150 image triplets, and the fine-tuned model improved performance over the baseline model at 150 image triplets. The randomly selected 150 image triplets were used for training the TFSL algorithm, but all the failed inference images (except the ones used for training) were collected and used to validate the algorithm. False inference images are randomly selected from all the failed inference images. \subsection{Baseline Model} The pre-trained model uses a DenseNet121 architecture to classify pathologies from an image. Before training, images are converted to RGB, resized to 320x320, and further cropped to 224. The image data is converted into a PyTorch DataLoader. The Adam optimizer and BCELoss (Binary Cross-Entropy Loss) are used to build the pre-trained model. Inference (evaluation) of the pre-trained model was used as the baseline model in this paper. We also implemented the FSL algorithm by following the guidelines outlined in \cite{cermelli2020guidelines} and refer to it as \textbf{Incremental FSL Training} in this paper. \subsection{Triplet Few-Shot Learning (TFSL) Model} The pre-trained classification model was modified by replacing the final layer (a Linear layer and Sigmoid activation function) with a Linear layer of 128 units and a PReLU (Parametric Rectified Linear Unit) \cite{he2015delving} activation function to create 128-dimensional vectors for every image in the image triplets. The architecture of dataset creation and modeling for the TFSL algorithm is summarized in Fig \ref{Fig:Architecture}. \begin{minipage}[b]{0.9\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{TFSL.png}} \label{Fig:Architecture} \end{minipage} Each image in the image triplet was transformed into a 128-dimensional vector.
These 128-dimensional vectors are used to calculate the distance between the images. If the false inference image is an FN, then the image should be closer to the TP image from the triplet in an $n$-dimensional space; conversely, if the false inference is an FP, then the image should be closer to the TN image from the triplet in an $n$-dimensional space. We use the pre-trained classification model to create the $n$-dimensional vectors. The TFSL model is trained for five epochs using the Adam optimizer, a learning rate of 0.0001, and a weight decay of 1e-5. Positive Predictive Value (PPV) and Negative Predictive Value (NPV) are used as evaluation metrics. Margin Ranking Loss is used as the loss function. The training approach is slightly different from the Incremental FSL approach, in which the model was trained and validated on all 14 pathologies at once. \subsection{Modeling Choices} \label{modeling-choices} The Margin Ranking Loss function takes two inputs $x1$, $x2$ along with a label $y$, which is 1 if the first input should be ranked higher and -1 if the first input should be ranked lower. We chose -1 and 1 as labels while creating training image triplets to mimic this pattern. The Euclidean distance between the first image vector and the second image vector is $x1$, and the Euclidean distance between the first image vector and the third image vector is $x2$. The mathematical form of the Margin Ranking Loss function is provided in (\ref{MRLF_Formula}), where $x1$, $x2$ are the loss inputs and $y$ is the label tensor. \begin{equation} \label{MRLF_Formula} loss(x1, x2, y) = max(0, -y*(x1-x2) + margin) \end{equation} PPV and NPV are chosen as evaluation metrics for the FSL because they aid in identifying the decrease in False Positives and False Negatives, respectively. PPV increases as FPs are converted to TNs. NPV increases as FNs are converted to TPs. \section{Experiments and Results} \label{sec:experiments} All the models designed, trained, and evaluated in this paper use PyTorch. We open-sourced the dataset creation, training and validation code\footnote{The code for this paper is at \url{https://github.com/iupui-soic/Radiology_FSL}}. We compare the baseline DenseNet model to the TFSL and Incremental FSL models. The results are provided in Table \ref{Results}.
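For concreteness, the triplet scoring step of Section \ref{modeling-choices} can be sketched in a few lines of PyTorch; the tensor names and the margin value are illustrative assumptions rather than excerpts from our released code:
\begin{verbatim}
import torch
import torch.nn as nn

# emb_f, emb_tp, emb_tn: 128-d embeddings of the false-inference,
# TP and TN images; y = -1 for an FN triplet, +1 for an FP triplet.
loss_fn = nn.MarginRankingLoss(margin=1.0)  # margin is a placeholder

def triplet_loss(emb_f, emb_tp, emb_tn, y):
    x1 = torch.norm(emb_f - emb_tp, dim=-1)  # distance to TP image
    x2 = torch.norm(emb_f - emb_tn, dim=-1)  # distance to TN image
    return loss_fn(x1, x2, y)  # max(0, -y*(x1 - x2) + margin)

# e.g. y = torch.tensor(-1.0) pulls an FN image toward the TP image
\end{verbatim}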
\begin{table}[!ht] \small \vspace{-2mm} \caption{Results from the Baseline, TFSL and Incremental FSL Models}\label{Results} \setlength{\tabcolsep}{0.5pt} \begin{center} \vspace{-4mm} \begin{tabular}{|*{7}{c|}} \hline \multicolumn{1}{|c}{}& \multicolumn{2}{|c|}{\shortstack{\\\textbf{Baseline} \\ \textbf{Model}}} & \multicolumn{2}{|c|}{\shortstack{\\\textbf{TFSL} \\ \textbf{Model} }} & \multicolumn{2}{|c|}{\shortstack{\\\textbf{Incremental} \\ \textbf{FSL Model}}}\\ \hline \multicolumn{1}{|c}{Pathology} & \multicolumn{1}{|c}{PPV} & \multicolumn{1}{|c}{NPV} & \multicolumn{1}{|c}{PPV} & \multicolumn{1}{|c}{NPV} & \multicolumn{1}{|c}{PPV} & \multicolumn{1}{|c|}{NPV} \\ \hline No Finding & 89.99 & 10.35 & 94.06 & 44.38 & \textbf{94.53} & \textbf{47.91} \\ \hline \shortstack{\\Enlarged \\ Cardiomediastinum} & 100.0 & 0.0 & 100.0 & 40.0 & 100.0 & \textbf{53.28} \\ \hline Cardiomegaly & 88.63 & 11.30 & 93.88 & 47.87 & \textbf{93.93} & \textbf{51.32} \\ \hline Lung Opacity & 71.83 & 28.51 & \textbf{86.14} & 61.23 & 85.53 & \textbf{63.56} \\ \hline Lung Lesion & 99.91 & 0.0 & 99.97 & 44.14 & 99.98 & \textbf{48.16} \\ \hline Edema & 80.49 & 20.01 & 89.33 & 50.49 & \textbf{89.80} & \textbf{54.14} \\ \hline Consolidation & 99.98 & 0.0 & 100.0 & 42.79 & 100.0 & \textbf{46.97} \\ \hline Pneumonia & 99.48 & 0.57 & \textbf{99.82} & \textbf{48.48} & 99.70 & 47.69 \\ \hline Atelectasis & 94.48 & 5.74 & \textbf{97.31} & 47.29 & 97.11 & \textbf{50.04} \\ \hline Pneumothorax & 99.92 & 0.0 & \textbf{99.94} & 42.46 & 99.90 & \textbf{48.64} \\ \hline Pleural Effusion & 79.94 & 20.25 & \textbf{89.70} & 52.08 & 88.56 & \textbf{55.85} \\ \hline Pleural Other & 100.0 & 0.0 & 100.0 & \textbf{49.47} & 100.0 & 46.31 \\ \hline Fracture & 100.0 & 0.0 & 100.0 & 50.0 & 100.0 & \textbf{50.73} \\ \hline Support Devices & 75.19 & 23.31 & \textbf{87.49} & 57.63 & 86.96 & \textbf{58.13} \\ \hline \end{tabular} \end{center} \vspace{-2mm} \end{table} \section{Results} \label{sec:results} The baseline model has high PPV and low NPV values. The TFSL model reduced FP and FN outputs, indicated by an increase in PPV and NPV, respectively. By performing a statistical test, we concluded that the TFSL algorithm and Incremental FSL algorithm results improved over the baseline results. After performing a similar test on the TFSL and Incremental FSL algorithms, we concluded that their PPV values did not significantly differ from each other. The Incremental FSL NPV was higher than that of the TFSL model, with a statistical significance value (p-value) of 0.007. We used the dependent t-test (from the \textit{statsmodels} and \textit{SciPy} packages) to perform the above statistical tests. The time taken to train and validate the TFSL model on any pathology is around 8-9 minutes on an Nvidia Quadro P6000 GPU with 24 GB memory. This provides us with the ability to label and train the model rapidly. The Incremental FSL likewise took 8-9 minutes to train and validate on any pathology. \section{Discussion} \label{sec:discussion} FSL architectures are growing within medical imaging \cite{PAUL2021101911}. This work builds on FSL architectures through the use of triplets, as shown in Figure 2. TFSL requires less training data and time compared to previous approaches. While a two-step process of using a saliency-based classifier with a discriminative autoencoder ensemble \cite{PAUL2021101911} has better performance in FSL compared to our approach, the simplicity and speed of our approach are important advantages to be considered.
Our approach for fine-tuning models can be taken to edge devices, as has been shown in the non-medical imaging domain \cite{lungu2020siamese}. Additionally, all previous FSL architectures consider a singular ground truth label for images. Image labeling can have variability, particularly among radiologists when annotating studies \cite{cabitza_bridging_2020,saha_breast_2018}. Our approach is able to use triplets that are determined by the human-in-the-loop's annotations and train a model that is specific to these new labels. Thus, the TFSL approach can handle the rapid re-training required on false inference images as determined by the user radiologist. The baseline model frequently failed to identify true negative inference images in pathologies such as Enlarged Cardiomediastinum, Lung Lesion, Consolidation, Pleural Other and Fracture, and at other times failed to identify true positives. Fine-tuning in TFSL and Incremental FSL substantially improved the identification of TP and TN inference images. A TFSL and human-in-the-loop system can be retrained easily through transfer learning methods, which is difficult with other approaches. \section{Conclusion} \label{sec:conclusion} In this paper, we presented a comparison of results between baseline and fine-tuned models, providing conclusive evidence that the TFSL algorithm outperformed the baseline model. The use of Margin Ranking as the loss function, performance gains on limited datasets with quick retraining, and the simplicity of our approach are important characteristics of the TFSL approach. In the future, we plan to test this approach on different modalities and non-medical/natural image datasets. In summary, the major contributions of our paper are as follows: \begin{enumerate} \item We present a modified Few-Shot Learning algorithm to effectively improve the results of predicting pathologies on images whose inferences failed. \item We present a comparison of results between a fine-tuned model, our Few-Shot Learning model and a previously published few-shot learning algorithm trained in an incremental fashion. Our model outperforms the fine-tuned model and achieves a higher NPV in all classes, with performance close to the Incremental FSL. We demonstrate that we can get good performance at lower computational cost. \item We experiment with the MarginRankingLoss and TripletMarginLoss functions as loss functions. Despite the assumption that TripletMarginLoss would perform better for image triplets, we found that MarginRankingLoss is more appropriate for our use case. This is not described in any of the previous works. \item Previous works have evaluated Few-Shot Learning on fewer classes within one or multiple datasets. We present our experiments using the CheXpert dataset to improve the predictions on 14 pathologies on chest X-rays. \end{enumerate} \section*{Institutional Review Board (IRB)} The research performed in this paper does not require IRB approval as the data is an open-source dataset.
\section{Introduction} Treewidth is a graph-theoretic parameter that measures the resemblance of a graph to a tree. We begin by recalling the definition of treewidth. \begin{definition}[Tree Decomposition] A tree decomposition of a graph $G = (V, E)$ is a pair $(T, X)$, where $X$ is a collection of subsets of $V$, called bags, and $T$ is a tree on the bags $X$ satisfying the properties below: \end{definition} \begin{enumerate}[noitemsep] \item The union of all sets $X_i \in X$ is $V$. \item For all edges $(u, v) \in E$, there exists some bag $X_i$ which contains both $u$ and $v$. \item If both $X_i$ and $X_j$ contain some vertex $u \in V$, then all bags $X_k$ on the unique path from $X_i$ to $X_j$ in $T$ also contain $u$. \end{enumerate} \begin{definition}[Treewidth] The width of a tree decomposition $(T, X)$ is one less than the cardinality of the largest bag. More formally, we can express this as \begin{align*} \max_i |X_i| - 1. \end{align*} The treewidth of a graph $G = (V, E)$ is the minimum width among all tree decompositions of $G$. \end{definition} Many graph-theoretic problems that are NP-hard on general instances admit polynomial-time algorithms on graph families whose treewidth is a sufficiently slowly growing function of the number of vertices~\cite{kloks1994treewidth}. There is a vast literature concerned with relating the treewidth of graphs to other well-studied combinatorial parameters and leveraging this to devise efficient algorithms for algorithmic problems in graphs with constant or logarithmic treewidth~\cite{cygan2015parameterized}. An excellent introduction to the concept of treewidth, as well as a brief survey of the work of Robertson and Seymour in establishing this concept, can be found in Chapter 12 of~\cite{diestel2005graph}. These treewidth-based algorithmic methods, however, have historically found limited applicability in random graphs. Sparse random graphs $G(n, d/n)$, where every edge occurs independently with probability $d/n$ for some $d > 1$, exhibit a striking contrast between their local and global properties---and this contrast is apparent when looking at treewidth. Locally, these graphs appear tree-like with high probability\footnote{Given a random graph model, we say an event happens with high probability if it occurs with probability tending to $1$ as $n$ tends to infinity.} (w.h.p.): the ball of radius $O(\log_d n)$ around every vertex looks like a tree plus a constant number of additional edges. Globally, however, these graphs have w.h.p. treewidth $\Omega(n)$. For example, the super-critical random graph $G(n, \frac{1+\delta}{n})$ has w.h.p. treewidth $\Omega(n)$~\cite{do2022note,perarnau2014tree,lee2012rank}. As a result of this global property, conventional techniques used to exploit low treewidth to derive efficient algorithms do not apply directly to random graphs. In this paper, we attempt to take advantage of the local tree-like structure of random graphs by trying to understand the \emph{local} behavior of their treewidth. Central to our approach is the following definition. \begin{definition}[Local Treewidth]\label{def:local} Let $G$ be an undirected $n$-vertex graph. Given $k \leq n$ we denote by $t_k(G)$ the largest treewidth of a subgraph of cardinality $k$ of $G$. \end{definition} In words, the local treewidth of an $n$-vertex graph, with locality parameter $k$, is the maximum possible treewidth across all subgraphs of size $k$.
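For small instances, Definition~\ref{def:local} can be evaluated directly by enumeration. The following Python sketch is purely illustrative (it uses the min-degree heuristic from \texttt{networkx}, which only upper-bounds treewidth, and runs in time exponential in $k$):
\begin{verbatim}
from itertools import combinations
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def local_treewidth_upper_bound(G, k):
    """Maximize a treewidth upper bound over all induced k-vertex
    subgraphs; since treewidth is monotone under taking subgraphs,
    induced subgraphs suffice.  For intuition only."""
    best = 0
    for S in combinations(G.nodes, k):
        width, _ = treewidth_min_degree(G.subgraph(S))
        best = max(best, width)
    return best
\end{verbatim}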
We study two random graph models, starting with the familiar binomial random graph $G(n, p)$. While the binomial random graph $G(n,p)$ lacks many of the characteristics of empirically observed networks, such as skewed degree distributions, studying algorithmic problems on random graphs can nevertheless lead to interesting algorithms. One reason is that algorithms for $G(n, p)$ can often apply to a larger family of networks; indeed, the algorithms we develop for $G(n,p)$ also work for noisy trees. \begin{definition}[Noisy Trees]\label{def:noisy} Let $T$ be an $n$-vertex tree. The noisy tree $T'$ obtained from $T$ is a random graph model where every non-edge of $T$ is added to $T$ independently, with probability $1/n$. \end{definition} Here we assume $p=1/n$ for convenience; all our results regarding noisy trees also hold when the perturbation probability $p$ satisfies $p=\epsilon/n$ for $\epsilon <1$. Noisy trees are related to small world models of random networks~\cite{newman1999renormalization,newman2011scaling}, where adding a few random edges to a graph of high diameter, such as a path, results in a graph of logarithmic diameter w.h.p.~\cite{krivelevich2015smoothed}. Sparse connected graphs that are perturbed with a small number of random edges have received attention in various domains~\cite{newman1999renormalization}. Below, we give an informal description of the concepts we study and sketch our main results; we defer discussion of formal results until Section~\ref{sec:formal} and later in the paper. Our main result is a nearly tight bound holding w.h.p. for the maximum treewidth of a $k$-vertex subgraph of $G(n, p)$ assuming $k \leq n^{1-\epsilon}$ for $\epsilon \in (0, 1)$ and $p=d/n$ for $d > 1$ a sufficiently large constant.\footnote{Our \emph{upper} bound on the local treewidth holds for any $d$.} In the notation introduced earlier, this provides a bound for $t_k(G)$. We also obtain nearly tight bounds for the local treewidth of noisy trees. Our upper bounds on the local treewidth are motivated by algorithmic problems focused on containing the spread of a contagious process over undirected graphs by deleting edges. Such problems might naturally arise, among other applications~\cite{enright2018deleting,enright2021deleting}, in railways and air routes, where the goal might be to prevent spread while also minimizing interference to transportation. In this context, edge deletion may correspond to removing a transportation link altogether or introducing special requirements (such as costly checks) for people traveling between the endpoints. Edge removal can also be viewed as a \emph{social distancing} measure to control an epidemic outbreak~\cite{babay2022controlling}. One can also study the problem of removing \emph{vertices} to control the spread of an epidemic, which is related to vaccinations: making nodes immune to infection and removing them from the network~\cite{sambaturu2020designing}. While we focus here on edge deletions, our algorithmic ideas apply to vertex removals as well. We focus on the bootstrap percolation contagious process, detailed later (Definition~\ref{def:boot}), starting with a set of initially infected vertices, $A$. We then consider two edge-removal problems: \emph{Stopping Contagion} and \emph{Minimizing Contagion}.
Letting $k = |A|$, we obtain exact algorithms for these problems in random graphs and noisy trees with running time upper bounded by $2^{o(k)}\poly(n)$ for $n$-vertex graphs, assuming $k$, $d$, and the maximum degree $\Delta$ of the tree to which noise is added are not too large. Roughly speaking, and ignoring low order terms, we can achieve running time $2^{O(k/ \log n)}\poly(n)$. Note that our algorithms do not achieve polynomial time, even for $k$ that is poly-logarithmic in $n$; whether there exists a polynomial time algorithm for minimizing contagion and stopping contagion in $G(n, p)$ for every value of $k$ is an open question. Nonetheless, the dependency of our algorithm on $k$ is better (assuming $k \leq n^{\epsilon}$ for an appropriate constant $\epsilon>0$) than the dependency on $k$ in the running time of the best known algorithms for minimizing contagion\footnote{We are not aware of previous algorithms for the stopping contagion problem.} in the worst case~\cite{cordasco2021parameterized}: these algorithms have running time $2^{\Omega(k)}\poly(n)$. Our algorithms are based on the following three observations: \begin{enumerate}[noitemsep] \item The local treewidth of binomial random graphs and noisy trees is sublinear in $k$. \item There exist fast algorithms for minimizing and stopping contagion in graphs of bounded treewidth. \item The set of seeds $A$ has what we call the \emph{bounded spread} property: w.h.p. at most $c|A|$ additional vertices are infected from $A$ for some constant\footnote{For $G(n,d/n)$, our constant $c:=c(d)$ is a function of $d$. When $d$ is a constant independent of $n$ so is $c$.} $c$. Bounded spread allows us to solve minimizing contagion and stopping contagion on subgraphs that have small (sublinear in $k$) treewidth. \end{enumerate} For the sake of brevity and readability we focus on \emph{edge} deletion problems. We note that our algorithms can be easily adapted for the analogous problems of minimizing and stopping contagion by deleting \emph{vertices} rather than edges. The reason is that our algorithms for minimizing/stopping contagion on bounded treewidth graphs work (with the same asymptotic running time guarantees) for vertex deletion problems. Combining the algorithms for bounded treewidth with the bounded spread property, as well as the upper bound on the local treewidth, yields algorithms for the vertex deletion versions of minimizing and stopping contagion. Our main contribution is studying the concept of local treewidth for random graphs and connecting it to algorithmic problems involving stopping contagion in networks. Our calculations are standard, and the contribution is conceptual rather than the introduction of a new technique. Our results for noisy trees regarding the bounded spread property are interesting as, in contrast to other infection models considered in the literature~\cite{becchetti2021sharp}, the influence of adding random ``long range'' edges on the total spread from sufficiently small seed sets is minor, in the sense that it increases the total spread only by a constant factor. \section{Our results}\label{sec:formal} \subsection{Local treewidth bounds} Recall we define the local treewidth of a graph $G$, denoted $t_k(G)$, to be the greatest treewidth among all subgraphs of size $k$. Trivially, for any graph with at least one edge and $k \leq n$, $1 \leq t_k(G) \leq k$. Consider as an illustrative example the random graph $G = G(n, 1/2)$: with high probability, $t_k(G)=\Omega(k)$ for all values of $k$.
For $k \leq 1.9 \log n$ this follows as there is a clique of size $k$ in $G$ w.h.p. For $k > 1.9 \log n$ this follows as a randomly chosen subset of size $k$ has, with high probability, minimum degree $\Omega(k)$, and a graph with treewidth $r$ has a vertex of degree at most $r$. We can now state our bounds for $t_k$ in the random graph models we consider. From here onward, $\epsilon>0$ is taken to be a positive constant in $(0,1)$. We give somewhat compressed statements; references to the full theorems are provided throughout this section. \begin{theorem} Let $G = G(n, p)$ with $p=d/n$ and $k \leq n^{1-\epsilon}$. Then, with high probability: \begin{align*} t_k(G) \leq 3+ O\left(\frac{k \log d}{\log n}\right). \end{align*} \end{theorem} Since we always know $t_k(G)\leq k$, the upper bound in the theorem above becomes trivial if $d \geq n^{\Omega(1)}$. Also observe that the theorem does not hold for arbitrary $k \leq n$, as for $k = n$, $t_k(G)=\Omega(n)$ w.h.p. In terms of lower bounds, we have the following: \begin{theorem} Suppose $p=d/n$ and $d$ is a sufficiently large constant. Suppose $k \leq O(n/d)$; then, w.h.p. \begin{align*} t_k(G) \geq \Omega\left(\frac{k \log d}{\log n}\right). \end{align*} \end{theorem} More details can be found in Section~\ref{sec:random}. In contrast to the upper bound, which holds for arbitrary $d$, our proofs of the lower bound are tailored to the case of sparse random graphs where $d=O(1)$. We leave the study of $d$ diverging with $n$ to future work. Our upper and lower bounds for the local treewidth of $G(n, d/n)$ also extend to the random $d$-regular graph $G(n, d)$--details can be found in Subsection~\ref{sec:random-reg}. For noisy trees, we have the following results. \begin{theorem} Let $T$ be an $n$-vertex tree with maximum degree $\Delta$. Let $T'$ be a noisy tree obtained from $T$. Then w.h.p. \begin{align*} t_k(T') \leq 3 + O\left(\frac{k( \log k+\log \Delta)}{\log n}\right). \end{align*} \end{theorem} Observe that the upper bound in the theorem is trivial if $k, \Delta$ are $n^{\Omega(1)}$. As a result, in our proofs we will assume $k, \Delta \leq n^{\epsilon}$ for sufficiently small $\epsilon>0$. Our results can be extended to the case where each non-edge is added with probability $c/n$ for $c>1$. Similar ideas (which are omitted) yield the upper bound: \begin{align*} t_k(T') \leq 3 + O\left(\frac{k( \log k+\log \Delta+\log c)}{\log n}\right). \end{align*} We also provide a lower bound, showing that up to the $\log k, \log \Delta$ terms, the upper bound above is tight. Namely, the noisy path has w.h.p. local treewidth of order $\Omega(k/ \log n)$. For more details on the lower and upper bounds please see Section~\ref{sec:noisy}. \subsection{Contagious process and edge deletion problems}\label{sub:contagion} The local treewidth results outlined above prove useful in the context of two edge deletion problems we study. These problems arise when considering the evolution of a contagious process over an undirected graph. We focus on the $r$-neighbor bootstrap percolation model \cite{chalupa1979bootstrap}. \begin{definition}\label{def:boot} In $r$-neighbor bootstrap percolation we are given an undirected graph $G=(V,E)$ and an integer \emph{threshold} $r\geq 1$. Every vertex is either \emph{active} (we also use the term infected) or \emph{inactive}; a set of vertices composed entirely of active vertices is called active. Initially, a set of vertices called \emph{seeds}, $A_0$, is activated.
A contagious process evolves in discrete steps, where for integral $i > 0$, \begin{align*} A_i=A_{i-1}\cup \{v \in V:|N(v)\cap A_{i-1}|\geq r\}. \end{align*} Here, $N(v)$ is the set of neighbors of $v$. In words, a vertex becomes active in a given step if it has at least $r$ active neighbors. An active vertex remains active throughout the process and cannot become inactive. Set \begin{align*} \langle A_0 \rangle=\bigcup_i A_i. \end{align*} The set $\langle A_0 \rangle$ is the set of nodes that eventually get infected from $A_0$ in $G$. Clearly, $\langle A_0 \rangle$ depends on the graph $G$, so we sometimes write $\langle A_0 \rangle_G$ to call attention to the underlying graph. We say a vertex $v \in V$ gets \emph{activated} or \emph{infected} from a set of seeds $A_0$ if $v \in \langle A_0 \rangle$. \end{definition} It is straightforward to extend this definition to the case where every vertex $v$ has its own threshold $t(v)$ and a vertex is infected only if it has at least $t(v)$ active neighbors at some point. As is customary in bootstrap percolation models, we usually assume that all thresholds are larger than $1$. Now, given a network with an evolving contagious process, we introduce the stopping contagion problem: \begin{definition}[Stopping Contagion] In the stopping contagion problem, we are given as input a graph $G=(V,E)$ along with two disjoint sets of vertices, $A, B \subseteq V$. Given that all vertices in $A$ are active, the goal is to compute the minimum number of edge deletions necessary to ensure that no vertices from $B$ are infected. In other words, we want to make sure $\langle A \rangle_{G'} \cap B=\emptyset$, where $G'$ is the graph obtained from $G$ after edge deletions. Given an additional target parameter, $\ell$, the corresponding decision problem asks whether it is possible to ensure no vertices from $B$ are infected by deleting at most $\ell$ edges. \end{definition} Next, we consider the setting where, given a set of infected nodes, we want to remove the minimum number of edges to ensure that no more than $s$ additional vertices are infected. \begin{definition} In the minimizing contagion problem, we are given a graph $G=(V,E)$, a subset of vertices $A \subseteq V$ and a parameter $s$. Given that all vertices in $A$ are active, we want to compute the minimum number of edge deletions required to ensure at most $s$ vertices in $V \setminus A$ are infected. If $G'$ is the graph obtained from $G$ by edge deletions, then this condition is equivalent to requiring $|\langle A \rangle_{G'}| \leq |A|+ s$. In the decision problem, we want to decide if it is possible to ensure $ |\langle A \rangle_{G'}| \leq |A|+ s$ with at most $\ell$ edge deletions. \end{definition} Both stopping contagion and minimizing contagion are NP-complete, and stopping contagion remains NP-hard even if $|A|=2$ and $|B|=1$. For complete proofs, please refer to the Appendix. \subsection{Algorithmic results} For minimizing contagion, current algorithmic ideas~\cite{cordasco2021parameterized} can be used to prove that if $|A|$ and the optimal solution are of size $O(k)$, the problem can be solved in time $2^{O(k)}\poly(n)$ on $n$-vertex graphs. No such algorithm, parameterized by $|A|$ and the size of the optimal solution, is known for stopping contagion. Using our upper bounds for local treewidth, however, we can prove: \begin{theorem} Let $\epsilon$ be a constant in $(0,1)$. Suppose that $k \leq n^{1-\epsilon}$ and that every vertex has threshold greater than $1$. Let $G:=G(n,p)$ where $p=d/n$.
Assuming $d$ is a constant, we have that w.h.p. both minimizing contagion and stopping contagion can be solved in $G$ in time $2^{o(k)}\poly(n)$. \end{theorem} \begin{theorem} Suppose that $k \leq n^{\epsilon}$ and every vertex has a threshold greater than $1$. Let $T'$ be a noisy tree where the base tree $T$ has maximum degree $\Delta=O(1)$. Then w.h.p. both minimizing contagion and stopping contagion can be solved in $T'$ in time $2^{o(k)}\poly(n)$. \end{theorem} The dependence of the running time on $n, k, d$ and $\Delta$ can be made explicit: for precise statements, please see Section~\ref{sec:algorithms}. Algorithms for grids and planar graphs are presented in Section~\ref{sec:algorithms} as well. For our purpose, to translate local treewidth bounds into algorithmic results, we need an algorithm for solving stopping contagion and minimizing contagion on graphs of low treewidth. We provide such an algorithm that runs in time exponential in the treewidth, assuming the maximum degree is constant, using ideas from~\cite{cordasco2021parameterized}. More details can be found in Section~\ref{sec:treewidth}. \subsection{Our techniques} Our upper bounds for the local treewidth build on a simple ``edge excess principle'': a $k$-vertex connected graph with $k+r$ edges has treewidth at most $r+1$. As the treewidth of a set of connected components is the maximum treewidth of a component, it suffices to analyze the number of edges in connected subgraphs of the random graphs we study. For $G(n, p)$ this is straightforward, but for noisy trees it is somewhat more involved. We find it easier to first analyze the edge excess of connected subgraphs, before considering connecting edges that allow us to bound the excess of arbitrary subgraphs. A key component in our lower bound is the simple fact that if $H$ is a minor of $G$ then $\tw(G) \geq \tw(H)$. We first prove the existence of large treewidth subgraphs that are w.h.p. minors of random graphs and noisy trees. For $G(n,p)$ we show that sufficiently good \emph{edge} expanders\footnote{See Subsection~\ref{sec:def} for the precise definition of edge expanders.} contain large graphs with linear treewidth as minors. Recall that a graph is called an $\alpha$-vertex expander, for $\alpha \in (0,1)$, if every subset $S$ of vertices with at most $|V|/2$ vertices has at least $\alpha |S|$ neighbors not in $S$. Previous approaches for proving the existence of desired graphs as minors in expanders~\cite{kleinberg1996short,krivelevich2019expanders} obtain an embedding\footnote{See Subsection~\ref{sec:def} for further details on minor-theoretic concepts we use.} of a graph $H$ as a minor in $G$ where every connected subgraph of $G$ corresponding to a vertex in $H$ is of size $O(\log n)$. To achieve nearly tight lower bounds we need every connected subgraph to be of size $O(\log n/ \log d)$. We obtain this stronger upper bound by adapting the approach of Krivelevich and Nenadov~\cite{krivelevich2019expanders} (which considers \emph{vertex} expanders) to give the desired bounds for graphs that have sufficiently strong \emph{edge} expansion properties. We then rely on recent results of Krivelevich~\cite{krivelevich2018finding} showing that locally sparse graphs with some additional properties contain large expanding subgraphs. Similar ideas are also used to prove the existence of large minors with linear treewidth in the noisy path.
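Since the algorithms discussed next operate on the $r$-neighbor bootstrap percolation process of Definition~\ref{def:boot}, a minimal simulation sketch (ours, for illustration only) may serve as a concrete reference:
\begin{verbatim}
import networkx as nx

def bootstrap_percolation(G, seeds, r=2):
    """Return <A_0>: the set of vertices eventually infected from
    the seed set under r-neighbor bootstrap percolation."""
    active = set(seeds)
    while True:
        newly = {v for v in G.nodes if v not in active and
                 sum(1 for u in G.neighbors(v) if u in active) >= r}
        if not newly:
            return active
        active |= newly

# toy usage: spread from two seeds on a sparse random graph
G = nx.gnp_random_graph(50, 3 / 50, seed=0)
print(bootstrap_percolation(G, seeds=[0, 1], r=2))
\end{verbatim}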
Our algorithms for minimizing contagion and stopping contagion in graphs of bounded treewidth build on techniques designed to exploit the tree-like nature of low treewidth graphs, sharing similarities with algorithms for target set selection in~\cite{ben2011treewidth}---target set selection is the problem of finding a minimum-size set that infects an entire graph under the bootstrap percolation model. More directly, our problem resembles the Influence Diffusion Minimization (IDM) problem studied in \cite{cordasco2021parameterized}, where the goal is to minimize the spread of the $r$-neighbor bootstrap percolation process by preventing spread through vertices. After subdividing edges, minimizing contagion essentially reduces to IDM, albeit with additional restrictions on the vertices we can immunize (only vertices that belong to the ``middle'' of a subdivided edge can be deleted); we therefore solve a generalization of the IDM problem, see Definition~\ref{def:GIDM}, and use this to provide efficient algorithms for the problems we care about. At a high level, our algorithm works by solving stopping contagion recursively on subgraphs and then combining these solutions via dynamic programming until we have a solution for the whole graph. To combine subproblems successfully, at each step we explicitly compute solutions for all possible states of vertices in a bag. While this could take exponential time in general, this approach provides an efficient algorithm in graphs with bounded treewidth. Our proof of bounded spread in noisy trees works by proving that small subsets of such trees contain few edges~\cite{coja2014contagious,feige2017contagious}. Since every non-seed vertex needs at least two active neighbors to get infected, small contagious sets require small subsets that contain too many edges. Therefore, one can prove that small sets of seeds cannot infect too many vertices; the proof of the noisy trees' local sparsity is similar to the proof that w.h.p. such noisy trees have small local treewidth. \subsection{Related work} While the idea to remove edges or vertices to contain an epidemic has been studied before~\cite{ren2019generalized,braunstein2016network,aspnes2006inoculation}, most of these works focus on using edge or vertex deletions that break the graph into connected components of sublinear (or even constant) size~\cite{enright2018deleting,ren2019generalized,braunstein2016network}. Recently, approximation algorithms for the minimizing contagion problem have been studied in~\cite{babay2022controlling} for the SIR epidemic model. In particular,~\cite{babay2022controlling} studies the problem of deleting a set of edges of weight at most $B$ that minimizes the number of infected nodes after edge deletions. All these works consider a different contagion model from the $r\geq 2$ bootstrap percolation model studied here. Bootstrap percolation was first introduced by statistical physicists~\cite{chalupa1979bootstrap} and has been studied on a variety of graphs such as grids~\cite{balogh2012sharp}, hypercubes~\cite{balogh2006bootstrap,morrison2018extremal}, random graphs~\cite{janson2012bootstrap,feige2017contagious}, graphs with a power law degree distribution~\cite{amini2014bootstrap,schoenebeck2016complex}, Kleinberg's small world model~\cite{ebrahimi2015complex} and trees~\cite{riedl2012largest}.
The fixed parameter tractability of minimizing contagion with respect to \emph{vertex} deletions, as opposed to edge deletions, has been thoroughly investigated with respect to various parameters such as the maximum degree, treewidth, and the size of the seed set $k$ in~\cite{cordasco2021parameterized}. The authors of~\cite{cordasco2021parameterized} present efficient algorithms for minimizing contagion for graphs of bounded maximum degree and treewidth. With respect to $k$, using ideas from FPT algorithms for cut problems~\cite{fomin2013parameterized}, they give a $2^{k+\ell}\poly(n)$ algorithm for the case where the set of seeds is of size $k$ and there is a solution of size $\ell$ to the problem. Their algorithm can be easily adapted to the case of edge deletions: see Theorem~\ref{thm:exp}. We are not aware of the stopping contagion problem having been studied before, nor of previous studies of the minimizing contagion problem in random graphs. In order to deal with both stopping contagion and minimizing contagion for graphs of bounded treewidth, we build on algorithmic ideas from~\cite{ben2011treewidth}. The NP-hardness of minimizing contagion with respect to vertex deletion is proved in~\cite{cordasco2021parameterized}---our proof for the NP-hardness of the edge deletion version of minimizing contagion was found concurrently and independently; the proof is different from that in~\cite{cordasco2021parameterized}. There are two regimes of interest for the study of treewidth in sparse random graphs. For the subcritical regime $p \leq d/n$ with $d<1$, $G(n, p)$ has w.h.p. unicyclic connected components of size $O(\log n)$~\cite{erdHos1960evolution} and hence has treewidth at most $2$. For the supercritical regime with $p \geq d/n$ and $d>1$, $G(n,p)$ has w.h.p. a giant component of size $\Omega(n)$~\cite{erdHos1960evolution} and determining the treewidth is more complicated. Kloks~\cite{kloks1994treewidth} proved that the treewidth of $G(n, d/n)$ is $\Omega(n)$ w.h.p. for $d \geq 2.36$. His result was improved by Gao~\cite{gao2012treewidth} who showed that for $d \geq 2.16$, the treewidth of $G(n,d/n)$ is $\Omega(n)$ with high probability. Gao asked whether his result could be strengthened to prove that $G(n, d/n)$ has treewidth linear in $n$ w.h.p. for any $d>1$; this was later shown in~\cite{lee2012rank}. A different and somewhat simplified proof establishing that the treewidth of $G(n,d/n)$ is $\Omega(n)$ w.h.p. was given in~\cite{perarnau2014tree}. Finally, the fine-grained behavior of the treewidth of $G(n, (1+\epsilon)/n)$ was studied in~\cite{do2022note} where it was shown that for sufficiently small $\epsilon$, the treewidth of $G(n, (1+\epsilon)/n)$ is w.h.p. \begin{align*} \Omega\left(\frac{\epsilon^3}{\log(1/\epsilon)}\right)n. \end{align*} The first lower bound for the treewidth of random regular graphs appears to be from~\cite{perarnau2014tree}: the authors prove that for every constant $d > d_0$ where $d_0$ is a sufficiently large constant, the treewidth of the random regular graph $G(n,d)$ is $\Omega(n)$ w.h.p. In~\cite{feige2016giant} it was also shown that random graphs with a given degree sequence (with bounded maximum degree) that ensures the existence of a giant component w.h.p. (namely a degree sequence satisfying the Molloy-Reed criterion~\cite{molloy1995critical}) have linear treewidth as well, which implies, using a different argument than in~\cite{perarnau2014tree}, that $G(n,d)$ for $d>2$ has linear treewidth w.h.p.
A different proof for the linear lower bound of the treewidth of $G(n,d)$ for $d>2$ is given in~\cite{do2022note}. Several works have examined the use of local treewidth in devising algorithms for problems such as subgraph isomorphism~\cite{eppstein2002subgraph,hajiaghayi2001algorithms}. These works primarily focus on planar graphs and graphs avoiding a fixed minor---they do not seem to apply to random graphs. The only work we are aware of that has examined the local treewidth of random graphs is that of~\cite{dreier2018local}. Their main goal is to demonstrate that the treewidth of balls of radius $r$ around a given vertex depends only on $r$, as opposed to analyzing the local treewidth as a function of $n,d$ and $k$ as we do here. We employ a similar edge excess argument to the one in~\cite{dreier2018local} although there are some differences in the analysis and the results: please see Section~\ref{sec:random} for more details. We are not aware of previous work lower bounding the local treewidth of random graphs. Embedding minors in expanders has received attention in combinatorics~\cite{krivelevich2009minors} and theoretical computer science, finding applications in proof complexity~\cite{austrin2022perfect}. Kleinberg and Rubinfeld~\cite{kleinberg1996short} proved that if $G=(V,E)$ is an $\alpha$-expander with maximum degree $\Delta$, then every graph with $n/\log^{\kappa} n$ vertices and edges is a minor of $G$ for a constant $\kappa>1$ depending on $\Delta$ and $\alpha$. Later it was stated~\cite{chuzhoy2019large} that $\kappa(\Delta, \alpha)=\Omega(\log^2(\Delta)/ \log^2(1/\alpha))$. Krivelevich and Nenadov~\cite{krivelevich2019expanders} proved that if $G$ is an $\alpha$-vertex expander then it contains every graph with $cn/\log n$ edges and vertices for some universal constant $c>0$. The sparsity of random graphs as well as randomly perturbed trees was used in showing that these families have w.h.p. bounded expansion~\cite{nevsetvril2012characterisations,demaine2014structural}.\footnote{Bounded expansion should not be confused with the edge expansion of a graph. For a precise definition please see~\cite{nevsetvril2012characterisations,nevsetvril2012sparsity}.} These results are incomparable with our treewidth results: it is known that graphs with bounded maximum degree have bounded expansion and that $G(n,d/n)$ has bounded expansion w.h.p.~\cite{nevsetvril2012characterisations,nevsetvril2012sparsity}. In contrast, there exist $3$-regular graphs with linear treewidth, and, as previously mentioned, the treewidth of $G(n,d/n)$ is $\Omega(n)$ w.h.p. \subsection{Future directions} Our work raises several questions. We believe our upper bounds on the local treewidth of noisy trees can be made independent of the maximum degree of the tree; namely, for arbitrary trees, the local treewidth should be upper bounded w.h.p. by $O(k /\log n)$ assuming $k$ is not too large. Proving or disproving this, however, remains open. Understanding how well one can approximate minimizing contagion and stopping contagion in general graphs, as well as in graphs with certain structural properties (e.g., planar graphs), is a potential direction for future research as well. Finally, it could be of interest to study whether our bounds on local treewidth could lead to improved running times for additional algorithmic problems in random graphs. \subsection{Preliminaries}\label{sec:def} Throughout the paper $\log$ denotes the logarithm function with base $2$; we omit floor and ceiling signs to improve readability.
All graphs considered are undirected and have no parallel edges. Given a graph $G=(V,E)$ and two disjoint sets of vertices $A,B$ we denote by $E(A,B)$ the set of edges connecting a vertex in $A$ to a vertex in $B$. For $A,B$ as above we denote by $N_G(A,B)$ the set of vertices in $B$ with a neighbor in $A$. A graph $H$ is a \emph{minor} of $G$ if $H$ can be obtained from $G$ by repeatedly doing one of three operations: deleting an edge, contracting an edge or deleting a vertex. We keep our graphs simple and remove any parallel edges that may form during contractions. It can be verified~\cite{nevsetvril2012sparsity} that a graph $H$ with $k$ vertices is a minor of $G$ if and only if there are $k$ vertex-disjoint connected subgraphs $C_1, \ldots, C_k$ of $G$ such that for every edge $(v_i,v_j)$ of $H$, there is an edge connecting a vertex in $C_i$ to a vertex of $C_j$. We refer to the map sending every vertex $v_j$ of $H$ to $C_j$ as an \emph{embedding} of $H$ in $G$; the maximum vertex cardinality of the $C_i$, $1\leq i \leq k$, is called the \emph{width} of the embedding. We shall be relying on the well-known fact~\cite{nevsetvril2012sparsity,diestel2005graph} that if $H$ is a minor of $G$ then the treewidth of $G$ is lower bounded by the treewidth of $H$. We will also need the following definition of an edge expander: \begin{definition} Let $\beta \in (0,1)$ and $\gamma \geq 0$. A $(\beta,\gamma)$-edge expander is an $n$-vertex graph such that all sets of vertices $S$ with $|S| \leq \beta n$ satisfy $|E(S, V \setminus S)| > \gamma |S|$. \end{definition} Note that $\beta, \gamma$ may depend on $n$ or the average degree of $G$. For simplicity, we sometimes omit the edge part and simply refer to an edge expander as an expander. \section{Local treewidth of random graphs}\label{sec:random} In this section we prove upper and lower bounds for $t_k(G(n,p))$ that hold with high probability. We assume $k \leq n^{1-\epsilon}$ for a constant $\epsilon>0$. \subsection{Upper bound} From now on we focus on the case where $p=d/n$, assuming $d$ is a sufficiently large constant. Here and elsewhere it is likely that our results extend to the case where $d$ grows sufficiently slowly as a function of $n$, but we leave the study of $d:=d(n)$ diverging with $n$ for future work. Our main idea in upper bounding $t_k(G)$ is to leverage the fact that $G(n,p)$ is locally sparse and that if a few edges are added on top of a tree, the treewidth of the resulting graph cannot grow too much. \begin{lemma}\label{lem:excess} Let $G$ be a connected graph with $n$ vertices and $n-2+\ell$ edges. Then $\tw(G) \leq \ell$. \end{lemma} \begin{proof} Since $G$ is connected, it must have a spanning tree $T$ with $n$ vertices and $n - 1$ edges. The graph $G$ has exactly $\ell - 1$ additional edges; since adding an edge can increase a graph's treewidth by at most $1$, we immediately get the desired bound: \begin{align*} \tw(G) \le \tw(T) + \ell - 1 = \ell. \end{align*} \end{proof} We can now prove: \begin{theorem}\label{thm:upperbound} Suppose that $k\leq n^{1-\epsilon}$. Then for $G = G(n, p)$ we have w.h.p. that for every $m \leq k$: \begin{align*} t_m(G) \leq 3 + O\left(\frac{m\log d}{\log n} \right). \end{align*} \end{theorem} \begin{proof} Since the Theorem is obvious for $d=n^{\Omega(1)}$ we assume that $d \leq n^{\epsilon/2}$. We first prove the statement for $m=k$. Given a graph $G$ with treewidth $t$, it is always possible to find a connected subgraph of $G$ with the same treewidth as $G$.
In that spirit, rather than bounding the probability there exists some $k$-vertex subgraph of $G$ with treewidth exceeding some $r$, we bound the probability that some subgraph on $s \le k$ vertices is connected and has treewidth greater than $r$ in $G$. Fix some $S \subseteq V$ with exactly $s$ vertices. Note there are $s^{s - 2}$ possible spanning trees which could connect the vertices in $S$, each requiring $s - 1$ edges. While the resulting subgraph would be connected, its treewidth is only $1$. Therefore, $r$ additional edges would also be required to produce a subgraph with treewidth at least $r + 1$. Accounting for the ways to choose these edges, the probability the subgraph induced on $S$ is connected and has treewidth greater than $r$ is at most \begin{align*} s^{s - 2} \binom{\binom{s}{2}}{r} \left(\frac{d}{n}\right)^{r + s - 1}. \end{align*} This follows since each edge occurs independently with probability $p=d/n$. Now, we bound the probability that any such subset $S$ with at most $k$ vertices exists. To that end, we take a union bound over all $\binom{n}{s}$ possible subsets of $s$ vertices, letting $s$ range from $1$ to $k$. Putting this together and using the inequality ${a \choose b} \leq (ea/b)^b$ yields \begin{align*} \sum_{s = 1}^k \binom{n}{s} \times s^{s - 2} \binom{\binom{s}{2}}{r} \left(\frac{d}{n}\right)^{r + s - 1} &\le \frac{d^r}{n^{r - 1}} \sum_{s = 1}^k e^s \left(\frac{es^2}{2r}\right)^{r} d^s \\ &\le \frac{d^r}{n^{r - 1}} k e^k \left(\frac{ek^2}{r}\right)^{r} d^k. \end{align*} To complete the proof, notice this probability can be made to be at most $n^{-1}$ (using $k \leq n^{1-\epsilon}$ and $d \leq n^{\epsilon/2}$) when $r$ is taken to be \begin{align*} 2 + O\left( \frac{k\log d}{\log n} \right). \end{align*} The Theorem now follows for $m=k$ from Lemma~\ref{lem:excess}. Using the above proof along with a simple union bound over all $m \leq k \leq n^{1-\epsilon}$ implies the statement for all $m \leq k$. \end{proof} Notice the approach above yields a sharper bound than if we solely attempted to bound the treewidth by counting the number of excess edges above $k - 1$. To explain, notice a $k$-vertex subgraph can have treewidth $r$ only if it has at least $r + k - 1$ edges. A simple union bound over all possible subsets of $k$ vertices upper bounds the probability we are interested in: \begin{align*} \binom{n}{k} \binom{\binom{k}{2}}{r + k - 1} \left(\frac{d}{n}\right)^{r + k - 1} \le \frac{k^{2k} k^{2r} d^k d^r}{n^{r -1}}. \end{align*} This is implicitly used in~\cite{dreier2018local} to bound the treewidth of balls of radius $r$ in $G(n, p)$; as mentioned above, our method improves on this result. More concretely, since the upper bound now has an additional $k^k$ factor in the numerator, using this in our application would yield the weaker upper bound \begin{align*} t_k(G) = 3 + O\left( \frac{k(\log k + \log d)}{\log n} \right). \end{align*} \subsection{Lower bound} Throughout this section we assume that $d>1$ is a large enough constant. Here we use the fact that $G(n,d/n)$ contains with high probability a large expanding subgraph (which, in turn, contains large minors with large treewidth) to prove lower bounds on the local treewidth of $G(n,d/n)$. We need the following result relating the diameter of a graph to its edge expansion. This seems to be a folklore result: we give a sketch of the proof for completeness. \begin{lemma}\label{lem:diameter} Let $\alpha>0$ be a fixed constant.
Suppose that $G$ has maximum degree $d$ and is a $(1/2,\Omega(d^{\alpha}))$-edge expander. Then the diameter of $G$ is $O(\log_d n)$. \end{lemma} \begin{proof} Assume for simplicity that $G$ has edge expansion exactly $d^{\alpha}$. Consider two arbitrary vertices $u,v$ in $G$. Look at the balls $B_r(u)$ of radius $r=1,2, \ldots $ around $u$ until we reach the first $r$ with either $|B_r(u)| \geq n/2$ or the subgraph induced on $B_r(u)$ spans more than $nd/2-nd^{\alpha}/2$ edges. By the edge expansion properties of the graph we have that for $r=O(\frac{\log n}{\alpha\log d})$ one of these two cases must happen. Now do the same for $v$. When the balls around $u$ and $v$ either contain at least $n/2$ vertices or span more than $nd/2-nd^{\alpha}/2$ edges, they must intersect or be connected by an edge. This proves the desired upper bound on the diameter of $G$. \end{proof} We now prove that \emph{edge} expanders contain large minors. Our proof is similar to a proof by Krivelevich and Nenadov~\cite{krivelevich2019expanders} (Theorem 8.1) who prove a similar result for vertex expanders. \begin{theorem}\label{thm:minor_edge_expander} Let $G$ be a graph with maximum degree $d$. Suppose that $G$ is a $(2/3,\Omega(d/\log d))$-edge expander. Let $\epsilon>0$ be a sufficiently small constant. Then $G$ contains every graph $H$ with $k= \epsilon n / \log n$ vertices and edges as a minor. Furthermore, $H$ can be embedded as a minor of $G$ such that every vertex of $H$ is mapped to a connected subgraph of $G$ of size $O(\log n/\log d)$. \end{theorem} \begin{proof} As is standard we can and will assume $H$ has maximum degree $3$. It is well known (see, e.g.,~\cite{krivelevich2019expanders,kleinberg1996short}) that this assumption is without loss of generality. Recall that we assume $G$ is a $(2/3,b\cdot d/\log d)$-expander for some $b>0$. To simplify notation we assume with no loss of generality that $b=1$. We now give an iterative algorithm that, given a graph $G=(V,E)$ and $H$ as above, outputs either a nonexpanding subset of vertices or a minor embedding of $H$ into $G$. The algorithm maintains a partition of the vertices of $G$ into three subsets $A,B,C$, where eventually either $A$ will contain a large nonexpanding set or $B$ will contain a minor embedding of $H$. The invariant $|E(A,C)| \leq \frac{d|A|}{6\log d}$ will be maintained throughout the algorithm. The set $C$ will contain the remaining vertices that will be moved in future iterations to either $A$ or $B$. In addition, we keep track of a subset $I_0$ of $[k]$ that records the current subset of vertices of $H$ that have been embedded into $G$. This embedding is realized by connected pairwise disjoint subsets $B_i \subseteq B$ with $i \in I_0.$ Furthermore, $|B_i|\leq c\log n/\log d$ for every $i \in I_0$, for some constant $c>0$. The sets are initialized to be $A=B=\emptyset$, $C=V$, $I_0=\emptyset$. We repeat the following loop as long as $I_0$ does not equal $[k]$. Choose an arbitrary index $i$ in $[k] \setminus I_0.$ Let $X$ be the set of neighbors of $i$ in $I_0$. By our assumption $|X| \leq 3.$ Suppose first that $X$ is not empty. For $j \in X$, choose $v_j \in N_G(B_j,C)$ arbitrarily. If for some $j$ there is no such vertex, we move $B_j$ to $A$ and update: $A \leftarrow A \cup B_j, B \leftarrow B \setminus B_j, I_0 \leftarrow I_0 \setminus\{j\}.$ Otherwise, consider the subgraph induced on $C$, denoted $G(C)$.
If $G(C)$ is {\bf not} a $(1/2, \frac{d}{6\log d})$-edge expander, we find a subset $U$ of $C$ whose size is at most $|C|/2$ such that $|E(U,C \setminus U)| \leq \frac{d |U|}{6\log d}$ and update $A \leftarrow A \cup U, C \leftarrow C \setminus U$. If $G(C)$ is a $(1/2, \frac{d}{6\log d})$-edge expander then by Lemma~\ref{lem:diameter} it has diameter at most $a\log n/ \log d$ for some constant $a>0$. In this case we choose a subset $Y$ of $C$ of size $|Y| \leq 2\,\mathrm{diam}(G(C)) \leq 2a \log n/ \log d$ such that $G(Y)$ is connected and contains the set of vertices $v_j, j \in X$. This can be done by choosing one $v_j$ as the ``center'' vertex and connecting the other $v_j$'s to the center vertex with paths of length at most $\mathrm{diam}(G(C))$. We then update $B \leftarrow B \cup Y, C \leftarrow C \setminus Y, B_i \leftarrow Y, I_0 \leftarrow I_0 \cup\{i\}.$ Observe that $B_i$ is connected and has an edge connecting it to each $B_j$, $j \in X$. If $X$ is empty, simply choose an arbitrary vertex in $C$ and do the same update. We now record a few facts regarding the algorithm. First, as we always add to $A$ a subset of vertices $U$ satisfying $|E(U,C \setminus U)| \leq \frac{d |U|}{6\log d}$ and as $|C|$ decreases during the algorithm, we always have that $|E(A, C)| \leq \frac{d|A|}{6\log d}$ (note that a set $B_j$ is moved into $A$ only when it has no neighbors in $C$). Second, taking $k=\frac{\delta n}{\log n}$, the cardinality of $B$ can be made to be at most $\epsilon n/\log d$, where $\epsilon>0$ is an arbitrarily small constant. This follows as the size of each $B_i$ is at most $c \log n/\log d$. By taking $\delta$ to be sufficiently small we get that $|B|<\epsilon n/\log d.$ One of two things happens: either the algorithm terminates with $I_0=[k]$, or at some point $|A| \geq n/3.$ If $I_0=[k]$ we are done: we have found an embedding of $H$ as a minor of $G$ with the desired properties. Otherwise, consider the first time at which $|A| \geq n/3$. Just before this step $|A| < n/3$; since the set moved into $A$ is either some $B_j$ of size $O(\log n/\log d)$ or a set $U$ of size at most $|C|/2 \leq (n-|A|)/2$, we have $|A| \leq 2n/3$ at this point. Furthermore, $$|E(A,V \setminus A)| \leq d|B|+|E(A,C)| \leq \frac{\epsilon n d}{\log d} + \frac{d|A|}{6\log d}.$$ By taking $\epsilon$ to be sufficiently small we have (using the fact that $|A| \geq n/3$) that $|E(A,V \setminus A)| \leq d|A|/\log d$. As $|A| \leq 2n/3$ this contradicts the fact that $G$ is a $(2/3,d/\log d)$-expander. It follows that this second case can never happen, concluding the proof. \end{proof} We wish to apply Theorem~\ref{thm:minor_edge_expander} to the random graph $G:=G(n,d/n)$, for a (large enough) constant $d$. However, there are several obstacles. In the first place, $G$ is not connected: for example, w.h.p. it has isolated vertices. Furthermore the maximum degree of $G(n,d/n)$ is not $O(d)$. Rather, with high probability it is $\Omega(\log n/ \log \log n)$. We can deal with these issues and prove the existence of a subgraph of $G(n,d/n)$ which is a $(2/3,\Omega(d/ \log d))$-edge expander using known results, which we record below. First, we need the following result from~\cite{krivelevich2018finding}: \begin{theorem}\label{thm:sparse} Let $G:=G(n,d/n)$ and suppose that $\delta:=\delta(d) \leq \frac{1}{10d}$. Then w.h.p. every set of $\frac{\delta n}{\ln(1/ \delta)}$ vertices in $G$ touches at most $\delta n$ edges. \end{theorem} A direct corollary of Theorem~\ref{thm:sparse} is that in $G(n,d/n)$, a set of $e^{-\Omega(d)}n$ vertices touches (w.h.p.) $e^{-\Omega(d)}n$ edges. Second, we need the following fact from~\cite{balogh2010large} showing that sets of vertices in $G(n,d/n)$ that are not too large are very sparse. \begin{theorem}\label{thm:samotij} Let $k \geq 2$ and let $c \geq 10 k \log k$.
Then w.h.p. every subset $A$ of vertices of $G(n,c/n)$ satisfying $|A|\leq n/(ek)$ spans less than $c|A|/k$ edges. \end{theorem} A direct implication is that for sufficiently large $d$, in $G(n,d/n)$ w.h.p. every subset $A$ of vertices with $|A| \leq O(\log d/d)n$ spans at most $O(\log d)|A|$ edges. Finally we need a result of Krivelevich~\cite{krivelevich2018finding} about expansion properties of locally sparse graphs. \begin{theorem}\label{thm:kriv} Let $c_1>c_2>1$ and $\alpha >0$. Suppose that $G=(V,E)$ is an $n$-vertex graph satisfying: \begin{enumerate} \item $\frac{|E|}{|V|}\geq c_1;$ \item Every subset $W$ of $V$ of size less than $\alpha n$ spans less than $c_2|W|$ edges. \end{enumerate} Then $G$ contains an induced subgraph $H$ on at least $\alpha n$ vertices that is a $(2/3,\frac{c_1-c_2}{\log_{3/2} (1/\alpha)})$-edge expander. \end{theorem} Note: the original proof of Krivelevich shows the existence of a subgraph on $\alpha n$ vertices that is a $(1/2,\frac{c_1-c_2}{\log_{2} (1/\alpha)})$-edge expander. The same proof gives the result in Theorem~\ref{thm:kriv} about a $(2/3,\frac{c_1-c_2}{\log_{3/2} (1/\alpha)})$-edge expander. Armed with these observations we can prove that w.h.p. $G(n,d/n)$ contains a large edge expanding subgraph. \begin{theorem}\label{thm:subgraph} Let $G:=G(n,d/n)$. Assuming $d$ is sufficiently large, w.h.p., $G$ contains a subgraph $H$ with at least $cn$ vertices which is a $(2/3,\Omega(\frac{d}{\log d}))$-edge expander. Furthermore, the maximum degree of $H$ is $O(d)$. Here $c:=c(d)>0$ is a constant depending only on $d$. \end{theorem} \begin{proof} By standard concentration inequalities $G$ has w.h.p. $(1-o_d(1))dn/2$ edges. It is well known~\cite{molloy1995critical,chvatal1991almost} that w.h.p. the number of vertices in $G$ with more than $10d$ neighbors is at most $e^{-\Omega(d)}n$. Let $C$ be the set of vertices of degree larger than $10d$. As $|C|=e^{-\Omega(d)}n$, we have, using Theorem~\ref{thm:sparse}, that w.h.p. $C$ touches at most $e^{-\Omega(d)}n$ edges. Deleting all the vertices in $C$ from $G$ we get a graph $G'=(V',E')$ such that $|E'|/|V'| \geq d/3$. Using Theorem~\ref{thm:samotij} we have that every subset $A$ of $V'$ with at most $O(\log d/d)|V'|$ vertices spans at most $O(\log d)|A|$ edges. Using Theorem~\ref{thm:kriv} we get that $G'$ contains a subgraph $H$ on $\Omega(\frac{\log d}{d})n$ vertices that is a $(2/3,\Omega(d/\log d))$-edge expander. Clearly the maximum degree of $H$ is at most $10d$, concluding the proof. \end{proof} Finally, we need the existence of sparse graphs with large treewidth. \begin{prop} There exist graphs with $n$ vertices and $n$ edges of treewidth $\Omega(n).$ \end{prop} \begin{proof} As random $3$-regular graphs have linear treewidth with high probability~\cite{do2022note,feige2016giant}, there are graphs with $m$ vertices, $3m/2$ edges and treewidth $\Omega(m)$. Adding to such a graph $m/2$ isolated vertices results in a graph with the desired property. \end{proof} Using our results we can lower bound the local treewidth of a random graph: \begin{theorem}\label{thm:lowerbound} Let $G:=G(n,d/n)$ be a random graph where $d$ is a large enough constant. Assume $k \leq O(n/d)$. Then w.h.p. $G$ contains a subgraph with $O(k)$ vertices whose treewidth is $\Omega(\frac{k \log d}{\log n}).$ \end{theorem} \begin{proof} We may assume that $k=\Omega(\log n/\log d)$; otherwise the lower bound in the Theorem is $O(1)$ and holds trivially.
Let $H$ be a graph with $s$ vertices and edges and treewidth $\Omega(s)$, where $1\leq s \leq \frac{n\log d}{d \log n}$. By Theorems~\ref{thm:minor_edge_expander} and~\ref{thm:subgraph}, $G$ contains an embedding $H'$ of $H$ of width $O(\log n/ \log d)$. It follows that $H'$ has at most $O(s\log n/ \log d)$ vertices and treewidth $\Omega(s)$. Taking $s=\Theta\left(\frac{k\log d}{\log n}\right)$, which is admissible as $k \leq O(n/d)$ and $k=\Omega(\log n/\log d)$, yields a subgraph with $O(k)$ vertices and treewidth $\Omega(\frac{k \log d}{\log n})$. \end{proof} Lower bounds on the local treewidth can be used to provide upper bounds on the size of a smallest non-planar subgraph of $G(n,d/n)$. \begin{corollary} Let $G:=G(n,d/n)$ where $d$ is a sufficiently large constant. Then with high probability $G$ contains a subgraph with $O\left(\left(\frac{\log n}{\log d}\right)^2\right)$ vertices that is non-planar. \end{corollary} \begin{proof} This follows from Theorem~\ref{thm:lowerbound} along with the fact that every $m$-vertex planar graph has treewidth $O(\sqrt{m}).$ \end{proof} \subsection{Local treewidth of random regular graphs}\label{sec:random-reg} Similar bounds on the local treewidth of random regular graphs $G(n,d)$ can be established via similar arguments to those used for $G(n,d/n).$ For the upper bound, one can use the fact~\cite{coja2014contagious} that for every $k<nd/4$ distinct unordered pairs of vertices, the probability they all occur as edges simultaneously in $G(n,d)$ is at most $(2d/n)^k$, and then apply nearly identical arguments to those in Theorem~\ref{thm:upperbound}. For the lower bound one can use the fact that w.h.p. $G(n,d)$ is a $(1/2,\Omega(d))$-edge expander~\cite{bollobas1988isoperimetric,kolesnik2014lower}, from which it easily follows that it is also a $(2/3,\Omega(d))$-edge expander (for sets of size larger than $n/2$, count the edges leaving the complementary set), and afterwards use Theorem~\ref{thm:minor_edge_expander}. We summarize this with the following Theorem: \begin{theorem}\label{thm:regular} Suppose that $d>2$ is a constant and $k\leq n^{1-\epsilon}$ for some constant $\epsilon \in (0,1)$. Then for $G = G(n, d)$ we have that w.h.p.: \begin{align*} \Omega\left(\frac{k\log d}{\log n} \right)\leq t_k(G) \leq 3 + O\left(\frac{k\log d}{\log n} \right). \end{align*} \end{theorem} \section{Local treewidth of noisy graphs}\label{sec:noisy} We study the local treewidth of noisy graphs. Recall that in this model there is a base $n$-vertex graph $G$ with maximum degree $\Delta$; on top of this base graph, every non-edge of $G$ is added independently with probability $1/n$. All proofs missing from this section can be found in the Appendix. Our main result is: \begin{theorem}\label{thm:smallworld} Let $G$ be an $n$-vertex connected graph of maximum degree $\Delta$. Suppose that we add every non-edge of $G$ to $G$ with probability $1/n$ independently of all other random edges. Call the resulting graph $G'$. With high probability, then, $t_k(G') \leq O(t_k(G) + r)$, where \begin{align*} r=3 + O\left(\frac{k (\log k+ \log \Delta)}{\log n}\right). \end{align*} \end{theorem} To prove Theorem~\ref{thm:smallworld} we need several Lemmas. The first is due to~\cite{bagchi2006effect}. While tighter bounds are known~\cite{beveridge1998random}, the simpler bound from~\cite{bagchi2006effect} suffices to establish our asymptotic upper bounds for the local treewidth. \begin{lemma} \label{lem:bagchi} Let $G$ be an $n$-vertex graph of maximum degree $\Delta$. Then the number of connected subgraphs of $G$ with $k$ vertices is at most $n\Delta^{2(k-1)}$. \end{lemma} \begin{lemma} Suppose that we have a graph $H$ composed of $k$ connected components $C_1, \ldots, C_k$.
Suppose that we merge these connected components by adding exactly $k-1$ edges to produce a connected graph $G$. If $t=\max\{ \tw(C_1), \ldots, \tw(C_k) \}$ then the treewidth of $G$ is at most $\max\{t, 1\}$. \end{lemma} \begin{proof} For connected components $C_1, \dots, C_k$ consider tree decompositions $T_1, \allowbreak \dots, T_k$ with widths at most $t$; these exist since the treewidth of each $C_i$ is at most $t$ for all $1 \le i \le k$. Suppose $C_i$ and $C_j$ are connected in $G$ by an edge from vertex $v_i$ in $C_i$ to $v_j$ in $C_j$. Take the corresponding tree decompositions $T_i$ and $T_j$ and choose arbitrary bags containing $v_i$ and $v_j$ respectively; introduce a new bag $\{v_i, v_j\}$ and connect this to both. Repeating this process for all $k - 1$ edges added to $H$ connects all $T_1, \dots, T_k$. We claim the resulting graph is a valid tree decomposition of $G$ with width at most $\max\{t,1\}$, proving the lemma. Edge counting certifies the resulting connected graph is a tree. Furthermore, every edge in $G$ has some bag containing its endpoints: edges from $H$ have such a bag in some $T_i$, while the remaining edges explicitly have a bag with their endpoints as constructed above. Finally, since each newly introduced bag shares a vertex with each of its two neighboring bags, the bags containing any given vertex still form a subtree, satisfying the final requirement for tree decompositions. \end{proof} Finally we need the following Lemma: \begin{lemma}\label{lem:ave} Let $G$ be a graph with average degree $d$. Then $G$ contains a subgraph $H$ with minimum degree at least $d/2$. \end{lemma} Armed with this we can prove Theorem~\ref{thm:smallworld}. \begin{proof} We may assume $\Delta,k \leq n^{\epsilon}$ for sufficiently small $\epsilon$, as otherwise the upper bound in the Theorem follows from the fact that $t_k(G') \leq k.$ We begin by upper bounding $t_k(G')$ for connected subgraphs of $G'$ of size $k$. Later we show how to lift the connectedness requirement. We consider two possibilities for $G$. In the first, suppose every subgraph of $G$ on $k$ vertices with $\ell \le k$ connected components has at most \begin{align*} \binom{k}{2} - (r + \ell - 1) \end{align*} edges. Now fix such a subgraph $H$ of $G$ with $k$ vertices and $\ell \le k$ connected components; we upper bound the probability the corresponding subgraph $H'$ in $G'$ (the subgraph induced on the vertices of $H$ in $G'$) is connected and has treewidth at least $t_k(G) + r$. To that end, the probability that a fixed set of $\ell - 1$ random edges connects the $\ell$ connected components of $H$ into a single component in $G'$ is $n^{-(\ell - 1)}$. By construction, the largest component in $H$ has size no larger than $k - \ell + 1$. Therefore, by the merging lemma above, the treewidth of $H$ together with the $\ell - 1$ connecting edges is upper bounded by $t_{k - \ell + 1}(G)$. For $H'$ to additionally have treewidth at least $t_k(G) + r$, a minimum of $r$ additional random edges must be present among the vertices of $H$ in $G'$, as $t_{k - \ell + 1}(G) \leq t_k(G)$. Therefore, we can upper bound the probability that $H'$ is both connected and has treewidth at least $t_k(G) + r$ by \begin{align*} \binom{k^2}{r + \ell - 1} n^{-(r + \ell - 1)}. \end{align*} We count the number of possible subgraphs $H$ with $k$ vertices and $\ell$ connected components. An upper bound can be derived by noticing there are $\binom{k - 1}{\ell - 1}$ ways to choose the positive sizes of the components, denoted $s_1, \dots, s_\ell$.
For each set of sizes, we can bound the choices of components using Lemma~\ref{lem:bagchi}: \begin{align*} \prod_{i = 1}^{\ell} n\Delta^{2(s_i -1)} &\le n^\ell \Delta^{2 \sum_{i = 1}^\ell s_i} = n^{\ell}\Delta^{2k}. \end{align*} We can now upper bound the probability there exists some $k$-vertex subgraph of $G$, $H$, whose corresponding subgraph in $G'$ is connected and has treewidth at least $t_k(G) + r$, denoted $p(n, \Delta, k, r)$. Since $G$ is connected, this is equivalent to the probability that some connected $k$-vertex subgraph of $G'$ has treewidth at least $t_k(G) + r$. Now take a union bound over all possible subgraphs $H$ on $k$ vertices and $1 \le \ell \le k$ connected components; in particular, for each possible choice of $\ell$, we multiply an upper bound on the number of possible subgraphs by the maximum probability each subgraph ends up connected and with large treewidth, derived above. Applying the inequality $\binom{q}{s} \le q^s$, we arrive at the following upper bound: \begin{align*} p(n, \Delta, k, r) &\le \sum_{\ell = 1}^k \binom{k - 1}{\ell - 1} n^{\ell}\Delta^{2k} \times \binom{k^2}{r + \ell - 1} n^{-(r + \ell - 1)} \\ &\le \frac{k^{2(r+k)}\Delta^{2k}}{n^{r - 1}} \sum_{\ell = 1}^k k^{\ell} \\ &\le \frac{k^{2r}\Delta^{2k}}{n^{r - 1}} k^{3k+1}. \end{align*} Taking logarithms and using our assumptions on $k,\Delta$ we have that for $$r= O\left(\frac{k (\log k+ \log \Delta)}{\log n}\right)$$ it holds that $p(n,\Delta,k,r) \leq O(1/n).$ Now consider the second case where $G$ has some $k$-vertex subgraph $H$ with $\ell \le k$ connected components and more than \begin{align*} \binom{k}{2} - (r + \ell - 1) \ge {k \choose 2}- 2k + 1 \end{align*} edges. We may assume without loss of generality that $r \leq k$, as otherwise the inequality $t_k(G') \leq O(t_k(G)+r)$ trivially holds---in fact, $t_k(G') \leq r$. Given its edge count, $H$ has average degree $\Omega(k)$. It follows from Lemma~\ref{lem:ave} that $H$ contains a subgraph $\tilde{H}$ of minimum degree $\Omega(k)$. Therefore $\tilde{H}$, and hence $H$, have treewidth $\Omega(k)$, since any graph of treewidth $w$ must contain a vertex of degree at most $w$. Hence, we get that $t_k(G') \leq O(t_k(G)+r)$ as $t_k(G') \leq k$; in fact, $t_k(G') \leq O(t_k(G))$ in this case. To conclude the proof we need to consider $k$-vertex subgraphs of $G'$ that are not necessarily connected; to handle these, we consider their connected components. As we have shown, the probability that there is a connected subgraph with $k'$ vertices, for a fixed $k'<k$, with treewidth larger than $t_{k'}(G)+O(r(k',n,\Delta))$ is $O(1/n)$. Therefore a simple union bound argument over all $k'\leq k$ (using $k\leq n^{\epsilon}$), together with the fact that the treewidth of a graph with components $C_1, \ldots, C_j$ is $\max\{\tw(C_1), \ldots, \tw(C_j)\}$, concludes the proof. \end{proof} Observe that, similarly to the $G(n,d/n)$ case, a simple union bound argument implies that w.h.p. $t_s(G') \leq O(t_s(G) + r)$ for all $s \leq k.$ Finally, the upper bound in Theorem~\ref{thm:smallworld} is nearly tight for certain noisy trees. \begin{theorem} Consider the $n$-vertex path, $P_n$. Suppose we add every non-edge to $P_n$ with probability $\epsilon/n$, where $\epsilon>0$ is an arbitrary constant. Call the perturbed graph $P'.$ Then with high probability for any $k=\Omega(\log n)$, there exists a subgraph of $P'$ with $O(k)$ vertices with treewidth $\Omega(k/ \log n).$ \end{theorem} \begin{proof} Fix $B$ to be a large enough constant.
Chop $P_n$ into $n/B$ disjoint paths\footnote{To simplify the presentation we assume $B$ divides $n$. Similar ideas work otherwise.} $A_1, \ldots, A_{n/B}$, each of length $B$. Consider now the graph $G$ whose vertex set is $\{A_1, \ldots, A_{n/B}\}$, where two vertices $A_i$ and $A_j$ are connected if there is an edge (in $P'$) connecting $A_i$ to $A_j$. The probability that two vertices in $G$ are connected is at least $$1-\left(1-\epsilon/n\right)^{B^2}\geq \epsilon B^2/2n.$$ For a fixed graph $H$ with $s$ vertices and edges, it is known~\cite{krivelevich2019expanders} that the supercritical random graph $G(m, \frac{1+\epsilon}{m})$ contains $H$ as a minor as long as $s= O(m/\log m)$. Furthermore, the width of the embedding is $O(\log m)$. As $B$ is a large enough constant, the probability that two vertices in $G$ are connected is larger than $\frac{1+\epsilon}{n/B}$. Therefore we can embed $H$ into a subgraph $H'$ of $G$ whose size is at most $O(s\log n)$, such that $H$ is a minor of $H'$. Furthermore, as the vertices of $G$ are paths of length $B$ (in $P_n$), the embedding of $H$ into $G$ directly translates to an embedding of $H$ into $P'$ whose width is $O(B \log n)=O(\log n)$. Choosing $H$ with $s=\Theta(k/\log n)$ vertices and edges and treewidth $\Omega(s)$ concludes the proof. \end{proof} \section{Algorithms for graphs of bounded treewidth}\label{sec:treewidth} In this section, we build on the results of~\cite{cordasco2021parameterized} to provide polynomial time algorithms for bounded treewidth instances of minimizing contagion and stopping contagion. As we sketched in our introduction, we generalize the influence diffusion minimization problem introduced by the authors and use a similar dynamic programming algorithm. Our main result is the following algorithm for graphs of bounded treewidth $\tau$: \begin{theorem}\label{thm:treewidth} Let $G$ be an $n$-vertex graph with maximum degree $\Delta$, maximum threshold $r$ and treewidth $\tau$. Then both minimizing and stopping contagion can be solved in time $O\left(\tau 1296^\tau \min\{r, \max\{\Delta, 2\} \}^{4\tau} \poly(n) \right)$. \end{theorem} For a proof, including a description of our algorithm and runtime analysis, please see the Appendix. Note that to combine subproblems, we must effectively account for the effect of infected vertices elsewhere on each subgraph we consider. We therefore essentially solve minimizing contagion and stopping contagion in a more flexible infection model, where thresholds are allowed to differ between vertices but remain at most $r$; as a result, our theorem cleanly translates to this setting as well. \section{Algorithms for minimizing and stopping contagion in grids, random graphs and noisy trees}\label{sec:algorithms} In this section we study how to solve minimizing contagion and stopping contagion when the set of seeds $A$ is not too large and does not spread by too much. We use this along with local treewidth upper bounds to devise algorithms for minimizing and stopping contagion in random graphs. We also consider algorithms for grids and planar graphs. As usual all missing proofs appear in the Appendix. Using similar ideas to~\cite{cordasco2021parameterized} (who consider vertex deletion problems) we have the following result for the minimizing contagion problem. \begin{theorem}\label{thm:exp} Suppose there are $t$ edges whose removal ensures no more than $r$ vertices are infected from the seed set $A$. Then minimizing contagion can be solved optimally in (randomized) time $2^{r+t}\poly(n)$ where $n$ is the number of vertices.
\end{theorem} \begin{proof} Color every vertex in $V \setminus A$ independently blue or red, each with probability $1/2$. Consider a solution of $t$ edges such that after removing these edges a set $B$ of cardinality at most $r$ is infected from $A$. With probability at least $2^{-r}$ all vertices in $B$ are red. For each of the $t$ edges in an optimal solution, exactly one endpoint does not get infected from $A$ as a result of removing the edge. With probability at least $2^{-t}$, all these $t$ endpoints belonging to edges in an optimal solution are colored blue. Assuming the two events above occur, we run the contagion process on $A$ together with the red vertices only and find the set of vertices $B$ infected from $A$. Once we recover $B$ we can remove the minimum set of edges from $G$ ensuring only $B$ is infected from $A$. Therefore we can solve the problem optimally with probability (at least) $2^{-(t+r)}$. Repeating this process independently $2^{r+t+10}$ times results in a randomized algorithm solving the problem with probability at least $2/3$. The running time is $2^{r+t}\poly(n)$ as desired. \end{proof} The algorithm above can become slow if $r$ or $t$ are very large. Additionally, we do not know how to get similar results (e.g., algorithms of running time $2^{|A|}\poly(n)$) for stopping contagion. Below we show that we can improve upon this algorithm for graphs that have some local sparsity conditions. A key property we use is that for both minimizing contagion and stopping contagion with a seed set $A$, we may restrict our attention to the subgraph of $G$ induced on $\langle A \rangle$. \subsection{Grids and planar graphs} For the $n \times n$ grid where all vertices have threshold at least $2$ we have the following ``bounded spread'' result: \begin{lemma} In the $n \times n$ grid every set of size $k$ infects no more than $O(k^2)$ vertices. \end{lemma} \begin{proof} Embed the $n \times n$ grid $G=\{1, \ldots, n\}\times \{1, \ldots ,n\}$ in $H=\{0, \ldots, n+1\}\times \{0, \ldots, n+1\}$ in the natural way. Given a subset $A$ of $G$, the \emph{perimeter} of $A$ is the set of all vertices not belonging to $A$ having a neighbor in $A$. The crucial observation is that if $A$ is a set of infected seeds, the perimeter of $A$ can never increase during the contagion process~\cite{balogh1998random}. As the perimeter of $A$ is at most $4k$, the infected set has perimeter at most $4k$ as well. The result follows as every set $A \subseteq \{1, \ldots, n\}\times \{1, \ldots ,n\}$ of size $m$ has perimeter $\Omega(\sqrt{m})$. \end{proof} Using Theorem~\ref{thm:exp} we have that minimizing contagion on the $n$ by $n$ grid with $k=|A|$ can be solved in time $2^{O(k^2)}\poly(n)$. We simply apply the algorithm in Theorem~\ref{thm:exp} to $\langle A\rangle$. Alternatively we can use exhaustive search over all subsets of edges in the graph induced on $\langle A\rangle$ to solve\footnote{For minimizing contagion using the FPT algorithm may be preferable as it may run significantly faster if the optimal solution has cardinality $o(k^2)$.} both minimizing and stopping contagion. We can do better using the following fact: \begin{lemma} Let $G$ be a subgraph of an $n$ by $n$ grid with $r$ vertices. Then $G$ has treewidth $O(\sqrt{r})$. \end{lemma} \begin{proof} Every $m$-vertex planar graph has treewidth $O(\sqrt{m})$. \end{proof} We get: \begin{corollary} Let $G=(V,E)$ be the $n$ by $n$ grid. Suppose $H=(V,E')$ where $E' \subseteq E$ and every vertex has a threshold of at least $2$.
Let $A$ be the seed set with $k=|A|.$ Then stopping contagion and minimizing contagion can be solved in time $2^{O(k)}\poly(n)$. \end{corollary} \begin{proof} For solving either problem we only need to consider the subgraph of $G$ induced on $\langle A\rangle$. By the bounded spread lemma above, $|\langle A\rangle| = O(k^2)$, so this subgraph has treewidth $O(k)$ by the preceding lemma, and the result now follows from Theorem~\ref{thm:treewidth}. \end{proof} Similarly, for a planar graph where every vertex has threshold at least $2$ and at most $b$ and every subset $A$ of size $k$ infects at most $f(k)$ vertices, stopping contagion can be solved in time $b^{O(\sqrt{f(k)})}\poly(n)$. \subsection{Sparse random graphs} Consider the random graph $G(n,d/n)$ assuming all vertices have threshold larger than $1$. Assuming $d \leq n^{1/2-\delta}$ for $\delta>0$, it is known~\cite{feige2017contagious} that with high probability every set $A$ of size $O(\frac{n}{d^2\log d })$ does not infect more than $O(|A|\log d)$ vertices. Furthermore, it is known~\cite{feige2017contagious} that any set of size $O(n/d^2)$ induces with high probability a subgraph of constant average degree. It follows that assuming $|A|=O(\frac{n}{d^2\log d })$ the optimal solution to minimizing contagion is of size $O(|A|\log d)$. Therefore in random graphs with $|A| \leq O(\frac{n}{d^2\log d})$, minimizing contagion can be solved using Theorem~\ref{thm:exp} in time $2^{O(|A|\log d)}\poly(n)$. As before, exhaustive search over all edges on the graph induced on $\langle A\rangle$ can solve both minimizing and stopping contagion in time $2^{O(|A|\log d)}\poly(n)$ as well. Using our local treewidth estimates, Theorem~\ref{thm:treewidth}, the bounded spread property and the fact that w.h.p. the maximum degree of $G$ is $O(\log n/ \log \log n)$ we have the following improvement in the running time: \begin{theorem} Let $G:=G(n,d/n)$. Let $k=|A|$. Suppose that $k \leq O(\min(n^{1-\epsilon},\frac{n}{d^2\log d}))$ and $d \leq n^{1/2-\delta}$, and that every vertex has threshold larger than $1$. Then w.h.p. both minimizing contagion and stopping contagion can be solved in time $$\exp\left(O\left(\frac{k\log^2 d \log \log n}{\log n}\right)\right)\poly(n).$$ \end{theorem} \begin{proof} As before we can solve either problem on $\langle A\rangle$ using the upper bound on the treewidth from Theorem~\ref{thm:upperbound}, the fact that with high probability $|\langle A\rangle| \leq O(|A|\log d)$ and the algorithm for graphs of bounded treewidth for stopping or minimizing contagion. \end{proof} One can derive similar algorithms for stopping contagion in random $d$-regular graphs (where $d$ is a constant). As the details are very similar to the analysis of the binomial random graph they are omitted. \subsection{Noisy trees} We now devise an algorithm for stopping contagion and minimizing contagion for noisy trees. To achieve this we first prove that in forests no set of seeds spreads by much, and furthermore this property is maintained after adding a ``small'' number of edges on top of the edges of the forest. Then we use similar ideas to Theorem~\ref{thm:smallworld} and prove that noisy trees are locally sparse in the sense that every subset of vertices of cardinality $k$ spans w.h.p. $k+o(k)$ edges, assuming $k$ is not too large. We use this property to prove that any subset $A$ of $k$ seeds infects w.h.p. $O(k)$ vertices. Thereafter we can use the algorithms for bounded treewidth to solve either minimizing contagion or stopping contagion on $\langle A \rangle$. Let $T$ be an $n$-vertex tree with maximum degree $\Delta$ and let $T'$ be the noisy tree obtained from $T$.
Here we show that with high probability every set of $k\leq n^{\epsilon}$ seeds does not infect more than $ck$ additional nodes, where $c$ is an absolute constant. One key ingredient is proving that such noisy trees are locally sparse. \begin{theorem}\label{thm:spreadtree} Let $T$ be a tree of maximum degree $\Delta$ and let $T'$ be the noisy graph obtained from $T$. Suppose $k, \Delta\leq n^{\epsilon}$. Then with high probability every set of $k$ seeds infects no more than $ck$ vertices, where $c$ is some absolute constant. \end{theorem} We need a few preliminaries before we can prove Theorem~\ref{thm:spreadtree}. When $T$ is a tree, a subset of $k$ vertices inducing $\ell$ connected components spans exactly $k-\ell$ edges. A simple modification of the proof of Theorem~\ref{thm:smallworld} yields: \begin{theorem}\label{thm:edgespan} Let $T'$ be the noisy tree. Then, with probability $1 - O(1/n)$, every connected subset of vertices of size $k$ in $T'$ spans at most \begin{align*} (k - 1) + 1 + O\left(\frac{k (\log k+ \log \Delta)}{\log n}\right) \end{align*} edges. \end{theorem} \begin{proof} First observe that if $k$ or $\Delta$ are larger than $n^{\Omega(1)}$, the statement in the Theorem follows immediately from the fact that in $G(n,1/n)$ w.h.p. every subset $S$ of vertices spans at most $2|S|$ edges~\cite{krivelevich2015smoothed} (on top of the at most $|S|-1$ edges of $T$ inside $S$). Hence we shall assume that $\Delta,k \leq n^{\epsilon}.$ We employ a nearly identical argument to that used in the proof of Theorem~\ref{thm:smallworld}. Here, we are interested in bounding the probability that some $k$-vertex subgraph of $T$ ends up connected in $T'$ with at least $k - 1 + r$ edges. Now fix such a $k$-vertex subgraph $H$ of $T$ with $\ell$ connected components. Since $T$ is a tree, any such subgraph will contain exactly $k - \ell$ edges; $\ell - 1$ random edges will be required to connect $H$ and $r$ additional to reach the $k - 1 + r$ edge threshold. As argued in the proof of Theorem~\ref{thm:smallworld}, this happens with probability upper bounded by \begin{align*} \binom{k^2}{r + \ell - 1} n^{-(r + \ell - 1)}. \end{align*} We can now take a union bound over all possible subgraphs (of $T$) on $k$ vertices and $1 \le \ell \le k$ connected components in $T$; this yields the same upper bound, computed in the proof of Theorem~\ref{thm:smallworld}, on the probability that $T'$ contains a connected $k$-vertex subgraph with at least $k - 1 + r$ edges: \begin{align*} \frac{k^{2r}\Delta^{2k}}{n^{r - 1}} k^{3k+1}. \end{align*} Setting $r$ as in the claim ensures the probability that some $k$-vertex connected subgraph in $T'$ has at least $k - 1 + r$ edges is $O(1/n)$, as desired. \end{proof} We now easily extend this argument to all connected subgraphs of size {\bf at most} $k$ in $T'$, rather than only those of size exactly $k$. \begin{corollary}\label{cor:unionbound} Let $T, T'$ be as in Theorem~\ref{thm:edgespan}. Suppose $t = n^{\epsilon}$. Then w.h.p. every connected subgraph of $T'$ with $k \leq t$ vertices has at most \begin{align*} (k - 1) + 1 + O\left(\frac{k (\log k+ \log \Delta)}{\log n}\right) \end{align*} edges. \end{corollary} \begin{proof} From Theorem~\ref{thm:edgespan}, we know the probability that some connected subgraph on $k$ vertices exceeds this number of edges is $O(1/n)$. Taking a union bound over all $k \leq t$ concludes the proof. \end{proof} We can now prove: \begin{theorem} Let $T,T',k$ be as in Theorem~\ref{thm:edgespan}.
Then, w.h.p. every subset of vertices of size $k$ in $T'$ spans at most \begin{align*} (k - 1) + 1 + O\left(\frac{k (\log k+ \log \Delta)}{\log n}\right) \end{align*} edges. \end{theorem} \begin{proof} The result of Theorem~\ref{thm:edgespan} extends to arbitrary (not necessarily connected) subgraphs of $T'$ by decomposing an arbitrary subgraph of $T'$ with $k$ vertices into its $\ell$ connected components, with sizes $k_1 + \dots + k_\ell = k$. Applying the bound derived above to each of these connected components, we have that for some constants $c_1, \dots, c_\ell$, with high probability, for $C=\max(c_1, \ldots, c_\ell)$: \begin{align*} \sum_{i = 1}^\ell \left[ (k_i - 1) + 1 + c_i \frac{k_i (\log k_i+ \log \Delta)}{\log n} \right] &\le k + \frac{C}{\log n} \sum_{i = 1}^\ell k_i \log k_i + C \frac{k \log \Delta}{\log n} \\ &\le k + \frac{C}{\log n} k \log k + C \frac{k \log \Delta}{\log n} \\ &= (k - 1) + 1 + O \left( \frac{k (\log k + \log \Delta)}{\log n} \right). \end{align*} The second inequality follows from rewriting $\log k_i$ as $\log k-\log (k/k_i)$ and dropping the subtracted terms. This concludes the proof. \end{proof} As before, it is easy to extend the Theorem above to all subsets of $T'$ of size at most $k$; details are omitted. We proceed to prove the bounded spread property of noisy trees. We need a few more auxiliary Lemmas. \begin{lemma}\label{lem:spreadtree} Let $G$ be a forest. Suppose the threshold of every vertex is at least $2$. Then any set of $k$ seeds in $G$ activates less than $k$ additional vertices in $G$. \end{lemma} \begin{proof} If a set of seeds $A$ activates a set $B$ of additional vertices then $|E(A \cup B)| \geq 2|B|$, as every vertex in $B$ must be adjacent to two active vertices. On the other hand, as $G$ is a forest we have that $|E(A \cup B)|\leq |A|+|B|-1$. Therefore $|B|<|A|$, which is what we wanted to prove. \end{proof} \begin{lemma}\label{lem:edgeaddition} Let $G$ be a graph and suppose we add a set of $k$ edges to $G$. Call the resulting graph $H$. Then $m(H,2) \geq m(G,2)-k$ (recall that $m(G,2)$ denotes the minimum size of a contagious set of $G$ when every vertex has threshold $2$). \end{lemma} \begin{proof} Suppose towards contradiction that $m(H,2)<m(G,2)-k$. Consider a contagious set $A$ in $H$ of minimal size. $A$ can be turned into a contagious set in $G$ by adding no more than $k$ vertices: we run the contagion process on $G$ and whenever we reach a vertex that was infected from $A$ (in $H$) because of an additional edge in $H$ we simply add it to $A$. The total number of vertices added in this way is at most $k$. Therefore, if $m(H,2)<m(G,2)-k$ we would have found a contagious set in $G$ of size smaller than $m(G,2)$, which is absurd. \end{proof} Lemmas~\ref{lem:spreadtree} and~\ref{lem:edgeaddition} easily extend to the case where the threshold of every vertex is at least $2$ (rather than exactly $2$). We can now prove Theorem~\ref{thm:spreadtree}. \begin{proof} We prove the result for the case where every vertex has threshold exactly $2$. The result when the thresholds are at least $2$ is similar. Consider a set $A$ of $k$ seeds. Suppose $A$ infects an additional set of vertices $B$. We now show that w.h.p., for a large enough constant $c$, $|B|$ must be smaller than $c|A|$. Suppose otherwise, namely that $A$ infects at least $c|A|$ additional vertices for some fixed constant $c$; without loss of generality $A$ infects exactly $c|A|$ additional vertices. Assuming that $c$ is sufficiently large, we have, using Theorem~\ref{thm:edgespan}, that w.h.p. the number of edges in $T'$ added on top of $F$, the subgraph of $T$ induced on $A \cup B$, is smaller than $(|A|+|B|)/4$.
In addition, as $F$ is a forest, Lemma~\ref{lem:spreadtree} yields the inequality $m(F,2) \geq (|A|+|B|)/2$. Therefore, by Lemma~\ref{lem:edgeaddition}, in $T'$ the subgraph $F'$ induced on $A \cup B$ satisfies w.h.p. $m(F',2) \geq (|A|+|B|)/4$. On the other hand, we have that $m(F',2) \leq |A|$ as we assume $A$ infects $B$ in $F'$. Taking $c> 3$ leads to a contradiction, concluding the proof. \end{proof} As the star with $n-1$ leaves shows, the spread of a subset of size $2$ in a noisy tree with maximum degree $\Omega(n)$ can w.h.p. be $\Omega(\log n).$ In addition, we believe that this is the worst possible spread: with high probability, no subset of size $k$ in a noisy tree infects more than $O(k \log n)$ vertices. It seems likely that the restriction on $k$ in Theorem~\ref{thm:spreadtree} can be lifted and that the Theorem holds for arbitrary $k$; whether this is the case is left for future work. Finally, we can leverage Theorem~\ref{thm:spreadtree} to get algorithms for stopping contagion in noisy trees: \begin{theorem} Let $T$ be a tree and let $T'$ be the noisy tree obtained from $T$. Assume $|A|=k,\Delta \leq n^{\epsilon}$ and that every vertex has threshold larger than $1$. Let $m:=\max(\log \log n,\log \Delta)$. Then both minimizing contagion and stopping contagion can be solved in $T'$ in time $$\exp\left(O\left(\frac{k(\log k+\log \Delta)m}{\log n}\right)\right)\poly(n).$$ \end{theorem} \begin{proof} This follows from the fact that w.h.p. $|\langle A \rangle|=O(k)$ (Theorem~\ref{thm:spreadtree}), the upper bound on the treewidth of the subgraph induced on $\langle A \rangle$ from Theorem~\ref{thm:smallworld}, Theorem~\ref{thm:treewidth}, and the fact that the maximum degree of $G(n,1/n)$ is $O(\log n/ \log \log n)$ with high probability. \end{proof} \section*{Acknowledgements} We are very grateful to Michael Krivelevich, who provided numerous valuable comments and links to relevant work. Josh Erde offered useful feedback. Finally, we would like to thank the anonymous referees for useful comments and suggestions. Any typos or inaccuracies are the sole responsibility of the authors of this paper. \bibliographystyle{plain}
\section{Introduction.} McMullen's g-conjecture \cite{McMullen-g} characterizes all possible face numbers of simplicial polytopes $\Delta$. The sufficiency part of the conjecture was proved by Billera and Lee \cite{BilleraLee}. Stanley \cite{StanleyHL} proved the necessity by applying the Hard Lefschetz theorem to the cohomology ring $H(\Delta)$. The Hard Lefschetz theorem is traditionally proved together with the Hodge-Riemann bilinear relations, which state that a quadratic form is positive definite on the primitive cohomology. When trying to generalize the g-conjecture from simplicial polytopes to simplicial spheres, one is faced with the fact that there is no convexity and hence no positivity for the Hodge-Riemann bilinear relations. Proving Hard Lefschetz without Hodge-Riemann bilinear relations is very hard (e.g. see \cite{Adiprasito}). However, in order to deduce Hard Lefschetz from Hodge-Riemann relations, one does not need positivity. Indeed, anisotropy of the quadratic form is sufficient. Papadakis and Petrotou \cite{PapadakisPetrotou} prove a very strong version of anisotropy of the quadratic form on not just the primitive cohomology but the whole middle degree cohomology. \begin{theorem}[Papadakis, Petrotou] \label{thm-PP} Let $\Delta$ be a simplicial sphere of dimension $n-1=2m-1$ over a field $k$ of characteristic $2$. Let $K=k(\a{i,j})$ be the field of rational functions where $\a{i, j}$ are the coefficients of a linear system of parameters in the definition of $H(\Delta)$. Then the quadratic form defined on the middle degree cohomology $H^m(\Delta)$ by multiplication \[ Q(g) = g^2 \in H^{2m}(\Delta) \simeq K\] is anisotropic. \end{theorem} Papadakis and Petrotou used Theorem~\ref{thm-PP} to prove the Hard Lefschetz theorem for all spheres in characteristic $2$, in both even and odd dimensions. The Hard Lefschetz theorem then implies the g-conjecture for such spheres. Our first result in this article is to give an explicit description of the quadratic form $Q$ that holds in any characteristic (Theorem~\ref{thm-Q} below). This description allows us to prove a conjecture stated in \cite{PapadakisPetrotou} that generalizes the main ingredient in the proof of Theorem~\ref{thm-PP}. We then give a simplified proof of Theorem~\ref{thm-PP} as well as its counterpart in odd dimension $n$. One can define a Hodge-Riemann type quadratic form not just on the middle degree cohomology. Let \[ l = x_1 + x_2 + \ldots + x_N \in H^1(\Delta).\] The quadratic form $Q_l$ on $H^j(\Delta)$ for $j\leq n/2$ is \[ Q_l(g) = l^{n-2j} g^2 \in H^n(\Delta) \simeq K.\] This form is defined for both even and odd $n$. The following is the counterpart of Theorem~\ref{thm-PP} for odd $n$. \begin{theorem} \label{thm-lef} Let $\Delta$ be a simplicial sphere of dimension $n-1 = 2m$ over a field $k$ of characteristic $2$, and let $K=k(\a{i, j})$. Then the quadratic form $Q_l$ defined on $H^m(\Delta)$: \[ Q_l (g) = l g^2 \in H^{n}(\Delta) \simeq K\] is anisotropic. \end{theorem} A version of Theorem~\ref{thm-lef} can be deduced from Theorem~\ref{thm-PP} in one dimension higher \cite{PapadakisPetrotou}. However, Theorem~\ref{thm-lef} is also a direct application of the conjecture in \cite{PapadakisPetrotou}. Theorem~\ref{thm-PP} and Theorem~\ref{thm-lef} (or one of these theorems alone) can be used to prove the Hard Lefschetz theorem. The Hard Lefschetz theorem then implies that the quadratic form $Q_l$ is anisotropic on $H^j(\Delta)$ for any $j\leq n/2$. 
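Explicitly, anisotropy is meant here in the standard quadratic form sense:
\[ Q \ \text{is \emph{anisotropic}} \quad \Longleftrightarrow \quad Q(g) \neq 0 \ \text{ for all } g \neq 0. \]
In particular, a positive definite form is anisotropic, so anisotropy is what remains of the Hodge-Riemann positivity once convexity, and hence positivity, is unavailable.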
It remains an open problem to prove anisotropy in characteristic $0$. The explicit description of the quadratic form allows us to use a specialization argument to show that anisotropy in characteristic $2$ implies the same in characteristic $0$, but this only applies when $\Delta$ is a simplicial sphere in both characteristics, for example when $\Delta$ is a topological sphere. \begin{theorem} \label{thm-2to0} Let $\Delta$ be an orientable simplicial complex that is a sphere in characteristic $2$. Then Theorem~\ref{thm-PP} for even $n$ and Theorem~\ref{thm-lef} for odd $n$ hold over the field $K_0 = {\mathbb{Q}}(\a{i, j})$. \end{theorem} The case where $\Delta$ is a generalized homology sphere in characteristic $0$ but not in characteristic $2$ remains open. In Section~\ref{sec-consec} below we also discuss anisotropy for pseudo-manifolds proved by Adiprasito, Papadakis and Petrotou \cite{APP}. \section{Stanley-Reisner rings} We fix a field $k$ of any characteristic and let $\Delta$ be a simplicial sphere of dimension $n-1$ with vertex set $v_1, \ldots, v_N$. By a simplicial sphere we mean a simplicial complex that is a generalized homology sphere -- the link of every simplex is a homology sphere of the appropriate dimension. Here the homology is computed with coefficients in $k$, hence being a sphere depends on the characteristic of the field. The Stanley-Reisner ring of $\Delta$ over a field $K$ is \[ {\mathcal A}(\Delta) = K[x_1,\ldots, x_N]/I_\Delta,\] where $I_\Delta$ is the ideal generated by all square-free monomials $\prod_{i\in S} x_i$ such that the set $\{ v_i \}_{i\in S}$ does not form a simplex in $\Delta$. The ring ${\mathcal A}(\Delta)$ is a graded $K$-algebra. Given homogeneous degree $1$ elements $\theta_1,\ldots, \theta_n \in {\mathcal A}^1(\Delta)$, we define the cohomology ring \[ H(\Delta) = {\mathcal A}(\Delta)/(\theta_1,\ldots,\theta_n).\] To remove dependence on $\theta_i$, we work with generic parameters: \[ \theta_i = \a{i, 1} x_1 + \a{i, 2} x_2 + \cdots + \a{i, N} x_N, \quad i=1,\ldots,n,\] where $\a{i, j}$ are indeterminates and the field $K$ is the field of rational functions $K=k(\a{i, j})$. We will only consider the generic case. The ring $H(\Delta)$ is a standard graded, Artinian, Gorenstein $K$-algebra of socle degree $n$. The Poincar\'e pairing defined by multiplication \[ H^j(\Delta) \times H^{n-j}(\Delta) \longrightarrow H^n(\Delta) \simeq K\] is a nondegenerate bilinear pairing. \subsection{Piecewise polynomial functions} When the Stanley-Reisner ring ${\mathcal A}(\Delta)$ is defined over the field ${\mathbb{R}}$, we can view elements of the ring as piecewise polynomial functions on a fan. The fan here is the simplicial fan with each simplex in $\Delta$ replaced by a convex cone generated by the simplex. A piecewise polynomial function on the fan $\Delta$ is a collection of polynomial functions $f_\sigma$ on maximal cones that agree on the intersections of cones. The parameters $\theta_1, \ldots, \theta_n$ are piecewise linear functions on the fan. They define a piecewise linear map $\Delta \to V ={\mathbb{R}}^n$. We assume that this map is injective on every cone. Then an element $f\in {\mathcal A}(\Delta)$ is a collection $\{ f_\sigma \}$ of polynomials on $V$, \[ f_\sigma \in {\mathbb{R}}[t_1,\ldots,t_n]\] such that $f_{\sigma_1}$ and $f_{\sigma_2}$ agree on the image of $\sigma_1\cap \sigma_2$. 
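For example, let $\Delta$ be the boundary of a triangle with vertices $v_1, v_2, v_3$, a simplicial sphere of dimension $n-1=1$. The only non-face is $\{v_1,v_2,v_3\}$, so \[ {\mathcal A}(\Delta) = K[x_1,x_2,x_3]/(x_1 x_2 x_3),\] and for generic $\theta_1, \theta_2$ the quotient $H(\Delta)$ has Hilbert function $1,1,1$, with socle degree $n=2$. In the piecewise polynomial picture, the fan consists of three $2$-dimensional cones, one for each edge of the triangle, and an element of ${\mathcal A}(\Delta)$ is a triple of polynomials on these cones agreeing along the three shared rays.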
When ${\mathcal A}(\Delta)$ is defined over an arbitrary field $K$, it is the coordinate ring of the affine scheme $\operatorname{Spec} {\mathcal A}(\Delta)$. This scheme consists of linear $n$-dimensional spaces, one for each maximal simplex $\sigma$, glued along hyperplanes. The linear parameters $\theta_i$ define a morphism of this scheme to the $n$-space $V = \operatorname{Spec} K[t_1,\ldots, t_n]$, and again we may view an element $f\in {\mathcal A}(\Delta)$ as a collection of polynomials, one for each maximal simplex $\sigma$, \[ f= \{ f_\sigma \}, \quad f_\sigma \in K[t_1,\ldots, t_n]. \] \subsection{Brion's integration map} Brion~\cite{Brion} defined the isomorphism \[ H^n(\Delta) \to K \] in terms of piecewise polynomial functions on the fan $\Delta$. We describe this map in the more general situation where the field $K$ is not necessarily ${\mathbb{R}}$. The isomorphism depends on a fixed volume form \[ t_1\wedge t_2 \wedge \cdots \wedge t_n \in \Lambda^n V^*,\] and an orientation on $\Delta$ when the characteristic of $K$ is not $2$. By an orientation we mean a compatible orientation on maximal simplices, given, for example, by ordering the vertices of each such simplex. When the field $K$ has characteristic $2$ then any ordering of the vertices is considered positive. Let $\sigma$ be a maximal simplex in $\Delta$, and let $v_{j_1},\ldots,v_{j_n}$ be its vertices ordered positively. The piecewise linear map $\theta$ gives an isomorphism \begin{align*} K[t_1,\ldots,t_n] &\simeq K[x_{j_1},\ldots, x_{j_n}] \\ t_i &\mapsto \a{i, j_1} x_{j_1} + \cdots + \a{i, j_n} x_{j_n}. \end{align*} Define the polynomial $\chi_\sigma \in K[t_1,\ldots, t_n]$ as \[ \chi_\sigma = c_\sigma x_{j_1} x_{j_2} \cdots x_{j_n},\] where the constant $c_\sigma \in K$ is such that \[ c_\sigma x_{j_1} \wedge x_{j_2} \wedge \cdots \wedge x_{j_n} = t_1\wedge t_2 \wedge \cdots \wedge t_n.\] One can easily compute that \begin{equation} \label{eq-dets} c_\sigma= \det\sigma = \det \begin{bmatrix} \a{1, j_1} & \a{1, j_2} & \ldots & \a{1, j_n} \\ \vdots & \vdots & \ddots & \vdots \\ \a{n, j_1} & \a{n, j_2} & \ldots & \a{n, j_n} \end{bmatrix}. \end{equation} Let $\langle \cdot \rangle: {\mathcal A}^n(\Delta) \to K$ be the linear map \begin{equation} \{ f_\sigma \} \longmapsto \sum_\sigma \frac{f_\sigma}{\chi_\sigma}. \label{eq-int} \end{equation} Each summand on the right hand side is a rational function in $K(t_1,\ldots, t_n)$. However, the poles of these rational functions cancel out in the sum and the result is a constant. This map $\langle \cdot \rangle$ defines the isomorphism $H^n(\Delta) \to K$. \begin{remark} \label{rem-eval} The map $\langle \cdot \rangle$ can also be viewed as an evaluation map. Choose a point $v_0\in V$ general enough such that $\chi_\sigma(v_0)\neq 0$ for any $\sigma$. We may now represent an element $f\in {\mathcal A}^n(\Delta)$ as a vector of values $(f_\sigma(v_0))_\sigma \in K^M$. The map $\langle \cdot \rangle$ is then defined as a weighted sum of these values: \[ ( f_\sigma(v_0) )_\sigma \longmapsto \sum_\sigma \frac{f_\sigma(v_0)}{\chi_\sigma(v_0)}.\] \end{remark} \subsection{Connected sums of spheres} \label{sec-connected-sum} Let $\Delta$ be a connected sum of spheres \[ \Delta = \Delta_1 \#_D \Delta_2.\] Here we remove a common simplicial disk $D$ from the spheres $\Delta_1, \Delta_2$, and glue the remaining complexes along their common boundary.
\begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{connected_sum.pdf} \caption{Connected sum of $1$-dimensional spheres $\Delta_1$ and $\Delta_2$ along $D$.} \end{figure} Let us also denote the spheres $\Delta_1$ and $\Delta_2$ glued along $D$ by \[ \tilde{\Delta} = \Delta_1 \cup_D \Delta_2.\] We extend the linear parameters $\theta_i \in {\mathcal A}^1(\Delta)$ to $\tilde{\theta}_i \in {\mathcal A}^1(\tilde\Delta)$. These restrict to linear parameters on $\Delta_1$ and $\Delta_2$. We assume that $\Delta_1, \Delta_2, \Delta$ are oriented compatibly. This means that if a maximal simplex lies in both $\Delta$ and $\Delta_i$ then it has the same orientation in both. Given $f\in {\mathcal A}^n(\Delta)$, we can always extend it to $\tilde{f} \in {\mathcal A}^n(\tilde{\Delta})$. This $\tilde{f}$ restricts to elements $\tilde{f}|_{\Delta_1} \in {\mathcal A}^n(\Delta_1)$ and $\tilde{f}|_{\Delta_2} \in {\mathcal A}^n(\Delta_2)$ that agree on $D$. \begin{lemma} For any extension $\tilde{f}$ of $f$, \[ \langle f \rangle = \langle \tilde{f}|_{\Delta_1} \rangle + \langle \tilde{f}|_{\Delta_2} \rangle.\] \end{lemma} \begin{proof} The maximal simplices of $D$ appear in $\Delta_1$ and $\Delta_2$ with opposite orientations. Hence these terms cancel on the right hand side. The remaining terms give the left hand side. \end{proof} \begin{remark} The previous lemma was used in \cite{Karu, BL2, BBFK2}. Its meaning as integration over a connected sum was realized by Karl-Heinz Fieseler. The lemma says that Brion's integration map behaves like ordinary integration. One can decompose the domain of integration into pieces and sum the integrals over the pieces. \end{remark} We next consider a more general connected sum. Let $v_0$ be a new vertex and let \[ C(\Delta) = \{v_0\} * \Delta \] be the cone over $\Delta$ with vertex $v_0$. Let $\pi_i = \{v_0\} * \sigma_i$, $i=1,\ldots,M$ be the maximal simplices in $C(\Delta)$, and let $\Pi_i = \partial \pi_i$ be the simplicial spheres. Then \[ \Delta = \#_{i=1}^M \Pi_i.\] Here we do not mean that the sphere $\Delta$ can be built up step-by-step using the operation of pairwise connected sum. We simply mean that maximal simplices of $\sqcup_i \Pi_i$ are either simplices of $\Delta$ or they appear twice with opposite orientations. \begin{figure}[htb] \centering \includegraphics[width=0.8\textwidth]{general_connected_sum.pdf} \caption{Decomposition of a $1$-sphere as a connected sum.} \end{figure} As before, we let $\tilde{\Delta}$ be the union of $\Pi_i$, extend the parameters $\theta_i$ to $\tilde{\Delta}$, and choose orientations on $\Pi_i$ compatibly with the orientation on $\Delta$. \begin{lemma} \label{lem-int-sum} Let $f\in {\mathcal A}^n(\Delta)$. Then \[ \langle f \rangle = \sum_{i=1}^M \langle \tilde{f}|_{\Pi_i} \rangle,\] where $\tilde{f}$ is any extension of $f$ to ${\mathcal A}^n(\tilde{\Delta})$. \qed \end{lemma} When we extend the parameters $\theta_i$ to the new vertex $v_0$, we should also extend the field $K$ with new indeterminates $\a{i, 0}$. Thus \[ K = k(\a{i, j})_{i=1,\ldots,n; j=0,\ldots, N}.\] However, the integration map does not depend on the parameters $\a{i, 0}$. If $f$ has coefficients in $k(\a{i, j})_{i=1,\ldots,n; j=1,\ldots, N}$ then $\langle f \rangle$ also lies in the same field. 
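For example, if $\Delta$ is the boundary of a square with vertices $v_1,\ldots,v_4$, then $C(\Delta)$ has maximal simplices $\pi_i = \{v_0\} * \{v_i, v_{i+1}\}$ (indices modulo $4$), and each $\Pi_i = \partial\pi_i$ is the boundary of a triangle. The maximal simplices of $\sqcup_i \Pi_i$ are the four edges $\{v_i, v_{i+1}\}$ of $\Delta$ together with the interior edges $\{v_0, v_i\}$, each of which lies in exactly two of the $\Pi_i$ with opposite orientations, so the corresponding terms cancel in Lemma~\ref{lem-int-sum}.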
An alternative connected sum decomposition would be to take one of the existing vertices, say $v_1$, as the cone point and replace $C(\Delta)$ with \[ \{v_1\}* (\Delta \setmin \operatorname{Star}^\circ v_1).\] \begin{figure}[htb] \centering \includegraphics[width=0.8\textwidth]{alt_connected_sum.pdf} \caption{Alternative decomposition of the $1$-sphere in Figure 2 as a connected sum.} \end{figure} \section{Mixed volumes} \label{sec-mixed} It is well-known that a standard graded, Artinian, Gorenstein $K$-algebra of socle degree $n$, \[ H = K[x_1,\ldots,x_N]/I, \] is determined by the linear function \[ W: K[x_1,\ldots,x_N]_n \to H^n \stackrel{\simeq}{\longrightarrow} K.\] (We have denoted by subscript $n$ the degree $n$ homogeneous part of $K[x_1,\ldots,x_N]$.) Indeed, one recovers the ideal $I$ from $W$ using the property that $f\in K[x_1,\ldots,x_N]$ lies in $I$ if and only if $W(fg) = 0$ for any $g$ of appropriate degree. For the algebra \[ H(\Delta) = K[x_1,\ldots, x_N]/(I_\Delta+(\theta_1,\ldots,\theta_n)),\] the function $W = W_\Delta$ is the composition \[ W_\Delta: K[x_1,\ldots,x_N]_n \to {\mathcal A}^n(\Delta) \stackrel{\langle \cdot \rangle}{\longrightarrow} K.\] In the theory of polytopes and toric varieties the function $W_\Delta$ is known as the mixed volume. \subsection{The case of $\Pi$} \label{sec-Pi} Let $\pi$ be an $n$-simplex and $\Pi = \partial\pi$ the simplicial $(n-1)$-sphere. We compute the mixed volume $W = W_\Pi$. Let $v_0, v_1,\ldots, v_n$ be the vertices of $\Pi$, and $\sigma_j = \{v_0,\ldots, \hat{v}_j, \ldots, v_n\}$ the maximal simplices. We choose the orientation on $\Pi$ so that $v_1,\ldots,v_n$ is positively oriented on the simplex $\sigma_0$. Denote by \[ A = (\a{i, j})_{i,j} \] the $n\times (n+1)$ matrix of variables, where the columns are indexed by $0,1,\ldots, n$ and the rows by $1,\ldots,n$. Let $X_j \in K$ be $(-1)^j$ times the determinant of the matrix $A$ with its $j$-th column removed. Then $X_j = \det \sigma_j$ as defined in Equation~(\ref{eq-dets}) on page~\pageref{eq-dets}. We let $\partial_{x_j}$ be the partial derivative with respect to $x_j$, $j=0,\ldots, n$. Define the partial derivative with coefficients in $K$: \[ \partial_\Pi = \sum_{j=0}^n X_j\partial_{x_j}.\] \begin{lemma} \label{lem-Pi} Assume that the field $K$ has characteristic zero, and let $f\in K[x_0,\ldots, x_n]_n$. Then \[ W_\Pi (f) = \frac{1}{X_0 X_1\cdots X_n} \frac{1}{n!} \partial_\Pi^n(f).\] \end{lemma} \begin{remark} \label{rem-mweight} Since $\partial_\Pi(x_i) = X_i$, the mixed volume can also be given as \[ W_\Pi f(x_0,x_1,\ldots,x_n) = \frac{f(X_0, X_1, \ldots, X_n)}{X_0 X_1\cdots X_n}.\] \end{remark} \begin{proof} We first check that $W_\Pi(\theta_i g) = 0$ for any $g$ of degree $n-1$ and $i=1,\ldots,n$. It suffices to show that $\partial_\Pi(\theta_i)=0$. From the definitions, \[ \partial_\Pi(\theta_i) = \sum_{j=0}^n \a{i, j} X_j.\] This sum is the expansion of the determinant of the matrix $A$ with a copy of its $i$-th row added as the first row. Since the matrix has two repeated rows, its determinant is zero. The previous argument shows that the map $W_\Pi$ factors through $H^n(\Pi)$. Let us check that its value on the monomial $\chi_{\sigma_0} = \det(\sigma_0) x_1\cdots x_n$ is $1$ as required: \[ W_\Pi(\det(\sigma_0)x_1\cdots x_n) = \frac{X_0 X_1 \cdots X_n}{X_0 X_1\cdots X_n} = 1.
\qedhere\] \end{proof} \subsection{Hasse derivatives} We claim that Lemma~\ref{lem-Pi} holds in any characteristic if we view the derivative $\frac{1}{n!} \partial_\Pi^n$ as a Hasse derivative. Ordinary partial derivatives do not work as expected in finite characteristic. For example, \[ \partial_x^2(x^m) = m(m-1) x^{m-2},\] which vanishes for any $m$ if the characteristic is $2$. Hasse derivatives are defined on monomials by the rule: \[ \partial_x^{(n)} (x^m) = {m \choose n} x^{m-n},\] and then extended in the obvious way to all polynomials. These derivatives are defined in any characteristic. Hasse derivatives of degree $n$ with constant coefficients form the vector space dual of $K[x_1,\ldots,x_N]_n$. If $\sum_j n_j= \sum_j m_j = n$ then \[ \prod_j \partial_{x_j}^{(n_j)} \big( \prod_j x_j^{m_j}\big) = \begin{cases} 1 & \text{if $n_j = m_j$ for all $j$,} \\ 0 &\text{otherwise.} \end{cases}\] It is therefore reasonable to expect that the mixed volume $W$ should be such a derivative. Consider the derivative in Lemma~\ref{lem-Pi}: \[ \frac{1}{n!} \partial_\Pi^n = \frac{1}{n!} \big(\sum_j X_j \partial_{x_j}\big)^n.\] We replace this with the Hasse derivative \[ \partial_\Pi^{(n)}= X_0^n \partial_{x_0}^{(n)} + X_0^{n-1} X_1 \partial_{x_0}^{(n-1)} \partial_{x_1}^{(1)} + \ldots.\] This derivative satisfies the product rule \[ \partial_\Pi^{(n)} (fg) = \partial_\Pi^{(n_1)}(f) \partial_\Pi^{(n_2)}(g),\] where $f$ and $g$ are homogeneous polynomials of degree $n_1$ and $n_2$, respectively, and $n_1+n_2=n$. Lemma~\ref{lem-Pi} now holds in any characteristic: \begin{lemma} \label{lem-Pi-gen} Let the field $K$ have any characteristic, and let $f\in K[x_0,\ldots, x_n]_n$. Then \[ W_\Pi (f) = \frac{1}{X_0 X_1 \cdots X_n} \partial_\Pi^{(n)}(f). \] \end{lemma} To simplify notation, let us write this mixed volume as: \[ W_\Pi = \frac{1}{c_\Pi} \partial_\Pi^{(n)}.\] Recall that in Section~\ref{sec-connected-sum} we wrote $\Delta$ as a connected sum \[ \Delta = \#_{i=1}^M \Pi_i.\] The following result now follows from Lemma~\ref{lem-int-sum} and Lemma~\ref{lem-Pi-gen}: \begin{theorem} \label{thm-W-sum} Let $\Delta$ be a simplicial $(n-1)$-sphere. Then \[ W_\Delta = \sum_{i=1}^M W_{\Pi_i} = \sum_{i=1}^M \frac{1}{c_{\Pi_i}} \partial_{\Pi_i}^{(n)}.\] \end{theorem} In the theorem we view both $W_\Delta$ and $W_{\Pi_i}$ as Hasse derivatives with coefficients in $K$ and acting on $K[x_1,\ldots,x_N]$. \begin{remark} It is not too difficult to see that the previous theorem is nothing more than the integration $\langle\cdot\rangle$ viewed as an evaluation map (see Remark~\ref{rem-eval}). Indeed, when we evaluate the summands of Equation~(\ref{eq-int}) on page \pageref{eq-int} at the generic point $v_0$, we get the summands in the theorem. \end{remark} \subsection{The quadratic form $Q_l$} Let $l=x_1+x_2+\ldots+x_N$. We define the quadratic form $Q_l$ on $K[x_1,\ldots, x_N]_j$ for $j\leq n/2$: \[ Q_l(g) = W_\Delta(l^{n-2j}g^2).\] This form descends to a quadratic form on $H^j(\Delta)$. \begin{theorem} \label{thm-Q} The quadratic form $Q_l$ on $K[x_1,\ldots, x_N]_j$ is \[ Q_l (g) = \sum_{i=1}^M W_{\Pi_i} (l^{n-2j} g^2) = \sum_{i=1}^M \frac{1}{c_{\Pi_i}} \big[\partial_{\Pi_i}^{(1)} (l)\big]^{n-2j} \big[\partial_{\Pi_i}^{(j)} (g)\big]^2.\] \end{theorem} \begin{proof} The second equality follows from the product rule for the derivative. This is the only nontrivial statement in the theorem.
\end{proof} The summands of the quadratic form $Q_l$ in Theorem~\ref{thm-Q} are defined over the field $K$ that includes the variables $\a{i, 0}$. However, the form itself does not depend on these variables and can be defined over the field $k(\a{i, j})_{i=1,\ldots,n; j=1,\ldots,N}$. The anisotropy of the form does not depend on which of the two fields we use. The previous theorem is an expression of the quadratic form $Q_l$ that holds in any characteristic. It can be used, for example, to specialize the form from characteristic zero to characteristic $2$. \section{The conjecture of Papadakis and Petrotou} We assume that the field $K=k(\a{i, j})$ has characteristic $2$ throughout this section. Papadakis and Petrotou study the values of the quadratic form $Q_l$ in $K$ and partial derivatives of these values with respect to $\a{i, j}$. Consider partial derivatives $\partial_{\a{i, j}}$ acting on $K=k(\a{i, j})$. These are the usual partial derivatives, not Hasse derivatives. They satisfy for any $f,g\in K$ \[ \partial_{\a{i, j}}^2 f=0, \quad \partial_{\a{i, j}} f^2=0, \quad \partial_{\a{i, j}} f^2 g= f^2 \partial_{\a{i, j}} g.\] We will use letters $I, J, L$ to denote vectors of non-negative integers. Let $|J|$ be the number of components in the vector $J$. For $I=(i_1,\ldots,i_n)$ with $n$ components, we let \[ \partial_I = \partial_{\a{1, i_1}} \partial_{\a{2, i_2}} \cdots \partial_{\a{n, i_n}}.\] For $J=(j_1,\ldots, j_s)$, let $x_J$ be the degree $s$ monomial \[ x_J = x_{j_1} x_{j_2} \cdots x_{j_s}.\] There is some redundancy in this notation because $x_J$ only depends on $J$ up to permutation of components. However, $\partial_I$ does depend on the order of components in $I$. We write $\sqrt{x_J}$ for the monomial whose square is $x_J$ if such a monomial exists. The following was conjectured in \cite{PapadakisPetrotou}: \begin{theorem}[Conjecture of Papadakis and Petrotou] \label{thm-PPconj} Let $\Delta$ be a simplicial sphere of dimension $n-1$. For any integer vectors $I, J$ with $n$ components \[ \partial_I W_\Delta(x_J) = \begin{cases} (W_\Delta (\sqrt{x_I x_J}))^2 & \text{if the square root exists,} \\ 0 & \text{otherwise.} \end{cases} \] \end{theorem} A special case of the theorem proved in \cite{PapadakisPetrotou} implies Theorem~\ref{thm-PP}. We recall the proof here. \begin{proof}[Proof of Theorem~\ref{thm-PP}] Let $g=\sum_J \gamma_J x_J \in K[x_1,\ldots, x_N]_m$ be such that $Q(g) = 0$. Using the characteristic $2$ assumption, \[ Q(g) = W_\Delta(g^2) = \sum_J \gamma_J^2 W_\Delta(x_J^2).\] Let $I$ be a vector with $n$ components such that $\sqrt{x_I}$ exists. By Theorem~\ref{thm-PPconj}, \[ \partial_I Q(g) = \sum_J \gamma_J^2 \partial_I W_\Delta(x_J^2) = \sum_J \gamma_J^2 (W_\Delta(\sqrt{x_I} x_J))^2 = (W_\Delta (\sqrt{x_I} g))^2.\] The derivative $\partial_I Q(g)$ is zero, hence $W_\Delta (\sqrt{x_I} g) = 0$. However, this last expression is the Poincar\'e pairing between $\sqrt{x_I}$ and $g$. Since the monomials $\sqrt{x_I}$ generate $H^m(\Delta)$, it follows that $g=0$ in $H^m(\Delta)$. \end{proof} The previous proof only needs the special case of Theorem~\ref{thm-PPconj} where both $\sqrt{x_I}$ and $\sqrt{x_J}$ exist. One can further restrict to the case where $\sqrt{x_I}$ and $\sqrt{x_J}$ are square free monomials. A similar argument proves Theorem~\ref{thm-lef}. \begin{proof}[Proof of Theorem~\ref{thm-lef}] Let again $g=\sum_J \gamma_J x_J \in K[x_1,\ldots, x_N]_m$ be such that $Q_l(g) = 0$. 
Now \[ Q_l(g) = W_\Delta(l g^2) = \sum_J \gamma_J^2 W_\Delta(l x_J^2).\] Let $I$ be a vector with $n$ components and $i$ an integer such that $x_I x_i = x_L^2$ for some $L$. (For a fixed $I$ at most one such $i$ can exist: if $x_I x_{i'}$ and $x_I x_{i''}$ are both squares, then so is $x_{i'} x_{i''}$, forcing $i' = i''$; the terms of $l = \sum_{i'} x_{i'}$ for which $x_I x_{i'}$ is not a square contribute zero by Theorem~\ref{thm-PPconj}.) By Theorem~\ref{thm-PPconj}, \[ \partial_I Q_l(g) = \sum_J \gamma_J^2 \partial_I W_\Delta(l x_J^2) = \sum_J \gamma_J^2 (W_\Delta(x_L x_J))^2 = (W_\Delta (x_L g))^2.\] This shows that the Poincar\'e pairing between $x_L$ and $g$ is zero. Since the monomials $x_L$ generate $H^{m+1}(\Delta)$, it follows that $g=0$ in $H^m(\Delta)$. \end{proof} The rest of this section consists of the proof of Theorem~\ref{thm-PPconj}. \subsection{Reductions.} We start by reducing Theorem~\ref{thm-PPconj} to simpler cases. First notice that all expressions in Theorem~\ref{thm-PPconj} are defined over the field ${\mathbb{F}}_2$. Hence we may assume that $k={\mathbb{F}}_2$. Recall that we wrote $W_\Delta = \sum_i W_{\Pi_i}$ in Theorem~\ref{thm-W-sum}. \begin{lemma} \label{lem-redPi} Theorem~\ref{thm-PPconj} for all $\Pi_i$ implies it for $\Delta$. \end{lemma} \begin{proof} This follows directly from the statement of the theorem, using the characteristic $2$ assumption and the simple observation that if a monomial $x_Ix_J$ restricts to a nonzero monomial on $\Pi_i$, then the monomial is a square if and only if its restriction is a square. \end{proof} From now on we will assume that $\Delta = \Pi$ as in Section~\ref{sec-Pi}. Assume that $\Pi$ has vertices $v_0, v_1, \ldots, v_n$. The matrix $A=(\a{i, j})$ has size $n\times (n+1)$, with columns indexed by $0,1,\ldots,n$ and rows by $1,\ldots,n$. We use the notation $X_i$, $i=0,1,\ldots,n$, for the determinant of $A$ with its $i$-th column removed. If $J= (j_1, \ldots, j_s)$ is a vector with entries in $\{0,1,\ldots,n\}$, we let \[ X_J = X_{j_1} X_{j_2} \cdots X_{j_s}.\] Theorem~\ref{thm-PPconj} for $\Pi$ can be further reduced to the following: \begin{theorem} \label{thm-simplest} Let $I$ and $J$ be vectors with entries in $\{0,1,\dots,n\}$. Assume that $|I|=n$ and $|J|$ is odd. Then \[ \partial_I X_J = \begin{cases} (X_L)^2 & \text{if $X_I X_J = X_L^2 X_{(0,1,\dots,n)}$}, \\ 0 & \text{otherwise.} \end{cases}\] \end{theorem} \begin{lemma} \label{lem-conj-Pi} Theorem~\ref{thm-simplest} implies Theorem~\ref{thm-PPconj} for $\Pi$. \end{lemma} \begin{proof} Recall from Remark~\ref{rem-mweight} that \[ W_\Pi (x_J) = \frac{X_J}{c_\Pi},\] where $c_\Pi = X_{(0,1,\ldots,n)}$. Theorem~\ref{thm-PPconj} is now equivalent to the statement \[ \partial_I \frac{X_J}{c_\Pi} = \frac{1}{c_\Pi^2} \partial_I (c_\Pi X_J) = \begin{cases} \frac{X_I X_J}{c_\Pi^2} & \text{if $\sqrt{x_I x_J}$ exists,}\\ 0 & \text{otherwise.} \end{cases} \] When we replace $J$ with $J'$ such that $c_\Pi X_J = X_{J'}$, we get the statement of Theorem~\ref{thm-simplest} with $J$ replaced by $J'$. \end{proof} We will prove Theorem~\ref{thm-simplest} below after some preparations. \subsection{$SL(n,k)$ invariance} Let $A=(\a{i, j})$ be the $n\times (n+1)$ matrix of variables. For a matrix $B\in SL(n,k)$, consider the linear change of variables from $A$ to $BA$. This defines an action of $SL(n,k)$ on the polynomial ring $k[\a{i, j}]$. The invariants of this action are exactly the polynomials in the variables $X_i$. \begin{lemma} \label{lem-invariance} Let $I$ and $J$ be as in Theorem~\ref{thm-simplest}. Then the polynomial $\partial_I X_J \in k[\a{i, j}]$ is $SL(n,k)$ invariant. \end{lemma} \begin{proof} The group $SL(n,k)$ is generated by elementary matrices.
An elementary matrix acts on the matrix of variables by adding a constant $c$ times row $i$ to row $j$. It suffices to prove invariance under this change of variables. We may assume without loss of generality that $i=2$ and $j=1$. Consider the new variables \[ a'_{i,j} = \begin{cases} \a{i, j} + c \a{2, j}, & \text{if $i=1$,} \\ \a{i, j} & \text{otherwise.} \end{cases} \] For a polynomial $f(a) = f(\a{i, j})$, let us denote by $f(a')$ the result of substituting $a'_{i,j}$ in $\a{i, j}$. Similarly, let us write $\partial_I(a')$ for the partial derivative where we replace $\partial_{\a{i, j}}$ with $\partial_{a'_{i,j}}$. We need to prove that \[ (\partial_I X_J)(a') = (\partial_I X_J) (a).\] We claim that if $\partial_I = \partial_{\a{1, i_1}} \partial_{\a{2, i_2}} \cdots \partial_{\a{n, i_n}}$, then \[ (\partial_I X_J)(a') = (\partial_I X_J)(a) - c (\partial_{\a{1, i_1}} \partial_{\a{1, i_2}} \cdots \partial_{\a{n, i_n}} X_J)(a).\] The next lemma shows that $\partial_{\a{1, i_1}} \partial_{\a{1, i_2}} X_J = 0$, hence the second summand vanishes. For any polynomial $f(\a{i, j})$, by replacing all symbols $a$ with $a'$, we have \[ \partial_I(a') f(a') = (\partial_I f)(a').\] If now $f$ is $SL(n,k)$ invariant then \[ f(a') = f(a) = f(a'_{1,j} - c a'_{2,j}, a'_{2,j},\ldots, a'_{n,j}).\] Applying the chain rule gives \[ \partial_I(a') f(a') = (\partial_I f)(a) - c (\partial_{\a{1, i_1}} \partial_{\a{1, i_2}} \cdots \partial_{\a{n, i_n}} f)(a). \qedhere\] \end{proof} \begin{lemma} \label{lem-2deriv} If $|J|$ is odd then $\partial_{\a{r, i_1}} \partial_{\a{r, i_2}} X_J = 0$ for any $r, i_1, i_2$. \end{lemma} \begin{proof} It is enough to consider the case where $J$ contains no repeating indices, since any square factors of $X_J$ can be factored out of the partial derivatives. Under a suitable relabelling of rows and columns of $A$, we can assume that $r=1$, $i_1 = 1$ and $i_2 = 2$, so that the derivative under consideration is $\partial_{\a{1, 1}} \partial_{\a{1, 2}} X_J$. Let us denote by $Y_{i,j} = Y_{j,i}$ the determinant of the matrix $A$ with its first row and columns $i,j$ removed. Then \[ \partial_{\a{1, i}} X_j = Y_{i,j}.\] The polynomials $X_i$ and $Y_{i,j}$ satisfy the following relations. For any distinct indices $i,j,p,q$ \begin{equation} \label{eq-Plucker} Y_{i,j} Y_{p,q} - Y_{i,p} Y_{j,q} + Y_{i,q} Y_{j,p} = 0, \end{equation} and for any distinct indices $i,j,p$ \begin{equation} \label{eq-flag} Y_{i,j} X_p - Y_{i,p} X_{j} + Y_{j,p} X_{i} = 0. \end{equation} These equations hold in any characteristic. In characteristic $2$ the signs in the equations are not important. The equations come from the Pl\"ucker embedding of the Grassmannian. Rows $2,3,\ldots,n$ of the matrix $A$ span an $(n-1)$-plane in the $(n+1)$-space and hence define a point in the Grassmannian $Gr(n-1, n+1)$. The polynomials $Y_{i,j}$ are the Pl\"ucker coordinates on this Grassmannian. These coordinates satisfy the Pl\"ucker relations in Equation~(\ref{eq-Plucker}). Similarly, the $n$ rows of the matrix $A$ define a point in $Gr(n,n+1)$ with coordinates $X_i$. The relations in Equation~(\ref{eq-flag}) state that the $(n-1)$-plane with coordinates $Y_{i,j}$ lies in the $n$-plane with coordinates $X_i$. Using the product rule we have \[ \partial_{\a{1, 1}} \partial_{\a{1, 2}} X_J = \sum \frac{X_J}{X_{i}X_{j}} Y_{1,i} Y_{2,j}.\] Here the sum runs over all distinct entries $i,j$ of $J$ such that $i\neq 1$ and $j\neq 2$. 
We claim that this sum is equal to \[ \sum \frac{X_J}{X_{i}X_{j}} Y_{1,2} Y_{i,j},\] where the sum now runs over all pairs of entries $\{i,j\}$ in $J$. To see this, first consider the case where $i$ and $j$ are both distinct from $1$ and $2$. In this case we apply the Pl\"ucker relation to get \[ Y_{1,i} Y_{2,j} + Y_{2,i} Y_{1,j}= Y_{1,2} Y_{i,j}.\] The case where $i$ or $j$ is equal to $1$ or $2$ is simpler and does not require any relation. We are now reduced to proving that \[ \sum_{\{i,j\}} \frac{X_J}{X_{i}X_{j}} Y_{1,2} Y_{i,j} = X_J Y_{1,2} \sum_{\{i,j\}} \frac{Y_{i,j}}{X_{i}X_{j}} = 0.\] Using Equation~(\ref{eq-flag}) we have for any distinct $i,j, p$ \[ \frac{Y_{i,j}}{X_{i}X_{j}} + \frac{Y_{i,p}}{X_{i}X_{p}} +\frac{Y_{j,p}}{X_{j}X_{p}} = 0.\] Now consider all triples $\{i,j,p\}$ in $J$. Then \[ \sum_{\{i,j,p\}} \left( \frac{Y_{i,j}}{X_{i}X_{j}} + \frac{Y_{i,p}}{X_{i}X_{p}} +\frac{Y_{j,p}}{X_{j}X_{p}} \right) = 0.\] Since every pair $\{i,j\}$ occurs in an odd number of triples $\{i,j,p\}$, this sum is equal to \[ \sum_{\{i,j\}} \frac{Y_{i,j}}{X_{i}X_{j}}. \qedhere\] \end{proof} \subsection{Proof of Theorem~\ref{thm-simplest}} Lemma~\ref{lem-invariance} tells us that $\partial_I X_J$ is a polynomial in $X_i$. Consider the grading by ${\mathbb{Z}}^{n+1}$ on the ring $k[\a{i, j}]$ such that $\a{i, j}$ has degree $e_j$, where $e_0,\ldots, e_{n}$ form the standard basis for ${\mathbb{Z}}^{n+1}$. Let ${\mathbf{1}} = (1,\ldots, 1)$. Then $X_i$, $i=0,\ldots, n$, is homogeneous with \[ \operatorname{deg} X_i = {\mathbf{1}} - e_i.\] Notice that since the vectors ${\mathbf{1}} - e_i$ are linearly independent, there can be at most one monomial $X_L$ in each degree. The derivative $\partial_I$ applied to a homogeneous polynomial reduces its degree by $\sum_j e_{i_j}$. Since $X_J$ is homogeneous, so is $\partial_I X_J$, hence $\partial_I X_J$ is equal to a constant $c$ times a monomial $X_M$. Here $c\in {\mathbb{F}}_2$, hence $\partial_I X_J$ is either $X_M$ or $0$. Computing the degrees, the monomial $X_M$ must satisfy $X_I X_J = X_M X_{(0,1,\dots,n)}$. Theorem~\ref{thm-simplest} has two cases depending on whether $\sqrt{X_M}$ exists or not. \begin{lemma} If $\sqrt{X_M}$ does not exist then $\partial_I X_J = 0$. \end{lemma} \begin{proof} Suppose that $\partial_I X_J = X_M$. By Lemma~\ref{lem-2deriv}, all partial derivatives \[ \partial_{\a{i, j}} X_M = \partial_{\a{i, j}} \partial_I X_J\] vanish. This implies that $X_M$ must be the square of a polynomial in ${\mathbb{F}}_2[\a{i, j}]$. Since $X_M$ is the product of irreducible polynomials $X_i$, it follows that $X_M$ must be the square of a monomial $X_L$. \end{proof} \begin{lemma} If $\sqrt{X_M} = X_L$ then $\partial_I X_J = X_M$. \end{lemma} \begin{proof} It suffices to prove that $\partial_I X_J \neq 0$. Then it must be equal to $X_M$. We will prove that $\partial_I X_J \neq 0$ by induction on $n$. The base of the induction is $n=0$. In this case $\partial_I X_J = X_J \neq 0$. Consider now $n>0$. Let $I=(i_1,\ldots, i_n)$ and $J=(j_1,\ldots,j_s)$, where $s$ is odd. By assumption, there exists a monomial $X_L$ such that $X_I X_J = X_L^2 X_{(0,1,\ldots,n)}$. To prove that $\partial_I X_J \neq 0$, we may factor out squares in $X_J$ and assume that $J$ has no repeated entries. There exists an entry in $J$, say $j_1$, such that $j_1 \neq i_r$ for $r=1,\ldots, n$. This follows from the fact that $X_{(0,1,\ldots,n)}$ of degree $n+1$ divides $X_I X_J$, but $X_I$ has degree $n$. 
Define the monomial \[ \mu = \a{1,i_1} \a{1,j_1}^{s-1}.\] Then the coefficient of $\mu$ in $X_J=X_{j_1} \cdots X_{j_s}$ is \[ Y_{j_1, i_1} Y_{j_2, j_1} \cdots Y_{j_s,j_1}.\] Indeed, $X_{j_1}$ does not contain $\a{1,j_1}$. Hence $\a{1,j_1}^{s-1}$ must come from the factors $X_{j_2}, \ldots, X_{j_s}$ and $\a{1,i_1}$ from the factor $X_{j_1}$. As before, we have denoted by $Y_{i,j}$ the coefficient of $\a{1,j}$ in $X_i$. Notice that the coefficient of $\a{1,j_1}^{s-1}$ in $\partial_I X_J$ is \[ \partial_{\a{2,i_2}} \partial_{\a{3,i_3}} \cdots \partial_{\a{n,i_n}} Y_{j_1, i_1} Y_{j_2, j_1} \cdots Y_{j_s,j_1}.\] If this coefficient is nonzero then also $\partial_I X_J$ is nonzero. We claim that this coefficient being nonzero follows by induction from the case of dimension $n-1$. From the matrix $(\a{i,j})$ we have removed row $1$ and column $j_1$, the derivative $\partial_I$ is replaced with $\partial_{I'}$, where $I' = (i_2,\ldots, i_n)$, and $X_J$ is replaced (using a similar notation in dimension $n-1$) with $X_{J'}$, where $J'=(i_1, j_2, \ldots, j_s)$. Let us check that we can apply the induction assumption to prove that $\partial_{I'} X_{J'} \neq 0$. Viewing $X_i$ purely as formal symbols, we have \[ X_{I'} X_{J'} = \frac{X_I}{X_{i_1}} \cdot \frac{ X_{i_1} X_J }{ X_{j_1} } = \frac{X_L^2 X_{(0,1,\dots,n-1,n)}}{X_{j_1}}. \] Since we assumed that $J$ does not contain repeated entries, $X_{j_1}$ appears in the numerator of the fraction to the first power. Relabelling the columns if necessary, we may assume that $j_1 = n$; then \[ X_{I'} X_{J'} = X_L^2 X_{(0,1,\dots, n-1)}, \] where $X_{j_1} = X_n$ no longer appears in any monomial. By induction, $\partial_{I'} X_{J'} \neq 0$. \end{proof} \section{Specializations and generalizations.} \label{sec-consec} In this section we explain how the anisotropy theorem for spheres in characteristic $2$ implies the same for some spheres in characteristic $0$. We also consider pseudo-manifolds in characteristic $2$. We start by proving Theorem~\ref{thm-2to0}. Let $K={\mathbb{F}}_2(\a{i, j})$ and $K_0 = {\mathbb{Q}}(\a{i, j})$. \begin{lemma} Let $\Delta$ be as in Theorem~\ref{thm-2to0}. Let $B$ be a set of monomials in $x_i$ that forms a basis for the vector space $H(\Delta)$ over $K$. Then $B$ also forms a basis for $H(\Delta)$ over $K_0$. \end{lemma} \begin{proof} Since ${\mathcal A}(\Delta)$ is a graded free $K[\theta_1,\ldots,\theta_n]$-module, the product of elements of $B$ with monomials in $\theta_i$ gives a basis for ${\mathcal A}(\Delta)$ over $K$. Now the same set of elements must give a basis for ${\mathcal A}(\Delta)$ over $K_0$. Indeed, if there is a relation between these elements with coefficients in $K_0$, we may clear denominators and assume that the coefficients lie in ${\mathbb{Z}}[\a{i, j}]$ so that not all coefficients are divisible by $2$. Such a relation gives a nontrivial relation mod $2$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm-2to0}] Let $B_m$ be a basis of monomials for $H^m(\Delta)$ as in the lemma. Suppose $g\in H^m(\Delta)_{K_0}$ is such that $Q_l(g)=0$. We may clear denominators and assume that $g$ is a linear combination of monomials in $B_m$ with coefficients in ${\mathbb{Z}}[\a{i, j}]$, not all coefficients divisible by $2$. This $g$ mod $2$ gives a nonzero element $\bar{g}\in H^m(\Delta)_{K}$. Moreover, $Q_l(\bar{g}) = Q_l(g) \pmod 2 = 0$. This contradicts Theorem~\ref{thm-PP} for even $n$ and Theorem~\ref{thm-lef} for odd $n$. \end{proof} A pseudo-manifold is a purely $(n-1)$-dimensional simplicial complex such that every $(n-2)$-simplex lies in exactly two $(n-1)$-simplices.
When the characteristic is not $2$, we assume that the pseudo-manifold is orientable. Pseudo-manifolds are complexes where the integration map $\langle\cdot\rangle$, the mixed volume $W_\Delta$, and the decomposition of the mixed volume as a sum of $W_{\Pi_i}$ all work the same way as for spheres. The ring $H(\Delta)$ may not be Gorenstein. However, the mixed volume defines a nonzero linear function on $K[x_1,\ldots, x_N]_n$, and as explained at the beginning of Section~\ref{sec-mixed}, such a function gives rise to a standard graded, Artinian, Gorenstein $K$-algebra. We denote this algebra by $\overline{H}(\Delta)$. Since the mixed volume vanishes on the ideal $I_\Delta+ (\theta_1,\ldots,\theta_n)$, this algebra is a quotient of $H(\Delta)$. The following theorem was proved in \cite{APP}: \begin{theorem} Let $\Delta$ be a pseudo-manifold of dimension $n-1 = 2m-1$ or $n-1=2m$. Then the quadratic form $Q_l$ on $\overline{H}^m(\Delta)$ defined over $K$ is anisotropic. \end{theorem} \begin{proof} The proof of Theorem~\ref{thm-PPconj} for $\Delta$ works the same way as before because it is reduced at once to the case $\Delta=\Pi$. The proofs of Theorem~\ref{thm-PP} and Theorem~\ref{thm-lef} show that if $Q_l(g) = 0$ then $g$ pairs trivially with every other polynomial, hence it vanishes in $\overline{H}^m(\Delta)$. \end{proof} \bibliographystyle{plain}
\section{Introduction}\label{sec:intro} The 5G communication system stacked the Internet of Things (IoT) on top of the 4G mobile broadband to attain enhanced mobile broadband (eMBB). The eMBB, together with IoT, implicitly entails ultra-reliable and low-latency communication (uRLLC) and massive machine-type communication (mMTC). Although the worldwide deployment of 5G began in 2020, it is expected that 5G will not be able to meet the requirements of Next-G communication systems \cite{Du2020veh}. In this regard, researchers have begun working on 6G standards to overcome some of the associated limitations. \par A report from Gartner\footnote{http://www.gartner.com/newsroom/id/2684616} states that more than 30 billion devices will be connected to communication networks by the end of 2025. The surge in connected devices requires low-latency communication to be more reliable and the accommodation of mMTC to be more scalable than in 5G. The vision of Next-G cellular communication via 6G is to expand the coverage and boundaries of the aforementioned services, with the Internet of Everything (IoE) and artificial intelligence (AI) as its key enabling technologies \cite{Du2020veh}. Similarly, user-experienced data rate, traffic capacity, spectrum efficiency, energy efficiency, connection density, mobility support, coverage, security capacity, and cost efficiency are expected to improve with the emergence of Next-G networks \cite{Du2020veh}. An example of different applications in relation to Next-G networks is shown in Fig.~\ref{Fig1}. Next-G networks allow users to have ubiquitous services and consistent, generalized mobility irrespective of the underlying communication medium or transport technology\footnote{https://www.igi-global.com/dictionary/modelling-quality-and-pricing-in-next-generation-telecom-networks/20320}. The idea is illustrated in the aforementioned figure: services are provided and users are accommodated irrespective of the coverage area or fixed infrastructure. Connectivity needs to be ensured through devices that can connect to the Internet or are capable of establishing a connection with communication satellites. The applications of Next-G communication systems include but are not limited to smart grids, Industry 5.0, smart homes, smart transportation systems, disaster recovery systems, and remote tourism \cite{Dev2021}. \\ As Next-G networks need to support more mMTC devices to fulfil the scalability needs, the random access (RA) procedure is of vital importance, as it ensures uplink synchronization between IoE devices and the base station. We will refer to the 5G standard for further explanation, as the standards for Next-G have not been proposed yet. The transmission of random access preambles from IoE devices to the next-generation node B (gNB) is carried out through the physical random access channel (PRACH) \cite{wu2020efficient}. The gNB detects the preambles while processing the PRACH signals to assign the timing advance and the preamble ID (PID). The signal information is then transmitted in a response message to the IoE device to adjust the transmission time and establish synchronization with the gNB\footnote{3GPP Release 15 and Release 16}. In case of PRACH process failure, a preamble needs to be sent again after a pre-defined amount of time, which degrades performance and introduces unnecessary delay \cite{Sharma2020}. \begin{figure*}[ht!]
\centering \includegraphics[width=\linewidth]{6G-intro-fig-v4.png} \caption{An overview of Next-G communication networks.} \label{Fig1} \end{figure*} Existing works on preamble detection have shown promising results \cite{yang2020mixed}\cite{zhang2020tara}, achieving a detection rate of 99$\%$ using threshold-based techniques on the signal-to-noise ratio. However, these techniques fail to generalize to a large number of devices due to false peaks \cite{Sharma2020, Modina2019}. Researchers have since moved to machine learning approaches for preamble detection \cite{Mostafa2021}. The problem with the existing methods employing machine learning is that they do not consider the random noise problem that can affect the data collection process itself. This random noise can manipulate the training process to classify false peaks as true ones. Detection of false peaks not only increases latency, but also affects network efficiency and scalability. To the best of our knowledge, the random noise problem has not been dealt with in PRACH processing.\par In this study, we collected primary data from a well-known company working in the field of communication systems for preamble detection in accordance with the new 5G radio systems and the 3GPP technical specification group radio access network NR physical channels and modulation\footnote{3rd GPP Technical Specification Group Radio Access Network NR Physical Channels and Modulation, 38.211, 2019}. Data are gathered specifically when an IoT device transmits preambles to request an uplink allocation from the gNB. The data are then injected with random additive white Gaussian noise levels varying between 5$\%$ and 15$\%$. We applied several shallow- and deep-learning approaches to show the effect of random noise on false-peak detection. In this regard, we propose an informative instance-based fusion network (IIFNet) not only to detect the preambles accurately but also to deal with the random noise problem in PRACH processing. The contributions of this work are stated below:\par \begin{enumerate} \item Collection of primary data for preamble detection in compliance with 5G new radio systems. \item A new sampling strategy is proposed to select the most informative instances. \item A novel fusion network, IIFNet, is proposed for reliable detection of preambles in noisy environments. \item State-of-the-art results for preamble detection on noisy data have been reported. \end{enumerate} \vspace{\baselineskip} \begin{figure*}[h] \centering \includegraphics[width=\linewidth]{IIFNet-fig.png} \caption{Proposed IIFNet framework for preamble detection in 6G and beyond communication networks.} \label{Fig2} \end{figure*} \section{State-of-the-Art Studies}\label{sec:BG} The telecommunication industry is rapidly growing and integrating with a wide array of technologies due to the emergence of 6G communication systems. The 6G services are supported by further enhanced eMBB, lower-latency uRLLC, and more massive mMTC. For a brief review, we take 5G new radio (NR) as the reference point. The authors in \cite{dahlman20185g} discussed the importance of optimizing downlink resource slicing and its impact on both uRLLC and eMBB with respect to 5G-NR systems. Amongst many characteristics of downlink resources, the study emphasized the importance of preamble detection/design for improving the network performance.
Thanks to several favorable characteristics like constant amplitude and low cross-correlation, Zadoff-Chu (ZC) sequences are employed to generate random preambles in LTE and 5G-NR \cite{yang2020mixed}\cite{lin2016random}. In one of the seminal works on the design of 5G-NR for narrowband IoT (NB-IoT), a single-tone waveform and frequency-hopping NB-IoT PRACH (NPRACH) was proposed in \cite{wu2020efficient}. The aforementioned design has the merit of a nearly zero peak-to-average power ratio, making it suitable for NB-IoT systems with strict requirements of long battery lifetime, low cost, and extensive coverage. When multiple IoT devices try to access the network simultaneously, the network may receive superimposed NPRACH preambles and must detect the individual preambles received from different IoT devices. In this network configuration, the work in \cite{zhang2020tara} solves the preamble detection problem to obtain the optimal Neyman-Pearson threshold in NB-IoT systems. The work also focuses on estimating the time-of-arrival and residual frequency offset of the detected IoT devices. The work in \cite{zhen2018random} exploits the difference in time-of-arrivals of collided preambles to further improve the performance of NB-IoT systems. As AI is one of the key enabling technologies for 6G communication systems, many researchers have adopted it to improve the preamble detection process. In \cite{Mostafa2021} a deep learning-based method was developed for the decoding of preambles. The study aggregated ZC sequences to mimic the effect of massive IoT devices for the use-case of 5G systems and designed two separate decoders. The first was based on threshold-based measurements, while the second used a deep learning approach to detect the preambles. A deep neural network architecture was developed in \cite{ding2020deep} for preamble collision detection in grant-free random access MIMO systems. The key idea of the work was that only the base stations near a collided user are used, instead of all base stations in the network. Simulation results show that their deep model for base station clustering yields a higher achievable rate compared to the baseline alternatives. The study \cite{Modina2019} proposed the use of shallow learning approaches to detect preambles in 5G systems. It should be noted that all the deep learning based approaches are applied to synthetically generated data. Moreover, none of the said strategies has considered the random noise problem that significantly degrades the preamble detection performance. Many studies have proposed the use of majority voting and consensus voting methods \cite{Samami2020} to remove random noise, but these do not work well in communication systems: true peaks are detected at a much lower rate than false peaks, and the aforementioned methods tend to eliminate the true preambles when the sampling frequency is low. To the best of our knowledge, this is the first study to deal with the random noise problem for preamble detection in the PRACH process. \section{Proposed Method} It is well established that noise is an integral part of the communication system that needs to be modeled to remove any ambiguity in the detection process. The aim of this work is to design a framework that is capable of accurate detection even if the samples are corrupted by random noise at the physical layer.
In this regard, we propose an informative instance-based fusion network (IIFNet) to improve the detection process, as shown in Fig.~\ref{Fig2}. To the best of our knowledge, such a sampling strategy for selecting the most informative instances in the context of preamble detection has not been proposed before. The preambles are collected from IoT devices and stored in a preamble database. We inject random Gaussian noise into the preamble database to create distorted data, as would be encountered in real-life scenarios. We then transform the raw feature space, i.e., amplitude, variance, threshold, and SNR, using phase space reconstruction (PSR) \cite{Khuwaja2020} and principal component analysis (PCA). The intuition behind adopting the said feature transformation techniques is to cover a wide range of the feature engineering spectrum. PSR projects the lower-dimensional data onto higher dimensions, whereas PCA takes the inverse approach. The higher dimensions in PSR sense data impurity through information gains such as distance measures, whereas PCA only retains the feature variables that explain the major portion of the variance, which filters the data in an intrinsic manner. Moreover, PCA has been used extensively for pre-processing and denoising data. We then select a portion of informative instances that represents the entire feature space from each feature transformation and train individual classifiers. The trained classifiers are then used to label the remaining instances, and the same process is iterated until all instances are labeled. Once the classification models are trained, we use them to classify the preambles. As the classification models are trained individually for both feature transformation techniques, we perform decision-level fusion (DLF) \cite{Khuwaja2020} on the predicted labels to further improve the detection process. These trained models and fusion strategies are attached to each gNB, evolved node B (eNB), or base station to distinguish between true and false peaks. \subsection{Random Noise Injection} Artificial noise injection is a common method in research studies for corrupting data samples. Random Gaussian noise (RGN) injection corrupts each sample with a varying probability ranging from 0 to 1. Assuming that the power level is quite low in Next-G communication systems, we considered zero-mean additive white Gaussian noise (AWGN) for injection. As the mean is zero, we use the square root of the mean power level of the dataset as the standard deviation. We create a Gaussian distribution and corrupt a specific portion of the dataset. During the ablation study, it was observed that a simple threshold-based technique can accurately differentiate between the true and false peaks on clean data. However, adding just 5$\%$ noise poses a difficult problem for many sophisticated classification algorithms, with accuracy decreasing by more than 7$\%$ at times. A further increase in the noise level leads to larger variations and exponential degradation in detection performance. \subsection{Phase Space Reconstruction} PSR has been used extensively in studies related to non-linear dynamics. It projects a one-dimensional vector onto a \( k \)-dimensional space with respect to the delay \( T \).
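To make the preceding constructions concrete, the following minimal sketch injects zero-mean AWGN into a fraction of the samples, using the square root of the mean power as the standard deviation, and then builds the lag-1 delay embedding. The sketch is in NumPy; our experiments were carried out in MATLAB, so the function names and defaults here are illustrative assumptions rather than the deployed implementation.
\begin{verbatim}
import numpy as np

def inject_awgn(x, fraction=0.15, seed=0):
    # Corrupt a random fraction of the samples with zero-mean AWGN whose
    # standard deviation is the square root of the mean power of the data.
    rng = np.random.default_rng(seed)
    x = x.astype(float).copy()
    sigma = np.sqrt(np.mean(x ** 2))
    idx = rng.choice(len(x), size=int(fraction * len(x)), replace=False)
    x[idx] += rng.normal(0.0, sigma, size=idx.size)
    return x

def psr_embed(x, k=7, lag=1):
    # Phase space reconstruction with time lag 1:
    # row t is (x[t], x[t+lag], ..., x[t+(k-1)*lag]).
    n = len(x) - (k - 1) * lag
    return np.stack([x[i * lag : i * lag + n] for i in range(k)], axis=1)
\end{verbatim}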
Although the projected vector in the \( k \)-dimensional space is equivalent to the one-dimensional vector, each higher dimension can be treated as a probability distribution, and these distributions are considered meaningful features \cite{Khuwaja2020}. In this study, we assume that each data point is an independent and identically distributed (IID) variable; therefore, we set the time lag for PSR to 1. This means that for each embedding dimension 1 $ \ldots $ \( k \), the time lag increases by 1. Projecting IID data onto higher dimensions in such a way leads to an evolutionary trajectory in the embedding space. The PSR embeddings will be used to calculate information distances. \subsection{Principal Component Analysis} PCA, unlike PSR, projects time series data to lower dimensions by choosing components based on their variance. In this study, we use PCA as one of the feature transformation techniques. As with PSR, we speak of embedding dimensions for PCA as well; however, for PCA, the number of selected dimensions is less than or equal to the original one. PCA computes a covariance matrix whose eigenvalues are sorted in descending order. Considering that the raw feature space comprises 4 dimensions, we get 4 eigenvalues, and we need to select fewer than four of them to project the original feature space to lower dimensions. \subsection{Instance Selection} Existing studies have used majority or consensus filtering based approaches to deal with the random noise problem. Majority filtering is quite sensitive to the data itself and produces many false positives, whereas consensus filtering is conservative and mislabels actual instances as noise. We take a different path by leveraging the feature transformation techniques used in this study. We select the informative instances, which, in turn, lets the system correct labels when needed. In the past, the selection of informative instances has been carried out using pool-based uncertainty sampling and query-by-committee methods. This study considers both density and uncertainty to select the informative instances. The term ``density'' here refers to the similarity between an example and the other examples within the same cluster. Two problems arise when relying on density alone: first, the clusters are very large, so the density values are quite close to one another; second, the skewed distribution of the clusters creates a bias toward a particular label. To overcome the aforementioned problems, we use a nearest neighbor based density measure \cite{Samami2020} for computing the similarity between an input instance and its nearest neighbors to quantify its uncertainty. We assume that the density quantifies the uncertainty of a given instance, in the sense that higher density maximizes uncertainty; therefore, our proposed informative instance selection characterizes both the density and the uncertainty simultaneously. The sampling strategy gives a sample with higher uncertainty and density more weight than other samples at each learning cycle. The sampling and training process is defined in the following steps, with a sketch of the resulting loop given after the list: \begin{itemize} \item First, we partition the data into training and validation sets. We select 10\% of the data for the training process. \item A base classifier is initially trained using the training set. \item The trained model is then used to label the instances in the validation set. \item The instances with high density and uncertainty are sampled. \item The selected instances are added to the training data, the said samples are removed from the validation set, and the classifier is re-trained on the training set with the augmented samples. \item The process is carried out repeatedly until there are no instances left in the validation set. \end{itemize}
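A minimal sketch of this loop is given below. It assumes scikit-learn; the logistic-regression base learner, the least-confidence uncertainty, and the inverse-mean-distance density are illustrative stand-ins for the classifiers and the nearest-neighbor density measure described above, and the per-cycle selection size corresponds to the parameter \( J \) analyzed in Section IV.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def select_informative(clf, X_pool, J=20, n_neighbors=10):
    # Uncertainty: least-confidence score from the current classifier.
    uncertainty = 1.0 - clf.predict_proba(X_pool).max(axis=1)
    # Density: inverse mean distance to the nearest neighbors.
    k = min(n_neighbors, len(X_pool))
    dist, _ = NearestNeighbors(n_neighbors=k).fit(X_pool).kneighbors(X_pool)
    density = 1.0 / (1e-9 + dist.mean(axis=1))
    # Instances with the highest combined scores are selected.
    return np.argsort(uncertainty * density)[-J:]

def iterative_training(X_train, y_train, X_pool, J=20):
    clf = LogisticRegression(max_iter=1000)
    while len(X_pool) > 0:
        clf.fit(X_train, y_train)
        take = select_informative(clf, X_pool, min(J, len(X_pool)))
        # Pseudo-label the selected instances and move them to the training set.
        X_train = np.vstack([X_train, X_pool[take]])
        y_train = np.concatenate([y_train, clf.predict(X_pool[take])])
        X_pool = np.delete(X_pool, take, axis=0)
    return clf.fit(X_train, y_train)
\end{verbatim}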
\subsection{Classification and Fusion} A good classification algorithm can help to achieve better recognition performance. Although the proposed IIFNet can accommodate any classification algorithm, to evaluate its performance we compare eight techniques: decision trees, support vector machines, adaptive boosting, extreme gradient boosting, K-nearest neighbor, extreme learning machines, 1-D CNNs, and long short-term memory networks. The first six classifiers belong to the shallow learning category of classification methods. Each of the chosen classifiers belongs to a different family, which demonstrates the applicability of IIFNet across a wide spectrum of methods. The latter two belong to the deep learning category, toward which classification and detection strategies in the context of next generation communication systems have recently shifted. The importance of deep learning methods is justified by the works reviewed in the literature (Section II). The classifiers are trained in conjunction with automatic hyperparameter optimization, meaning that each classifier is trained and evaluated several times with varying parameters, and the configuration attaining the best performance is reported in the results. In the machine learning literature, studies tend to combine the results from multiple classifiers in order to boost the recognition performance. As we are training individual classifiers for the PSR and PCA transformations in IIFNet, we can naturally combine their predictions to further improve the preamble detection performance. In this study, we employ weighted averaging and meta-learning using Naïve Bayes to fuse the decision labels. The meta-learning strategy refers to a machine learning algorithm stacked onto the decisions of multiple base classification methods. IIFNet combines the two streams, that is, the best classifier results using PSR features and the best classifier results using PCA features. A Naïve Bayes classifier is trained on the class probabilities produced by the two streams mentioned above to yield the final output. This process is referred to as meta-learning using Naïve Bayes in this study. \par
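The two fusion rules can be sketched as follows, where p\_psr and p\_pca denote the class-probability outputs of the best PSR-stream and PCA-stream classifiers; the variable names and the Gaussian variant of Naïve Bayes are our illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fuse_weighted(p_psr, p_pca, w=0.5):
    # Decision-level fusion by weighted averaging of class probabilities.
    return (w * p_psr + (1.0 - w) * p_pca).argmax(axis=1)

def fuse_meta_nb(p_psr_tr, p_pca_tr, y_tr, p_psr_te, p_pca_te):
    # Meta-learning: stack a Naive Bayes classifier on the concatenated
    # probability outputs of the two base streams.
    meta = GaussianNB().fit(np.hstack([p_psr_tr, p_pca_tr]), y_tr)
    return meta.predict(np.hstack([p_psr_te, p_pca_te]))
\end{verbatim}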
\section{Experimental Setup and Results} In this section, we first provide a brief introduction to our primary data. We then present details regarding the experimental setup and the parameters of the proposed IIFNet framework. We first provide the results related to our problem formulation, i.e., the results without and with noise injection. These results provide a basis for the IIFNet framework and are also used as a baseline to evaluate the performance.\par \subsection{Dataset} The primary dataset for preamble detection has been collected in collaboration with a reputed commercial company based in Italy that operates in the field of wireless communication and specializes in small cell solutions, LTE/HSPA+, and C-RAN. The parameter settings for the data collection process are shown in Table \ref{tab:Tab1}. The dataset was collected according to the 3GPP standards for the radio access network. It includes the amplitude, threshold, variance, and SNR values with their corresponding labels, i.e., FalsePeak and Peak. The data comprise more than 100,000 samples, of which 92,000 samples and 8,000 samples correspond to FalsePeak and Peak, respectively. The dataset has varying scales; therefore, we performed mean normalization before providing the data to the IIFNet framework. \begin{table}[] \centering \caption{Parameter settings of the primary data collected.} \label{tab:Tab1} \begin{tabular}{|l|l|} \hline Parameter & Values \\ \hline System Bandwidth & 20 MHz \\ \hline PRACH Format & 0 \\ \hline Channel & AWGN/ETU 70 \\ \hline Doppler & 0/200 Hz \\ \hline Rx Antennas & 2 \\ \hline Ncs & 13 \\ \hline \begin{tabular}[c]{@{}l@{}}Subcarrier\\ Spacing\end{tabular} & 1.25 kHz \\ \hline \# of Sequences & 1000 \\ \hline Frame Length & 800 $\mu$sec \\ \hline SNR & 10 dB \\ \hline \end{tabular} \end{table} \subsection{Setup} All experiments in this study are carried out using MATLAB R2018b on a PC clocked at 3 GHz with 32 GB of RAM. We injected the data with 5$\%$, 10$\%$, and 15$\%$ noise levels. The PSR method requires the time lag and the number of embedding dimensions. As already mentioned, due to the IID characteristics, the time lag was set to 1. We empirically select the number of embedding dimensions for PSR and PCA to be 7 and 2, respectively. We use the hyperparameter toolbox to dynamically select the parameters for each classifier based on the best performance. The dataset was split into 70$\%$ and 30$\%$ for training and testing, respectively. For fair results, we repeated the experiments five times by randomly re-sampling the dataset splits; the results reported in the subsequent sections are averaged over these runs. As the primary dataset is highly imbalanced, we report performance in terms of accuracy and the F1 score. \subsection{Experimental Results} We present the results without noise and with varying noise levels for the different classifiers in Table 2. It was noticed that without noise, each of the classifiers yields 100$\%$ accuracy for preamble detection. However, when noise is added at varying levels, the accuracy decreases to $ \sim $ 66$\%$, which supports our problem formulation. Amongst all the classifiers, the extreme learning machine (ELM) yields the best performance. Therefore, we use ELM for the analysis regarding the selection of the value of \( J \), which represents the number of highly dense and uncertain samples selected per learning cycle. The sensitivity analysis for the selection of \( J \) is shown in Fig.~\ref{Fig5}. The results are obtained using the selection of informative instances on PSR features and learning through the ELM classifier while injecting 15$\%$ noise. The best results were achieved with \( J=20 \). The results show an improvement in the F1 score of 7.86$\%$ while learning from only 150 instances. This supports our assumption that the proposed sampling strategy makes the classifier noise-resilient. We report the F1-scores for each classifier trained on PSR and PCA features using IIFNet in Table \ref{tab:my-table}. The findings show that the PSR features perform better than the PCA features in general. The best-performing classifier in terms of preamble detection is ELM for both feature sets; therefore, we use the ELM predictions from both PSR and PCA to perform the DLF. It is also observed that the PSR and PCA features complement each other when used in a fusion scheme, as IIFNet can improve the preamble detection performance by approximately 33$\%$, which is quite remarkable.
\subsection{Experimental Results} We present the results with and without varying noise levels for the different classifiers in Table 2. Without noise, each classifier yields 100$\%$ accuracy for preamble detection. However, when noise is added at increasing levels, the accuracy drops to $\sim$66$\%$, which supports our problem formulation. Amongst all the classifiers, the extreme learning machine (ELM) yields the best performance; therefore, we use ELM for the analysis regarding the selection of the value of \( J \), which represents the number of highly dense and uncertain samples. The sensitivity analysis for the selection of \( J \) is shown in Fig.~\ref{Fig5}. These results are obtained by applying informative-instance selection to PSR features and learning with the ELM classifier while injecting 15$\%$ noise. The best results were achieved with \( J=20 \): an improvement in the F1 score of 7.86$\%$ while learning from only 150 instances, which supports our assumption that the proposed sampling strategy makes the classifier noise resilient. We report the F1-scores for each classifier trained on PSR and PCA features using IIFNet in Table \ref{tab:my-table}. The findings show that the PSR features generally perform better than the PCA features. The best performing classifier for preamble detection is ELM for both feature sets; therefore, we use the ELM predictions from both PSR and PCA to perform the DLF. It is also observed that the PSR and PCA features complement each other when used in a fusion scheme, as IIFNet improves the preamble detection performance by approximately 33$\%$, which is quite remarkable. It should also be noted that the proposed framework (without fusion) improves the preamble detection performance for each classifier, in general. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{Sensitivity_analysis_J.jpg} \caption{Results of the proposed sampling strategy in IIFNet on the collected dataset, with $J$ varying from 5 to 200. The results are obtained using PSR features trained with ELM.} \label{Fig5} \end{figure} \begin{table}[] \centering \caption{F1-Scores for Each Classifier using Sampling Strategy with PSR, PCA, and Proposed IIFNet} \label{tab:my-table} \begin{tabular}{|lccc|} \hline \multicolumn{1}{|l|}{Classification Method} & \multicolumn{1}{l|}{5\% Noise} & \multicolumn{1}{l|}{10\% Noise} & \multicolumn{1}{l|}{15\% Noise} \\ \hline \multicolumn{4}{|c|}{PSR Features} \\ \hline \multicolumn{1}{|l|}{Decision Trees} & \multicolumn{1}{c|}{0.9337} & \multicolumn{1}{c|}{0.8943} & 0.7236 \\ \hline \multicolumn{1}{|l|}{Support Vector Machines} & \multicolumn{1}{c|}{0.9786} & \multicolumn{1}{c|}{0.9274} & 0.8284 \\ \hline \multicolumn{1}{|l|}{Adaptive Boosting} & \multicolumn{1}{c|}{0.9449} & \multicolumn{1}{c|}{0.8524} & 0.7623 \\ \hline \multicolumn{1}{|l|}{Extreme Gradient Boosting} & \multicolumn{1}{c|}{0.9562} & \multicolumn{1}{c|}{0.8978} & 0.7845 \\ \hline \multicolumn{1}{|l|}{K-Nearest Neighbor} & \multicolumn{1}{c|}{0.9300} & \multicolumn{1}{c|}{0.8876} & 0.7669 \\ \hline \multicolumn{1}{|l|}{Extreme Learning Machines} & \multicolumn{1}{c|}{0.9926} & \multicolumn{1}{c|}{0.9417} & 0.8592 \\ \hline \multicolumn{1}{|l|}{1-D Convolutional Neural Networks} & \multicolumn{1}{c|}{0.9467} & \multicolumn{1}{c|}{0.8725} & 0.7934 \\ \hline \multicolumn{1}{|l|}{Long Short-Term Memory Networks} & \multicolumn{1}{c|}{0.9624} & \multicolumn{1}{c|}{0.9148} & 0.8377 \\ \hline \multicolumn{4}{|c|}{PCA Features} \\ \hline \multicolumn{1}{|l|}{Decision Trees} & \multicolumn{1}{c|}{0.9293} & \multicolumn{1}{c|}{0.8465} & 0.6922 \\ \hline \multicolumn{1}{|l|}{Support Vector Machines} & \multicolumn{1}{c|}{0.9367} & \multicolumn{1}{c|}{0.8923} & 0.7847 \\ \hline \multicolumn{1}{|l|}{Adaptive Boosting} & \multicolumn{1}{c|}{0.9148} & \multicolumn{1}{c|}{0.8229} & 0.7116 \\ \hline \multicolumn{1}{|l|}{Extreme Gradient Boosting} & \multicolumn{1}{c|}{0.9209} & \multicolumn{1}{c|}{0.8577} & 0.7342 \\ \hline \multicolumn{1}{|l|}{K-Nearest Neighbor} & \multicolumn{1}{c|}{0.9134} & \multicolumn{1}{c|}{0.8241} & 0.7192 \\ \hline \multicolumn{1}{|l|}{Extreme Learning Machines} & \multicolumn{1}{c|}{0.9408} & \multicolumn{1}{c|}{0.9066} & 0.8138 \\ \hline \multicolumn{1}{|l|}{1-D Convolutional Neural Networks} & \multicolumn{1}{c|}{0.9223} & \multicolumn{1}{c|}{0.8591} & 0.7370 \\ \hline \multicolumn{1}{|l|}{Long Short-Term Memory Networks} & \multicolumn{1}{c|}{0.9321} & \multicolumn{1}{c|}{0.8937} & 0.7875 \\ \hline \multicolumn{4}{|c|}{IIFNet} \\ \hline \multicolumn{1}{|l|}{Weighted Averaging} & \multicolumn{1}{c|}{1.00} & \multicolumn{1}{c|}{0.9723} & 0.8917 \\ \hline \multicolumn{1}{|l|}{Meta-learning (NB)} & \multicolumn{1}{c|}{1.00} & \multicolumn{1}{c|}{0.9837} & 0.9274 \\ \hline \end{tabular} \end{table} \section{Open Issues, Challenges, and Future Directions} \begin{itemize} \item \textbf{\textit{Large-scale Fading}}: Like noise, fading plays an important role in signal degradation. In particular, Next-G networks employ cell-free massive MIMO.
Fig.~\ref{Fig1} illustrates that multiple antennas are co-located in a cell-free massive MIMO system; the signal therefore undergoes large-scale fading in a different manner at each antenna. In such cases, the preamble detection system may incur delays, i.e., preambles are re-sent after a specific amount of time. We believe that IIFNet can be applied effectively to the fading problem if data can be collected for this phenomenon; this is currently one of the future directions of this study. \item \textbf{\textit{Macro Diversity and Spatial Sparsity}}: Next-G networks propose the use of non-terrestrial networks (NTNs) in combination with infrastructure-based systems. Macro diversity and spatial sparsity are missing from infrastructure-based systems, as devices that are not close to an antenna or lack a strong signal are ignored. As illustrated in Fig.~\ref{Fig1}, the NTN overcomes this limitation but adds the challenge of asynchronous reception, in which signals arrive at distributed antennas with differing propagation delays, complicating preamble detection. IIFNet can be combined with compensation schemes or timing estimation methods to reduce the effect of asynchronous reception on preamble detection performance. \item \textbf{\textit{Device Diversity}}: Fig.~\ref{Fig1} illustrates the use of satellites, high altitude platform stations, and UAVs as NTN nodes; these are often referred to as 3D networks. The preamble detection method can be affected by the diversity of device types, antenna designs, line-of-sight availability at high altitudes, and other characteristic disparities. Acquiring data separately for each of the NTN devices may not be possible at first. In this regard, domain adaptation techniques could be exploited in conjunction with IIFNet to improve the detection performance in Next-G 3D networks. \item \textbf{\textit{Energy Efficiency}}: When communication infrastructure is not available, UAVs can be deployed to collect data for random access in the capacity of MIMO gateways, as shown in Fig.~\ref{Fig1}. Since UAVs are constrained in energy, battery capacity, and computational resources, it is crucial that the detection techniques employed are lightweight. We have shown that IIFNet is not confined to a specific classification method; therefore, lightweight detection techniques can be used to further reduce the computational complexity and increase energy efficiency. \item \textbf{\textit{Anomaly Detection}}: With diversified NTN nodes and device diversity, the system may encounter unwanted broadcasts resulting from anomalous equipment behavior. These broadcasts can occur while initiating a request to the gNB/eNB. As IIFNet uses PSR and PCA to transform the feature space, it can naturally be extended to detect the device type as well, provided that the data embed device-type information. On the basis of device identification, anomalous behavior can also be detected concurrently. \end{itemize} \section{Conclusion} This paper focuses on the detection of the preamble for the PRACH process in Next-G communication networks. We showed that injecting noise into the data degrades the performance to 48.75$\%$ in terms of the F1 score. In this regard, we proposed IIFNet, which uses a novel sampling strategy to select informative instances from the transformed feature spaces.
Furthermore, the use of two different feature transformation methods allows us to perform DLF on the classifier predictions so that the detection performance can be further improved. IIFNet improves the F1 score by 33.2$\%$ over the baseline, using the ELM classifier with Naïve Bayes as a meta-learner for DLF. We expect that reducing the false detection of peaks will help communication systems such as 6G maintain high throughput, since it reduces the delays caused by repeated requests. \section*{Acknowledgment} The authors thank all contributors who shared data with us for this analysis.
\section{Introduction} The equivalence problem \cite{[2.5]} for the fifth-order operator asks whether two fifth-order differential operators on the real line can be transformed into each other by an appropriate change of variables \cite{[2], [3], [4], [5]}. We treat both versions of the equivalence problem: the direct equivalence problem, and the problem of determining conditions on two differential operators such that there exists a fiber-preserving transformation mapping one to the other according to gauge equivalence. We associate a collection of one-forms to the object under investigation in the original coordinates; the corresponding object in the new coordinates will have its own collection of one-forms. Once an equivalence problem has been reformulated in the proper Cartan form, in terms of a coframe $\omega$ on the $m$-dimensional base manifold $M$ along with a structure group $G\subset {\rm GL}(m)$, we can apply the Cartan equivalence method. The goal is to normalize the structure-group-valued coefficients in a suitably invariant manner, and this is accomplished through the determination of a sufficient number of invariant combinations thereof \cite{[1]}. The classification of linear differential equations is a special case of the general problem of classifying differential operators, which has a variety of important applications, including quantum mechanics and the projective geometry of curves \cite{[1]}. In the last section, applications of this method to fifth-order differential operators are presented. S. S. Chern \cite{[5]} and Hajime Sato et al. \cite{[6]} turned their attention to the problem under contact transformations, but their treatments are specialized to the linear case. Niky Kamran and Peter J. Olver solved both versions of the equivalence problem for the second-order differential operator \cite{[7]}, Mehdi Nadjafikhah and Rohollah Bakhshandeh-Chamazkoti solved this problem for third-order operators \cite{[10]}, and finally Rohollah Bakhshandeh-Chamazkoti solved it for fourth-order operators \cite{[11]}. Fifth-order operators carry a different geometry, and their treatment in this paper presents new challenges. \section{The Cartan equivalence method} We first review Cartan's equivalence problem as an algorithmic method that includes the structure equations, normalization, and absorption. The standard reference is \cite{[1]}. Let $G\subset {\rm GL}(m)$ be a Lie group, and let $\omega$ and $\overline{\omega}$ denote coframes defined on the $m$-dimensional manifolds $M$ and $\overline{M}$, respectively. The $G$-valued equivalence problem for these coframes is to determine whether or not there exist a local diffeomorphism $\Phi:M\to\overline{M}$ and a $G$-valued function $g:M\to G$ with the property that \begin{eqnarray}\label{equiv} \Phi^*(\overline{\omega})=g(x)\omega. \end{eqnarray} In full detail, the equivalence condition (\ref{equiv}) has the form \begin{eqnarray}\label{555} \Phi^*(\overline{\omega}^i)=\sum_{j=1}^mg_j^i(x)\omega^j, \end{eqnarray} for $i=1, \ldots, m$, where the functions $g_j^i(x)$ are the entries of the matrix $g(x)$, which is constrained to belong to the structure group $G$ at each point $x\in M$. In view of the group property of $G$, this condition will be satisfied if and only if we can find a pair of $G$-valued functions $\overline{g}(\overline{x})$ and $g(x)$ such that, omitting pull-backs for clarity, \begin{eqnarray}\label{gequiv} \overline{g}(\overline{x})\overline\omega=g(x)\omega.
\end{eqnarray} Our goal is to reduce a given $G$-equivalence problem to a standard equivalence problem for coframes, and the way to do that is to specify the matrix entries of $g(x)$ and $\overline{g}(\overline{x})$ as functions of their respective coordinates. The new coframes, defined by \begin{eqnarray}\label{newcoframes} \theta^i=\sum_{j=1}^mg_j^i\omega^j, \quad \quad \overline {\theta}^i=\sum_{j=1}^m \overline{g}_j^i\overline{\omega}^j, \end{eqnarray} will then be invariant: $\Phi^*(\overline{\theta}^i)=\theta^i$. This motivates the preliminary step in the Cartan solution to the equivalence problem, which is to introduce the lifted coframe \begin{eqnarray}\label{lifcofram2} \theta=g.\omega, \end{eqnarray} or, in full detail, \begin{eqnarray}\label{556} \theta^i=\sum_{j=1}^mg_j^i(x)\omega^j. \end{eqnarray} We compute the differentials of the lifted coframe elements: \begin{eqnarray}\label{eq:diffliftedcoframe} d\theta^i=d\Big(\sum_{j=1}^mg_j^i\omega^j\Big) =\sum_{j=1}^m\{dg_j^i\wedge\omega^j+g_j^id\omega^j\}. \end{eqnarray} Since the $\omega^i$ form a coframe on $M$, one can rewrite the 2-forms $d\omega^j$ in terms of sums of wedge products of the $\omega^i$'s. Moreover, in view of (\ref{lifcofram2}), these can be rewritten as wedge products of the $\theta^k$'s, so that \begin{eqnarray}\label{eq:Differentliftedcoframe} d\theta^i=\sum_{j=1}^m\gamma_j^i\wedge\theta^j +\sum_{\substack{j,k=1\\j<k}}^mT_{jk}^i(x,g)\theta^j\wedge\theta^k,\quad i=1, \ldots, m. \end{eqnarray} The functions $T_{jk}^i$ are called torsion coefficients. The torsion coefficients may be constants, or may depend on the base variables $x$ and the group parameters $g$. Some of the torsion coefficients may turn out to be invariants, but typically they are not invariant for the problem. The $\gamma_j^i$'s in (\ref{eq:Differentliftedcoframe}) are the 1-forms \begin{eqnarray}\label{eq:gamma1} \gamma_j^i=\sum_{k=1}^mdg_k^i(g^{-1})_j^k, \end{eqnarray} which have the matrix form \begin{eqnarray}\label{eq:gamma2} \gamma=dg\cdot g^{-1}. \end{eqnarray} The matrix $\gamma$ consists of Maurer-Cartan forms on the structure group $G$. Assume the set $\{\alpha^1, \ldots, \alpha^r\}$ is a basis for the space of Maurer-Cartan forms; then each $\gamma_j^i$ is a linear combination of the Maurer-Cartan basis: \begin{eqnarray}\label{eq:gammamurcartan} \gamma_j^i=\sum_{l=1}^rA_{jl}^i\alpha^l,\qquad i, j=1, \ldots, m. \end{eqnarray} Thus the final structure equations for our lifted coframe, in terms of the Maurer-Cartan forms, have the following general form \begin{eqnarray}\label{eq:finalstructequans} d\theta^i=\sum_{l=1}^r\sum_{j=1}^mA_{jl}^i\alpha^l\wedge\theta^j +\sum_{\substack{j,k=1\\j<k}}^mT_{jk}^i(x,g)\theta^j\wedge\theta^k,\quad i=1, \ldots, m. \end{eqnarray} Now one can reduce the Maurer-Cartan forms $\alpha^l$ back to the base manifold $M$ by replacing them by general linear combinations of coframe elements \begin{eqnarray}\label{eq:subsmurcart} \alpha^l\mapsto\sum_{j=1}^mz_j^l\theta^j, \end{eqnarray} where the $z_j^l$ are as yet unspecified coefficients whose explicit dependence on $x$ and $g$ is to be determined.
By substituting (\ref{eq:subsmurcart}) into the structure equations (\ref{eq:finalstructequans}), one obtains a system of 2-forms \begin{eqnarray}\label{eq:twoformsys} \Theta^i=\sum_{\substack{j,k=1\\j<k}}^m \{B_{jk}^i[{\bf z}]+T_{jk}^i(x, g)\}\theta^j\wedge\theta^k,\quad i=1,\ldots, m, \end{eqnarray} where \begin{eqnarray}\label{eq:Bvalue} B_{jk}^i[{\bf z}]=\sum_{l=1}^r(A_{kl}^iz_j^l-A_{jl}^iz_k^l), \end{eqnarray} are linear functions of the coefficients ${\bf z}=(z_k^l)$, whose constant coefficients are determined by the specific representation of the structure group $G\subset {\rm GL}(m)$, and so do not depend on the coordinate system. The general process of determining the unknown coefficients ${\bf z}$ from the full torsion coefficients is known as {\it absorption of torsion}; the subsequent step is the {\it normalization} of the resulting invariant torsion coefficients, as described above. Replacing each Maurer-Cartan form $\alpha^l$ by the modified 1-form \begin{eqnarray}\label{eq:modifiedoneform} \pi^l=\alpha^l-\sum_{i=1}^mz_i^l\theta^i,\qquad l=1, \ldots, r, \end{eqnarray} allows us to absorb the inessential torsion in the structure equations (\ref{eq:finalstructequans}). Here the $z_i^l=z_i^l(x,g)$ are the solutions to the absorption equations. Thus the structure equations take the simpler absorbed form \begin{eqnarray}\label{eq:observedform} d\theta^i=\sum_{l=1}^r\sum_{j=1}^mA_{jl}^i\pi^l\wedge\theta^j +\sum_{\substack{j,k=1\\j<k}}^mU_{jk}^i\theta^j\wedge\theta^k,\quad i=1, \ldots, m, \end{eqnarray} where the remaining nonzero coefficients $U_{jk}^i$ consist only of essential torsion. We write the linear system of absorption equations \begin{eqnarray}\label{eq:absorbsys} \sum_{l=1}^r(A_{jl}^iz_k^l-A_{kl}^iz_j^l)=-T_{jk}^i, \end{eqnarray} and solve for the unknowns ${\bf z}$ using standard Gaussian elimination. \section{Equivalence of fifth order differential operators} Consider the fifth-order differential operator applied to a scalar-valued function $u(x)$, \begin{eqnarray}\label{eq:2.1} {\mathcal D}[u]=\sum_{i=0}^5 f_i(x)\,D^iu, \end{eqnarray} and another fifth-order differential operator applied to a scalar-valued function $\bar{u}(\bar{x})$, \begin{eqnarray}\label{eq:2.2} {\bar{\mathcal D}}[\bar{u}]=\sum_{i=0}^5 \bar{f}_i(\bar{x})\,\bar{D}^i\bar{u}, \end{eqnarray} where $f_i$ and $\bar{f}_i$, $i=0, 1, \ldots, 5$, are analytic functions of the real variables $x$ and $\bar{x}$, respectively. For simplicity we let $f_5=\bar{f}_5=1$. Further, $D^i=d^i/dx^i$ and $\bar{D}^i=d^i/d\bar{x}^i$, while $D^0=\bar{D}^0={\rm Id}$ are the identity operators. The appropriate space to work in will be the fifth jet space ${\rm J}^5$, which has local coordinates $$\Upsilon=\{(x, u, p, q, r, s, t)\in{\rm J}^5: p=u_x, q=u_{xx}, r=u_{xxx}, s=u_{xxxx}, t=u_{xxxxx}\},$$ and the goal is to determine whether there exists a suitable transformation of variables $(x, u, p, q, r, s, t)\longrightarrow (\bar{x}, \bar{u}, \bar{p}, \bar{q}, \bar{r}, \bar{s}, \bar{t})$ which brings (\ref{eq:2.1}) to (\ref{eq:2.2}). Several types of such transformations are of particular importance. Here we consider fiber-preserving transformations, which are of the form \begin{eqnarray}\label{eq:2.3} \bar{x}=\xi(x),\qquad \bar{u}=\varphi(x)\,u, \end{eqnarray} where $\varphi(x)\neq0$. Using the chain rule we find the following relation between the total derivative operators \begin{eqnarray}\label{eq:2.4} \bar{D}=\frac{d}{d\bar{x}}=\frac{1}{\xi'(x)}\, \frac{d}{dx}=\frac{1}{\xi'(x)}\,D. \end{eqnarray}
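For illustration (this worked computation is our addition, and follows directly from (\ref{eq:2.3}) and (\ref{eq:2.4})), the first two derivative coordinates transform as
\begin{eqnarray*}
\bar{p}=\bar{D}\bar{u}=\frac{\varphi' u+\varphi\,p}{\xi'},\qquad
\bar{q}=\bar{D}\bar{p}=\frac{\varphi'' u+2\varphi' p+\varphi\,q}{(\xi')^{2}}-\frac{\xi''\,(\varphi' u+\varphi\,p)}{(\xi')^{3}},
\end{eqnarray*}
so that each $\bar{D}^i\bar{u}$ is a linear combination of $u, p, q, \ldots$ with coefficients depending only on $\xi$, $\varphi$ and their derivatives; this underlies the triangular form of the structure group appearing in (\ref{eq:2.22}) below.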
First we consider the {\it direct equivalence problem}, which identifies the two linear differential functions \begin{eqnarray}\label{eq:2.5} {{\mathcal D}}[u]=\bar{{\mathcal D}}[\bar{u}], \end{eqnarray} under the change of variables (\ref{eq:2.3}). This induces the transformation rule \begin{eqnarray}\label{eq:2.6} \bar{{\mathcal D}}={{\mathcal D}}\cdot\frac{1}{\varphi(x)} \hspace{1cm}\mbox{when}\hspace{1cm}\bar{x}=\xi(x), \end{eqnarray} on the differential operators themselves, and solving the local direct equivalence problem amounts to finding explicit conditions on the coefficients of the two differential operators that guarantee (\ref{eq:2.5}) for some change of variables of the form (\ref{eq:2.3}). The transformation rule (\ref{eq:2.6}) preserves neither the eigenvalue problem ${{\mathcal D}}[u]=\lambda u$ nor the Schr$\rm{\ddot{o}}$dinger equation $iu_t={{\mathcal D}}[u]$, since we are missing a factor of $\varphi(x)$. To remedy this, we consider the {\it gauge equivalence} with the following transformation rule \begin{eqnarray}\label{eq:2.7} \bar{\mathcal D}=\varphi(x)\cdot{\mathcal D}\cdot\frac{1}{\varphi(x)} \hspace{1cm}\mbox{when}\hspace{1cm}\bar{x}=\xi(x). \end{eqnarray} \begin{prop}\label{prop1} Let ${\mathcal D}$ and $\bar{\mathcal D}$ denote fifth-order differential operators, and let $\Omega=\{\omega^1,\ldots,\omega^7\}$ and $\bar{\Omega}=\{\bar{\omega}^1,\ldots,\bar{\omega}^7\}$ be the coframes constructed below on open subsets $\Gamma$ and $\bar{\Gamma}$ of the fifth jet space, respectively. The differential operators are equivalent under the pseudogroup (\ref{eq:2.3}), according to the respective transformation rules (\ref{eq:2.6}) and (\ref{eq:2.7}), precisely when the coframes $\Omega$ and $\bar{\Omega}$ satisfy the relation \begin{eqnarray}\label{eq:2.22} \left(\begin{array}{c} \bar{\omega}^1 \\ \bar{\omega}^2 \\ \bar{\omega}^3 \\ \bar{\omega}^4 \\ \bar{\omega}^5 \\ \bar{\omega}^6 \\ \bar{\omega}^7 \end{array}\right) = \left(\begin{array}{ccccccc} a_1&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0 \\ 0&a_2&a_3&0&0&0&0 \\ 0&a_4&a_5&a_6&0&0&0 \\ 0&a_7&a_8&a_9&a_{10}&0&0 \\ 0&a_{11}&a_{12}&a_{13}&a_{14}&a_{15}&0 \\ 0&0&0&0&0&0&1 \end{array}\right) \left(\begin{array}{c} \omega^1 \\ \omega^2 \\ \omega^3 \\ \omega^4 \\ \omega^5 \\ \omega^6 \\ \omega^7 \end{array}\right), \end{eqnarray} where $a_i\in{\Bbb R}$ for $i=1,\cdots,15$ and $a_1a_3a_6a_{10}a_{15}\neq0$. \end{prop} {\bf Proof:} Note that a point transformation will be in the desired linear form (\ref{eq:2.3}) if and only if, for a pair of functions $\alpha=\xi_x$ and $\beta=\varphi_x/\varphi$, the one-form equations \begin{eqnarray}\label{eq:2.8} d\bar{x}&=&\alpha\;dx,\\ \label{eq:2.9} \frac{d\bar u}{\bar u}&=&\frac{du}{u}+\beta\;dx, \end{eqnarray} hold on the subset of ${\rm J}^5$ where $u\neq0$. In order that the derivative variables $p, q, r, s$ and $t$ transform correctly, we need to preserve the contact ideal ${\mathcal I}$ on ${\rm J}^5$, which is \begin{eqnarray}\label{eq:2.10} {\mathcal I}=\langle du-p\;dx, dp-q\;dx, dq-r\;dx, dr-s\;dx, ds-t\;dx\rangle.
\end{eqnarray} Generally, a diffeomorphism $\Phi:{\rm J}^5\to {\rm J}^5$ determines a contact transformation if and only if \begin{eqnarray}\label{eq:2.11} d\bar{u}-\bar{p}\;d\bar{x}&=&a_1(du-p\;dx), \\ \label{eq:2.12} d\bar{p}-\bar{q}\;d\bar{x}&=&a_2(du-p\;dx)+a_3(dp-q\;dx), \\ \label{eq:2.13} d\bar{q}-\bar{r}\;d\bar{x}&=&a_4(du-p\;dx)+a_5(dp-q\;dx)+a_6(dq-r\;dx), \\ \label{eq:2.13a} d\bar{r}-\bar{s}\;d\bar{x}&=&a_7(du-p\;dx)+a_8(dp-q\;dx)+a_9(dq-r\;dx)+a_{10}(dr-s\;dx), \\\label{eq:2.13b} d\bar{s}-\bar{t}\;d\bar{x}&=&a_{11}(du-p\;dx)+a_{12}(dp-q\;dx)+a_{13}(dq-r\;dx)+a_{14}(dr-s\;dx) \\ \nonumber && +a_{15}(ds-t\;dx), \end{eqnarray} where the $a_{i}$ are functions on ${\rm J}^5$. The combination of the first contact condition (\ref{eq:2.11}) with the linearity conditions (\ref{eq:2.8}) and (\ref{eq:2.9}) constitutes part of an overdetermined equivalence problem. Taking $\beta=-p/u$ and $a_1=1/u$ in (\ref{eq:2.9}) and (\ref{eq:2.11}), one finds the one-form \begin{eqnarray}\label{eq:2.14} \frac{d\bar{u}-\bar{p}\;d\bar x}{\bar{u}}=\frac{du-p\;dx}{u}, \end{eqnarray} which is invariant, and (\ref{eq:2.14}) can replace both (\ref{eq:2.9}) and (\ref{eq:2.11}). Therefore, we may choose as six elements of our coframe the one-forms \begin{eqnarray}\label{eq:2.15} \omega^1=dx , \;\omega^2=\dfrac{du-p\;dx}{u},\;\omega^3=dp-q\;dx, \;\omega^4=dq-r\;dx, \;\omega^5=dr-s\;dx,\;\omega^6=ds-t\;dx, \end{eqnarray} which are defined on the fifth jet space ${\rm J}^5$, locally parameterized by $(x, u, p, q, r, s, t)$, with the transformation rules \begin{eqnarray}\nonumber \bar{\omega}^1&=&a_1\omega^1,\\\nonumber \bar{\omega}^2&=&\omega^2,\\\nonumber \bar{\omega}^3&=&a_2\omega^2+a_3\omega^3,\\\nonumber \bar{\omega}^4&=&a_4\omega^2+a_5\omega^3+a_6\omega^4,\\ \nonumber \bar{\omega}^5&=&a_7\omega^2+a_8\omega^3+a_9\omega^4+a_{10}\omega^5, \\ \label{eq:2.16} \bar{\omega}^6&=&a_{11}\omega^2+a_{12}\omega^3+a_{13}\omega^4+a_{14}\omega^5+a_{15}\omega^6. \end{eqnarray} According to (\ref{eq:2.5}), the function $I(x, u, p, q, r, s, t)={{\mathcal D}}[u]=t+f_4(x)s+f_3(x)r+f_2(x)q+f_1(x)p+f_0(x)u$ is an invariant for the problem, and thus its differential \begin{eqnarray} \label{eq:2.17} \omega^7 = dI = dt+f_4ds+f_3dr+f_2dq+f_1dp+f_0du + (f'_4s+f'_3r+f'_2q+f'_1p+f'_0u)dx, \end{eqnarray} is an invariant one-form, which we take as the final element of our coframe. In the second problem (\ref{eq:2.7}), owing to the extra factor of $\varphi$, the invariant is \begin{eqnarray}\label{eq:2.18} I(x, u, p, q, r, s, t)=\frac{{\mathcal D}[u]}{u}=\frac{t+f_4(x)s+f_3(x)r+f_2(x)q+f_1(x)p}{u}+f_0(x). \end{eqnarray} Thus one finds \begin{eqnarray}\label{eq:2.19} \omega^7=dI&=&\frac{1}{u}\;dt+\frac{f_4}{u}\;ds+\frac{f_3}{u}\;dr+\frac{f_2}{u}\;dq+\frac{f_1}{u}\;dp-\frac{t+f_4s+f_3r+f_2q+f_1p}{u^2}\;du \\ \nonumber &&+\Big\{\frac{f'_4s+f'_3r+f'_2q+f'_1p}{u}+f'_0\Big\}\;dx, \end{eqnarray} as the final coframe element for the equivalence problem (\ref{eq:2.7}). The set of one-forms $$\Omega=\{\omega^1,\omega^2,\omega^3,\omega^4,\omega^5,\omega^6,\omega^7\}$$ is a coframe on the subset \begin{eqnarray}\label{eq:2.20} \Gamma^*=\Big\{(x, u, p, q, r, s, t)\in {\rm J}^5\,\Big|\,u\neq0\;\mbox{and}\;f_5(x)\neq0 \Big\}. \end{eqnarray} We restrict attention to a connected component $\Gamma\subset\Gamma^*$ of the subset (\ref{eq:2.20}) on which the signs of $f_0(x)$ and $u$ are fixed. This means that the last coframe elements agree, \begin{eqnarray}\label{eq:2.21} \bar{\omega}^7=\omega^7.
\end{eqnarray} In view of the relations (\ref{eq:2.16}) and (\ref{eq:2.21}), the structure group associated with the equivalence problems (\ref{eq:2.6}) and (\ref{eq:2.7}) is the 15-dimensional matrix group $G$ of (\ref{eq:2.22}), satisfying $\bar{\Omega}=G\Omega$; the {\it lifted coframe} on the space ${\rm J}^5\times G$ then has the form \begin{eqnarray}\nonumber &&\theta^1=a_1\omega^1,\\ \nonumber &&\theta^2=\omega^2,\\\label{eq:2.24} &&\theta^3=a_2\omega^2+a_3\omega^3,\\ \nonumber &&\theta^4=a_4\omega^2+a_5\omega^3+a_6\omega^4,\\ \nonumber &&\theta^5=a_7\omega^2+a_8\omega^3+a_9\omega^4+a_{10}\omega^5,\\\nonumber &&\theta^6=a_{11}\omega^2+a_{12}\omega^3+a_{13}\omega^4+a_{14}\omega^5+a_{15}\omega^6,\\\nonumber &&\theta^7=\omega^7. \end{eqnarray} In the following, two important results are presented in the form of two theorems. \begin{thm}\label{thm:1} The final structure equations for the direct equivalence problem, with the coframe (\ref{eq:2.15}) and (\ref{eq:2.17}), are \begin{eqnarray}\nonumber &&d\theta^1={1\over5}\theta^1\wedge\theta^2, \\ \nonumber && d\theta^2=\theta^1\wedge\theta^3, \\ \label{eq:4.8} &&d\theta^3=\theta^1\wedge\theta^4+{1\over5}\theta^2\wedge\theta^3, \\ \nonumber &&d\theta^4=I_1\theta^1\wedge\theta^4+\theta^1\wedge\theta^5+{2\over5}\theta^2\wedge\theta^4, \\ \nonumber &&d\theta^5=I_2\theta^1\wedge\theta^4+\theta^1\wedge\theta^6+{3\over5}\theta^2\wedge\theta^5+{17\over5}\theta^3\wedge\theta^4, \\ \nonumber &&d\theta^6=I_3\theta^1\wedge\theta^2+I_4\theta^1\wedge\theta^3+I_5\theta^1\wedge\theta^4+\theta^1\wedge\theta^7+{4\over5}\theta^2\wedge\theta^6+I_6\theta^3\wedge\theta^4+4\theta^3\wedge\theta^5, \\ \nonumber &&d\theta^7=0, \end{eqnarray} where the coefficients $I_1, I_2, I_3, I_4, I_5$ and $I_6$ are \begin{eqnarray} I_1&=&-{1\over \sqrt[5]{u^4}}\left[f_4u+3p\right],\\\nonumber I_2&=&{1\over 5\sqrt[5]{u^8}}\big[10\dot{f}_4u^2-12f_4pu-5f_3u^2-9p^2-10qu\big],\\\nonumber I_3&=&-(f_0u+f_1p+f_2q+f_3r+f_4s+t),\\ \nonumber I_4&=&-{1\over625 \sqrt[5]{u^{16}}}\big[625u^4f_1-800u^2f_4pq+2375u^3f_4r+1770p^2qu-1275pru^2+3000su^3\\ &&+270f_4p^3u-225u^2f_3p^2+1750u^3f_3q+1125u^3f_2p-594p^4-800q^2u^2\big],\\ \label{hhhhj} I_5&=&7-{1\over25 \sqrt[5]{u^{12}}}\big[25u^3f_2+6up^2f_4+65u^2qf_4-55pu^2\dot{f}_4+50f_3pu^2-25u^3\dot{f}_3\\ \nonumber &&+25u^3\dot{f}_4+33p^3-45pqu+100ru^2\big],\\\nonumber I_6&=&-{1\over \sqrt[5]{u^4}}(f_4u+3p). \end{eqnarray} \end{thm}
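As a quick illustration of how these invariants specialize (our addition, obtained by setting $f_0=\cdots=f_4=0$ in the formulas above), for the constant-coefficient operator ${\mathcal D}=D^5$ one finds
\begin{eqnarray*}
I_1=I_6=-\frac{3p}{\sqrt[5]{u^4}},\qquad
I_2=-\frac{9p^2+10qu}{5\sqrt[5]{u^8}},\qquad
I_3=-t,
\end{eqnarray*}
so that the invariants reduce to functions of the jet coordinates alone.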
\end{eqnarray} \end{thm} \begin{thm}\label{thm:2} The final structure equations for the gauge equivalence problem, with the coframe (\ref{eq:2.15}) and (\ref{eq:2.19}), are \begin{eqnarray}\nonumber &&d\theta^1=0, \\ \nonumber && d\theta^2=\theta^1\wedge\theta^3, \\ \nonumber &&d\theta^3=\theta^1\wedge\theta^4, \\\label{eq:ddd} &&d\theta^4=L_1\theta^1\wedge\theta^4+\theta^1\wedge\theta^5, \\ \nonumber &&d\theta^5=L_2\theta^1\wedge\theta^4+\theta^1\wedge\theta^6+5\theta^3\wedge\theta^4, \\ \nonumber &&d\theta^6=L_3\theta^1\wedge\theta^3+L_4\theta^1\wedge\theta^4+\theta^1\wedge\theta^7+L_5\theta^3\wedge\theta^4+5\theta^3\wedge\theta^5, \\ \nonumber &&d\theta^7=0, \end{eqnarray} where the coefficients $L_1, \ldots, L_5$ are \begin{eqnarray}\nonumber L_1&=&-{1\over u}\left[f_4u+5p\right],\\ \nonumber L_2&=&{1\over u^2}\left[2\dot{f}_4u^2-f_3u^2-4f_4pu-10p^2\right],\\ \nonumber L_3&=&-{1\over u}\left[2pf_2+3f_3q+4f_4r+f_1u+5s\right],\\ \label{th2eq2} L_4&=&{1\over u^3}\big[4\dot{f}_4pu^2-f_2u^3+\dot{f}_3u^3-3f_3pu^2-4f_4p^2u-2u^2f_4q-\ddot{f}_4u^3-10p^3+5pqu-5ru^2 \big],\\ \nonumber L_5&=&-{1\over u}(f_4u+5p). \end{eqnarray} \end{thm} \section{The proof of Theorem \ref{thm:1}} We begin by taking the six one-forms (\ref{eq:2.15}), together with (\ref{eq:2.17}), as the constituents of our initial coframe, so that the equivalence problem becomes a $G$-equivalence problem. Normalizing the problem yields (\ref{eq:2.24}), which we use in place of the one-forms (\ref{eq:2.15}) and (\ref{eq:2.17}). The goal is to reduce the base manifold $M\times G$ to $M\times G'$ with ${\rm dim}~ G'<{\rm dim}~ G$, and ultimately to reduce $G'$ to the identity $e$. For the present problem, this algorithm terminates in five stages. We first compute the right-invariant Maurer--Cartan forms on the Lie group $G$; that is, we calculate $dg\cdot g^{-1}$. To obtain the structure equations, we differentiate (\ref{eq:2.24}) and express the result in terms of the right-invariant Maurer--Cartan forms and the lifted coframe (\ref{eq:2.24}). We then apply the absorption algorithm and identify those torsion coefficients of (\ref{eq:2.24}) that do not depend on the group parameters; these essential torsion coefficients are invariants. We now calculate the differentials of the lifted coframe elements (\ref{eq:2.24}).
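As an illustration of the first step, the Maurer--Cartan forms can be computed symbolically as the entries of the matrix $dg\cdot g^{-1}$. The following sketch (Python with SymPy, treating the $da_i$ as formal symbols; an illustration of the computation, not part of the proof) recovers $\alpha^2$ and $\alpha^3$ from the $2\times2$ block of $G$ that acts on $(\omega^2,\omega^3)$ in (\ref{eq:2.16}):
\begin{verbatim}
import sympy as sp

a2, a3   = sp.symbols('a2 a3')
da2, da3 = sp.symbols('da2 da3')    # formal differentials of the group parameters

g  = sp.Matrix([[1, 0], [a2, a3]])  # block of G acting on (w^2, w^3)
dg = sp.Matrix([[0, 0], [da2, da3]])

mc = sp.simplify(dg * g.inv())      # right-invariant Maurer-Cartan matrix dg.g^{-1}
print(mc)
# Matrix([[0, 0], [da2 - a2*da3/a3, da3/a3]])
# entry (2,1) = (a3*da2 - a2*da3)/a3 = alpha^2;  entry (2,2) = da3/a3 = alpha^3
\end{verbatim}
The larger triangular blocks of $G$ yield $\alpha^4,\dots,\alpha^{15}$ in exactly the same way.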
An explicit computation leads to the structure equations \begin{eqnarray}\label{eq:vvv} \nonumber d\theta^1&=&\alpha^1\wedge\theta^1, \\ \nonumber d\theta^2&=&T_{12}^2\theta^1\wedge\theta^2+T_{13}^2\theta^1\wedge\theta^3, \\\label{eq:3.1} d\theta^3&=&\alpha^2\wedge\theta^2+\alpha^3\wedge\theta^3+ T_{12}^3\theta^1\wedge\theta^2+T_{13}^3\theta^1\wedge\theta^3+T_{14}^3\theta^1\wedge\theta^4, \\ \nonumber d\theta^4&=&\alpha^4\wedge\theta^2+\alpha^5\wedge\theta^3+\alpha^6\wedge\theta^4+ T_{12}^4\theta^1\wedge\theta^2+T_{13}^4\theta^1\wedge\theta^3+T_{14}^4\theta^1\wedge\theta^4+T_{15}^4\theta^1\wedge\theta^5, \\ \nonumber d\theta^5&=&\alpha^7\wedge\theta^2+\alpha^8\wedge\theta^3+\alpha^9\wedge\theta^4+\alpha^{10}\wedge\theta^5+ T_{12}^5\theta^1\wedge\theta^2+T_{13}^5\theta^1\wedge\theta^3+T_{14}^5\theta^1\wedge\theta^4+T_{15}^5\theta^1\wedge\theta^5\\ \nonumber &&+T_{16}^5\theta^1\wedge\theta^6, \\ \nonumber d\theta^6&=&\alpha^{11}\wedge\theta^2+\alpha^{12}\wedge\theta^3+\alpha^{13}\wedge\theta^4+\alpha^{14}\wedge\theta^5+\alpha^{15}\wedge\theta^6+ T_{12}^6\theta^1\wedge\theta^2+T_{13}^6\theta^1\wedge\theta^3+T_{14}^6\theta^1\wedge\theta^4\\ \nonumber &&+T_{15}^6\theta^1\wedge\theta^5+T_{16}^6\theta^1\wedge\theta^6+T_{17}^6\theta^1\wedge\theta^7, \\ \nonumber d\theta^7&=&0, \nonumber \end{eqnarray} where the $\alpha^i$, $i=1, \cdots, 15$, form a basis for the right-invariant {\it Maurer--Cartan forms} on the Lie group $G$: \begin{eqnarray*} \alpha^1&=&\dfrac{da_1}{a_1},\\ \alpha^2&=&\dfrac{a_3da_2-a_2da_3}{a_3},\\ \alpha^3&=&\dfrac{da_3}{a_3},\\ \alpha^4&=&\dfrac{a_3a_6da_4-a_2a_6da_5+(a_2a_5-a_3a_4)da_6}{a_3a_6},\\ \alpha^5&=&\dfrac{a_6da_5-a_5da_6}{a_3a_6},\\ \alpha^6&=&\dfrac{da_6}{a_6},\\ \alpha^7&=&\dfrac{a_3a_6a_{10}da_7-a_2a_6a_{10}da_8+a_{10}(a_2a_5-a_3a_4)da_9-(a_3a_6a_7-a_3a_4a_9-a_2a_6a_8+a_2a_5a_9)da_{10}}{a_3a_6a_{10}},\\ \alpha^8&=&\dfrac{a_6a_{10}da_8-a_5a_{10}da_9+(a_5a_9-a_6a_8)da_{10}}{a_3a_6a_{10}},\\ \end{eqnarray*} \begin{eqnarray*} \alpha^9&=&\dfrac{a_{10}da_9-a_9da_{10}}{a_6a_{10}},\\ \alpha^{10}&=&\dfrac{da_{10}}{a_{10}},\\ \alpha^{11}&=&\dfrac{1}{a_3a_6a_{10}a_{15}}\Big[a_3a_6a_{10}a_{15}da_{11}-a_2a_6a_{10}a_{15}da_{12}+a_{10}a_{15}(a_2a_5-a_3a_4)da_{13}-a_{15}(a_2a_5a_9-a_2a_6a_8\\ &&\hspace*{2cm}-a_3a_4a_9+a_3a_6a_7)da_{14}-(a_3a_6a_{10}a_{11} -a_2a_6a_{10}a_{12}+a_2a_5a_{10}a_{13}-a_3a_4a_{10}a_{13}\\ &&\hspace*{2cm}-a_2a_5a_9a_{14}+a_2a_6a_8a_{14}+a_3a_4a_9a_{14}-a_3a_6a_7a_{14})da_{15}\Big],\\ \alpha^{12}&=&\dfrac{1}{a_3a_6a_{10}a_{15}} \Big[a_6a_{10}a_{15}da_{12}-a_5a_{10}a_{15}da_{13} +a_{15}(a_5a_9-a_6a_8)da_{14}-(a_6a_{10}a_{12}-a_5a_{10}a_{13}\\ &&\hspace*{2cm}+a_5a_9a_{14}-a_6a_8a_{14})da_{15}\Big],\\ \alpha^{13}&=&\dfrac{a_{10}a_{15}da_{13}-a_9a_{15}da_{14}-(a_{10}a_{13}-a_9a_{14})da_{15}}{a_6a_{10}a_{15}},\\ \alpha^{14}&=&\dfrac{a_{15}da_{14}-a_{14}da_{15}}{ a_{10}a_{15}},\\ \alpha^{15}&=&\dfrac{da_{15}}{a_{15}}. \end{eqnarray*} In the first loop, the essential torsion coefficients are \begin{eqnarray}\label{eq:4.1} T_{12}^2=-\frac{a_2+a_3p}{a_1a_3u},\quad T_{13}^2=\frac{1}{a_1a_3u}, \quad T_{14}^3=\frac{a_3}{a_1a_6}, \quad T_{15}^4=\frac{a_6}{a_1a_{10}}, \quad T_{16}^5=\frac{a_{10}}{a_1a_{15}}, \quad T_{17}^6=\frac{a_{15}}{a_1}. \end{eqnarray} One can normalize the group parameters by setting \begin{eqnarray}\label{eq:4.2} a_1=\frac{1}{\sqrt[5]{u}},\quad a_2=-\frac{p}{\sqrt[5]{u^4}},\quad a_3=\frac{1}{\sqrt[5]{u^4}}, \quad a_6=\frac{1}{\sqrt[5]{u^3}},\quad a_{10}=\frac{1}{\sqrt[5]{u^2}},\quad a_{15}=\frac{1}{\sqrt[5]{u}}.
\end{eqnarray} In the second loop, we substitute the normalization (\ref{eq:4.2}) into the lifted coframe (\ref{eq:2.24}) and calculate the differentials of the new invariant coframe to obtain the revised structure equations. Now, the essential torsion components (\ref{eq:4.1}) are normalized by the parameters \begin{eqnarray}\label{eq:4.3} a_4=-\frac{q}{\sqrt[5]{u^3}},\;\; a_5=-\frac{9p}{5\sqrt[5]{u^4}}, \;\; a_9=\frac{5f_4u+3p}{5\sqrt[5]{u^4}},\; \; a_{14}=\frac{5f_4u+p}{5\sqrt[5]{u^4}}. \end{eqnarray} In the third loop, we substitute the normalization (\ref{eq:4.3}) into the lifted coframe (\ref{eq:2.24}) and recalculate the differentials, which allows us to determine the parameters $a_7$, $a_8$ and $a_{13}$. The new structure equations are \begin{eqnarray}\nonumber d\theta^1&=&{1\over5}\theta^1\wedge\theta^2, \\ \nonumber d\theta^2&=&\theta^1\wedge\theta^3, \\ \label{eq:4.4'} d\theta^3&=&\theta^1\wedge\theta^4+{1\over5}\theta^2\wedge\theta^3, \\ \nonumber d\theta^4&=&T_{12}^4\theta^1\wedge\theta^2+T_{13}^4\theta^1\wedge\theta^3 +T_{14}^4\theta^1\wedge\theta^4+\theta^1\wedge\theta^5+{2\over5}\theta^2\wedge\theta^4, \\ \nonumber d\theta^5&=&T_{12}^5\theta^1\wedge\theta^2+T_{13}^5\theta^1\wedge\theta^3+T_{14}^5\theta^1\wedge\theta^4+\theta^1\wedge\theta^6+T_{23}^5\theta^2\wedge\theta^3 -{2\over5}\theta^2\wedge\theta^5+{3\over5}\theta^3\wedge\theta^4+\alpha^7\wedge\theta^2+\alpha^8\wedge\theta^3, \\ \nonumber d\theta^6&=&T_{12}^6\theta^1\wedge\theta^2+T_{13}^6\theta^1\wedge\theta^3+T_{14}^6\theta^1\wedge\theta^4+T_{15}^6\theta^1\wedge\theta^5+\theta^1\wedge\theta^7 +T_{23}^6\theta^2\wedge\theta^3+{3\over5}\alpha^{13}\theta^2\wedge\theta^4\\ \nonumber &&-{1\over5}\theta^2\wedge\theta^6+T_{34}^6\theta^3\wedge\theta^4+{1\over5}\theta^3\wedge\theta^5+\alpha^{11}\wedge\theta^2+\alpha^{12}\wedge\theta^3+\alpha^{13}\wedge\theta^4, \\ \nonumber d\theta^7&=&0, \end{eqnarray} where $\alpha^7, \alpha^8, \alpha^{11}, \alpha^{12}$ and $\alpha^{13}$ are the Maurer--Cartan forms on $G$ and the essential torsion coefficients are \begin{eqnarray} T_{12}^4=-\frac{5a_7u^{27/5}+5u^5f_4q+3pqu^4+5u^5r}{5u^{27/5}},\\ \nonumber T_{13}^4=-\frac{25a_8u^{13/5}+45u^2f_4p+18p^2u+70qu^2}{25u^{13/5}}, \\ \nonumber T_{15}^6=\frac{-25f_3u^2+25\dot{f}_4u^2+25a_{13}u^{8/5}-5f_4pu-6p^2+5qu}{25u^{8/5}}. \end{eqnarray} We find the following parameters: \begin{eqnarray}\nonumber a_7&=&-{5f_4qu+3pq+5ru\over5 \sqrt[5]{u^7}},\\ \label{eq:4.6} a_8&=&-{45f_4pu+18p^2+70qu\over25 \sqrt[5]{u^8}},\\ \nonumber a_{13}&=&{5f_4pu+25f_3u^2-25\dot{f}_4u^2+6p^2-5qu\over25 \sqrt[5]{u^8}}. \end{eqnarray} Substituting (\ref{eq:4.6}) in (\ref{eq:2.24}) and recomputing the differentials leads to \begin{eqnarray}\nonumber &&d\theta^1={1\over5}\theta^1\wedge\theta^2, \\ \nonumber && d\theta^2=\theta^1\wedge\theta^3, \\ \label{eq:4.4''} &&d\theta^3={1\over5}\theta^2\wedge\theta^3+\theta^1\wedge\theta^4, \\ \nonumber &&d\theta^4=T_{14}^4\theta^1\wedge\theta^4+\theta^1\wedge\theta^5+{2\over5}\theta^2\wedge\theta^4, \\ \nonumber &&d\theta^5=T_{12}^5\theta^1\wedge\theta^2+T_{13}^5\theta^1\wedge\theta^3+T_{14}^5\theta^1\wedge\theta^4+\theta^1\wedge\theta^6+{3\over5}\theta^2\wedge\theta^5+{17\over5}\theta^3\wedge\theta^4, \\ \nonumber &&d\theta^6=T_{12}^6\theta^1\wedge\theta^2+T_{13}^6\theta^1\wedge\theta^3+T_{14}^6\theta^1\wedge\theta^4 \\ \nonumber &&\quad +\theta^1\wedge\theta^7+T_{23}^6\theta^2\wedge\theta^3+T_{24}^6\theta^2\wedge\theta^4-{1\over5}\theta^2\wedge\theta^6+{1\over5}\theta^3\wedge\theta^5+\alpha^{11}\wedge\theta^2+\alpha^{12}\wedge\theta^3, \\ \nonumber &&d\theta^7=0.
\end{eqnarray} In the final loop, we find the remaining parameters $a_{11}$ and $a_{12}$, which are as follows: \begin{equation}\label{eq:4'.4'} \begin{split} & a_{11}=-{{5f_4ru+pr+5su}\over 5u^{6/5}},\\ & a_{12}={{9f_4p^2u-70u^2f_4q-9p^3+18pqu-95u^2r}\over 25 u^{12/5}}, \end{split} \end{equation} and this leads to the final structure equations (\ref{eq:4.8}). \section{The proof of Theorem \ref{thm:2}} The calculations for this problem are similar to those of the previous section, except that we use the six initial one-forms (\ref{eq:2.15}) together with the one-form (\ref{eq:2.19}). In the first loop of the procedure for the second equivalence problem, according to Proposition \ref{prop1} the structure group $G$ in (\ref{eq:2.22}) is exactly the structure group of the direct equivalence, so the equivalence method has the same intrinsic structure (\ref{eq:3.1}), with the essential torsion coefficients \begin{eqnarray}\label{eq:4'.1} T_{12}^2=-\frac{a_2+a_3p}{a_1a_3u},\quad T_{13}^2=\frac{1}{a_1a_3u}, \quad T_{14}^3=\frac{a_3}{a_1a_6},\quad T_{15}^4=\frac{a_6}{a_1a_{10}},\quad T_{16}^5=\frac{a_{10}}{a_1a_{15}},\quad T_{17}^6=\frac{a_{15}u}{a_1}. \end{eqnarray} We can normalize the group parameters by setting \begin{eqnarray}\label{eq:4'.2} a_1=1,\quad a_2=-{p\over u},\quad a_3=a_6=a_{10}=a_{15}={1\over u}. \end{eqnarray} In the second loop of the equivalence problem, we substitute the normalization (\ref{eq:4'.2}) into the lifted coframe (\ref{eq:2.24}) and calculate the differentials of the new invariant coframe, obtaining the following revised structure equations: \begin{eqnarray}\nonumber &&d\theta^1=0, \\ \nonumber && d\theta^2=\theta^1\wedge\theta^3, \\ \nonumber &&d\theta^3=T_{12}^3\theta^1\wedge\theta^2+T_{13}^3\theta^1\wedge\theta^3+\theta^1\wedge\theta^4, \\ \label{eq:4.4'''} &&d\theta^4=\alpha^4\wedge\theta^2+\alpha^5\wedge\theta^3+T_{12}^4\theta^1\wedge\theta^2+T_{13}^4\theta^1\wedge\theta^3 +T_{14}^4\theta^1\wedge\theta^4+\theta^1\wedge\theta^5\\ \nonumber &&\qquad+\alpha^5\theta^2\wedge\theta^3-\theta^2\wedge\theta^4, \\ \nonumber &&d\theta^5=\alpha^7\wedge\theta^2+\alpha^8\wedge\theta^3+\alpha^9\wedge\theta^4+T_{12}^5\theta^1\wedge\theta^2+T_{13}^5\theta^1\wedge\theta^3 +T_{14}^5\theta^1\wedge\theta^4 \\ \nonumber &&\qquad+T_{15}^5\theta^1\wedge\theta^5+T_{23}^5\theta^2\wedge\theta^3-\theta^2\wedge\theta^5+\theta^1\wedge\theta^6, \\ \nonumber &&d\theta^7=0, \end{eqnarray} where $\alpha^4, \alpha^5, \alpha^7, \alpha^8$ and $\alpha^9$ are the Maurer--Cartan forms and the essential torsion components of the structure equations (\ref{eq:4.4'''}) are \begin{equation}\label{eq:4-41} \begin{split} & T_{12}^3=-{a_4u+q\over u},\\ &T_{13}^3=-{a_5u+2p\over u}, \\ & T_{15}^5=-{a_{14}u-a_9u+p\over u},\\ & T_{16}^6={a_{14}u-f_4u-p\over u}, \end{split} \end{equation} and so the normalization is \begin{eqnarray}\label{eq:4-42} a_4=-{q\over u}, \quad a_5=-{2p\over u}, \quad a_9={f_4u+2p\over u}, \quad a_{14}={f_4u+p\over u}.
\end{eqnarray} Putting (\ref{eq:4-42}) into (\ref{eq:2.24}) and then recomputing the differentials of the new one-forms leads to \begin{eqnarray}\nonumber &&d\theta^1=0, \\ \nonumber && d\theta^2=\theta^1\wedge\theta^3, \\ \nonumber &&d\theta^3=\theta^1\wedge\theta^4, \\ \nonumber &&d\theta^4=T_{12}^4\theta^1\wedge\theta^2+T_{13}^4\theta^1\wedge\theta^3 +T_{14}^4\theta^1\wedge\theta^4+\theta^1\wedge\theta^5, \\ \nonumber &&d\theta^5=\alpha^7\wedge\theta^2+\alpha^8\wedge\theta^3+T_{12}^5\theta^1\wedge\theta^2+T_{13}^5\theta^1\wedge\theta^3 +T_{14}^5\theta^1\wedge\theta^4+T_{23}^5\theta^2\wedge\theta^3-\theta^2\wedge\theta^5 \\ \label{eq:4.43} &&\hspace{1cm}+2\theta^3\wedge\theta^4+\theta^1\wedge\theta^6, \\ \nonumber &&d\theta^6=\alpha^{11}\wedge\theta^2+\alpha^{12}\wedge\theta^3+\alpha^{13}\wedge\theta^4+T_{12}^6\theta^1\wedge\theta^2+T_{13}^6\theta^1\wedge\theta^3 +T_{14}^6\theta^1\wedge\theta^4+T_{15}^6\theta^1\wedge\theta^5+\theta^1\wedge\theta^7 \\ \nonumber &&\qquad+T_{23}^6\theta^2\wedge\theta^3+T_{34}^6\theta^3\wedge\theta^4+\theta^3\wedge\theta^5+\alpha^{13}\theta^2\wedge\theta^4-\theta^2\wedge\theta^6, \\ \nonumber &&d\theta^7=0. \end{eqnarray} This immediately implies the following normalization: \begin{eqnarray}\nonumber a_7&=&-\dfrac{f_4qu+2pq+ru}{u^2},\\ \nonumber a_8&=&-\dfrac{2f_4pu+4p^2+3qu}{u^2},\\ \label{eq:4.44} a_{11}&=&-\dfrac{f_4ru+pr+su}{u^2},\\ \nonumber a_{12}&=&-\dfrac{3f_4qu+3pq+4ru}{u^2},\\ \nonumber a_{13}&=&-\dfrac{\dot{f_4}u^2-f_4pu-f_3u^2-2p^2+qu}{u^2}. \end{eqnarray} Thus the final invariant coframe is now given by \begin{eqnarray}\nonumber &&\theta^1=\frac{dx}{\sqrt[4]{f_4}}, \\ \nonumber &&\theta^2=\frac{du-p\;dx}{u}, \\ \nonumber &&\theta^3={\sqrt[4]{f_4}\over u^2}\Big[ (p^2-qu)\;dx-p\;du+u\;dp\Big], \\\nonumber &&\theta^4=-{1\over4\sqrt{f_4}\;u^3}\Big[(4f_4u^2r+\dot{f}_4u^2q-\dot{f}_4up^2-12f_4upq+8f_4p^3)\;dx \\ \label{eq:4'.7} &&\hspace{3cm}+(\dot{f}_4up+4f_4uq-8f_4p^2)\;du+(8f_4p-\dot{f}_3u)u\;dp-4f_4u^2\;dq\Big], \\ \nonumber &&\theta^5={1\over4\sqrt{f_4}\;u^3}\Big[(8f_4p^2q-4f_4uq^2-4f_4u^2s+4f_3u^2r+(4f_3-3\dot{f}_4)upq+3\dot{f}_4u^2r)\;dx \\ \nonumber &&\hspace{1cm}+(8f_4pq+4f_4ur+(4f_3-3\dot{f}_4)uq)\;du+(4f_4uq)\;dp+(4f_4p+(4f_3-\dot{f}_4)u)u\;dq\\ \nonumber &&\hspace{1cm}-4f_4u^2\;dr\Big], \\ \nonumber &&\theta^6={f_4's+f_3'r+f_2'q+f_1'p+f_0'u\over u}\;dx-{f_4s+f_3r+f_2q+f_1p\over u^2}\;du+{f_1\over u}\;dp\\ \nonumber &&\hspace{3cm}+{f_2\over u}\;dq+{f_3\over u}dr+{f_4\over u}ds. \end{eqnarray} Then the final structure equations (\ref{eq:ddd}), with the fundamental invariant coefficients (\ref{th2eq2}), are obtained. \section{An example} In \cite{[Mit]}, a boundary value problem for the fifth-order differential operator \begin{eqnarray}\label{eq:22} D^5u(x)+(q(x)-\lambda a^5)u(x)=0,\;\;\;\; 0\leq x\leq\pi, \; a>0, \end{eqnarray} was studied, where the potential $q(x)$ is a summable function on the segment $[0,\pi]$ and $\lambda$ is a spectral parameter. We now apply the Cartan equivalence method to the fifth-order differential operator \eqref{eq:22}; here $f_1=f_2=f_3=f_4=0$ and $f_0(x)=q(x)-\lambda a^5$. For the direct method, we take the one-forms (\ref{eq:2.15}) and the following one-form \begin{eqnarray}\label{eq:287} \omega^7 =q'u\,dx+(q-\lambda a^5)\,du+dt \end{eqnarray} as a coframe.
The final structure equations are \begin{eqnarray}\nonumber &&d\theta^1={1\over5}\theta^1\wedge\theta^2, \\ \nonumber && d\theta^2=\theta^1\wedge\theta^3, \\ \label{eq:4.99} &&d\theta^3=\theta^1\wedge\theta^4+{1\over5}\theta^2\wedge\theta^3, \\ \nonumber &&d\theta^4=I_1\theta^1\wedge\theta^4+\theta^1\wedge\theta^5+{2\over5}\theta^2\wedge\theta^4, \\ \nonumber &&d\theta^5=I_2\theta^1\wedge\theta^4+\theta^1\wedge\theta^6+{3\over5}\theta^2\wedge\theta^5+{17\over5}\theta^3\wedge\theta^4, \\ \nonumber &&d\theta^6=I_3\theta^1\wedge\theta^2+I_4\theta^1\wedge\theta^3+I_5\theta^1\wedge\theta^4 \\ \nonumber &&\hspace{1cm}+\theta^1\wedge\theta^7+{4\over5}\theta^2\wedge\theta^6+I_6\theta^3\wedge\theta^4+4\theta^3\wedge\theta^5, \\ \nonumber &&d\theta^7=0, \end{eqnarray} where the coefficients $I_1, I_2, I_3, I_4, I_5$ and $I_6$ are \begin{eqnarray} I_1&=&-{3p\over \sqrt[5]{u^4}},\\ I_2&=&{1\over 5\sqrt[5]{u^8}}\big[-9p^2-10qu\big],\\ I_3&=&a^5\lambda u-q(x)u-t,\\ I_4&=&-{1\over625 \sqrt[5]{u^{16}}}\left[1770p^2qu-1275pru^2+3000su^3-594p^4-800q^2u^2\right],\\ I_5&=&-{1\over25 \sqrt[5]{u^{12}}}\left[33p^3-45pqu+100ru^2\right],\\ I_6&=&-{3p\over \sqrt[5]{u^4}}. \end{eqnarray} Now we solve this example by the gauge equivalence method. The one-forms (\ref{eq:2.15}) and \begin{eqnarray}\label{eq:288} \omega^7 =q'\,dx-{t\,du \over u^2}+{dt\over u} \end{eqnarray} are chosen as the coframe elements, and the final structure equations for this coframe are \begin{eqnarray}\nonumber &&d\theta^1=0, \\ \nonumber && d\theta^2=\theta^1\wedge\theta^3, \\ \nonumber &&d\theta^3=\theta^1\wedge\theta^4, \\ \nonumber &&d\theta^4=L_1\theta^1\wedge\theta^4+\theta^1\wedge\theta^5, \\ \nonumber &&d\theta^5=L_2\theta^1\wedge\theta^4+\theta^1\wedge\theta^6+5\theta^3\wedge\theta^4, \\ \nonumber &&d\theta^6=L_3\theta^1\wedge\theta^3+L_4\theta^1\wedge\theta^4+\theta^1\wedge\theta^7 \\ \nonumber &&\hspace{1cm}+L_5\theta^3\wedge\theta^4+5\theta^3\wedge\theta^5, \\ \nonumber &&d\theta^7=0, \end{eqnarray} where the coefficients $L_1, \ldots, L_5$ are \begin{equation} \begin{split} & L_1=-{5p\over u},\\ & L_2=-{10p^2\over u^2},\\ & L_3=-{5s\over u},\\ & L_4={1\over u^3}\left[-10p^3+5pqu-5ru^2 \right],\\ & L_5=-{5p\over u}. \end{split} \end{equation}
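As a consistency check, these coefficients can be recovered by substituting $f_1=f_2=f_3=f_4=0$ and $f_0=q(x)-\lambda a^5$ into the general formulas of Theorems \ref{thm:1} and \ref{thm:2}. A short sketch of this substitution (Python with SymPy; shown here for $I_1$, $I_2$, $I_3$ and $L_3$):
\begin{verbatim}
import sympy as sp

x, u, p, q, r, s, t, a = sp.symbols('x u p q r s t a', positive=True)
lam = sp.Symbol('lambda')
qx  = sp.Function('q')(x)                     # the potential q(x)

# Example (eq. 22): f1 = f2 = f3 = f4 = 0 and f0 = q(x) - lambda*a^5
f0, f1, f2, f3, f4 = qx - lam*a**5, sp.S(0), sp.S(0), sp.S(0), sp.S(0)
f4dot = sp.diff(f4, x)

# General coefficients of Theorem 1 (direct equivalence)
I1 = -(f4*u + 3*p) / u**sp.Rational(4, 5)
I2 = (10*f4dot*u**2 - 12*f4*p*u - 5*f3*u**2 - 9*p**2 - 10*q*u) \
     / (5*u**sp.Rational(8, 5))
I3 = -(f0*u + f1*p + f2*q + f3*r + f4*s + t)

# General coefficient L3 of Theorem 2 (gauge equivalence)
L3 = -(2*p*f2 + 3*f3*q + 4*f4*r + f1*u + 5*s) / u

print(sp.simplify(I1))  # -3*p/u**(4/5)
print(sp.simplify(I2))  # = -(9*p**2 + 10*q*u)/(5*u**(8/5))
print(sp.expand(I3))    # a**5*lambda*u - u*q(x) - t
print(sp.simplify(L3))  # -5*s/u
\end{verbatim}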
\section{Introduction} \label{sec:intro} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{frame_pairs_grass.png} \caption{\textbf{Challenging and balanced frame-pair sets.} Automotive LiDAR datasets contain sequences of point-clouds scanned throughout a driving session. The vehicles in this diagram mark the position of the scanner in each frame. To test registration algorithms, one must select a set of \emph{frame-pairs}. The standard KITTI-10m set uses a simple heuristic rule: take pairs with an offset of 10 meters between scanner positions \emph{(pairs connected by pink lines)}. Our selection algorithm returns a challenging set of frame-pairs \emph{(connected by double yellow line)} that is a balanced sampling of all relative motions in the sequence, including short and long offsets in time and space, various rotation angles, etc.} \label{fig:teaser} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=1\textwidth]{distribution_Apollo_vs_KITTI10m.png} \end{center} \caption{\textbf{Comparison of registration set statistics.} Comparison of marginal statistics between one of our proposed balanced registration sets, \emph{Apollo-Southbay-Balanced} (top row), and the popular \emph{KITTI-10m} set (bottom row). We show the distribution of samples by (from left to right): distance between the pair of point clouds, time offset, overlap between scans, and rotation in three separate axes. \emph{Apollo-Southbay-Balanced} includes a balanced representation of all the relative motions that are encountered in a real driving scenario. It is also much more challenging, as the overlap between point-clouds is as low as 0.2.} \vspace{-0.04in} \label{fig:set_stats} \end{figure*} In many fields such as autonomous driving, one may be interested in calculating the relative transformation between two scans of a scene that are taken from different locations and directions. This process is known as rigid registration, and in recent years, point-cloud registration (PCR) methods have seen a significant increase in their abilities, thanks to deep learning. This has been demonstrated in various settings, such as indoor scenes and object registration. In the automotive LiDAR setting, however, testing has been limited. The standard benchmark in recent years, KITTI-10m, is saturated, i.e., various algorithms have achieved essentially perfect recall (see \cref{fig:kitti_saturation}). Our aim is to test PCR algorithms in a challenging automotive LiDAR setting. So what makes KITTI-10m easy? \Cref{fig:teaser} gives a graphic illustration. In KITTI-10m, pairs of LiDAR scans for registration are selected using a simple heuristic: pairs with an offset of 10 meters between scanner positions. Yet, such a constant gap is likely to miss the challenging scenarios, as demonstrated in \cref{fig:set_stats}. Its bottom row shows the distribution of various measures (i.e., distance, time, yaw, etc.) between frame pairs on KITTI-10m. The top row displays the statistics of our proposed balanced dataset. One approach for producing a harder benchmark is to replace the LiDAR dataset. In \cite{HRegNet}, for example, the authors use the more challenging NuScenes~\cite{NuScenes} dataset. However, frame-pairs are still selected with a simple heuristic: scans separated by one second. Almost perfect recall is still achieved, showing that the registration algorithms are not truly challenged.
Our first contribution is an algorithm for selecting a set of frame pairs that more faithfully represents all the different situations that a registration algorithm could face in an automotive LiDAR setting. Our algorithm, described in \cref{sec:method_datasets}, returns a balanced sampling of the different relative motions that appear in a dataset, i.e. small and large rotations, small and large offsets in space and time, and various combinations of these. The frame pairs selected by our method are illustrated by yellow lines in \cref{fig:teaser}. At the heart of many of the recently successful registration algorithms lie deep-learning based local descriptors. These descriptors represent the local neighborhood of a point, making it possible to reliably recognize a semantically matching point in another point cloud. Given a set of putative matches, one can estimate the rigid motion between the two point clouds. However, special attention needs to be given to the presence of \emph{outliers}: point-matches whose relative motion does not agree with the overall rigid transformation. In the LiDAR setting these are created by independently moving objects in the scene, in addition to other causes such as partial overlap between the point clouds. To handle outliers, one must use robust estimation algorithms. As an alternative to the classic RANSAC approach, which was claimed to be slow, new algorithmic solutions such as TEASER++~\cite{TEASER} and deep-learning based methods including DGR~\cite{DGR} and PointDSC~\cite{PointDSC} were proposed. Our second contribution is a thorough comparison of these algorithms in a challenging LiDAR setting. We use high-quality deep-learned features for the point-matching step. A variety of such features were presented recently, e.g., in the D3Feat~\cite{D3Feat} and PREDATOR~\cite{PREDATOR} works. We opted to use the FCGF~\cite{FCGF} features that have recently demonstrated state-of-the-art results (in some settings) when used with PointDSC. Our third contribution is a study of the limitations of deep-learned features (using FCGF as a representative). Specifically, we test what happens when they are trained on data of one type, e.g. LiDAR scans in the San Francisco Bay Area, and tested on data of a different type, e.g. scans collected by a different team in Singapore. We refer to this as cross-domain testing, and compare the performance of registration algorithms between the same-domain and cross-domain settings. We find that deep features suffer some degradation in accuracy. This implies that some of the learning is over-fitting to the specifics of the train set. In our experiments, we find, perhaps surprisingly, that the fastest and simultaneously most accurate registration algorithm is an advanced version of RANSAC. The basic RANSAC algorithm, proposed over 40 years ago, has been shown in some recent works~\cite{DGR,PointDSC,TEASER} to be accurate but slow. However, various improvements to it, such as PROSAC~\cite{PROSAC} and LO-RANSAC~\cite{LO_RANSAC}, increase its speed considerably, as well as its accuracy. One element that affects the performance of RANSAC, as well as that of TEASER++, is pre-filtering the putative-match set. Our fourth contribution is proposing a novel method called Grid-Prioritized Filtering (GPF), described in \cref{sec:GPF}, and showing that it allows RANSAC to achieve even higher accuracy when replacing the commonly used mutual-nearest neighbors filtering.
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{KITTI_saturation.png} \caption{\textbf{Saturation of KITTI-10m}. KITTI-10m has been the standard benchmark for LiDAR registration for the last few years. It is essentially saturated: several recent algorithms have achieved almost perfect recall on it, failing only on a handful of the 555 point-cloud pairs in its test set. The values shown here were taken from the corresponding papers \cite{DGR,PointDSC,HRegNet,D3Feat,PREDATOR}.} \label{fig:kitti_saturation} \end{figure} \section{Related Work} \label{sec:related} Algorithms for rigid registration can roughly be divided into \emph{local} and \emph{global} ones. Local registration algorithms are based on the assumption that the motion is small. Global registration algorithms aim to handle any relative motion, but might be less accurate. Often their results are refined by running a local registration algorithm. \textbf{Local Registration:} Iterative Closest Points (ICP)~\cite{ICP} is one of the earliest successful approaches to local point cloud registration, and it remains popular to this day. The ICP algorithm has been developed in various directions \cite{REVIEW}. Chen and Medioni~\cite{ICPChen} replaced the point-to-point loss function of ICP with a point-to-plane one, by using local normals. Segal \etal~\cite{GICP} presented the popular Generalized-ICP (G-ICP)~\cite{GICP} approach, which reformulated point-to-plane ICP in probabilistic terms and achieved improved accuracy. Rusinkiewicz recently suggested symmetric-ICP~\cite{SymICP}, which uses a surface-to-surface distance function that treats both point clouds symmetrically. It has been demonstrated to be superior to G-ICP in accuracy, and to have larger convergence basins. Drory \etal~\cite{BBR} presented Best-Buddies Registration, specifically BBR-F, which uses a set of mutual-nearest-neighbors in the registration to improve accuracy. \textbf{Global Registration:} A successful strategy for global registration is to generate a set of point-matches based on local descriptors, and estimate a motion from these matches. A popular classical descriptor is FPFH~\cite{FPFH}, which uses histograms of gradients of neighboring points. As in other fields, learned features have been shown to be superior to hand-crafted ones \cite{PointNetLK,Sarode2019PCRNetPC,yew2020-RPMNet,Hertz20PointGMM,yuan2020deepgmr,DeepICP,DCP,PRNet}. Various such descriptors have been suggested, e.g. \cite{HRegNet, PREDATOR, FCGF}. Fully Convolutional Geometric Features (FCGF)~\cite{FCGF} are based on sparse convolutions over a voxelized representation of the point cloud. The FCGF network is very fast, and produces dense features. \textbf{Robust optimization}: The set of descriptor matches typically includes a significant fraction of \emph{outliers}, which must be taken into consideration when estimating the relative motion. This can be done for example by using robust loss functions and algorithms, or by filtering the set of matches to remove outliers \cite{yang2017performance}. RANSAC~\cite{RANSAC} is a popular method, which works by repeatedly sampling a minimal set of point-matches, estimating a motion from the sample, and calculating its score by the fraction of matches that agree with this motion. This is repeated until a preset number of iterations is performed, or until early stopping occurs when the best-so-far motion has a fraction of inliers that is sufficient (relative to a confidence value supplied by the user~\cite{GC_RANSAC}).
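As a reference point for the variants discussed next, here is a minimal sketch of this basic loop for rigid point-cloud registration (Python with NumPy; \texttt{src} and \texttt{dst} are corresponding $N\times3$ arrays of putative matches, the minimal-sample motion is estimated with the Kabsch algorithm, and the threshold values are illustrative assumptions, not the settings used in our experiments):
\begin{verbatim}
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid motion (R, t) mapping points P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_register(src, dst, n_iters=10000, inlier_thresh=0.6,
                    confidence=0.999):
    N = src.shape[0]
    best = (np.eye(3), np.zeros(3), 0)          # (R, t, inlier count)
    for it in range(n_iters):
        idx = np.random.choice(N, size=3, replace=False)   # minimal set
        R, t = kabsch(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = int((resid < inlier_thresh).sum())
        if inliers > best[2]:
            best = (R, t, inliers)
            w3 = (inliers / N) ** 3     # prob. of an all-inlier sample
            # early stopping once enough iterations were done
            if w3 >= 1.0 or it + 1 >= np.log(1 - confidence) / np.log(1 - w3):
                break
    return best[0], best[1]
\end{verbatim}
The improvements below replace the uniform sampling, add quick rejection of candidate sets, and locally optimize the best-so-far model.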
This simple framework has been greatly enhanced over the years, improving RANSAC in both speed and accuracy. PROSAC~\cite{PROSAC} performs a prioritized selection of candidate sets. It accepts the putative pairs sorted according to a quality measure, and orders the selection of sets so that sets with higher quality pairs are examined earlier. This simultaneously makes RANSAC faster and more accurate, by making it more likely that a good model is found early. LO-RANSAC~\cite{LO_RANSAC} adds a local-optimization step: when a best-so-far model is found, its inliers are used to find a better model, for example by performing RANSAC only on the inliers. Local optimization can be repeated several times, as long as the best-so-far model keeps improving significantly. Though the local-optimization step is expensive, it is only performed a few times over the run time of the RANSAC algorithm, and so its amortized time is small. The recently proposed GC-RANSAC~\cite{GC_RANSAC} uses a Markov-Random Field formulation and solves it with Graph-Cuts to divide pairs into inliers and outliers. Other important additions to RANSAC are early-rejection methods, which can be applied quickly to reject a minimal set without going through the full scoring stage. In this work we consider two such methods: the Sequential Probability Ratio Test (SPRT)~\cite{SPRT}, a general-domain method, and Edge-Length Consistency (ELC)~\cite{PointDSC}, which is specific to point cloud registration. In addition to producing local descriptors, deep-learning has also been used for robust estimation. Deep Global Registration (DGR)~\cite{DGR} is based on training a second FCGF-like deep network for the task of recognizing outliers. PointDSC~\cite{PointDSC} too is based on a second network, but not to simply recognize outliers. Instead, it learns an embedding space where one can locate groups of mutually-consistent pairs, which can be used to generate candidate motions. PointDSC integrated ELC into the neural network, to encourage spatial consistency. A novel approach that is not based on deep learning is TEASER~\cite{TEASER}, which is based on truncated least squares estimation and semi-definite relaxation. TEASER++ is a faster version that is based on Graduated Non-Convexity. \noindent {\bf Dataset Generation.} Fontana \etal~\cite{Balanced} present a collection of datasets to be used as a benchmark for registration algorithms, and specify the method for the creation of these datasets. Unlike them, we focus specifically on LiDAR point cloud datasets, and registration sets that are challenging for global registration. We adopt their idea of achieving a balanced set of relative motions by random sampling. However, in their method a random motion is applied to an existing point cloud, thus creating a synthetic sample. Instead, we produce natural samples by selecting a pair of point clouds from a recorded sequence, so that their relative motion is as close to the randomly selected one as possible. Huang \etal~\cite{PREDATOR} present the 3DLoMatch set, which contains pairs of low-overlap scans from the 3DMatch~\cite{3DMatch} dataset. They define an overlap between $10\%$ and $30\%$ as low. We set the minimum-overlap of our registration sets to $20\%$, which is in the same range. In line with their findings, our experiments show that low-overlap is a strong indicator for registration failure.
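Returning to the early-rejection step above: the edge-length consistency test exploits the fact that a rigid motion preserves pairwise distances. A minimal sketch (Python with NumPy; the function name and the tolerance \texttt{tau} are illustrative assumptions, not the PointDSC implementation):
\begin{verbatim}
import numpy as np

def elc_accept(src_pts, dst_pts, tau=0.3):
    """Edge-Length Consistency: reject a minimal sample of matches if
    any pairwise ("edge") length differs between source and target by
    more than the tolerance tau, since rigid motions preserve it."""
    n = len(src_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d_src = np.linalg.norm(src_pts[i] - src_pts[j])
            d_dst = np.linalg.norm(dst_pts[i] - dst_pts[j])
            if abs(d_src - d_dst) > tau:
                return False
    return True
\end{verbatim}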
\section{Balanced LiDAR Registration Sets} \label{sec:method_datasets} \begin{figure} \centerline{\includegraphics[width=0.5\textwidth]{selection_2.png}} \caption{\textbf{Selection of Balanced Registration Set.} Toy example of our selection method, using a 2DOF motion model (instead of 6DOF). Each black point represents the relative motion between a \emph{frame-pair}. The space of all motions is normalized into the unit square. Iteratively, we randomly sample a location (\emph{green asterisk}), and select one of the frame-pairs that is close enough to this location (within \emph{green circle}).} \label{fig:selection_procedure} \end{figure} \begin{table} \begin{center} \begin{tabular}{|c||*{3}{c|}}\hline & Train & Validation & Test \\\hline\hline KITTI-10m & 1338 & 200 & 555 \\\hline NuScenes-Boston-Balanced & 4032 & 384 & 2592 \\\hline NuScenes-Singapore-Balanced & 4032 & 384 & 2592 \\\hline Apollo-Southbay-Balanced & 4032 & 288 & 7008 \\\hline \end{tabular} \end{center} \caption{\textbf{Balanced registration set sizes.} Number of train, validation and test samples in each balanced registration set. } \label{table:dataset_sizes} \end{table} Popular registration benchmarks for the automotive LiDAR setting have become too easy for the newest registration algorithms (see \cref{fig:kitti_saturation}). We believe the main cause for this is the simple heuristics used for selecting frame-pairs for registration: a constant offset in space or time, which is typically not very large (e.g. 10 meters, or 1 second). How could we instead select a more interesting set of frame-pairs? A naive approach would be to enumerate all possible frame-pairs in each driving-sequence, and then select randomly from them. This approach has two problems: first, many frame-pairs have no overlap, making registration impossible. Second, and more importantly, for a large majority of frame-pairs, the relative motion between them is simple, e.g. "small offset, no rotation". We suggest a different approach: sample uniformly from the space of motions. We think of the space of all relative motions as a 6-dimensional hyper-cube, whose axes are $x$-offset, $y$-offset, $z$-offset, roll, pitch and yaw. Different areas in this cube represent different \emph{types} of motions: small-offset with large yaw, large-offset with small yaw but large pitch, etc. By sampling uniformly at random from this hyper-cube, we end up with a set of frame-pairs that is challenging and contains representatives of all the types of motion that appear in the LiDAR dataset. \begin{figure*} \centerline{\includegraphics[width=1\textwidth]{GPF_diagram.png}} \caption{\textbf{Grid-Prioritized Filtering (GPF).} GPF is a filtering algorithm used to select a subset of putative point-matches that are both high-quality and maximally spatially spread. This is achieved by dividing the source point-cloud into a grid of cells on the x-y plane, and selecting approximately the same number of matches from each cell. In the diagram, each match is represented by a disk, with those on the bottom, colored green, representing the ones that were selected by GPF. Within each cell, matches are ordered bottom-to-top by their estimated quality, based on analysis of the feature-space distance between the pair. Mutual nearest-neighbors (disks marked with small white circles) are selected first. Then, non-mutual.
The secondary criterion for prioritizing is \cref{eq:rat_dist} (ratio between distance to 1st and 2nd nearest neighbor).} \label{fig:GPF} \end{figure*} \textbf{Generating a pool of candidates.} In theory, every pair of point-clouds from the dataset could be considered as a candidate for the registration set. Yet, the total number of pairs is quadratic in the size of the dataset, making this impractical. To generate a reasonably sized candidate pool, we take each $k$th frame in a sequence to be a source frame. For each source frame we find the set of frames whose overlap with it is above $min\_overlap$, and randomly choose the target frame from this set. \textbf{Random selection of samples.} We wish to select uniformly at random from the space of all relative motions that appear in the candidate pool. We iteratively repeat the following procedure (demonstrated in \cref{fig:selection_procedure}): First, we normalize each axis of the 6D hyper-cube separately to the range [0,1], to overcome different ranges for different axes (x-offset, yaw, etc.). Then, we randomly generate a location in the unit hyper-cube. If our location is farther than a radius $r$ from any candidate, we discard it and generate another. Otherwise, we consider the set of candidates within a radius $r$. They represent essentially the same type of motion, and we choose between them according to a second criterion: which driving sequence they come from. This allows us to encourage a fair representation for each driving sequence in the dataset, which is important since different sequences often include different challenges: highways vs. residential areas, daytime vs. nighttime etc. (A minimal sketch of this selection loop is given below.) We find it important to discard random locations that are farther than $r$ from any candidate. Allowing such locations to select the candidate nearest to them would have distorted the distribution of samples that we select. For instance, candidates that lie next to a large empty region of the hyper-cube would have a much higher probability of being selected. \textbf{Balanced registration sets.} Various Automotive LiDAR datasets are available, including KITTI-Odometry~\cite{KITTI}, NuScenes~\cite{NuScenes}, Apollo-Southbay~\cite{Apollo} and others. We use our algorithm to create three registration sets, which we use in our experiments. The sets are built over the Apollo-Southbay and NuScenes datasets. We divide NuScenes into two parts: Boston and Singapore. We name our registration sets \emph{Apollo-Southbay-Balanced}, \emph{NuScenes-Boston-Balanced} and \emph{NuScenes-Singapore-Balanced}. We set $min\_overlap\!=\!0.2$ and $r\!=\!0.1$. The number of samples in each set is shown in Table~\ref{table:dataset_sizes}. Notice that our sets are considerably larger than KITTI-10m. We believe this is beneficial in training, and also allows finer-grain comparison between algorithms in testing. In \cref{fig:set_stats} we compare the distribution of samples in \emph{Apollo-Southbay-Balanced} to that in KITTI-10m. We show marginal distributions according to different parameters: time-offset, distance, overlap, roll, yaw and pitch. In all parameters, our set includes a wider range of values than KITTI-10m. This is especially evident for distance, which for KITTI-10m is by definition always approximately 10 meters, and in our set is a wide range, up to over 50 meters. KITTI-10m includes only high-overlap pairs, while our dataset contains a range, actually focusing on the harder, low-overlap cases.
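The promised sketch of the selection loop (Python with NumPy; it assumes \texttt{candidates} is an $N\times6$ array of candidate-pair motions already normalized to the unit hyper-cube and \texttt{seq\_ids} gives the driving sequence of each candidate; the preference for under-represented sequences is one plausible instantiation of our second criterion, and deduplication of repeated picks is omitted for brevity):
\begin{verbatim}
import numpy as np

def select_balanced(candidates, seq_ids, n_samples, r=0.1, rng=None):
    """candidates: (N, 6) relative motions, scaled to the unit hyper-cube.
    seq_ids: (N,) driving-sequence id of each candidate frame-pair."""
    rng = rng or np.random.default_rng()
    selected, seq_counts = [], {}
    while len(selected) < n_samples:
        loc = rng.random(6)                       # random point in [0,1]^6
        near = np.flatnonzero(np.linalg.norm(candidates - loc, axis=1) < r)
        if near.size == 0:
            continue                              # discard empty locations
        # among nearby candidates, prefer under-represented sequences
        counts = np.array([seq_counts.get(seq_ids[i], 0) for i in near])
        pick = near[np.argmin(counts)]
        selected.append(pick)
        seq_counts[seq_ids[pick]] = seq_counts.get(seq_ids[pick], 0) + 1
    return np.array(selected)
\end{verbatim}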
Regarding yaw, KITTI-10m includes only small rotations, while our dataset includes a wide range, up to 90 degree turns and even some complete U-turns. Our dataset also contains more samples with significant roll and pitch than KITTI-10m does. \section{Grid-Prioritized Filtering (GPF)} \label{sec:GPF} Pre-filtering the set of putative pair-matches, to reduce the fraction of outliers in it, is important for methods such as RANSAC and TEASER++. Popular methods include mutual-nearest neighbors (MNN, a.k.a reciprocity check), and the ratio test~\cite{SIFT}. Both methods work on each point-pair separately, and therefore do not take into consideration the spatial spread of the pairs that remain after filtering. This can be an issue when the overlap between the two point-clouds is limited. We propose the Grid-Prioritized Filtering (GPF) method to explicitly ensure spatial spread in the selected pairs. As illustrated in \cref{fig:GPF}, GPF works by dividing the source point cloud into an $M \times M$ grid in the x-y plane. Then, $\ell$ matches are selected from each grid cell (or all matches, if there are fewer than $\ell$ in the cell). The priority order for selecting pairs follows two criteria: first, matches that are MNNs are preferred. The secondary ordering criterion is the ratio $S$: \begin{align} \label{eq:rat_dist} S(p) = \frac{d(p,q_2)}{d(p,q_1)}, \end{align} where $p\!\in\!P$, $q_1, q_2 \!\in\! Q$, $q_1$ is the nearest neighbor to $p$ in $Q$, $q_2$ is the second-nearest, and $d()$ is the $L_2$ distance. The number of pairs per cell, $\ell$, is determined by the total requested number, $R$. The simple calculation $\ell\!=\!R/M^2$ is only valid when all cells contain at least $\ell$ pairs. Instead, we perform a quick binary search to find the value of $\ell$ that brings the overall selected number closest to $R$. $R$ can be specified explicitly, but we believe that matching it to the properties of each point-cloud is preferable.
To do so, we define it by: \vspace*{-0.4cm} \begin{align} R=\phi\cdot|\mathcal{N}|, \end{align} where $\mathcal{N}$ is the set of mutual nearest neighbors for each cloud, and $\phi$ is the user-supplied \emph{GPF factor}. We use notation like GPF(2.0) to refer to running GPF with $\phi\!=\!2.0$. \section{Experiments} \label{sec:experiments} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{time_and_recall_comparison_B_to_B_tight.png} \caption{\textbf{Comparison of registration algorithms on a balanced LiDAR dataset}. We use \emph{NuScenes-Boston-Balanced} to compare recent point-cloud registration algorithms. All algorithms use FCGF local-descriptors that were trained on this dataset. We show wall-time and recall, with and without ICP refinement. Advanced RANSAC is simultaneously \emph{faster and more accurate} than all other algorithms. Its two versions differ in the pre-filtering method used; the faster one \emph{(mutual)} uses mutual-nearest neighbors, and the more accurate one \emph{(GPF)} uses our proposed Grid-Prioritized Filtering.} \label{fig:benchmark_B_to_B} \end{figure} In this section we present several experiments, comparing different registration algorithms on the proposed LiDAR registration sets. All methods use FCGF~\cite{FCGF} deep-features trained on these sets, and differ in the robust estimation step. In some experiments the train-set and the test-set come from the same LiDAR dataset (same-domain), and in others from different datasets (cross-domain). This allows us to analyze the effect of cross-domain testing on deep feature accuracy. We compare the following algorithms: \textbf{Learned:} DGR~\cite{DGR}, PointDSC~\cite{PointDSC}, \textbf{algorithmic:} TEASER++~\cite{TEASER} and RANSAC. We tried various flavors of RANSAC (see appendix), and the best combination found includes: \begin{enumerate} \item Prioritized selection of candidates (PROSAC), using the same priority order used in GPF \item Fast-rejection by edge-length consistency (ELC) \item Local-optimization step (LO-RANSAC), without graph-cuts \end{enumerate} We compare two kinds of pre-filtering for RANSAC: mutual-nearest neighbors (MNN), and the novel GPF, with a $10\!\times\!10$ grid. TEASER++ also requires filtering, as it tends to get stuck indefinitely when receiving too many putative pair-matches as input. We use MNN for TEASER++ in all experiments, and add a second filtering with GPF when testing on \emph{Apollo-Southbay-Balanced} (see ahead). For each registration task we measure the rotation error (RE) and translation error (TE), defined as \begin{equation} \setlength\abovedisplayskip{0.1cm} \nonumber \text{RE}(\mathbf{\hat R}) = \arccos \frac{\text{Tr}(\mathbf{\hat R^T}\mathbf{R^*})-1}{2},~~~ \text{TE}(\mathbf{\hat t}) = \left\Vert \mathbf{\hat t - t^*}\right\Vert_2, \end{equation} where $\mathbf{R^*}, \mathbf{t^*}$ is the ground-truth motion. We follow \cite{PointDSC} in defining a successful registration as one with RE$<$5 degrees and TE$<$0.6 meters. \emph{Recall} is the percentage of test samples for which registration succeeded, and we also refer to it as \emph{accuracy}. \subsection{Implementation Details} We start all registration algorithms by producing a set of putative matches as follows: down-sampling with a 0.3 meter voxel-grid filter, calculating FCGF features and finding nearest-neighbors in the feature space. When reporting running time we omit the time taken by this pre-processing. We use ICP for refinement of the registration results, and usually report results with and without it.
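Following this pre-processing, a filtering step such as GPF (\cref{sec:GPF}) is applied. For reference, a minimal sketch of the GPF selection (Python with NumPy; the helper arrays \texttt{cell\_ids}, \texttt{is\_mnn} and \texttt{ratio} are assumed precomputed per putative match, and this sketch returns the smallest per-cell quota $\ell$ that reaches $R$):
\begin{verbatim}
import numpy as np

def gpf_select(cell_ids, is_mnn, ratio, R):
    """Grid-Prioritized Filtering: pick about R matches, spread over the
    grid cells, preferring MNN matches first, then larger ratio scores S."""
    order = np.argsort((~is_mnn) * 1e6 - ratio)   # MNN first, then larger S
    cells = {}
    for i in order:
        cells.setdefault(cell_ids[i], []).append(i)

    def taken(l):                 # number of matches picked with l per cell
        return sum(min(l, len(v)) for v in cells.values())

    lo, hi = 0, max(len(v) for v in cells.values())
    while lo < hi:                # binary search for the per-cell quota l
        mid = (lo + hi) // 2
        if taken(mid) < R:
            lo = mid + 1
        else:
            hi = mid
    if lo == 0:
        return np.array([], dtype=int)
    return np.concatenate([v[:lo] for v in cells.values()])
\end{verbatim}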
\textbf{Code\footnote{See appendix for links and license information.}:} for RANSAC we use the GC-RANSAC~\cite{GC_RANSAC} code base, which is efficiently implemented and offers multiple options (PROSAC, local-optimization, etc.). We added an ELC implementation based on the one in open3d (version 0.13) \cite{Open3D}. The open3d implementation of RANSAC offers fewer options, and is somewhat slower, though still quite fast (see appendix). We run the GC-RANSAC code with \emph{distance\_ratio=0.6} and \emph{spatial\_coherence\_weight=0}, which effectively makes it LO-RANSAC and not GC-RANSAC. We also enable PROSAC and ELC. For ICP we use open3D, with \emph{threshold=0.6}. For DGR, PointDSC and TEASER++ we use the official implementations, with slight modifications. We use our own implementation for training FCGF features. We train the FCGF network for 400 epochs, the PointDSC network for 50 epochs and the DGR network for 40 epochs. In training the FCGF network, instead of augmenting with general rotations, we augment with nearly-planar rotations, where yaw is in the range $\pm180$ degrees, but pitch and roll are only up to $\pm5$ degrees. That represents the rotations expected between pairs of LiDAR frames. We use two machines for our experiments: \begin{enumerate}[label=\Alph*] \item GPU: 4x Titan X, CPU: 20-core 2.20GHz Xeon E5-2630 \item GPU: GTX 980 Ti, CPU: 8-core 4.00GHz i7-6700K \end{enumerate} Most of our tests are performed on machine A, using a single GPU. TEASER++ is run on machine B, because its code fails to run on machine A. To compare running time, we extrapolate TEASER++'s presumptive running time on machine A. To do so, we calculate a normalizing ratio by running RANSAC on both machines. In the appendix we analyze the differences in CPU and GPU running times across machines. \subsection{Stress-Testing LiDAR registration} \begin{table} \begin{center} \begin{tabular}{|c||c|c||c|c|}\hline & \multicolumn{2}{c|}{Algo. only} & \multicolumn{2}{c|}{with ICP} \\\cline{2-5} & Recall & Time(s) & Recall & Time(s) \\\hline\hline DGR & 57.91\% & 0.453 & 61.81\% & 0.494\\\hline PointDSC & 80.56\% & 0.236 & 82.48\% & 0.290 \\\hline TEASER++ & 77.43\% & 0.331 & 86.88\% & 0.378\\\hline RANSAC (mutual) & \uu{84.14\%} & \textbf{0.040} & \uu{89.01\%} & \textbf{0.099} \\\hline RANSAC (GPF) & \textbf{86.88\%} & \uu{0.199} & \textbf{91.90\%} & \uu{0.257} \\\hline \end{tabular} \end{center} \caption{\textbf{Evaluation of algorithms}: Registration results for algorithms trained and tested on the \emph{NuScenes-Boston-Balanced} set. The right side of the table shows results with ICP refinement, the left side without. The best result in each column is in bold and the second best is underlined. The RANSAC algorithms are both faster and more accurate than all other algorithms, with and without ICP refinement. The fastest results are achieved by RANSAC with mutual-nearest neighbors filtering. The highest accuracy is achieved by RANSAC with Grid-Prioritized Filtering (GPF). } \begin{comment} DGR: /home/amnon/reference/DGR_fork/logs/timing_full_run_DGR_B_to_B.txt PointDSC: /home/amnon/reference/PointDSC_Fork/logs/timing_PointDSC_from_B_0.2_to_B_full_run.txt TEASER: /home/ad/old_drive/home/ad/PycharmProjects/reference/PointDSC_Fork/outputs/NuScenesBoston.Test.20210712_20_34_22/log.txt timing factor for TEASER is 0.75.
\end{comment} \label{table:tight_results_train_B_test_B} \end{table} In \cref{table:tight_results_train_B_test_B} and \cref{fig:benchmark_B_to_B} we present the results of using the \emph{NuScenes-Boston-Balanced} dataset to compare between DGR, PointDSC, TEASER++, and RANSAC with two pre-filtering algorithms: MNN (with max-iterations=1M, confidence=0.9995), and GPF(3.0) (with max-iterations=1M, and confidence=0.999). The fastest results are achieved by RANSAC with MNN filtering. The highest accuracy is achieved by RANSAC with GPF. We analyze the failures of RANSAC in this experiment in Figure~\ref{fig:failure_analysis}. We show the distribution of successful registrations and failed ones, according to several measures: distance between the point clouds, overlap, time offset, and three axes of rotation. Large distance and small overlap emerge as the most influential parameters for failure. Other parameters seem to have little influence, except in the most extreme cases. \begin{figure*} \begin{center} \includegraphics[width=1\textwidth]{fail_analysis.png} \end{center} \caption{\textbf{Analysis of Failures.} We show the distribution of failed samples when running RANSAC (GPF) on the \emph{NuScenes-Boston-Balanced} dataset, with ICP refinement (see \cref{table:tight_results_train_B_test_B}). On the top row we show the distribution of successful registrations (blue) and failed ones (red), according to several parameters. On the bottom row, we show the ratio of failures for each bin in the corresponding top row histogram. Large distance and small overlap emerge as the most influential parameters for failure. Other parameters seem to have little influence, except in the most extreme cases. } \label{fig:failure_analysis} \end{figure*} \begin{table} \begin{center} \begin{tabular}{|c||c|c||c|c|}\hline & \multicolumn{2}{c|}{Algo. only} & \multicolumn{2}{c|}{with ICP} \\\cline{2-5} & Recall & Time(s) & Recall & Time(s) \\\hline\hline DGR & 44.95\% & 0.418 & 48.07\% & 0.462\\\hline PointDSC & 63.97\% & 0.234 & 66.78\% & 0.293\\\hline TEASER++ & 59.88\% & 0.146 & 71.99\% & 0.213 \\\hline RANSAC (mutual) & \uu{66.94\%} & \textbf{0.107} & \uu{74.31\%} & \textbf{0.171} \\\hline RANSAC (GPF) & \textbf{69.14\%} & \uu{0.113} & \textbf{77.70\%} & \uu{0.177} \\\hline \hline \end{tabular} \end{center} \caption{\textbf{Cross-Domain Evaluation}: Similar to \cref{table:tight_results_train_B_test_B}, except that here training was performed on \emph{Apollo-Southbay-Balanced} and testing on \emph{NuScenes-Boston-Balanced}. All recall values are lower than in \cref{table:tight_results_train_B_test_B}, indicating that FCGF features are not fully transferable. The best result in each column is in bold and the second best is underlined. The ordering between methods remains similar to \cref{table:tight_results_train_B_test_B}, except that here TEASER++ is faster than PointDSC.} \begin{comment} DGR: /home/amnon/reference/DGR_fork/logs/timing_full_run_DGR_A_to_B.txt PointDSC: /home/amnon/reference/PointDSC_Fork/logs/timing_PointDSC_from_A_0.2_to_B_full_run.txt TEASER on ad2021: /home/ad/old_drive/home/ad/PycharmProjects/reference/PointDSC_Fork/outputs/NuScenesBoston.Test.20210712_21_50_48/log.txt Timing factor for TEASER is 0.75 \end{comment} \label{table:tight_results_train_A_test_B} \end{table} In \cref{table:tight_results_train_A_test_B} we look at the setting of cross-domain testing. 
Here, all networks (FCGF, DGR and PointDSC) are trained on the \emph{Apollo-Southbay-Balanced} dataset, but the testing is on \emph{NuScenes-Boston-Balanced}. The ordering between algorithms remains the same as in the previous experiment, except here TEASER++ is faster than PointDSC. However, all recall values suffer a significant drop in the cross-domain case. \Cref{fig:cross_domain} visualizes this drop in accuracy for the case with ICP, showing a mean drop in recall of 16 percentage points. Cross-domain accuracies are significantly lower than the same-domain accuracies that we have seen in \cref{table:tight_results_train_B_test_B}. We believe this shows that though FCGF features are quite transferable, some of their learning is location specific. In this experiment, we use GPF(2.0) with max-iterations=50K, and confidence=0.999. To allow clearer comparison, we use the same parameters also for the same-domain experiment in \cref{fig:cross_domain}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{cross_domain_bars_tight.png} \caption{\textbf{The effects of cross-domain testing}. When FCGF features are trained using a training set which is substantially different from the test set, we see a drop in accuracy. Here, the test set is from \emph{NuScenes-Boston-Balanced}, and the training set is either from \emph{NuScenes-Boston-Balanced} (same-domain, blue), or instead from \emph{Apollo-Southbay-Balanced} (cross-domain, orange). We see a drop in accuracy across all algorithms, of approximately 16 percentage points on average.} \label{fig:cross_domain} \end{figure} \begin{table} \centering \begin{tabular}{|c|c||c|c|c|c|}\hline Test & Train & RANSAC & RANSAC & PointDSC & TEASER++ \\ & & (GPF) & (mutual) & & \\\hline\hline \multirow{3}{*}{A} & A & \textbf{98.97} & \uu{96.97} & 94.02 & 96.65 \\\cline{2-6} & B & \textbf{93.84} & \uu{93.25} & 88.53 & 92.62 \\\cline{2-6} & S & \textbf{97.52} & 94.86 & 93.54 & \uu{95.16} \\\Xhline{2\arrayrulewidth} \multirow{3}{*}{B} & A & \textbf{77.70} & \uu{75.31} & 66.40 & 72.11 \\\cline{2-6} & B & \textbf{91.13} & \uu{89.39} & 82.37 & 86.88 \\\cline{2-6} & S & \textbf{85.61} & \uu{80.79} & 75.39 & 79.63 \\\Xhline{2\arrayrulewidth} \multirow{3}{*}{S} & A & \textbf{88.43} & \uu{87.92} & 79.01 & 86.69 \\\cline{2-6} & B & \textbf{91.47} & \uu{90.59} & 82.02 & 89.16 \\\cline{2-6} & S & \textbf{94.60} & \uu{93.75} & 90.59 & 93.29 \\\hline \end{tabular} \caption{\textbf{All Registration Sets Cross-Domain}: Accuracy of several registration algorithms (all followed by ICP refinement), for every combination of training set and test set from our balanced registration sets: \emph{Apollo-Southbay-Balanced} (A), \emph{NuScenes-Boston-Balanced} (B) and \emph{NuScenes-Singapore-Balanced} (S). In all cases, accuracy drops when testing cross-domain. Training on A leads to the lowest cross domain results, while training on S leads to the highest. The highest result in each \emph{row} is in bold, the second underlined. The RANSAC methods are faster than the other methods (not shown here, see appendix).} \begin{comment} TEASER: /home/ad/old_drive/home/ad/PycharmProjects/reference/PointDSC_Fork/logs/full_table_TEASER.txt inlier stats: /home/amnon/reference/PointDSC_Fork/logs/inlier_stats_all.txt \end{comment} \label{tab:3_by_3} \end{table} \Cref{tab:3_by_3} presents a thorough test of our new datasets \emph{Apollo-Southbay-Balanced}, \emph{NuScenes-Boston-Balanced} and \emph{NuScenes-Singapore-Balanced}. 
In each of the 9 experiments, one dataset is used for training, and another for testing. We test four algorithms: PointDSC, TEASER++, RANSAC with MNN filtering, and RANSAC with GPF. ICP is used for refinement in all cases. Point clouds from the Apollo-Southbay dataset are $\sim\!\!\!2x$ larger than those from NuScenes, and this ratio is maintained even after mutual-nearest neighbor filtering. As a result, TEASER++ tends to get stuck often ($\sim\!\!\!\!15\%$ of cases) when working on Apollo-Southbay point clouds. To overcome this, we use two mechanisms. First, a stricter filtering than usual for TEASER++: we first filter with MNN, and then filter with GPF, keeping a maximum of 2000 pairs. Second, we use a time-out of 10 seconds, after which registration is marked as failed. This happens very rarely (less than 0.1\% of cases). The larger point-clouds in Apollo-Southbay also affect our settings for RANSAC+GPF. We use GPF(1.0) when testing on Apollo-Southbay, and GPF(2.0) when testing on NuScenes. All other settings for RANSAC are as in the previous experiment. We can see that \emph{Apollo-Southbay-Balanced} is in a sense the simplest: it achieves the highest same-domain and cross-domain test results, but when networks are trained on it, they achieve the lowest cross-domain accuracy. Training on the \emph{NuScenes-Singapore-Balanced} dataset, on the other hand, leads to the highest cross-domain accuracies. As for the comparison between algorithms, RANSAC (GPF) is the most accurate, and RANSAC (mutual) the second most accurate, except in one setting where TEASER++ takes second place. Both RANSAC variants are also faster than the other algorithms in all cases (see appendix for running times). Balanced datasets can also be used to compare local registration algorithms, such as ICP. Such algorithms take an initial coarse motion estimation, and refine it to achieve a high accuracy alignment. To use them with our balanced registration sets, we supply a standard set of \emph{initial motions}, produced by performing RANSAC registration with FCGF features. These initial motions are generally close enough to the ground truth motion to allow local registration algorithms to succeed. In \cref{tab:refinement} we show the results of using the \emph{Apollo-Southbay-Balanced} dataset, and comparing three local registration algorithms: ICP~\cite{ICP}, symmetric-ICP~\cite{SymICP}, and BBR-F~\cite{BBR}. We use the official implementations of symmetric-ICP and BBR-F, and the open3d implementation of ICP. The point clouds are downsampled with a voxel-grid filter with a voxel size of 0.3 meters, and we set ICP's threshold to 0.6 meters (as we do in all experiments, following~\cite{DGR}). We report Recall, as well as translation error (TE) and rotation error (RE). We report mean, median and 95\textsuperscript{th} percentile of TE and RE, and these statistics are taken over \emph{all} test samples. The results show that ICP is more accurate than BBR-F, and both are considerably more accurate than symmetric-ICP. This differs from previous experiments in~\cite{BBR} that used a subset of KITTI. We believe the central factor is overlap between point-clouds: small overlap is common in our sets but not in KITTI. ICP explicitly filters point pairs whose distance is above a threshold, and BBR-F uses spatial mutual-nearest neighbors. These elements apparently give them an edge over Symmetric-ICP in this setting.
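For reference, the RE/TE metrics and the success criterion used throughout our experiments can be written compactly as follows (Python with NumPy; the clipping only guards against floating-point round-off):
\begin{verbatim}
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """RE: geodesic angle between estimated and ground-truth rotations."""
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def translation_error(t_est, t_gt):
    """TE: Euclidean distance between estimated and ground-truth offsets."""
    return np.linalg.norm(t_est - t_gt)

def is_success(R_est, t_est, R_gt, t_gt, max_re_deg=5.0, max_te_m=0.6):
    """Successful registration: RE < 5 degrees and TE < 0.6 meters."""
    return (rotation_error_deg(R_est, R_gt) < max_re_deg
            and translation_error(t_est, t_gt) < max_te_m)
\end{verbatim}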
\begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|c|}\hline & Recall & \multicolumn{3}{c|}{TE (cm)} & \multicolumn{3}{c|}{RE (deg)} \\\cline{3-8} && mean & 50\% & 95\% & mean & 50\% & 95\% \\\hline\hline ICP & 98.99\% & 80.65 & 11.76 & 30.29 & 0.37 & 0.13 & 0.33 \\\hline BBR-F & 96.33\% & 86.98 & 15.10 & 52.84 & 0.47 & 0.19 & 0.66 \\\hline sym-ICP & 67.74\% & 548.85 & 17.68 & 3544.49 & 2.31 & 0.22 & 10.66 \\\hline \end{tabular} \end{center} \caption{\textbf{Refinement Experiment}: We compare refinement algorithms using \emph{Apollo-Southbay-Balanced} and a set of initial (coarse) motions generated by RANSAC registration. We report mean, median and 95\textsuperscript{th} percentile taken over \emph{all} test samples. ICP does better than its competitors.} \label{tab:refinement} \end{table} \section{Ablation Studies} \subsection{RANSAC} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{RANSAC_ablation1.png} \caption{\textbf{RANSAC Ablation: PROSAC and ELC/SPRT}. We show the accuracy and running time of different variants of RANSAC, with ICP (right) and without (left). For each setting, we repeat the run 4 times and show the spread of results by a polygon (the convex hull). We also show their mean. The best results are obtained when we use both PROSAC and ELC.} \label{fig:RANSAC_ablation_1} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{RANSAC_ablation_2.png} \caption{\textbf{RANSAC Ablation: Local-Optimization}. Using the same visualization as \cref{fig:RANSAC_ablation_1}, we show that LO-RANSAC is superior to GC-RANSAC in our setting.} \label{fig:RANSAC_ablation_2} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{RANSAC_ablation_open3d.png} \caption{\textbf{RANSAC code bases: Open3D vs. GC-RANSAC}. We compare the Open3D implementation of RANSAC (with ELC) to the GC-RANSAC implementation in two settings: ``compatible'', which is as similar as possible to Open3D, and ``default'', which is what we use in most of our experiments. The Open3D implementation is slower than the GC-RANSAC one. It also does not offer PROSAC and LO-RANSAC, causing it to be less accurate. However, it is still faster and more accurate than all other algorithms we tested (TEASER++, PointDSC, DGR).} \label{fig:RANSAC_ablation_open3d} \end{figure} The version of RANSAC that we use in our experiments includes several improvements over classical RANSAC: \begin{enumerate} \item Prioritized selection of candidate sets (PROSAC; see the sketch below). \item Quick rejection of candidate sets (with ELC). \item Local-Optimization step (LO-RANSAC). \end{enumerate} We perform ablation studies to show the importance of each element. We both train and test on \emph{NuScenes-Boston-Balanced}, and use the same settings as in the experiment shown in Tab.~2 of the main paper, for the nearest-neighbor filtering case. All variants of RANSAC tested in this section are both faster and more accurate than the other algorithms we consider in our paper: TEASER++, PointDSC and DGR. The results of our first experiment are shown in \cref{fig:RANSAC_ablation_1}. We compare PROSAC to random selection of candidate sets, and in the quick rejection step, we compare ELC to SPRT. To show variance, we repeat each experiment 4 times, and plot both the mean and the convex hull of the 4 results.
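To make the first of these elements concrete, the following simplified sketch illustrates PROSAC-style prioritized sampling. It assumes that each putative FCGF correspondence carries a quality score, and it uses a simple linear pool-growth schedule rather than the more refined schedule of the original PROSAC algorithm or of the GC-RANSAC implementation we benchmark.

\begin{verbatim}
import numpy as np

def prosac_style_samples(scores, sample_size=3, num_iterations=1000, seed=0):
    # Simplified PROSAC-style sampler: correspondences are sorted by
    # descending quality, and the pool from which minimal samples are
    # drawn grows over time, so early hypotheses use the best matches.
    rng = np.random.default_rng(seed)
    order = np.argsort(-np.asarray(scores))  # best correspondences first
    n = len(order)
    for t in range(num_iterations):
        # Grow the pool linearly from sample_size up to all n matches.
        pool = max(sample_size, int(np.ceil(n * (t + 1) / num_iterations)))
        yield order[rng.choice(pool, size=sample_size, replace=False)]
\end{verbatim}

Each yielded index set would then be passed to the rigid-motion estimator, screened by the quick-rejection test, and scored by inlier counting, as in any RANSAC loop.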
The results demonstrate that adding PROSAC improves accuracy but also adds to running time, and that replacing SPRT with ELC improves both accuracy and running time. In \cref{fig:RANSAC_ablation_2} we show a comparison of LO-RANSAC to GC-RANSAC. In both cases we use PROSAC and ELC, and the only difference is the parameter \emph{spatial\_coherence\_weight}. To run LO-RANSAC we set it to 0. To run GC-RANSAC, we set it to its default value, 0.975. LO-RANSAC achieves higher recall than GC-RANSAC in our setting. We also tested other values of the parameter (not shown), and the best accuracy was achieved with 0 (i.e., LO-RANSAC). In \cref{fig:RANSAC_ablation_open3d} we compare the Open3D implementation of RANSAC to the GC-RANSAC implementation which we use for most experiments (we refer to it as \emph{GC-code}). The Open3D implementation includes ELC, but does not include local-optimization and PROSAC. For the fairest comparison, we run the GC-code in a ``compatible'' setting, also using ELC but no local-optimization and no PROSAC. For reference, we also run the GC-code with our default setting (ELC, PROSAC and LO-RANSAC). Open3D is considerably slower than either GC-code setting. It is less accurate than our default setting of GC-code, but interestingly more accurate than the ``compatible'' setting. Possibly, this is due to differences in the implementation of early stopping. Open3D RANSAC is both faster and more accurate than all other algorithms we tested (compare \cref{fig:RANSAC_ablation_open3d} here to Fig.~6 in the main paper). \subsection{GPF} In \cref{fig:GPF_ablation} we demonstrate the effect of the number of iterations and of the GPF parameter when running RANSAC+GPF. We can see that adding iterations always increases running time, but accuracy plateaus at some point. Increasing the GPF parameter $\phi$, which corresponds to keeping a larger set of point-pairs, leads to an increase in both running time and accuracy. However, the increase in accuracy becomes considerably slower as we raise the parameter above $3.0$. In our main experiments we used the parameter values $1.0$, $2.0$ and $3.0$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{GPF_ablation_iters.png} \includegraphics[width=0.5\textwidth]{GPF_ablation_param.png} \caption{\textbf{GPF Ablation}. We show the effects of different values of \emph{max-iteration} (top) and of $\phi$ (bottom). Increasing max-iterations improves accuracy only up to a point, after which accuracy plateaus while running time increases. Increasing $\phi$ improves accuracy and increases running time, and the plateau phenomenon is much less pronounced.} \label{fig:GPF_ablation} \end{figure} \section{GPU and CPU Running Time on Different Machines} \label{sec:running_time} Some of the registration algorithms that we compare rely mostly on the GPU for processing (PointDSC, DGR), while others mostly use the CPU (TEASER++, RANSAC). Therefore, a comparison of running times between these algorithms depends on the specific machine being used. We demonstrate this in \cref{tab:GPU_CPU_times}, by running the same experiment on two machines. The machines that we use are: \begin{enumerate}[label=\Alph*] \item GPU: 4x Titan X, CPU: 20-core 2.20GHz Xeon E5-2630 \item GPU: GTX 980 Ti, CPU: 8-core 4.00GHz i7-6700K \end{enumerate} On either machine, we use only one GPU for testing.
The experiment consists of testing PointDSC and RANSAC on the \emph{NuScenes-Boston-Balanced} dataset (training was also performed on the same dataset). We report the running times on both machines. The ratio between the running times of PointDSC and RANSAC differs between the machines, reflecting the different mixes of CPU and GPU capabilities in each machine. For this experiment, we used the Open3D implementation of RANSAC. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|}\hline Algorithm & Main & Machine A & Machine B \\ & Resource & Time (s) & Time (s) \\\hline\hline PointDSC & GPU & 0.236 & 0.330 \\\hline RANSAC & CPU & 0.109 & 0.135 \\\hline Ratio PointDSC/RANSAC & & 2.44 & 2.17 \\\hline \end{tabular} \end{center} \caption{\textbf{Running Times on Two Machines:} Comparison of times between CPU-bound algorithms (such as RANSAC) and GPU-bound ones (such as PointDSC) depends on the specific machine. We demonstrate this by running the same experiments on two machines. The ratio of running times between PointDSC and RANSAC is different on each machine, reflecting each machine's mix of CPU and GPU capabilities.} \label{tab:GPU_CPU_times} \end{table} \begin{table} \centering \begin{tabular}{|c|c||c|c|c|c|}\hline Test & Train & RANSAC & RANSAC & PointDSC & TEASER++ \\ & & (GPF) & (mutual) & & \\\hline\hline \multirow{3}{*}{A} & A & \uu{0.326} & \textbf{0.292} & 0.691 & 0.781 \\\cline{2-6} & B & \uu{0.336} & \textbf{0.330} & 0.725 & 0.354 \\\cline{2-6} & S & \uu{0.346} & \textbf{0.317} & 0.702 & 0.449 \\\Xhline{2\arrayrulewidth} \multirow{3}{*}{B} & A & \uu{0.177} & \textbf{0.171} & 0.451 & 0.277 \\\cline{2-6} & B & \uu{0.157} & \textbf{0.098} & 0.432 & 0.486 \\\cline{2-6} & S & \uu{0.177} & \textbf{0.124} & 0.437 & 0.477 \\\Xhline{2\arrayrulewidth} \multirow{3}{*}{S} & A & \uu{0.228} & \textbf{0.202} & 0.616 & 0.250 \\\cline{2-6} & B & \uu{0.224} & \textbf{0.147} & 0.608 & 0.258 \\\cline{2-6} & S & \uu{0.237} & \textbf{0.119} & 0.589 & 0.846 \\\hline \end{tabular} \caption{\textbf{Running Times for All Registration Sets Cross-Domain Experiment}: Running times in seconds for the experiments in Tab.~4 in the main paper. The fastest in each row (always RANSAC mutual) is in bold, the second fastest (always RANSAC+GPF) is underlined.} \label{tab:3_by_3_runtime} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c||c|c|}\hline Test & Train & Initial & MNN-filtered \\\hline\hline \multirow{3}{*}{A} & A & 23520 & 2123 \\\cline{2-4} & B & 23520 & 1717 \\\cline{2-4} & S & 23520 & 1830 \\\Xhline{2\arrayrulewidth} \multirow{3}{*}{B} & A & 8091 & 766 \\\cline{2-4} & B & 8091 & 837 \\\cline{2-4} & S & 8091 & 841 \\\Xhline{2\arrayrulewidth} \multirow{3}{*}{S} & A & 10335 & 1106 \\\cline{2-4} & B & 10335 & 1104 \\\cline{2-4} & S & 10335 & 1198 \\\hline \end{tabular} \end{center} \caption{\textbf{Pair-Match Set Sizes}: The number of putative pair-matches for different experiments, before and after mutual-nearest neighbor (MNN) filtering. The datasets are \emph{Apollo-Southbay-Balanced} (A), \emph{NuScenes-Boston-Balanced} (B) and \emph{NuScenes-Singapore-Balanced} (S).
The values shown are averaged over all samples in each dataset.} \label{tab:inlier_count} \end{table} \section{Additional Data for All-Set Cross-Domain Experiment} In \cref{tab:3_by_3_runtime} we show the running times for the All-Set Cross-Domain experiment (Tab.~4 in the main paper). RANSAC with MNN is the fastest, and RANSAC+GPF the second fastest, in every experiment. We mention in the paper that \emph{Apollo-Southbay-Balanced} has larger point clouds than the other datasets we use, and that this is also true after mutual-nearest neighbor filtering. In~\cref{tab:inlier_count} we show these sizes for all our experiments. \section{Code Bases} In our work we make use of the following code bases: \textbf{FCGF~\cite{FCGF}}: \url{https://github.com/chrischoy/FCGF} (MIT License) \noindent\textbf{DGR~\cite{DGR}}: \url{https://github.com/chrischoy/DeepGlobalRegistration} (MIT License) \noindent\textbf{Minkowski Engine~\cite{minkowski}}: \url{https://github.com/NVIDIA/MinkowskiEngine} (MIT License) \noindent\textbf{PointDSC~\cite{PointDSC}}: \url{https://github.com/XuyangBai/PointDSC} \noindent\textbf{TEASER++~\cite{TEASER}}: \url{https://github.com/MIT-SPARK/TEASER-plusplus} (MIT License) \noindent\textbf{Open3D~\cite{Open3D}}: \url{https://github.com/isl-org/Open3D} (MIT License) \noindent\textbf{GC-RANSAC~\cite{GC_RANSAC}}: \url{https://github.com/danini/graph-cut-ransac} (new BSD License) \noindent\textbf{Symmetric-ICP~\cite{SymICP}}: \url{https://gfx.cs.princeton.edu/proj/trimesh2/} (GPL Version 2 License) \noindent\textbf{Best-Buddies Registration~\cite{BBR}}: \url{https://github.com/AmnonDrory/BestBuddiesRegistration} \end{appendices} \section{Conclusion} We propose an algorithm for producing balanced and challenging registration sets for the automotive LiDAR setting, and use these sets to stress-test several point-cloud registration algorithms. In our experiments, the most accurate results are achieved by using deep-learned features (specifically FCGF) that were trained and tested on the same domain, combined with advanced RANSAC with Grid-Prioritized Filtering (GPF). We believe that our provided set of tools will help in advancing the field. While we demonstrated our approach on one set of learned features (FCGF), our analysis is equally applicable to other types of learned features \cite{D3Feat,PREDATOR}. \bibliographystyle{ieee_fullname}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} The very name \emph{noise radar} suggests the nature of its transmit signal: noise \cite{cooper1967random,narayanan1998design,lukin1998millimeter,dawood2001roc,narayanan2002UWB,tarchi2010SAR,kulpa2013signal,wasserzier2019noise,savci2020noise}. This sets it apart from other types of radars, such as frequency-modulated continuous-wave (FMCW) radars, whose transmit signals are deterministic. There is no denying that FMCW radars are more popular than noise radars. However, from a practical perspective, the randomness of a noise radar's transmit signal endows it with desirable properties: low probability of intercept, immunity against noise and jamming, and a ``thumbtack'' ambiguity function \cite{thayaparan2006noise,narayanan2016noise}. For these reasons, there has always been a latent undercurrent of research aimed at building noise radars \cite{narayanan2004design,stove2016design,savci2019trials}. But there is a second reason why noise radars are a worthwhile subject of research. There exists at least one other type of radar whose transmit signal is also nondeterministic: \emph{quantum two-mode squeezing} (QTMS) radar, a type of quantum radar \cite{chang2018quantum,luong2019roc}. It turns out that noise radars are closely allied with QTMS radars \cite{luong2019cov}, which links them to quantum radars more generally. This motivates us to examine the theory of noise radar more carefully. Until recently, quantum radars were confined to the realm of theory \cite{lloyd2008qi,tan2008quantum,barzanjeh2015mqi,wilde2017gaussian} except for a handful of quantum lidar experiments \cite{lopaeva2013qi,zhuang2017entanglementlidars,england2018quantum}. However, in 2018, a team led by Wilson at the Institute for Quantum Computing (University of Waterloo) demonstrated the viability of a \emph{quantum-enhanced noise radar} at microwave frequencies \cite{chang2018quantum}. This experiment was later analyzed using more conventional radar engineering metrics, and \cite{luong2019roc} was the first scientific publication in the world to publish \emph{receiver operating characteristic} (ROC) \emph{curves} for a quantum radar experiment. This experiment, whose leading results were later confirmed by a similar experiment at the Institute of Science and Technology Austria \cite{barzanjeh2019experimental}, showed that microwave quantum radars can be built in the lab. Although we introduced the term \emph{QTMS radar} in \cite{luong2019roc} to emphasize the vastly different technology underlying the new quantum radar design, the term \emph{quantum-enhanced noise radar} highlights the theoretical similarities between QTMS radars and standard noise radars. Where detection performance is concerned, we can speak of them collectively as ``noise-type radars''. The main theoretical result that ties noise-type radars together is that they are characterized by a covariance matrix with a very specific structure \cite{luong2019cov}. The matrix depends on four parameters: the amplitude (or power) of the received signal, the amplitude of the internal signal used for matched filtering, the correlation coefficient between the two signals, and the relative phase between the signals. In previous work, we highlighted the importance of the correlation coefficient for target detection, and investigated a method for estimating it \cite{luong2019rice}.
This method was based on minimizing the Frobenius norm between the structured covariance matrix and the sample covariance matrix, the latter being calculated directly from the measurement data. The minimization was performed numerically, which is not practical in many radar systems. In this paper, we show that this minimization can be done analytically, which greatly increases the applicability of our results to real-world systems. We exhibit exact, closed-form estimates not only for the correlation coefficient, but for all four parameters in the noise radar covariance matrix. We also show that, by a curious coincidence, the same estimates are obtained via maximum likelihood parameter estimation. The remainder of this paper is organized as follows. In Sec.\ \ref{sec:background}, we introduce the covariance matrix that characterizes noise-type radars. In Sec.\ \ref{sec:estimating}, we give estimators for the four parameters in the covariance matrix. (The relevant proofs, however, have been relegated to the Appendixes.) In Sec.\ \ref{sec:pdfs}, we characterize the probability distributions of the estimators. Since some of these distributions are complicated, we also give approximations. In Sec.\ \ref{sec:target_detection}, we use these results to analyze the detection performance of noise-type radars. Sec.\ \ref{sec:conclusion} concludes the paper. \section{The Covariance Matrix for Noise-Type Radars} \label{sec:background} In \cite{luong2019cov}, we showed that, under certain conditions, noise-type radars are completely described by a $4 \times 4$ covariance matrix which we will now describe. It is well known that an electromagnetic signal can be described by a pair of real-valued time series, namely the \emph{in-phase} and \emph{quadrature} voltages of the signal. A noise-type radar, in the simplest case, has two signals associated with it (for a total of four time series): the signal received by the radar and a signal retained within the radar as a reference for matched filtering. We will denote by $I_1[n]$ and $Q_1[n]$ the in-phase and quadrature voltages, respectively, of the received signal. Similarly, let $I_2[n]$ and $Q_2[n]$ denote the in-phase and quadrature voltages of the reference signal. We assume that these voltages are digitized, so these are discrete time series indexed by $n$. Note that the \emph{transmitted} signal is not explicitly modeled here. All knowledge of the transmitted signal is encoded in the reference signal. The latter may be thought of as a ``copy'' of the transmitted signal, though it is important to note that this copy is necessarily imperfect. The uncertainty principle of quantum mechanics, as applied to in-phase and quadrature voltages, guarantees the existence of a certain amount of error between the transmitted and reference signals \cite{luong2020magazine}. This minimum error manifests itself as noise, which may be termed \emph{quantum noise}. We now make the assumption that justifies the name ``noise radar'': we assume that the transmitted and reference signals are stationary Gaussian white noise processes with zero mean. We also make the assumption that any other source of noise, such as system noise or atmospheric noise, may be modeled as additive white Gaussian noise. (Note that quantum noise is known to be Gaussian.) Consequently, the received signal is also a stationary Gaussian white noise process.
In short, the four time series $I_1[n]$, $Q_1[n]$, $I_2[n]$, and $Q_2[n]$ are real-valued, zero-mean, stationary Gaussian white noise processes; this allows us to simplify the notation by dropping the index $n$. Finally, we assume that these four processes are pairwise independent unless the time lag between the voltages is zero. Under the above conditions, the received and reference signals of a QTMS radar are fully specified by the $4 \times 4$ covariance matrix $\expval{\vec{x}\vec{x}^\T}$, where $\vec{x} = [I_1, Q_1, I_2, Q_2]^\T$. In \cite{luong2019cov}, we proved that this matrix has a very specific structure. In block matrix format, we may write it as \begin{equation} \label{eq:QTMS_cov} \mat{\Sigma}(\sigma_1, \sigma_2, \rho, \phi) = \begin{bmatrix} \sigma_1^2 \mat{1}_2 & \rho \sigma_1 \sigma_2 \mat{R}'(\phi) \\ \rho \sigma_1 \sigma_2 \mat{R}'(\phi)^\T & \sigma_2^2 \mat{1}_2 \end{bmatrix} \end{equation} where $\sigma_1^2$ and $\sigma_2^2$ are the received and reference signal powers, respectively, while $\rho$ is a correlation coefficient, $\phi$ is the phase shift between the signals, $\mat{1}_2$ is the $2 \times 2$ identity matrix, and $\mat{R}'(\phi)$ is the reflection matrix \begin{equation} \mat{R}'(\phi) = \begin{bmatrix} \cos \phi & \sin \phi \\ \sin \phi & -\cos \phi \end{bmatrix} \! . \end{equation} Standard noise radars are described by a matrix of the same overall form, but with the rotation matrix \begin{equation} \mat{R}(\phi) = \begin{bmatrix} \cos \phi & \sin \phi \\ -\sin \phi & \cos \phi \end{bmatrix} \end{equation} taking the place of the reflection matrix. The results in this paper hold for both standard noise radars and QTMS radars after appropriate choices of sign as detailed below. We assume $\sigma_1 \geq 0$, $\sigma_2 \geq 0$, and $\rho \geq 0$ because their signs can always be accounted for by an appropriate choice of $\phi$. The contribution of this paper is the derivation of estimators for $\sigma_1$, $\sigma_2$, $\rho$, and $\phi$, as well as the presentation of results related to these estimators. \section{Estimating the Parameters of the Covariance Matrix} \label{sec:estimating} We will estimate the four parameters in \eqref{eq:QTMS_cov} via two methods. The first is a ``naive'' method which we might term the \emph{minimum Frobenius norm} (MFN) method. The second is maximum likelihood (ML) estimation. Both methods start with the sample covariance matrix \begin{align} \label{eq:sample_cov} \hat{\mat{S}} = \frac{1}{N} \sum_{n=1}^N \vec{x}[n] \vec{x}[n]^\T\!, \end{align} calculated from $N$ instances of the random vector $\vec{x}$---that is, $N$ samples each from the in-phase and quadrature voltages of the received and reference signals. In radar terminology, we say that we integrate over $N$ samples of the radar's measurement data. Note that, as a consequence of the assumptions outlined in Sec.\ \ref{sec:background}, each sample is independent and identically distributed. In the following, we will use an overline to denote the sample mean over $N$ samples. For example, $\hat{\mat{S}} = \overline{\vec{x}\vec{x}^\T}$. \subsection{Minimum Frobenius Norm Estimation} \label{subsec:MFN_est} The MFN method consists of minimizing the Frobenius norm between the structured covariance matrix \eqref{eq:QTMS_cov} and the sample covariance matrix \eqref{eq:sample_cov}. 
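As a concrete illustration of the quantities involved, the following NumPy sketch (our own, with arbitrary parameter values) constructs the structured matrix $\mat{\Sigma}$ of \eqref{eq:QTMS_cov} and a sample covariance matrix $\hat{\mat{S}}$ from simulated voltages; these are precisely the two matrices that the minimization compares.

\begin{verbatim}
import numpy as np

def structured_cov(sigma1, sigma2, rho, phi, qtms=True):
    # Structured covariance of eq. (QTMS_cov): a QTMS radar uses the
    # reflection matrix R'(phi); a standard noise radar uses the
    # rotation matrix R(phi) instead.
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, s], [s, -c]]) if qtms else np.array([[c, s], [-s, c]])
    off = rho * sigma1 * sigma2 * R
    return np.block([[sigma1**2 * np.eye(2), off],
                     [off.T, sigma2**2 * np.eye(2)]])

rng = np.random.default_rng(1)
Sigma = structured_cov(1.0, 1.2, 0.2, 0.3)
N = 100_000
x = rng.multivariate_normal(np.zeros(4), Sigma, size=N)  # rows: [I1,Q1,I2,Q2]
S_hat = x.T @ x / N  # sample covariance; approaches Sigma as N grows
\end{verbatim}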
More concretely, we perform the minimization \begin{equation} \label{eq:minimization} \min_{\sigma_1, \sigma_2, \rho, \phi} \mleft\| \mat{\Sigma}(\sigma_1, \sigma_2, \rho, \phi) - \hat{\mat{S}} \mright\|_F \end{equation} subject to the constraints $0 \leq \sigma_1$, $0 \leq \sigma_2$, and $0 \leq \rho \leq 1$. (The subscript $F$ denotes the Frobenius norm.) The MFN estimators $\hat{\sigma}_1$, $\hat{\sigma}_2$, $\hat{\rho}$, and $\hat{\phi}$ are the arguments which minimize \eqref{eq:minimization}. In \cite{luong2019rice}, we obtained estimates of $\rho$ by performing the minimization \eqref{eq:minimization} numerically. This procedure is computationally expensive and would be impractical in many radar setups. The results in this paper allow us to do away with numerical optimization altogether. \subsection{Maximum Likelihood Estimation} The probability density function for a 4-dimensional multivariate normal distribution with zero mean and covariance matrix $\mat{\Sigma}$ is \begin{equation} f(\vec{x}|\mat{\Sigma}) = \frac{\exp \mleft( -\frac{1}{2} \vec{x}^\T \mat{\Sigma}^{-1} \vec{x} \mright)}{\sqrt{(2 \pi)^4 |\mat{\Sigma}|}} \end{equation} where $|\mat{\Sigma}|$ is the determinant of $\mat{\Sigma}$. When considered as a function of $\mat{\Sigma}$ instead of $\vec{x}$, this becomes the likelihood function. The ML estimators arise from maximizing the likelihood function, or equivalently, the log-likelihood function. For $N$ independently drawn samples $\vec{x}[1], \dots, \vec{x}[N]$, the log-likelihood is \begin{equation} \label{eq:log_l} \ell(\mat{\Sigma}) = -\frac{N}{2} \mleft( \ln |\mat{\Sigma}| + 4 \ln(2 \pi) + \overline{\vec{x}^\T \mat{\Sigma}^{-1} \vec{x}} \mright). \end{equation} \subsection{Parameter Estimates} One of the main results of this paper, and perhaps the most surprising of them, is that the MFN and ML methods lead to the same estimators. We will relegate the actual derivations of the estimators to the Appendixes. Here we present only the final result, namely the estimators themselves as obtained from both methods. In order to express the estimators in a compact form, we introduce the following auxiliary quantities: \begin{subequations} \begin{align} \label{eq:aux_P1} P_1 &= I_1^2 + Q_1^2 \\ \label{eq:aux_P2} P_2 &= I_2^2 + Q_2^2 \\ \label{eq:aux_Rc} R_c &= I_1 I_2 \mp Q_1 Q_2 \\ \label{eq:aux_Rs} R_s &= I_1 Q_2 \pm I_2 Q_1. \end{align} \end{subequations} For $R_c$ and $R_s$, the upper signs apply when the reflection matrix $\mat{R}'(\phi)$ is used in \eqref{eq:QTMS_cov} (QTMS radar); the lower signs apply when the rotation matrix $\mat{R}(\phi)$ is used (standard noise radar). Note that $\bar{P}_1$, $\bar{P}_2$, $\bar{R}_c$, and $\bar{R}_s$ are merely sums of the appropriate entries in the sample covariance matrix $\hat{\mat{S}}$. \begin{proposition} In terms of the auxiliary quantities \eqref{eq:aux_P1}--\eqref{eq:aux_Rs}, the MFN and ML estimators for the four parameters in \eqref{eq:QTMS_cov} are % \begin{subequations} \begin{align} \label{eq:est_sigma1} \hat{\sigma}_1 &= \sqrt{ \frac{\bar{P}_1}{2} } \\ \label{eq:est_sigma2} \hat{\sigma}_2 &= \sqrt{ \frac{\bar{P}_2}{2} } \\ \label{eq:est_rho} \hat{\rho} &= \sqrt{ \frac{\bar{R}_c^2 + \bar{R}_s^2}{\bar{P}_1 \bar{P}_2} } \\ \label{eq:est_phi} \hat{\phi} &= \atantwo(\bar{R}_s, \bar{R}_c) \end{align} \end{subequations} % where $\atantwo(y, x)$ is the two-argument arctangent.
\end{proposition} \begin{proof} See Appendix \ref{app:mfn} for a proof that these are the MFN estimators, and Appendix \ref{app:ml} for a proof that these same estimators are also the ML estimators. \end{proof} \section{Probability Distributions for the Parameter Estimates} \label{sec:pdfs} In this section, we give expressions for the probability density functions (PDFs) of the estimators \eqref{eq:est_sigma1}--\eqref{eq:est_phi}. Of these, the most important is perhaps the one for $\hat{\rho}$, owing to its role in target detection, a connection which we will explore in Sec.\ \ref{sec:target_detection}. However, for completeness, we give PDFs for all four estimators. For $\hat{\rho}$ and $\hat{\phi}$, the exact PDFs are quite complicated, so we will give simple approximations to these distributions. In order to quantify the goodness of these approximations, we will make use of a metric on probability distributions known as the \emph{total variation distance} (TVD). Informally speaking, the TVD between two probability distributions is defined as the maximum possible difference between the probabilities assigned to the same event by the two distributions. It always lies in the interval $[0,1]$. According to Lemma 2.1 of \cite{tsybakov2009introduction}, when the distributions are described by PDFs, the TVD is \begin{equation} \label{eq:totVarDist} \mathit{TVD} = \frac{1}{2} \int \mleft| f(x) - g(x) \mright| \, dx \end{equation} where $f(x)$ and $g(x)$ are the PDFs of the two distributions, and the integral is taken over the whole domain of the PDFs. Apart from furnishing us with a concrete formula for the TVD, this expression gives us a simpler interpretation of the TVD: it is half the integrated absolute error between the PDFs. \subsection{PDFs for $\hat{\sigma}_1$ and $\hat{\sigma}_2$} The distributions of the estimated signal amplitudes $\hat{\sigma}_1$ and $\hat{\sigma}_2$ are nothing more than rescaled versions of the chi distribution, as shown in the following proposition. \begin{proposition} The PDF of $\hat{\sigma}_1$ for $x \geq 0$ is % \begin{equation} \label{eq:PDF_sigma1} f_{\hat{\sigma}_1}(x | \sigma_1, N) = \frac{2N^N}{\Gamma(N) \sigma_1^{2N}} x^{2N-1} \exp \mleft( -\frac{N x^2}{\sigma_1^2} \mright) \end{equation} % where $\Gamma(N)$ denotes the gamma function. This also holds for $\hat{\sigma}_2$ when $\sigma_1$ is replaced with $\sigma_2$. \end{proposition} \begin{proof} Note that $N \bar{P}_1$ is a sum of squares of $2N$ independent and identically distributed normal random variables, namely $N$ instances each of $I_1$ and $Q_1$. Both $I_1$ and $Q_1$ have zero mean and standard deviation $\sigma_1$, as can be seen from \eqref{eq:QTMS_cov}. Thus, the rescaled random variable % \begin{equation} \sqrt{\frac{2N}{\sigma_1^2}} \hat{\sigma}_1 = \sqrt{\sum_{n=1}^N \mleft( \frac{i_1[n]}{\sigma_1} \mright)^{\!2} + \mleft( \frac{q_1[n]}{\sigma_1} \mright)^{\!2}}, \end{equation} % being the positive square root of the sum of squares of $2N$ standard normal variates, follows a chi distribution with $2N$ degrees of freedom. The proposition follows upon applying the standard change-of-variables formula to the PDF of the chi distribution. \end{proof} \begin{remark} The PDF \eqref{eq:PDF_sigma1} may be recognized as a Nakagami $m$-distribution \cite{nakagami1960m} with parameters $m = N$ and $\Omega = \sigma_1^2$.
\end{remark} \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{figures/PDF_sigma1.pdf}} \caption{Probability density function of $\hat{\sigma}_1$ when $\sigma_1 = 1$, $N \in \{25, 100, 250\}$.} \label{fig:PDF_sigma1} \end{figure} Plots of $f_{\hat{\sigma}_1}(x | \sigma_1, N)$ are shown in Fig.\ \ref{fig:PDF_sigma1}. \subsection{Exact and Approximate PDFs for $\hat{\rho}$} The derivation of the PDF for the estimated correlation coefficient $\hat{\rho}$ is extremely involved. But luckily, our task has been done for us. We exploit an intriguing connection between noise radar and the theory of two-channel synthetic aperture radar (SAR), in which matrices analogous to \eqref{eq:QTMS_cov} appear. (Note, however, that the matrices in two-channel SAR are $2 \times 2$ complex-valued matrices instead of $4 \times 4$ real-valued matrices.) In two-channel SAR, the quantity analogous to $\rho$ is known as the \emph{coherence}. An estimator for the coherence, essentially identical to \eqref{eq:est_rho}, was investigated in \cite{touzi1996statistics,touzi1999coherence,gierull2001unbiased,sikaneta2004detection}. We now quote one of their results here. \begin{proposition} When $N > 2$ and $\rho \neq 1$, the PDF of $\hat{\rho}$ for $0 \leq x \leq 1$ is % \vspace{-\jot} \begin{multline} \label{eq:PDF_rho} f_{\hat{\rho}}(x | \rho, N) = 2 (N-1)(1-\rho^2)^N \\ \times x(1-x^2)^{N-2} {}_2F_1(N, N; 1; \rho^2 x^2) \end{multline} % where ${}_2F_1$ is the Gaussian hypergeometric function. \end{proposition} \begin{proof} See Sec.\ VI of \cite{touzi1996statistics}. \end{proof} This expression is both numerically and analytically unwieldy (except when $\rho = 0$). However, we are able to supply an empirical PDF which approximates \eqref{eq:PDF_rho} well when $N$ is larger than approximately 100. In \cite{luong2019rice}, we showed that the correlation coefficients estimated using the MFN method (albeit with a numerical minimization instead of an analytic one) approximately follow a Rice distribution. Recall that the PDF of the Rice distribution is \begin{equation} \label{eq:PDF_rice} f_\text{Rice}(x | \alpha, \beta) = \frac{x}{\beta^2} \exp \mleft( -\frac{x^2 + \alpha^2} {2\beta^2} \mright) I_0 \mleft( \frac{x\alpha}{\beta^2} \mright) \end{equation} where $\alpha$ and $\beta$ are the parameters of the distribution, and $I_0$ is the modified Bessel function of the first kind of order zero (not to be confused with the in-phase voltages $I_1$ or $I_2$). The approximation derived in \cite{luong2019rice} may be summarized as follows. \begin{proposition} \label{prop:approx_rice} When $N \gtrapprox 100$, $\hat{\rho}$ approximately follows a Rice distribution with parameters % \begin{subequations} \begin{align} \label{eq:rho_rice_alpha} \alpha &= \rho \\ \label{eq:rho_rice_beta} \beta &= \frac{1-\rho^2}{\sqrt{2N}}. \end{align} \end{subequations} % \end{proposition} \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{figures/totVarDist_rho.pdf}} \caption{Total variation distance between the exact probability density function of $\hat{\rho}$ and the approximation described in Proposition \ref{prop:approx_rice}, plotted as a function of $N$, for $\rho \in \{0.3, 0.6, 0.9\}$.} \label{fig:totVarDist_rho} \end{figure} Because this is an empirical approximation, we can only give plausibility arguments based on numerical results. 
In Sec.\ V of \cite{luong2019rice}, we showed that this approximation is a good one by simulating radar detection data for various values of $\rho$ and $N$ and fitting Rice PDFs to the resulting histograms. We now build on that work by calculating the total variation distance $\mathit{TVD}_{\hat{\rho}}$ between the exact PDF \eqref{eq:PDF_rho} and the Rician approximation. Fig.\ \ref{fig:totVarDist_rho} shows plots of $\mathit{TVD}_{\hat{\rho}}$ as a function of $N$ for various values of $\rho$. We see that $\mathit{TVD}_{\hat{\rho}}$ increases with $\rho$ and decreases with $N$. At $N = 100$, $\mathit{TVD}_{\hat{\rho}}$ is lower than 0.05 even for $\rho$ as high as 0.9. This is strong evidence that the Rician approximation is indeed a good one when $N \gtrapprox 100$. \begin{remark} Although the expressions \eqref{eq:rho_rice_alpha} and \eqref{eq:rho_rice_beta} were empirically determined, with no basis other than simulations, the fact that $\hat{\rho}$ is approximately Rician for large $N$ has some theoretical grounding. The basic idea is that a Rice distribution is the distribution of the norm of a bivariate normal random vector whose covariance matrix is proportional to the identity. To connect this idea to $\hat{\rho}$, begin by invoking the central limit theorem to approximate $\bar{R}_c$ and $\bar{R}_s$ in \eqref{eq:est_rho} as normally distributed random variables. Next, replace $\bar{P}_1$ and $\bar{P}_2$ with the expected values $\expval{P_1} = 2 \sigma_1^2$ and $\expval{P_2} = 2 \sigma_2^2$, respectively. The result, up to first order in $\rho$, is a Rice-distributed random variable with $\alpha = \rho$ and $\beta = 1/\sqrt{2N}$. For a more detailed development of this argument, see Proposition \ref{prop:approx_detDN} and its proof. \end{remark} \begin{figure*}[t] \centering \subfloat[]{\includegraphics[width=\columnwidth]{figures/PDF_rho_rho.pdf} \label{subfig:PDF_rho_rho}} \hfil \subfloat[]{\includegraphics[width=\columnwidth]{figures/PDF_rho_N.pdf} \label{subfig:PDF_rho_N}} \caption{Probability density function of $\hat{\rho}$, together with the Rice distribution approximations described in Proposition \ref{prop:approx_rice}. In (a), $N = 10$ and $\rho \in \{0, 0.4, 0.8\}$; in (b), $\rho = 0.1$ and $N \in \{25, 50, 75, 100\}$.} \label{fig:PDF_rho} \end{figure*} In Fig.\ \ref{fig:PDF_rho}, we present plots of $f_{\hat{\rho}}(x | \rho, N)$ for various values of $\rho$ and $N$, together with the Rice distribution approximations. In Fig.\ \ref{subfig:PDF_rho_rho}, we see that the Rice distribution is not always a good fit because $N$ is small. Fig.\ \ref{subfig:PDF_rho_N} shows that the fit becomes quite good as $N$ increases; indeed, at $N = 100$ there is hardly any visible difference between the exact and approximate PDFs. A word of warning is appropriate here. The Rice distribution approximation outlined in Proposition \ref{prop:approx_rice} must not be confused with the Rice distribution that appears in the context of continuous-wave (CW) radars. It is true that, when a radar transmits a sinusoidal signal and detects using a square-law detector, the detector output is Rice distributed; see e.g.\ Ch.\ 4 of \cite{mahafza2000radar}. However, this is a completely different case from Proposition \ref{prop:approx_rice}. Not only is the transmit signal totally different (sinusoidal waveform vs.\ Gaussian noise), but Proposition \ref{prop:approx_rice} also describes an \emph{approximation}, whereas the Rice distribution for CW radars is \emph{exact}.
In the experience of the authors, the coincidental appearance of the Rice distribution in these two different contexts has led to confusion. Therefore, we emphasize that these two applications of the Rice distribution are unrelated. \subsection{Exact and Approximate PDFs for $\hat{\phi}$} Finally, we give the PDF of the estimated phase $\hat{\phi}$. Again, we are able to take over a result from two-channel SAR. \begin{proposition} The PDF of $\hat{\phi}$ is % \vspace{-\jot} \begin{multline} \label{eq:PDF_phi} f_{\hat{\phi}}(\theta | \rho, \phi, N) = \frac{\Gamma \big( N + \frac{1}{2} \big) (1 - \rho^2)^N \xi}{2 \sqrt{\pi} \Gamma(N) (1 - \xi^2)^{N + \frac{1}{2}}} \\ + \frac{(1-\rho^2)^N}{2\pi} {}_2F_1 \mleft( N, 1; \tfrac{1}{2}; \xi^2 \mright) \end{multline} % where % \begin{equation} \xi \equiv \rho \cos( \theta - \phi ). \end{equation} \end{proposition} \begin{proof} See Sec.\ 2 of \cite{lee1994statistics}. Alternative forms of the PDF are given in \cite[eq.\ (12)]{lopes1992phase} and \cite[eq.\ (10)]{joughin1994probability}. \end{proof} \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{figures/kappa_approx.pdf}} \caption{Concentration parameter $\kappa$ from the von Mises distribution when fitted to the distribution of $\hat{\phi}$, plotted as a function of $N\rho^2$. Also plotted are approximations to the best-fit $\kappa$.} \label{fig:kappa_approx} \end{figure} This expression is, if anything, even more unwieldy than \eqref{eq:PDF_rho}. However, after plotting the PDF \eqref{eq:PDF_phi} for many values of $\rho$ and $N$, we observed that it always has the same basic shape as the von Mises distribution. This is one of the most basic probability distributions in circular statistics, and can be thought of as the circular analog of the normal distribution. Its PDF is \begin{equation} f(\theta | \mu, \kappa) = \frac{e^{\kappa\cos(\theta - \mu)}}{2\pi I_0(\kappa)}, \end{equation} where $\mu$ and $\kappa$ are the parameters of the distribution. They correspond to the parameters of the normal distribution in the following sense: when $\kappa \to \infty$, the von Mises distribution approaches the normal distribution with mean $\mu$ and variance $1/\kappa$ (on an appropriate interval of length $2\pi$). Thus, $\mu$ is the mean and $\kappa$ is a ``concentration parameter'': the higher the $\kappa$, the narrower the distribution. In fitting the von Mises distribution to \eqref{eq:PDF_phi}, choosing $\mu$ is simple enough: since \eqref{eq:PDF_phi} is symmetric about $\phi$, we simply choose $\mu = \phi$. The concentration parameter $\kappa$, however, is less straightforward to choose. To fit a value for $\kappa$, we begin by calculating the so-called ``mean resultant length'', \begin{equation} \label{eq:phi_R} R = \mleft| \int_{-\pi}^{\pi} f_{\hat{\phi}}(\theta | \rho, \phi, N) e^{j\theta} \, d\theta \mright|. \end{equation} In \cite{sra2011vonMises}, an approximation of the parameter $\kappa$ is given in terms of the mean resultant length by \begin{equation} \label{eq:kappa_approx} \kappa \approx \frac{R(2 - R^2)}{1 - R^2}. \end{equation} In Fig.\ \ref{fig:kappa_approx}, we use \eqref{eq:phi_R} and \eqref{eq:kappa_approx} to plot $\kappa$ as a function of $N\rho^2$. The reason why we plot $\kappa$ against $N\rho^2$ is that $\kappa$ appears to depend on $\rho$ and $N$ only through this combination. This is not evident from \eqref{eq:PDF_phi}, but nevertheless this behavior holds good for a wide variety of values for $\rho$ and $N$. 
From this plot, we find that when $N\rho^2 \leq 1$, $\kappa \approx 2 \sqrt{N\rho^2}$, otherwise $\kappa \approx 2 N\rho^2$. These approximations are also shown in Fig.\ \ref{fig:kappa_approx}. This leads to the following proposition. \begin{proposition} \label{prop:approx_vonMises} The estimator $\hat{\phi}$ approximately follows a von Mises distribution with parameters % \begin{subequations} \begin{align} \mu &= \phi \\ \kappa &= \begin{cases} 2 \sqrt{N\rho^2}, & N\rho^2 \leq 1 \\ 2 N\rho^2, & N\rho^2 > 1. \end{cases} \end{align} \end{subequations} \end{proposition} \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{figures/totVarDist_phi.pdf}} \caption{Total variation distance between the exact probability density function of $\hat{\phi}$ and the approximation described in Proposition \ref{prop:approx_vonMises}, plotted as a function of $N$, for $\rho \in \{0.05, 0.1, 0.15, 0.2\}$.} \label{fig:totVarDist_phi} \end{figure} \begin{figure}[t] \centering \subfloat[]{\includegraphics[width=\columnwidth]{figures/PDF_phi_rho.pdf} \label{subfig:PDF_phi_rho}} \\ \subfloat[]{\includegraphics[width=\columnwidth]{figures/PDF_phi_N.pdf} \label{subfig:PDF_phi_N}} \caption{Probability density function of $\hat{\phi}$, together with the von Mises distribution approximation described in Proposition \ref{prop:approx_vonMises}. In (a), $N = 10$ and $\rho \in \{0, 0.2, 0.4\}$; in (b), $\rho = 0.1$ and $N \in \{25, 50, 250\}$. For all cases, $\phi = 0$.} \label{fig:PDF_phi} \end{figure} To show the plausibility of this empirical result, we again turn to the TVD. Fig.\ \ref{fig:totVarDist_phi} shows plots of $\mathit{TVD}_{\hat{\phi}}$ as a function of $N$ for various values of $\rho$. (Unfortunately, numerical instabilities prevented us from producing plots when $\rho$ is large, but we expect the behavior to be largely the same.) Unlike $\mathit{TVD}_{\hat{\rho}}$, $\mathit{TVD}_{\hat{\phi}}$ does not appear to decay fully to 0 as $N$ increases. However, an inspection of the vertical axis in Fig.\ \ref{fig:totVarDist_phi} shows that $\mathit{TVD}_{\hat{\phi}}$ is small for \emph{all} values of $N$. There are peaks corresponding to $N\rho^2 = 1$, which may perhaps be expected: this point marks the transition between the square-root and linear regimes in Fig.\ \ref{fig:kappa_approx}. We conclude that Proposition \ref{prop:approx_vonMises} is well-substantiated by numerical evidence. Fig.\ \ref{fig:PDF_phi} shows plots of $f_{\hat{\phi}}(\theta | \rho, 0, N)$ for various values of $\rho$ and $N$, as well as the corresponding von Mises distribution approximations. (We show only the case $\phi = 0$ because the shape of the plots remains the same for any value of $\phi$; only the location of the peak changes.) In all cases, the exact distribution is well-approximated by a von Mises distribution. \section{Target Detection and the Correlation Coefficient} \label{sec:target_detection} In this section, we apply the preceding results to the analysis of detection performance for noise-type radars. Of the four parameters that appear in \eqref{eq:QTMS_cov}, the correlation coefficient $\rho$ is the most important for target detection. In the absence of clutter, the presence or absence of a target can be reduced to a hypothesis test on $\rho$: \begin{equation} \label{eq:hypotheses} \begin{alignedat}{3} H_0&: \rho = 0 &&\quad\text{Target absent} \\ H_1&: \rho > 0 &&\quad\text{Target present} \end{alignedat} \end{equation} The reason for this is as follows. 
If there exists a correlation between the reference and received signals, there must be a target to reflect the transmitted signal to the receiver. If there were no target, the only signal received by the radar would be uncorrelated background noise. Now, it is obvious from the form of \eqref{eq:QTMS_cov} that any correlation between signals can only occur when $\rho > 0$. This explains the form of the hypothesis test \eqref{eq:hypotheses}. \subsection{Generalized Likelihood Ratio Test} One of the best-known methods for hypothesis testing is the generalized likelihood ratio (GLR) test. This entails maximizing the likelihood function under the two hypotheses. In previous work, we considered the case where the values of the nuisance parameters $\sigma_1$, $\sigma_2$, and $\phi$ were known \cite{luong2022likelihood}. In this paper, since we have ML estimates for those parameters, we need not make the same assumption. In fact, calculating the GLR test statistic---or the GLR \emph{detector}---is a simple task since we have the ML parameters. Unlike the complicated GLR detector derived in \cite{luong2022likelihood} under the assumption that $\sigma_1 = \sigma_2 = 1$ and $\phi = 0$, the GLR detector takes on a relatively simple form when all the parameters are unknown. In fact, it is equivalent to $\hat{\rho}$ itself, as we will now prove. \begin{proposition} The GLR test is equivalent to using $\hat{\rho}$ as a test statistic. \end{proposition} \begin{proof} The GLR test statistic for the hypotheses \eqref{eq:hypotheses} may be written as a difference of log-likelihoods: % \begin{equation} \label{eq:D_GLR} D_\text{GLR} = -2[ \ell(\hat{\sigma}_1, \hat{\sigma}_2, 0, \hat{\phi}) - \ell(\hat{\sigma}_1, \hat{\sigma}_2, \hat{\rho}, \hat{\phi}) ]. \end{equation} % Notice that the same estimators appear in both terms. This is permissible because the ML estimates $\hat{\sigma}_1$ and $\hat{\sigma}_2$ are the same under both hypotheses in \eqref{eq:hypotheses}. (The likelihood function does not depend on $\phi$ when $\rho = 0$, so it does not matter what value of $\phi$ is substituted.) See Appendix \ref{app:ml} for details. Substituting \eqref{eq:est_sigma1}--\eqref{eq:est_phi} into \eqref{eq:log_l}, we obtain % \begin{align} D_\text{GLR} &= 2N \ln \mleft( \frac{\bar{P}_1 \bar{P}_2}{\bar{P}_1 \bar{P}_2 - \bar{R}_c^2 - \bar{R}_s^2} \mright) \nonumber \\ &= -2N \ln(1 - \hat{\rho}^2). \end{align} % This is a strictly increasing function of $\hat{\rho}$. Since applying a strictly increasing function to a test statistic is equivalent to reparameterizing the decision threshold, the test itself does not change. The proposition follows. \end{proof} The gold standard for evaluating radar detection performance is the ROC curve, which plots the probability of detection $\ensuremath{p_\mathit{d}}$ against the probability of false alarm $\ensuremath{p_\mathit{fa}}$. In the case where $\hat{\rho}$ is used as a detector, obtaining the exact ROC curve requires an integration of \eqref{eq:PDF_rho}, which is extremely difficult. However, with the help of Proposition \ref{prop:approx_rice}, we can derive a closed-form approximation of the ROC curve. \begin{proposition} When $N \gtrapprox 100$, the ROC curve for the $\hat{\rho}$ detector is % \begin{equation} \label{eq:ROC_rho} \ensuremath{p_\mathit{d}}(\ensuremath{p_\mathit{fa}} | \rho, N) = Q_1 \mleft( \frac{\rho \sqrt{2N}}{1 - \rho^2}, \frac{\sqrt{2N \big( 1 - \ensuremath{p_\mathit{fa}}^{1/(N-1)} \big)}}{1 - \rho^2} \mright). 
\end{equation} % where $Q_1(\cdot, \cdot)$ is the Marcum $Q$-function of order 1 (not to be confused with the quadrature voltage $Q_1$). \end{proposition} \begin{proof} In the case where $\rho = 0$, the hypergeometric function in \eqref{eq:PDF_rho} drops out and it is possible to integrate the expression directly, yielding the cumulative distribution function (CDF) % \begin{equation} F_{\hat{\rho}}(x | 0, N) = 1 - (1 - x^2)^{N-1}. \end{equation} % For a given detection threshold $T$, the probability of false alarm is the probability that $\hat{\rho} > T$ given that $\rho = 0$. This is given by % \begin{equation} \label{eq:det_rho_pFA} \ensuremath{p_\mathit{fa}}(T) = 1 - F_{\hat{\rho}}(T | 0, N) = (1 - T^2)^{N-1}. \end{equation} % Inverting this, we obtain % \begin{equation} \label{eq:det_rho_threshold} T = \sqrt{1 - \ensuremath{p_\mathit{fa}}^{1/(N-1)}}. \end{equation} % Because $\hat{\rho} \geq 0$, we retain only the positive square root. To obtain the probability of detection, we make use of the Rician approximation described in Proposition \ref{prop:approx_rice}. The CDF of the Rice distribution is % \begin{equation} F_\text{Rice}(x | \alpha, \beta) = 1 - Q_1 \mleft( \frac{\alpha}{\beta}, \frac{x}{\beta} \mright). \end{equation} % Substituting \eqref{eq:rho_rice_alpha} and \eqref{eq:rho_rice_beta} yields % \begin{equation} \label{eq:rho_cdf} F(x | \rho, N) = 1 - Q_1 \mleft( \frac{\rho \sqrt{2N}}{1 - \rho^2}, \frac{x \sqrt{2N}}{1 - \rho^2} \mright). \end{equation} % The probability of detection is % \begin{equation} \ensuremath{p_\mathit{d}}(T) = 1 - F(T | \rho, N); \end{equation} % the proposition follows upon substituting \eqref{eq:det_rho_threshold}. \end{proof} \begin{remark} In \cite{luong2019rice}, a slightly different expression for the ROC curve was derived: % \begin{equation} \label{eq:ROC_rho_old} \ensuremath{p_\mathit{d}}(\ensuremath{p_\mathit{fa}} | \rho, N) = Q_1 \mleft( \frac{\rho \sqrt{2N}}{1 - \rho^2}, \frac{\sqrt{-2 \ln \ensuremath{p_\mathit{fa}}}}{1 - \rho^2} \mright). \end{equation} % This form arises from using the Rician approximation to calculate both $\ensuremath{p_\mathit{d}}$ and $\ensuremath{p_\mathit{fa}}$. In the above proposition, we have replaced the latter with the exact value of $\ensuremath{p_\mathit{fa}}$. There is, however, not much difference between the two for large $N$. The reader may notice a curious connection between \eqref{eq:det_rho_pFA}, the appearance of $\ln \ensuremath{p_\mathit{fa}}$ in \eqref{eq:ROC_rho_old}, and the well-known representation of the exponential function as a limit, $e^x = \lim_{N \to \infty} (1 + x/N)^N$. \end{remark} \begin{figure*}[t] \centering \subfloat[]{\includegraphics[width=\columnwidth]{figures/ROC_rho_rho.pdf} \label{subfig:ROC_rho_rho}} \hfil \subfloat[]{\includegraphics[width=\columnwidth]{figures/ROC_rho_N.pdf} \label{subfig:ROC_rho_N}} \caption{ROC curves for $\hat{\rho}$, together with approximations calculated using \eqref{eq:ROC_rho}. In (a), $N = 10$ and $\rho \in \{0.2, 0.4, 0.6, 0.8\}$; in (b), $\rho = 0.2$ and $N \in \{10, 50, 100, 200\}$.} \label{fig:ROC_rho} \end{figure*} Fig.\ \ref{fig:ROC_rho} shows ROC curves for the $\hat{\rho}$ detector together with corresponding approximations obtained using \eqref{eq:ROC_rho}. In all cases, the approximation gives a fair idea of the behavior of the exact ROC curve. But even at $N = 50$---half the stated value of $N = 100$ for the validity of the approximation---the approximate curve is visually indistinguishable from the exact curve.
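As a practical summary, the sketch below (our own illustration, assuming NumPy and SciPy) computes the estimators \eqref{eq:est_sigma1}--\eqref{eq:est_phi} from an array of voltage samples and evaluates the approximate ROC curve \eqref{eq:ROC_rho}. It relies on the standard identity expressing the Marcum $Q$-function of order 1 through the survival function of the noncentral chi-square distribution with two degrees of freedom.

\begin{verbatim}
import numpy as np
from scipy.stats import ncx2

def estimate_parameters(x, qtms=True):
    # x: (N, 4) array of samples [I1, Q1, I2, Q2].
    I1, Q1, I2, Q2 = x.T
    P1 = np.mean(I1**2 + Q1**2)
    P2 = np.mean(I2**2 + Q2**2)
    if qtms:  # upper signs in the definitions of R_c and R_s
        Rc, Rs = np.mean(I1*I2 - Q1*Q2), np.mean(I1*Q2 + I2*Q1)
    else:     # lower signs (standard noise radar)
        Rc, Rs = np.mean(I1*I2 + Q1*Q2), np.mean(I1*Q2 - I2*Q1)
    sigma1, sigma2 = np.sqrt(P1 / 2), np.sqrt(P2 / 2)
    rho = np.sqrt((Rc**2 + Rs**2) / (P1 * P2))
    phi = np.arctan2(Rs, Rc)
    return sigma1, sigma2, rho, phi

def marcum_q1(a, b):
    # Q_1(a, b) = P(X > b^2) for X ~ noncentral chi-square(2 dof, nc=a^2).
    return ncx2.sf(b**2, df=2, nc=a**2)

def approx_roc_pd(pfa, rho, N):
    # Closed-form ROC approximation, intended for N >~ 100.
    a = rho * np.sqrt(2 * N) / (1 - rho**2)
    b = np.sqrt(2 * N * (1 - pfa**(1 / (N - 1)))) / (1 - rho**2)
    return marcum_q1(a, b)

pfa = np.logspace(-6, -1, 100)
pd = approx_roc_pd(pfa, rho=0.2, N=200)  # one approximate ROC curve
\end{verbatim}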
\subsection{Target Detection and MFN Estimation} In \cite{dawood2001roc}, Dawood and Narayanan proposed and analyzed a design for a noise radar receiver which, in effect, calculates the detector \begin{equation} \label{eq:det_DN} D_\mathrm{DN} = \frac{N}{4} \sqrt{\bar{R}_c^2 + \bar{R}_s^2}. \end{equation} Comparing this with \eqref{eq:est_rho}, the connection between $D_\mathrm{DN}$ and $\hat{\rho}$ is obvious. It bears the same relation to $\hat{\rho}$ as covariance does to correlation, one being a normalized form of the other. The main motivation for $D_\mathrm{DN}$ is that it arises naturally from performing matched filtering on the complex-valued signal $I_1[n] + j Q_1[n]$ using the reference signal $I_2[n] + j Q_2[n]$. However, it is interesting to note that $D_\mathrm{DN}$ can also be motivated using the MFN approach outlined in Sec.\ \ref{subsec:MFN_est}. One way is to calculate the norm of the difference between the forms taken by \eqref{eq:QTMS_cov} under the two hypotheses \eqref{eq:hypotheses}: \begin{equation} \mleft\lVert \mat{\Sigma}(\sigma_1, \sigma_2, 0, \phi) - \mat{\Sigma}(\sigma_1, \sigma_2, \rho, \phi) \mright\rVert_F = 2 \rho \sigma_1 \sigma_2. \end{equation} Substituting the MFN parameter estimates \eqref{eq:est_sigma1}--\eqref{eq:est_rho} yields $\sqrt{\bar{R}_c^2 + \bar{R}_s^2} = 4 D_\mathrm{DN}/N$. The factor of $4/N$, of course, does not affect the performance of the detector in any way. Another way to see the connection between $D_\mathrm{DN}$ and MFN parameter estimation is inspired by the GLR test. Instead of calculating the difference between log-likelihoods, we calculate the difference between the squares of the minimized Frobenius norms: \begin{align} &\min \, \mleft\lVert \mat{\Sigma}(\sigma_1, \sigma_2, 0, \phi) - \hat{\mat{S}} \mright\rVert_F^2 - \min \, \mleft\lVert \mat{\Sigma}(\sigma_1, \sigma_2, \rho, \phi) - \hat{\mat{S}} \mright\rVert_F^2 \nonumber \\ &\qquad = \bar{R}_c^2 + \bar{R}_s^2. \end{align} The second line follows from \eqref{eq:g1_min} and \eqref{eq:g1_rho_0} in Appendix \ref{app:mfn}. This can be interpreted as the (squared) excess error that accrues from modeling the radar measurement data using the diagonal covariance matrix $\mat{\Sigma}(\sigma_1, \sigma_2, 0, \phi)$ as opposed to the more general form $\mat{\Sigma}(\sigma_1, \sigma_2, \rho, \phi)$. If the excess error is small, then the data is well-described by a diagonal covariance matrix and the target is probably absent, while the opposite is true if the excess error is large. And when considered as a detector, this excess error is equivalent to $D_\mathrm{DN}$. For completeness, we quote the expressions for the PDF and CDF of $D_\mathrm{DN}$ that were derived by Dawood and Narayanan. \begin{proposition} The PDF of $D_\mathrm{DN}$ for $x \geq 0$ is % \vspace{-\jot} \begin{multline} \label{eq:det_DN_PDF} f_\text{DN}(x | \sigma_1, \sigma_2, \rho, N) = \\ \frac{8 \tilde{x}^N}{\sigma_1\sigma_2 (1-\rho^2) \Gamma(N)} K_{N-1} \mleft( \frac{2\tilde{x}}{1-\rho^2} \mright) I_0 \mleft( \frac{2\rho\tilde{x}}{1-\rho^2} \mright) \end{multline} % where $\tilde{x} \equiv 2x/(\sigma_1\sigma_2)$ and $K_{N-1}$ is the modified Bessel function of the second kind of order $N-1$. The CDF is % \vspace{-\jot} \begin{multline} \label{eq:det_DN_CDF} F_\mathrm{DN}(x | \sigma_1, \sigma_2, \rho, N) = \\ 1 - \frac{2 \tilde{x}^N}{\Gamma(N)} \sum_{m=0}^{\infty} \rho^m K_{N+m} \mleft( \frac{2\tilde{x}}{1-\rho^2} \mright) I_m \mleft( \frac{2\rho\tilde{x}}{1-\rho^2} \mright).
\end{multline} \end{proposition} \begin{proof} See Sec.\ V of \cite{dawood2001roc}. \end{proof} Like \eqref{eq:PDF_rho} and \eqref{eq:PDF_phi}, these expressions are rather cumbersome to work with. In the spirit of the approximations given in Propositions \ref{prop:approx_rice} and \ref{prop:approx_vonMises}, we now derive an approximate expression for the distribution of $D_\mathrm{DN}$. This time, however, we are able to supply a proof of the proposition. \begin{proposition} \label{prop:approx_detDN} In the limit $N \to \infty$ and to first order in $\rho$, $D_\mathrm{DN}$ follows a Rice distribution with parameters % \begin{subequations} \begin{align} \alpha &= \frac{N}{2}\rho\sigma_1\sigma_2 \\ \beta &= \sqrt{\frac{N}{8}}\sigma_1\sigma_2. \end{align} \end{subequations} % \end{proposition} \begin{proof} According to the central limit theorem, the random vector $[\bar{R}_c, \bar{R}_s]^\T$ follows a bivariate normal distribution when $N \to \infty$: % \begin{equation} \begin{bmatrix} \bar{R}_c \\ \bar{R}_s \end{bmatrix} \sim \mathcal{N} \mleft( \begin{bmatrix} 2\rho \sigma_1\sigma_2 \cos \phi \\ 2\rho \sigma_1\sigma_2 \sin \phi \end{bmatrix} \! , \frac{2 \sigma_1^2 \sigma_2^2}{N} [\mat{1}_2 + \rho^2 \mat{R}'(2\phi)] \mright). \end{equation} % The mean vector is obtained by simply reading off and summing the appropriate entries in \eqref{eq:QTMS_cov}. The covariance matrix can be calculated by repeatedly applying \cite[Eq.\ (13)]{bohrnstedt1969cov}, which gives an expression for the expected value of fourth-order terms such as $\expval{I_1 I_2 Q_1 Q_2}$. It is evident that, to first order in $\rho$, the covariance matrix of $[\bar{R}_c, \bar{R}_s]^\T$ is proportional to the identity matrix. Recall that, for any $\theta$, the Rice distribution arises from the Euclidean norm of a bivariate normal random vector as follows: % \begin{equation} \vec{X} \sim \mathcal{N} \mleft( \begin{bmatrix} \alpha \cos \theta \\ \alpha \sin \theta \end{bmatrix} \! , \beta^2\mat{1}_2 \mright) \implies \| \vec{X} \| \sim \mathrm{Rice}(\alpha, \beta). \end{equation} % Therefore, to first order in $\rho$, $\sqrt{\bar{R}_c^2 + \bar{R}_s^2}$ follows a Rice distribution with parameters $\alpha = 2\rho \sigma_1\sigma_2$ and $\beta = \sigma_1\sigma_2\sqrt{2/N}$ when $N \to \infty$. The proposition follows upon rescaling $\sqrt{\bar{R}_c^2 + \bar{R}_s^2}$ by a factor of $N/4$. \end{proof} \begin{remark} In \cite{dawood2001roc}, Dawood and Narayanan observe that when $\rho = 0$ and $N$ is large, $D_\mathrm{DN}$ is Rayleigh distributed with scale parameter $\sigma = \sigma_1\sigma_2\sqrt{N/8}$. The Rice distribution reduces to the Rayleigh distribution when $\alpha = 0$, so our result is in agreement with Dawood and Narayanan's observation. \end{remark} \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{figures/totVarDist_detDN.pdf}} \caption{Total variation distance between the exact probability density function of $D_\mathrm{DN}$ and the approximation described in Proposition \ref{prop:approx_detDN}, plotted as a function of $N$, for $\rho \in \{0.2, 0.4, 0.6\}$.} \label{fig:totVarDist_detDN} \end{figure} To quantify the goodness of the approximation in Proposition \ref{prop:approx_detDN}, we use the TVD as we did previously. Fig.\ \ref{fig:totVarDist_detDN} shows plots of $\mathit{TVD}_{D_\mathrm{DN}}$ as a function of $N$. We see that as $N$ becomes large, $\mathit{TVD}_{D_\mathrm{DN}}$ decreases to a steady-state value which increases with $\rho$.
Hence, as expected, the Rician approximation becomes better when $N$ is large, but is only good when $\rho$ is small. Moreover, the smaller the value of $\rho$, the smaller the $N$ required for the approximation to be a good one. \begin{figure}[t] \centering \subfloat[]{\includegraphics[width=\columnwidth]{figures/PDF_detDN_rho.pdf} \label{subfig:PDF_detDN_rho}} \\ \subfloat[]{\includegraphics[width=\columnwidth]{figures/PDF_detDN_N.pdf} \label{subfig:PDF_detDN_N}} \caption{Probability density function of $D_\mathrm{DN}$ as a function of the normalized detector output $\tilde{x} \equiv 2x/(\sigma_1\sigma_2)$, together with the Rice distribution approximation described in Proposition \ref{prop:approx_detDN}. In (a), $N = 10$ and $\rho \in \{0, 0.3, 0.6\}$; in (b), $\rho = 0.1$ and $N \in \{25, 50, 75, 100\}$.} \label{fig:PDF_detDN} \end{figure} In Fig.\ \ref{fig:PDF_detDN}, we plot $f_\text{DN}(x | \sigma_1, \sigma_2, \rho, N)$ as a function of the normalized detector output $\tilde{x} \equiv 2x/(\sigma_1\sigma_2)$. By this normalization, we eliminate the need for separate plots in which $\sigma_1$ and $\sigma_2$ are varied, and we need only consider $\rho$ and $N$. The same figure also shows the corresponding Rice distribution approximations. Note that as $\rho$ increases, the approximation becomes worse and worse; conversely, as $N$ increases, the approximation becomes better and better. Finally, we use the approximation in Proposition \ref{prop:approx_detDN} to give a closed-form approximation for the ROC curve of the $D_\mathrm{DN}$ detector. \begin{proposition} In the limit $N \to \infty$ and to first order in $\rho$, the ROC curve for the detector $D_\mathrm{DN}$ is % \begin{equation} \label{eq:ROC_detDN} \ensuremath{p_\mathit{d}}(\ensuremath{p_\mathit{fa}} | \rho, N) = Q_1 \mleft( \rho\sqrt{2N}, \sqrt{-2 \ln \smash{\ensuremath{p_\mathit{fa}}}} \mright). \end{equation} \end{proposition} \begin{proof} When the radar target is absent ($\rho = 0$), the Rice distribution reduces to the Rayleigh distribution, the CDF of which is well known. Using Proposition \ref{prop:approx_detDN}, it is easy to show that % \begin{equation} \ensuremath{p_\mathit{fa}}(T) = \exp \mleft( -\frac{4 T^2}{N \sigma_1^2 \sigma_2^2} \mright). \end{equation} % The remainder of the proof is the same as that of Proposition \ref{prop:approx_rice}, except that we use the parameters listed in Proposition \ref{prop:approx_detDN}. \end{proof} \begin{figure*}[t] \centering \subfloat[]{\includegraphics[width=\columnwidth]{figures/ROC_detDN_rho.pdf} \label{subfig:ROC_detDN_rho}} \hfil \subfloat[]{\includegraphics[width=\columnwidth]{figures/ROC_detDN_N.pdf} \label{subfig:ROC_detDN_N}} \caption{ROC curves for $D_\mathrm{DN}$, together with approximations calculated using \eqref{eq:ROC_detDN}. In (a), $N = 10$ and $\rho \in \{0.2, 0.4, 0.6, 0.8\}$; in (b), $\rho = 0.2$ and $N \in \{10, 50, 100, 200\}$. Due to numerical instabilities, the ROC curve for $\rho = 0.2$, $N = 200$ has been omitted, and only the approximation is shown.} \label{fig:ROC_detDN} \end{figure*} Fig.\ \ref{fig:ROC_detDN} shows ROC curve plots for the $D_\mathrm{DN}$ detector, together with approximations obtained from \eqref{eq:ROC_detDN}. We see that the approximation is good for small values of $\rho$, but \eqref{eq:ROC_detDN} overestimates the performance of the detector when $\rho$ is large. 
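In practice, the closed-form ROC curve \eqref{eq:ROC_detDN} is easy to evaluate numerically. A minimal sketch, using the standard identity that $Q_1(a,b)$ equals the survival function of a noncentral chi-square variable with two degrees of freedom and noncentrality $a^2$, evaluated at $b^2$, is:
\begin{verbatim}
import numpy as np
from scipy.stats import ncx2

def roc_detDN_approx(p_fa, rho, N):
    # p_d = Q_1(rho*sqrt(2N), sqrt(-2 ln p_fa)), cf. the proposition above.
    a = rho * np.sqrt(2 * N)
    b = np.sqrt(-2.0 * np.log(p_fa))
    return ncx2.sf(b**2, df=2, nc=a**2)  # Marcum Q_1 via noncentral chi-square

p_fa = np.logspace(-6, 0, 200)
p_d = roc_detDN_approx(p_fa, rho=0.2, N=200)
\end{verbatim}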
Incidentally, Fig.\ \ref{subfig:ROC_detDN_N} shows the value of the approximations derived in this paper: numerical instabilities prevented us from plotting the ROC curve for $\rho = 0.2$, $N = 200$, and we were only able to plot the approximate curve. \subsection{Comparison of ROC Curves for $\hat{\rho}$ and $D_\mathrm{DN}$} It should come as no surprise that the ROC curves for $\hat{\rho}$ and $D_\mathrm{DN}$ are the same when $N \to \infty$ and $\rho \ll 1$. To see this, consider the ROC curve for $\hat{\rho}$ in the form \eqref{eq:ROC_rho_old}; this is a good approximation to \eqref{eq:ROC_rho} when $N$ is large. When $\rho \ll 1$, the $\rho^2$ terms in \eqref{eq:ROC_rho_old} may be ignored; the result is exactly \eqref{eq:ROC_detDN}. Hence, under the stated conditions, the two detectors are essentially equivalent. We should note that the conditions $N \to \infty$ and $\rho \ll 1$ have more than a purely mathematical significance. In fact, the correlation coefficient $\rho$ is a decreasing function of range \cite{luong2022performance}; it also depends on factors such as the radar cross section of the target. Thus, the small-$\rho$ limit corresponds to the case where the target of the radar is small or far away. Under such conditions, the easiest way to compensate is by increasing the integration time---in other words, increasing $N$. (One could also compensate by increasing the transmit power; this would increase $\rho$ instead.) In summary, $\hat{\rho}$ and $D_\mathrm{DN}$ perform similarly when the target of the radar is small, far away, or otherwise difficult to detect. In this case, it may be preferable to use $D_\mathrm{DN}$, if only because \cite{dawood2001roc} includes an explicit block diagram showing how to build the detector using analog components such as mixers. \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{figures/ROC_comparison.pdf}} \caption{Comparison of ROC curves for $\hat{\rho}$ and $D_\mathrm{DN}$ when $N = 10$ and $\rho \in \{0.2, 0.5, 0.8\}$.} \label{fig:ROC_comparison} \end{figure} At the opposite extreme, however, it turns out that the two detectors can behave quite differently. When $\rho$ is large and $N$ is small, it is possible for $\hat{\rho}$ to outperform $D_\mathrm{DN}$. In Fig.\ \ref{fig:ROC_comparison}, we plot (exact) ROC curves for the two detectors for $N = 10$. When $\rho = 0.2$, the two detectors remain indistinguishable, but as $\rho$ increases, $\hat{\rho}$ achieves a far higher $\ensuremath{p_\mathit{d}}$ for a given $\ensuremath{p_\mathit{fa}}$. Therefore, when it is desired to detect a nearby target quickly, it is advantageous to use $\hat{\rho}$. \section{Conclusion} \label{sec:conclusion} This paper focused on deriving estimators for the four parameters that appear in the noise/QTMS radar covariance matrix \eqref{eq:QTMS_cov}, and elucidating certain statistical properties of these estimators. Our results may be summarized as follows: we derived estimators for the parameters, we characterized the probability distributions of the estimators, and we applied the results to the problem of target detection. In Sec.\ \ref{sec:estimating}, we considered two methods for obtaining estimates of the parameters $\sigma_1$, $\sigma_2$, $\rho$, and $\phi$. One of them was based on minimizing the Frobenius norm between the sample covariance matrix (calculated directly from radar measurement data) and the structured matrix \eqref{eq:QTMS_cov}. The other was maximum likelihood estimation. 
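Both approaches lead to simple closed-form expressions; as a practical illustration, the following sketch evaluates them from the precomputed averages $\bar{P}_1$, $\bar{P}_2$, $\bar{R}_c$, and $\bar{R}_s$:
\begin{verbatim}
import numpy as np

def qtms_estimates(P1, P2, Rc, Rs):
    # Closed-form parameter estimates; see the appendices for derivations.
    sigma1 = np.sqrt(P1 / 2)
    sigma2 = np.sqrt(P2 / 2)
    rho = min(np.sqrt((Rc**2 + Rs**2) / (P1 * P2)), 1.0)  # clip to [0, 1]
    phi = np.arctan2(Rs, Rc)
    return sigma1, sigma2, rho, phi
\end{verbatim}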
Remarkably, both methods give the same estimates. In Sec.\ \ref{sec:pdfs}, we gave expressions for the probability density functions for each of the four estimators. Another remarkable coincidence manifested here: for $\hat{\rho}$ and $\hat{\phi}$, we were able to reuse results from the theory of two-channel SAR, saving us the trouble of deriving the PDFs from scratch. Unfortunately, these PDFs were very complicated, involving the use of hypergeometric functions. However, we empirically found that these distributions could be approximated by much simpler distributions, namely the Rice distribution (for $\hat{\rho}$) and the von Mises distribution (for $\hat{\phi}$). Finally, in Sec.\ \ref{sec:target_detection}, we applied our results to the noise radar target detection problem. We found that the generalized likelihood ratio test was equivalent to using $\hat{\phi}$ as a detector; we also showed connections between the minimum Frobenius norm method for parameter estimation and the detector $D_\mathrm{DN}$ previously studied by Dawood and Narayanan in \cite{dawood2001roc}. Using the approximations from the previous section, we found closed-form equations for the ROC curves of $\hat{\rho}$ and $D_\mathrm{DN}$. In summary, this paper represents a broad overview of the basic statistical behavior of noise-type radars. We hope, in particular, that the various approximations will be found enlightening. The idea that $\hat{\rho}$ roughly follows a Rice distribution, for example, tells us more about $\hat{\rho}$ than the bare fact that it follows the exact PDF \eqref{eq:PDF_rho}. And from a more practical perspective, the estimators \eqref{eq:est_sigma1}--\eqref{eq:est_phi} are not computationally onerous, and should not be too difficult to incorporate into radar systems. The results in this paper suggest several avenues for future research. For example, we assumed that all external noise was additive white Gaussian noise. It is necessary to test, using an experimental noise radar (or even a QTMS radar), how well that assumption holds up in practice. Another subject for future research is the properties of other parameters that could be estimated from radar data, such as bearing or range. Range, in particular, is related to phase, an estimator for which is given in \eqref{eq:est_phi}. The peculiar square-root/linear behavior of this estimator, as seen in Fig.\ \ref{fig:kappa_approx}, suggests that the statistical properties of any estimator of the radar range should be carefully studied. Finally, we were able to reuse several results from the theory of two-channel SAR in this paper. It would be fascinating if we could unearth a deeper mathematical connection between noise radars and SAR in future work. \appendices \section{Derivation of the Minimum Frobenius Norm Estimators} \label{app:mfn} For convenience, instead of performing the minimization \eqref{eq:minimization} directly, we will minimize the square of the norm. The squared Frobenius distance between the theoretical QTMS covariance matrix $\mat{\Sigma}(\sigma_1, \sigma_2, \rho, \phi)$ and the sample covariance matrix $\hat{\mat{S}}$ is \begin{gather} \begin{aligned} &g_1(\sigma_1, \sigma_2, \rho, \phi) \equiv \mleft\| \mat{\Sigma}(\sigma_1, \sigma_2, \rho, \phi) - \hat{\mat{S}} \mright\|_F^2 \\ &\qquad = 2 (\sigma_1^4 + 2\rho^2 \sigma_1^2 \sigma_2^2 + \sigma_2^4) - 2 (\bar{P}_1 \sigma_1^2 + \bar{P}_2 \sigma_2^2) \\ &\qquad\phantom{=}\qquad - 4 \rho \sigma_1 \sigma_2 (\bar{R}_c \cos \phi + \bar{R}_s \sin \phi) + \| \hat{\mat{S}} \|_F^2. 
\end{aligned}
\end{gather}
The estimators are obtained by minimizing $g_1(\sigma_1, \sigma_2, \rho, \phi)$ subject to the conditions $0 \leq \sigma_1$, $0 \leq \sigma_2$, and $0 \leq \rho \leq 1$. Note that $\lVert \hat{\mat{S}} \rVert_F^2$ is a constant that does not depend on any of the four parameters.

The minimum of $g_1(\sigma_1, \sigma_2, \rho, \phi)$ must lie either at a stationary point or on the boundary of the parameter space over which we minimize. It turns out that the minimum does not occur on the boundary, but we will leave an analysis of the boundary for later and focus on the stationary points for now.

The stationary points of $g_1(\sigma_1, \sigma_2, \rho, \phi)$ can be obtained by setting $\nabla g_1(\sigma_1, \sigma_2, \rho, \phi) = 0$ and solving for the parameters $\sigma_1$, $\sigma_2$, $\rho$, and $\phi$. The four elements of $\nabla g_1(\sigma_1, \sigma_2, \rho, \phi)$ are
\begin{subequations}
\begin{align}
\label{eq:g1_dsigma1}
\frac{\partial g_1}{\partial \sigma_1} &= 4 \sigma_1 (2\sigma_1^2 + 2 \rho^2 \sigma_2^2 - \bar{P}_1) - 4 \rho \sigma_2 (\bar{R}_c \cos \phi + \bar{R}_s \sin \phi) \\
\label{eq:g1_dsigma2}
\frac{\partial g_1}{\partial \sigma_2} &= 4 \sigma_2 (2\sigma_2^2 + 2 \rho^2 \sigma_1^2 - \bar{P}_2) - 4 \rho \sigma_1 (\bar{R}_c \cos \phi + \bar{R}_s \sin \phi) \\
\label{eq:g1_drho}
\frac{\partial g_1}{\partial \rho} &= 8 \rho \sigma_1^2 \sigma_2^2 - 4 \sigma_1 \sigma_2 (\bar{R}_c \cos \phi + \bar{R}_s \sin \phi) \\
\label{eq:g1_dphi}
\frac{\partial g_1}{\partial \phi} &= 4 \rho \sigma_1 \sigma_2 (\bar{R}_c \sin \phi - \bar{R}_s \cos \phi).
\end{align}
\end{subequations}
Solving $\partial g_1/\partial \phi = 0$ immediately yields the MFN estimator for $\phi$:
\begin{equation} \label{eq:g1_sol_phi}
\hat{\phi} = \atantwo(\bar{R}_s, \bar{R}_c).
\end{equation}
Substituting this into \eqref{eq:g1_drho} and rearranging the equation $\partial g_1/\partial \rho = 0$ gives
\begin{equation} \label{eq:g1_rho_prelim}
\rho = \frac{\sqrt{\bar{R}_c^2 + \bar{R}_s^2}}{2 \sigma_1 \sigma_2}.
\end{equation}
Substituting \eqref{eq:g1_sol_phi} and \eqref{eq:g1_rho_prelim} into \eqref{eq:g1_dsigma1} yields
\begin{equation}
0 = 8 \sigma_1^3 - 4 \bar{P}_1 \sigma_1,
\end{equation}
from which we obtain the MFN estimator for $\sigma_1$:
\begin{equation}
\hat{\sigma}_1 = \sqrt{ \frac{\bar{P}_1}{2} }.
\end{equation}
The MFN estimator for $\sigma_2$ can be obtained from \eqref{eq:g1_dsigma2} in exactly the same manner:
\begin{equation}
\hat{\sigma}_2 = \sqrt{ \frac{\bar{P}_2}{2} }.
\end{equation}
Finally, substituting $\hat{\sigma}_1$ and $\hat{\sigma}_2$ into \eqref{eq:g1_rho_prelim} yields
\begin{equation}
\hat{\rho} = \sqrt{ \frac{\bar{R}_c^2 + \bar{R}_s^2}{\bar{P}_1 \bar{P}_2} }.
\end{equation}

To complete the proof, we will now show that $g_1$ is not minimized on the boundaries of our optimization problem. First, note that
\begin{equation} \label{eq:g1_min}
g_1(\hat{\sigma}_1, \hat{\sigma}_2, \hat{\rho}, \hat{\phi}) = \lVert \hat{\mat{S}} \rVert_F^2 - \frac{\bar{P}_1^2 + \bar{P}_2^2}{2} - \bar{R}_c^2 - \bar{R}_s^2.
\end{equation}
It is easy to show that in the case where $\sigma_1 = 0$,
\begin{equation}
\min_{\sigma_2, \rho, \phi} g_1(0, \sigma_2, \rho, \phi) = \lVert \hat{\mat{S}} \rVert_F^2 - \frac{\bar{P}_2^2}{2}.
\end{equation}
This is manifestly greater than \eqref{eq:g1_min}, so the minimum does not occur when $\sigma_1 = 0$. A similar result occurs when $\sigma_2 = 0$.
Likewise, when $\rho = 0$, \begin{equation} \label{eq:g1_rho_0} \min_{\sigma_1, \sigma_2, \phi} g_1(\sigma_1, \sigma_2, 0, \phi) = \lVert \hat{\mat{S}} \rVert_F^2 - \frac{\bar{P}_1^2 + \bar{P}_2^2}{2} \end{equation} which again is greater than \eqref{eq:g1_min}, so the minimum does not occur when $\rho = 0$, either. The final case is $\rho = 1$, which in fact is a very complicated case requiring the use of a computer algebra system. Although we omit the relevant expressions here, we have verified that the minimum does not occur at $\rho = 1$. We may conclude, therefore, that the MFN estimators are indeed as given above. \section{Derivation of the Maximum Likelihood Estimators} \label{app:ml} As shown in \cite{burg1982estimation}, maximizing the likelihood function is equivalent to maximizing the function \vspace{-\jot} \begin{multline} g_2(\sigma_1, \sigma_2, \rho, \phi) = \\ -\ln |\mat{\Sigma}(\sigma_1, \sigma_2, \rho, \phi)| - \operatorname{tr} \mleft[ \mat{\Sigma}(\sigma_1, \sigma_2, \rho, \phi)^{-1} \hat{\mat{S}} \mright] \end{multline} As above, we impose the conditions $0 \leq \sigma_1$, $0 \leq \sigma_2$, and $0 \leq \rho \leq 1$. By a straightforward but tedious calculation, we find that \vspace{-\jot} \begin{multline} g_2(\sigma_1, \sigma_2, \rho, \phi) = - 2 \ln \mleft[ \sigma_1^2 \sigma_2^2 (1 - \rho^2) \mright] \\ - \frac{1}{1 - \rho^2} \mleft( \frac{\bar{P}_1}{\sigma_1^2} + \frac{\bar{P}_2}{\sigma_2^2} - \frac{2 \rho (\bar{R}_c \cos \phi + \bar{R}_s \sin \phi)}{\sigma_1 \sigma_2} \mright) \end{multline} The maximum of $g_2(\sigma_1, \sigma_2, \rho, \phi)$ must lie either at a stationary point or on the boundary of the parameter space over which we maximize. Some parts of the boundary are easily taken care of: when $\sigma_1 = 0$, $\sigma_2 = 0$, or $\rho = 1$, $g_2(\sigma_1, \sigma_2, \rho, \phi)$ is undefined, so no maximum can occur at those points. This leaves only $\rho = 0$. For now, we will assume $\rho \neq 0$ and return to this case later. The stationary points of $g_2(\sigma_1, \sigma_2, \rho, \phi)$ can be obtained by setting $\nabla g_2(\sigma_1, \sigma_2, \rho, \phi) = 0$ and solving for the four parameters. The elements of $\nabla g_2(\sigma_1, \sigma_2, \rho, \phi)$ are \begin{subequations} \begin{align} \label{eq:g2_dsigma1} \frac{\partial g_2}{\partial \sigma_1} &= -\frac{4}{\sigma_1} + \frac{2}{1 - \rho^2} \mleft( \frac{\bar{P}_1}{\sigma_1^3} - \frac{\rho (\bar{R}_c \cos \phi + \bar{R}_s \sin \phi)}{\sigma_1^2 \sigma_2} \mright) \\ \label{eq:g2_dsigma2} \frac{\partial g_2}{\partial \sigma_2} &= -\frac{4}{\sigma_2} + \frac{2}{1 - \rho^2} \mleft( \frac{\bar{P}_2}{\sigma_2^3} - \frac{\rho (\bar{R}_c \cos \phi + \bar{R}_s \sin \phi)}{\sigma_1 \sigma_2^2} \mright) \\ \label{eq:g2_drho} \frac{\partial g_2}{\partial \rho} &= \frac{4\rho}{1 - \rho^2} + \frac{2(\bar{R}_c \cos \phi + \bar{R}_s \sin \phi)}{\sigma_1 \sigma_2 (1 - \rho^2)} \nonumber \\ &\phantom{=}\ - \frac{2 \rho}{(1 - \rho^2)^2} \mleft( \frac{\bar{P}_1}{\sigma_1^2} + \frac{\bar{P}_2}{\sigma_2^2} - \frac{2 \rho (\bar{R}_c \cos \phi + \bar{R}_s \sin \phi)}{\sigma_1 \sigma_2} \mright) \\ \label{eq:g2_dphi} \frac{\partial g_2}{\partial \phi} &= \frac{2 \rho (\bar{R}_s \cos \phi - \bar{R}_c \sin \phi)}{\sigma_1 \sigma_2 (1 - \rho^2)}. \end{align} \end{subequations} To begin, note that $\partial g_2/\partial \phi = 0$ can be solved immediately to yield the ML estimator for $\phi$: \begin{equation} \label{eq:g2_sol_phi} \hat{\phi} = \atantwo(\bar{R}_s, \bar{R}_c). 
\end{equation}
Next, we combine \eqref{eq:g2_dsigma1} and \eqref{eq:g2_dsigma2} as follows:
\begin{align} \label{eq:P_1_P_2}
0 &= \sigma_1 \frac{\partial g_2}{\partial \sigma_1} - \sigma_2 \frac{\partial g_2}{\partial \sigma_2} \nonumber \\
&= \frac{2}{1 - \rho^2} \mleft( \frac{\bar{P}_1}{\sigma_1^2} - \frac{\bar{P}_2}{\sigma_2^2} \mright).
\end{align}
It follows that $\sigma_2 = \sigma_1 \sqrt{\bar{P}_2/\bar{P}_1}$. Substituting this and \eqref{eq:g2_sol_phi} into \eqref{eq:g2_drho}, we find that, up to an unimportant prefactor,
\begin{equation}
0 = \sqrt{ \frac{\bar{R}_c^2 + \bar{R}_s^2}{\bar{P}_1 \bar{P}_2} } (1 + \rho^2) + \frac{2 \sigma_1^2 \rho (1 - \rho^2)}{\bar{P}_1} - 2 \rho.
\end{equation}
Rearranging, we obtain
\begin{equation}
\sigma_1^2 = \bar{P}_1 \mleft[ \frac{1}{1 - \rho^2} - \sqrt{ \frac{\bar{R}_c^2 + \bar{R}_s^2}{\bar{P}_1 \bar{P}_2} } \frac{1 + \rho^2}{2 \rho (1 - \rho^2)} \mright].
\end{equation}
Substituting this, \eqref{eq:g2_sol_phi}, and \eqref{eq:P_1_P_2} into \eqref{eq:g2_dsigma1} yields, after much simplification,
\begin{equation}
0 = \bar{R}_c^2 + \bar{R}_s^2 - \rho \sqrt{ \bar{P}_1 \bar{P}_2 (\bar{R}_c^2 + \bar{R}_s^2) }.
\end{equation}
From this, we obtain the ML estimator for $\rho$:
\begin{equation} \label{eq:sol_rho}
\hat{\rho} = \sqrt{ \frac{\bar{R}_c^2 + \bar{R}_s^2}{\bar{P}_1 \bar{P}_2} }.
\end{equation}
Once we substitute \eqref{eq:g2_sol_phi}, \eqref{eq:P_1_P_2}, and \eqref{eq:sol_rho} into \eqref{eq:g2_dsigma2}, we find that
\begin{equation}
0 = \bar{P}_2 - 2\sigma_2^2.
\end{equation}
The ML estimators for $\sigma_1$ and $\sigma_2$ follow immediately:
\begin{align}
\hat{\sigma}_1 &= \sqrt{ \frac{\bar{P}_1}{2} } \\
\hat{\sigma}_2 &= \sqrt{ \frac{\bar{P}_2}{2} }.
\end{align}

We now return to the possibility that the maximum of $g_2(\sigma_1, \sigma_2, \rho, \phi)$ may occur at the boundary where $\rho = 0$. It turns out that in this case, the estimators $\hat{\sigma}_1$ and $\hat{\sigma}_2$ remain the same; this is easily verified by substituting $\rho = 0$ into \eqref{eq:g2_dsigma1} and \eqref{eq:g2_dsigma2} and solving. As for $\hat{\phi}$, it loses all meaning because $\phi$ does not enter into the likelihood function when $\rho = 0$. But which estimator, $\hat{\rho} = 0$ or \eqref{eq:sol_rho}, actually maximizes $g_2(\sigma_1, \sigma_2, \rho, \phi)$? This is exactly the question that the likelihood ratio detector \eqref{eq:D_GLR} is designed to answer. Therefore, the appropriate estimator for $\rho$ depends on whether the target is predicted to be present or absent: if present, use \eqref{eq:sol_rho}; if absent, $\hat{\rho} = 0$.

\section*{Acknowledgment}
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). D.\ Luong also acknowledges the support of a Vanier Canada Graduate Scholarship.

\bibliographystyle{IEEEtran}
\subsection{Summary of the ZEC update algorithm}
\label{sec:summary}
As introduced in the main text, the essence of the Zero-energy Cluster (ZEC) update algorithm is to divide the spins into clusters, which can be flipped without energy cost. Here, we summarize the algorithm in a more formal way, to make rigorous arguments.

The checkerboard lattice can be seen as an alternating pattern of plaquettes (with diagonal bonds) and blank squares. We start by introducing a dual square lattice, placing its sites on the blank squares. We divide the sites of this dual square lattice into two sublattices, $\mathcal{A}$ and $\mathcal{B}$, in a bipartite way [Fig.~\ref{fig:SupFig1}]. In the ZEC update algorithm, we connect neighboring sites on the same sublattice to make a bond across a plaquette, if the interactions across the bond sum up to zero. We classify the bonds into $\mathcal{A}$-bonds and $\mathcal{B}$-bonds, according to which sublattice sites the bond connects. If the bonds form a closed path, we call it a boundary, and classify it into $\mathcal{A}$-boundaries and $\mathcal{B}$-boundaries in the same way. We finally define a cluster as a set of spins separated by the boundaries. Given these definitions, the formal procedure of the ZEC update algorithm can be summarized as follows:
\begin{itemize}
\item[(a)] Choose one (dual) sublattice and draw bonds on all the plaquettes
\item[(b)] Allocate boundaries and divide all the spins into clusters
\item[(c)] Randomly choose one ferromagnetic plaquette
\item[(d)] Flip the spins inside the cluster containing the ferromagnetic plaquette chosen in (c) and return to (a)\\
\end{itemize}
Regarding (a), it is crucial to choose only one sublattice to ensure the detailed balance condition. We address this point in the next section, Sec.~\ref{sec:DBC}. To implement step (b) in a program code, a few practical tips are helpful; we provide them in Sec.~\ref{sec:BoundaryConstruction}.
\begin{figure}[h]
\begin{centering}
\includegraphics[width = 6cm]{SupFig1.eps}
\caption{Dual sublattices ``$\mathcal{A}$'' and ``$\mathcal{B}$''.}
\label{fig:SupFig1}
\end{centering}
\end{figure}

\subsection{Detailed balance condition}
\label{sec:DBC}
We show that the detailed balance condition is satisfied in the ZEC update algorithm. In this algorithm, once a cluster is specified and flipped, the flipped cluster can be flipped once again without energy cost. This naturally defines an inverse process. To ensure the detailed balance condition, the probabilities of the initial and inverse cluster flips must be the same.

The probability of a cluster flip is equal to the product of two quantities: the probability of choosing one particular cluster and the probability of actually flipping it. For the latter, we set it to be 1: we adopt the Metropolis rule to flip a cluster with probability 1, once it is chosen. By definition, the cluster flip is an equal-energy transition. Accordingly, only the former probability matters for the detailed balance condition: a cluster must be chosen with the same probability in the initial and inverse processes.

To consider the probability of cluster choice, suppose a certain spin configuration with $N_{\rm q}$ ferromagnetic plaquettes with $Q_p=\pm 4$, and divide the spins into zero-energy clusters, where the $i$th cluster has $n_i$ ferromagnetic plaquettes inside. Since our algorithm randomly chooses one ferromagnetic plaquette and flips the cluster containing it, the probability of choosing the $i$th cluster is $n_i/N_{\rm q}$.
So, if the spins are divided into exactly the same set of clusters after the flip, the inverse process also occurs with the probability $n_i/N_{\rm q}$, and the detailed balance condition is satisfied. Indeed, to preserve the cluster division after the flip, the sublattice selection plays an important role.

Suppose a cluster flip is made along an $\mathcal{A}$-boundary; then the $\mathcal{A}$-bonds keep the same configuration after the flip, while the $\mathcal{B}$-bond configuration changes. To see how the bond configuration changes, let us take one cluster. Plaquettes contained in a cluster are divided into two types: (I) \textit{interior}, where all four spins on a plaquette are inside the cluster, and (II) \textit{surface}, where only two spins are inside. After the cluster flip using an $\mathcal{A}$-boundary, the bonds on the interior plaquettes do not change, while the bonds on the surface plaquettes may change. More specifically, on a surface plaquette, the original bond, forming a part of the boundary, remains unchanged. Meanwhile, a new bond may be generated (or an old bond may be erased) perpendicular to the boundary. This newly generated bond connects the dual sites of the $\mathcal{B}$ sublattice, and changes the $\mathcal{B}$-bond configuration.

This clearly shows how the detailed balance condition is violated if $\mathcal{A}$-boundaries and $\mathcal{B}$-boundaries are mixed in a single update: the cluster division changes after the flip, and the flipped cluster is not chosen with the same probability in the inverse process. To avoid this difficulty, we only need to use one species of boundaries for one update; then the cluster structure is preserved. The invariance of the cluster structure may seem unfavorable in terms of ergodicity. However, if updates with only $\mathcal{A}$-boundaries and updates with only $\mathcal{B}$-boundaries are carried out, e.g., alternately, one can change cluster divisions and search a large region of phase space without violating the detailed balance condition.

Figure~\ref{fig:SupFig3} shows how the absence of the sublattice selection leads to the breakdown of the detailed balance condition. In this configuration, the region between the solid line and the dashed line (left red shaded area) forms a cluster containing a pair of ferromagnetic plaquettes. Here, different types of boundaries are used at the same time. After this update, the spin configuration changes from the left to the right in Fig.~\ref{fig:SupFig3}. Then new bonds arise across the original cluster and the cluster containing the ferromagnetic plaquettes shrinks (right red shaded area). This example demonstrates that the mixed use of $\mathcal{A}$- and $\mathcal{B}$-boundaries breaks the detailed balance condition.

\begin{figure}[h]
\begin{centering}
\includegraphics[width = 8.5cm]{SupFig3.eps}
\caption{In the left figure, the red shaded area is surrounded by the solid line and the dashed line, which belong to different types of boundary. If this region is used as a cluster in an update, the cluster configuration changes to the right and the cluster containing the ferromagnetic plaquettes also changes. Consequently, the detailed balance condition breaks down.}
\label{fig:SupFig3}
\end{centering}
\end{figure}

\subsection{Details of boundary construction}
\label{sec:BoundaryConstruction}
In this section, we provide practical information to implement the algorithm for numerical calculation.
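To fix ideas, one full ZEC update step can be sketched in a few lines of Python; here \texttt{find\_cluster} is a hypothetical helper implementing the cluster search described below, restricted to the chosen dual sublattice:
\begin{verbatim}
import random

def zec_update(spins, sublattice, ferro_plaquettes, find_cluster):
    # One ZEC update step; `sublattice` is 'A' or 'B' and must not be
    # mixed within a single update (see the detailed balance discussion).
    if not ferro_plaquettes:
        return spins                              # nothing to flip
    p = random.choice(ferro_plaquettes)           # step (c)
    cluster = find_cluster(spins, p, sublattice)  # steps (a)-(b)
    for site in cluster:                          # step (d): equal-energy flip
        spins[site] = -spins[site]
    return spins
\end{verbatim}
Alternating calls with the $\mathcal{A}$ and $\mathcal{B}$ sublattices then changes the cluster division between updates without violating the detailed balance condition.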
While we assumed in Sec.~\ref{sec:DBC} that the spins were divided into a certain number of clusters, we do not need to identify all the clusters in each update. In practice, we choose a ferromagnetic plaquette at random and find the cluster that contains this plaquette. To find a cluster, we resort to a breadth-first search.

First, we take one spin on a ferromagnetic plaquette, move to the neighboring plaquette which shares this spin, and draw bonds according to the rule in Fig.~\ref{fig:SupFig2}. Then, we register the connected spins on this neighboring plaquette, i.e.\ the spins which are not separated by the drawn bonds from the spin on the starting plaquette. Here, we consider the bonds on only one sublattice and ignore those on the other sublattice, as we have discussed in Sec.~\ref{sec:DBC}. Then, we repeat this procedure for the four spins on the original plaquette. Next, we take one of the neighboring plaquettes and choose one of the connected spins. We then move to the plaquette which shares this spin, draw bonds according to the rule in Fig.~\ref{fig:SupFig2}, and register the connected spins on this plaquette. We repeat this procedure to list all the spins connected to the first plaquette. This repetition defines a breadth-first search on the plaquette network. When this search stops, we obtain the full list of spins composing the cluster that contains the original ferromagnetic plaquette.

\begin{figure}[h]
\begin{centering}
\includegraphics[width = 8.5cm]{SupFig2.eps}
\caption{Examples of bonds on a plaquette.}
\label{fig:SupFig2}
\end{centering}
\end{figure}

\subsection{Dangerous configurations}
Although the ZEC update does not violate the detailed balance condition, it is not perfect: there are inconvenient layouts of ferromagnetic plaquettes. We show one such pattern in Fig.~\ref{fig:SupFig4}. In Fig.~\ref{fig:SupFig4}(a), two chains of ferromagnetic plaquettes are embedded in a system of size $10\times 10$. At first glance, it seems that the ZEC update works with complete accuracy since flippable clusters can be easily found. However, we cannot make a cluster that includes plaquettes on both chains. This indicates that the sum of the charges on each chain is preserved and that the resulting equilibrium depends on the initial charge configuration. Such unfavorable arrangements will appear more frequently if the density of the ferromagnetic plaquettes increases.

\begin{figure}[h]
\begin{centering}
\includegraphics[width = 4cm]{SupFig4.eps}
\caption{Example of a case where the ZEC update does not work with perfect accuracy. Charges on different chains of ferromagnetic plaquettes never form a zero-energy cluster.}
\label{fig:SupFig4}
\end{centering}
\end{figure}
\end{document}
\section*{Supplementary information}
\section*{FS et al.}
\vspace{3mm}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\setcounter{page}{1}
\makeatletter
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\section{Total transmission over the entire range of slab thicknesses}
In 1996 Durian developed a two-stream theory for the propagation of light through a randomly scattering slab. The theory provides a simple analytical expression for the total transmission coefficient covering the entire range of slab thicknesses, from $L=0$ to thick slabs~\cite{durian1996two,lemieux1998diffusing}. The theory was later reformulated and generalized in terms of a telegrapher equation~\cite{lemieux1998diffusing}, but the expression we use here is the same in both works. In his two-stream theory, Durian describes scattered-photon transport by two concentration currents, an up- (forward) and a down- (backward) stream. The theory takes into account ballistic and diffusive scattering as well as the cross-over regime where the incident photons are converted into diffuse photons. For classical scattering, e.g.\ in the absence of PBG and SAL, the theory is exact in one dimension and describes transport in three dimensions more accurately than diffusion theory. Notably, it covers the cross-over regime down to thin slabs and reproduces the expected $T\left(L=0\right)=1$ in the absence of boundary reflectivity. In analogy to Eq.~\eqref{efflength}, we can generalize the results by Durian (Eq.~$\left(14\right)$ in ref.~\cite{durian1996two}); see Eq.~\eqref{DurianT}. For a full treatment see \cite{haberko2020transition}. Using Eq.~\eqref{DurianT}, again with $\ell^\ast \simeq \ell$, we can describe the numerical transport data for $T\left(L/a, \nu^\prime\right)$ over the entire range of $L/a$, as shown in Figure~\ref{FitTall}.
\begin{widetext}
\begin{equation}
T\left(L\right) = T_\text{b}+T_\text{d}\simeq e^{-L/\ell}+\left(1-R_0\right) \left[\frac{\left(1+z_0\right)}{2 z_0 + \tilde L/\ell^\ast}\left(1-e^{-L/\ell}\right) -\frac{\tilde L/\ell^\ast}{2 z_0 + \tilde L/\ell^\ast}e^{-L/\ell}\right]\label{DurianT}
\end{equation}
\begin{figure*}[h]
\includegraphics[width=0.8\columnwidth]{TLPlotsSI}
\caption{\label{fig:TLPlotsSI} Total transmission $T\left(L,\nu^\prime\right)$ as a function of the reduced slab thickness $L/a$ in log-log representation for two different frequencies $\nu^\prime$ in the localized and bandgap regime. Symbols denote the results from FDTD simulations averaged over 6 (thick slabs) to 15 (thin slabs) samples. The dash-dotted green line shows the curve fitted with Eq.~\eqref{DurianT} over $7a<L<18a$. a) $\nu^\prime=0.462$, $\ell/a=0.75$, $\xi/a=3.6$, and $1-R_0=0.22$; b) $\nu^\prime=0.474$, $\ell/a=0.62$, $\xi/a=4.5$, and $1-R_0=0.09$. The extrapolation length ratio is $z_0=3.25$, from~\cite{haberko2020transition}.}\label{FitTall}
\end{figure*}
\end{widetext}
\end{document}
\section{Introduction}
\noindent The focus of this research is the recovery of a probability distribution $\mu$ in the following form:
\begin{align*}
\mu = \frac{1}{n} \sum_{i=1}^n \delta_{w_i},
\end{align*}
where $w_i \in \Omega \subset {{\mathbb {R}}}^d$ lie on the unit sphere denoted by $\S_{d-1}$, and $\delta_{w_i}$ is the Dirac measure at $w_i$. Given a feature map $\Phi:\S_{d-1} \to \H$ whose range lies in the Hilbert space $\H$, we define the following moments of $\mu$:
\begin{align*}
\Phi \mu= \int \Phi(w) d \mu(w) \in \H.
\end{align*}
\noindent \textit{Sparse measure recovery} aims at recovering $\mu$ from $\Phi \mu$. A standard recovery method is moment matching, namely
\begin{align}
\label{eq:recovery_program}
\tag{moment matching}
\min_{\nu} \left(L(\nu) = \| \Phi \nu -\Phi \mu\|^2_2\right), \quad \mbox{ for } \nu = \frac{1}{n} \sum_{i=1}^n \delta_{v_i},
\end{align}
where $v_1, \dots, v_n \in \Omega \subset {{\mathbb {R}}}^d$.

Measure recovery has been the focus of research in theoretical computer science and neural computing. Theoretical computer scientists have extensively studied the recovery from polynomial moments of the form $\Phi(w) := \text{vector}(w^{\otimes k}) \in {{\mathbb {R}}}^{d^k}$ for tensor decomposition, while neural computing relies on non-polynomial moments. We study the influence of the moments on the computational complexity of the recovery.

The focus of $k$-tensor decomposition is the recovery from $\Phi(w):= \text{vector}(w^{\otimes k}) \in {{\mathbb {R}}}^{d^k}$. For these moments, \ref{eq:recovery_program} does not necessarily recover $\mu$: the recovery is (information-theoretically) possible if $n<2^{k-1}$ \cite{liu2001cramer}. As $n$ grows, one has to increase $k$, which raises the recovery cost. For example, \cite{ma2016polynomial} proposes a recovery method with \textit{sums-of-squares} that requires $k=\text{O}(1/\epsilon)$ to achieve an $\epsilon$-accurate solution in $\text{O}(d^{\text{poly}(1/\epsilon)})$ \cite{ma2016polynomial}, which increases with $1/\epsilon$ at an exponential rate. This exponential complexity is the best computational complexity established for sparse measure recovery to the best of our knowledge. Here, we investigate moments $\Phi \mu$ yielding a polynomial complexity in $1/\epsilon$.

The community of neural computing uses the following moments for the measure recovery:
\begin{align}
\label{eq:neural_measurment}
\tag{neural moments}
\Phi_x(w) = \varphi(x^\top w)
\end{align}
where $x \in {{\mathbb {R}}}^d$ is a random vector with the density $p$, and $\varphi:{{\mathbb {R}}} \to {{\mathbb {R}}}$ is a non-polynomial function. The recovery from neural moment matching is the following program:
\begin{align}
\label{eq:neuralmm}
\tag{neural moment matching}
\arg\min_{\nu} \left( L(\nu) := \int \left( \Phi_x \nu - \Phi_x \mu \right)^2 dp(x) \right).
\end{align}
The above program is equivalent to the recovery of a planted neural network~\cite{chizat2021convergence}. The support of $\mu$, i.e., $\{w_1, \dots, w_n \}$, contains the weights of a planted network whose output function $f$ obeys $ f(x) = \Phi_x \mu$. The support of $\nu$, i.e., $\{v_1, \dots, v_n \}$, constitutes the weights optimized to recover~$f$. Although optimizing a single neuron is known to be NP-hard~\cite{guruswami2009hardness}, the recovery complexity in planted settings is still an open problem. Contrary to tensor recovery, the recovery with neural moment matching becomes tractable as $n\to \infty$.
In this regime, the gradient flow on $L$ recovers $\mu$ for a homogeneous $\varphi$ \cite{bach2021gradient,chizat2018global}. Although the setting of infinite neurons is an interesting theoretical regime, one can only implement neural networks with a finite number of neurons. The global convergence of gradient flow for a finite $n$ is unknown to the best of our knowledge.

\subsection{Main results}
We propose a poly-time algorithm that recovers a sparse measure from carefully designed moments. The next Theorem states this result.
\begin{theorem} \label{thm:drecovery}
Suppose that the support of $\mu$ is composed of distinct $\{ w_i \in \S_{d-1} \}_{i=1}^n$ (see \ref{assume:distinct}). There exists a feature map $\Phi$, with $\Phi\mu \in {{\mathbb {R}}}^m$ for a finite $m$, that allows an $\epsilon$-recovery of $\mu$. In particular, randomized Algorithm~\ref{alg:recovery_d_rand} obtains an $\epsilon$-recovery of $\mu$ from $\Phi \mu$ in $\text{O}(\epsilon^{-2}\log(1/\epsilon)\text{poly}(n,d,\kappa))$ with probability at least $1-\kappa^{-1}$.
\end{theorem}
\noindent This is the first polytime complexity established for the measure recovery. The best known recovery algorithm, which was developed for tensor decomposition, has the exponential complexity $\text{O}(d^{\text{poly}(1/\epsilon)})$ \cite{ma2016polynomial}. The proof of the last Theorem is postponed to Section~\ref{sec:high_dimensional_recovery}.

The proposed recovery algorithm is based on gradient descent on \ref{eq:neuralmm}. For the ease of theoretical analysis, we use the zero-one activation
\begin{align}
\label{eq:step}
\tag{zero-one activation}
\varphi(a) = \begin{cases} 0 & a \leq 0 \\ 1 & a >0 . \end{cases}
\end{align}
The next Theorem establishes a polytime iteration complexity for gradient descent on \ref{eq:neuralmm}.
\begin{theorem} \label{thm:main_result}
Suppose that $\{w_1, \dots, w_n \in {{\mathbb {R}}}^2\}$ are distinct and belong to the upper half-circle, and $p$ is the uniform measure on the unit sphere; then, \ref{eq:gd_polar} in polar coordinates on \ref{eq:neuralmm} with \ref{eq:step}s obtains an $\epsilon$-recovery (defined in Sec.~\ref{sec:distance}) with $\text{O}(n^2/\epsilon)$ iterations.
\end{theorem}
The recovery with moment matching is a non-convex program. It is known that gradient descent does not generally converge to a global (or even a local) minimum for non-convex programming~\cite{ahmadi2020complexity}. When optimizing a continuous function over a compact domain, finding a stationary point of gradient descent is (PPAD $\cap$ PLS)-complete~\cite{fearnley2021complexity}. Polynomial-time convergence of gradient descent is established for only a few non-convex functions including the Rayleigh quotient~\cite{neymeyr2001geometric,kohler2019exponential}, Polyak-Lojasiewicz functions~\cite{karimi2016linear}, star-convex functions \cite{nesterov2006cubic}, and transformations of convex functions~\cite{nesterov2003introductory}. Neural moment matching is not convex, yet it does not belong to any of these classes of non-convex programs. This program relates to the recovery of planted neural networks. The global convergence of gradient descent for the recovery of planted neural networks is a classical open question in neural computing~\cite{bach2017breaking,bach2021gradient}. Recent studies establish the global convergence of gradient descent in the asymptotic regime of $n \to \infty$ \cite{bach2021gradient}. These analyses rely on optimal transport theory.
Taking inspiration from this asymptotic analysis, we establish non-asymptotic global convergence of gradient descent for neural networks with a finite number of neurons and two-dimensional inputs with \ref{eq:step}s.

Remarkably, the last Theorem extends to non-Gaussian $x \sim p$. Under weak assumptions on $p$, we establish a similar convergence result for gradient descent.

Although the gradient descent algorithm in Theorem~\ref{thm:main_result} enjoys a polytime iteration complexity, its implementation requires infinite moments. We propose an approximate gradient descent in Algorithm~\ref{alg:recovery} that uses $\text{O}(n/\epsilon)$ moments for an $\epsilon$-recovery with the total computational complexity $\text{O}(n^4 \log(1/\epsilon)/\epsilon^2)$. This recovery is established in Theorem~\ref{thm:finite_momments}, and it is experimentally validated in Section~\ref{sec:experiments}.

\section{Intuition}
We define the kernel associated with moments $\Phi$ as $k(w,w') = \langle \Phi(w),\Phi(w') \rangle$ and write $L$ alternatively as the maximum mean discrepancy (MMD):
\begin{multline}
\label{eq:mmd}
\tag{MMD}
L(\nu) = \text{MMD}_{k}(\nu,\mu) := \\ \int k(w,w') d\nu(w) d\nu(w') - 2 \int k(w,w') d\nu(w) d\mu(w') + \int k(w,w') d\mu(w) d\mu(w').
\end{multline}
The above formulation suggests optimizing MMD by gradient flow in the space of probability distributions to recover $\mu$, which is studied by \cite{arbel2019maximum}. However, \cite{arbel2019maximum} shows that MMD does not obey displacement convexity~\cite{mccann1997convexity}, hence it is computationally tractable only under strong assumptions on the distribution dynamics.

We prove that the kernel associated with neural moments obeys an interesting property that allows polynomial convergence of gradient descent. When $x$ is uniformly drawn from the unit sphere, the kernel associated with the neural moments has the following closed form~\cite{cho09deepkernel}:
\begin{align}
\label{eq:kernel_step}
\tag{neural kernel}
k(w,w') := \int \Phi_x(w) \Phi_x(w') dp(x) = 1 - \frac{\phi(w,w')}{\pi}, \quad \cos (\phi(w,w')) = \frac{\langle w, w'\rangle}{\|w\| \|w'\|}.
\end{align}
We introduce the notation $s(\theta) = (\sin(\theta),\cos(\theta)) \in \S_1$. For $d=2$, we use polar representations as $w_{i} = s(\omega_i)$, $v_i = s(\theta_i)$. The assumptions in Thm.~\ref{thm:main_result} ensure $\theta_i, \omega_i \in [0,\pi)$. Using the \ref{eq:mmd} equation and the \ref{eq:kernel_step}, we get the energy distance as\footnote{A detailed derivation is provided in Sec.~\ref{sec:indix_distribution}.}
\begin{align}
\label{eq:polar_2d_L}
(\pi n^2) L(\theta) = 2 \sum_{i=1}^n \sum_{j=1}^n |\theta_i - \omega_j| - \sum_{i=1}^n \sum_{j=1}^n | \theta_i - \theta_j| - \sum_{i=1}^n \sum_{j=1}^n |\omega_i - \omega_j|.
\end{align}
Although $L(\theta)$ is not convex, the next lemma proves it admits a unique critical point with distinct $\{\theta_i\}$'s.
\begin{lemma} \label{lemma:critical}
$L(\theta)$ admits a unique critical point with distinct $\{\theta_i \in [0,\pi)\}$ as long as all $\{\omega_i \in [0,\pi)\}$ are distinct.
\end{lemma}
\begin{proof}
The partial (sub)derivative of $L$ with respect to $\theta_i$ reads as
\begin{align}
\label{eq:gradient_polar}
\left( \pi n^2 \right) \frac{d L}{d\theta_i}= \sum_{j=1}^n \text{sign}(\theta_i - \omega_j)-\sum_{j\neq i,j=1}^n \text{sign}(\theta_i - \theta_j) ,
\end{align}
where $\text{sign}$ denotes the sign function.
Assuming that all $\{\theta_i\}$ and $\{\omega_i\}$ are distinct and that no $\theta_i$ equals any $\omega_j$, the sum of $n$ signs (the first term in the derivative) cannot be equal to the sum of $n-1$ signs (the second term in the derivative), since the two sums have different parities. Therefore, $0 \not\in dL/d\theta_i$ unless $\theta_i = \omega_j$ holds for some $i,j \in \{1, \dots, n\}$.
\end{proof}

Notably, the uniqueness of the critical point does not necessarily guarantee the recovery of the sparse measure in polynomial time, since $L$ is not a smooth function. When optimizing smooth functions, gradient descent converges to an approximate critical point in polynomial time~\cite{nesterov2003introductory}. However, this is not the case for non-smooth functions~\cite{zhang2020complexity}. We leverage a particular structure of~$L$ (in Lemma~\ref{lemma:gradient_dir}) to establish a convergence rate for gradient descent on $L$.

\section{Neural moment matching}
Throughout this section, we establish the global convergence of gradient descent for the two-dimensional neural moment matching presented in Theorem~\ref{thm:main_result}.

\subsection{Wasserstein distance} \label{sec:distance}
Let $\nu = \frac{1}{n} \sum_{i=1}^n \delta_{\theta_i}$, and $\mu = \frac{1}{n} \sum_{i=1}^n \delta_{\omega_i}$. Assuming $\omega_1 \leq \dots \leq \omega_n$, we introduce the following distance notion
\begin{align}
W(\nu,\mu) = \min_{\sigma \in \Lambda}\max_{i} |\theta_{\sigma(i)} - \omega_{i}|,
\end{align}
where $\Lambda$ is the set of all permutations of indices $\{1, \dots, n\}$. Indeed, $W$ is the Wasserstein-$\infty$ distance, and the permutation $\sigma$ is a transport map from $\mu$ to $\nu$~\cite{santambrogio2015optimal}. The next lemma proves that a permutation $\sigma$ sorting $\theta_{\sigma(i)}$ is the optimal transport for $W$.
\begin{lemma}[Optimal transport] \label{lemma:optimal_transport}
Without loss of generality, assume $\omega_1 \leq \dots \leq \omega_n$. For $\theta_{\sigma(1)} \leq \dots \leq \theta_{\sigma(n)}$, the following holds:
\begin{align}
W(\nu, \mu) = \max_{i \in \{1,\dots, n\}}| \theta_{\sigma(i)}-\omega_i |.
\end{align}
\end{lemma}
\begin{proof}
Let $\sigma^*$ denote the optimal transport obeying
\begin{align}
\sigma^* = \arg\min_{\sigma \in \Lambda}\underbrace{\max_i |\theta_{\sigma(i)} - \omega_i|}_{f(\sigma)}.
\end{align}
The proof idea is simple: if there exists $i<j$ such that $\theta_{\sigma^*(i)}>\theta_{\sigma^*(j)}$, then swapping $\sigma^*(i)$ with $\sigma^*(j)$ will not increase $f$. To formally establish this idea, we define the transport $\sigma'$ obtained by the swap as
\begin{align}
\sigma'(q) = \begin{cases} \sigma^*(q) & q\neq i \\ \sigma^*(j) & q = i \\ \sigma^*(i) & q=j. \end{cases}
\end{align}
We prove that $f(\sigma') \leq f(\sigma^*)$. Let us define the following compact notations:
\begin{align}
\Delta_{ij} & = \max \{ | \theta_{\sigma^*(i)} - \omega_i |, | \theta_{\sigma^*(j)} - \omega_j |\} \\
\Delta'_{ij} & = \max \{ | \theta_{\sigma'(i)} - \omega_i |, | \theta_{\sigma'(j)} - \omega_j |\} = \max \{ | \theta_{\sigma^*(j)} - \omega_i |, | \theta_{\sigma^*(i)} - \omega_j |\} .
\end{align}
\noindent According to the definition,
\begin{align}
f(\sigma') = \max \{ \Delta'_{ij}, \max_{q\neq i,j} | \theta_{\sigma^*(q)} - \omega_q | \}\leq \max \{ \Delta_{ij}, \max_{q\neq i,j} | \theta_{\sigma^*(q)} - \omega_q | \} =f(\sigma^*)
\end{align}
holds. Since $\theta_{\sigma^*(i)}>\theta_{\sigma^*(j)}$, $\Delta'_{ij} \leq \Delta_{ij}$ holds, as illustrated in the following figure.
\tikzset{
mynode/.style={fill,circle,inner sep=2pt,outer sep=0pt}
}
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\draw[olive,thick,latex-latex] (0,0) -- (7,0);
\node[mynode,fill=red,label=above:\textcolor{red}{$\omega_i$}] (wi) at (2,0) {};
\node[mynode,fill=red,label=above:\textcolor{red}{$\omega_j$}] (wj) at (3,0) {};
\node[mynode,fill=blue,label=above:\textcolor{blue}{$\theta_{\sigma^*(j)}$}] (thj) at (5,0) {};
\node[mynode,fill=blue,label=above:\textcolor{blue}{$\theta_{\sigma^*(i)}$}] (thi) at (6,0) {};
\draw [ thick, decoration={ brace, mirror, raise=0.5cm }, decorate ] (wi) -- (thi) node [pos=0.5,anchor=north,yshift=-0.55cm] {$\Delta_{ij}$};
\draw [ thick, decoration={ brace, raise=1cm }, decorate ] (wi) -- (thj) node [pos=0.5,anchor=south,yshift=1cm] {$\Delta'_{ij}$};
\end{tikzpicture}
\end{figure}
\noindent Therefore, $f(\sigma') \leq W(\nu,\mu)$ holds, so that $\sigma'$ is also an optimal transport. Replacing $\sigma^*$ by $\sigma'$ and repeating the same argument ultimately concludes the proof.
\end{proof}

\subsection{Characterizing the gradient direction}
We have seen that the optimal transport map from $\mu$ to $\nu$ is obtained by sorted $\theta_i$s. The next lemma characterizes a fundamental property of the gradient of $L$ for the sorted $\theta_i$'s: the signs of $d L/d\theta_i$ and $\theta_i - \omega_i$ are equal.
\begin{lemma}[Gradient direction] \label{lemma:gradient_dir}
For $\theta_1 < \dots <\theta_n$ with $\theta_i \neq \omega_i$, the following bound holds
\begin{align}
\label{eq:decay_condition}
-\text{sign}\left(\theta_i - \omega_{i}\right) \left( \frac{d L}{d \theta_i} \right) \leq -\frac{1}{ \pi n^2}.
\end{align}
\end{lemma}
\noindent The last Lemma implies that moving $\theta_i$ in the direction of $-d L/(d\theta_i)$ contracts $\theta_i$ to $\omega_i$ for sorted $\theta_i$s and $\omega_i$s. Leveraging this property, we establish the global convergence of gradient flow in the next section.
\begin{proof}[Proof of Lemma~\ref{lemma:gradient_dir}]
The partial derivative $dL/d\theta_i$ consists of two additive components:
\begin{align*}
\left( \pi n^2 \right)\frac{d L}{d \theta_i } = \underbrace{\sum_{j} \text{sign}(\theta_i - \omega_j)}_{\Delta} -\underbrace{\sum_{j\neq i} \text{sign}(\theta_i - \theta_j)}_{2i-n-1},
\end{align*}
where
\begin{align}
\Delta & = \left|\{ \omega_m < \theta_i \}\right|-|\{\omega_m > \theta_i \}| \\
& = 2 \left|\{ \omega_m < \theta_i \}\right| - n \label{eq:delta_decompostion1}\\
& = n - 2 |\{\omega_m > \theta_i \}|. \label{eq:delta_decompostion2}
\end{align}
Consider the following two cases:
\begin{itemize}
\item[i.] $\theta_i>\omega_i$: In this case, $|\{ \omega_m<\theta_i\}|\geq i$ (see Fig.~\ref{fig:elements_bound}). Plugging this into Eq.~\eqref{eq:delta_decompostion1}, we get $\Delta \geq 2i-n$, which yields $d L / d \theta_i \geq 1/(\pi n^2)$.
\item[ii.] $\theta_i<\omega_i$: In this case, $| \{ \omega_m > \theta_i \} | \geq n-i+1$, as demonstrated in Fig.~\ref{fig:elements_bound}. Using Eq.~\eqref{eq:delta_decompostion2}, we get $\Delta \leq 2i-n-2$, which leads to $d L / d \theta_i \leq -1/(\pi n^2) $.
\end{itemize}
Combining the above two results concludes the proof.
\tikzset{
mynode/.style={fill,circle,inner sep=2pt,outer sep=0pt}
}
\begin{figure}[h!]
\centering
\begin{tabular}{c c}
\begin{tikzpicture}
\draw[olive,thick,latex-latex] (0,0) -- (7,0);
\node[mynode,fill=red,label=above:\textcolor{red}{$\omega_1$}] (w1) at (1.5,0) {};
\node[label=above:\textcolor{red}{$\dots$}] (wk) at (2.25,0) {};
\node[mynode,fill=red,label=above:\textcolor{red}{$\omega_i$}] (wi) at (3,0) {};
\node[mynode,fill=blue,label=above:\textcolor{blue}{$\theta_{i}$}] (thi) at (4,0) {};
\draw [ thick, decoration={ brace, mirror, raise=0.5cm }, decorate ] (w1) -- (wi) node [pos=0.5,anchor=north,yshift=-0.55cm] {$|\{ \omega_m < \theta_i \}| \geq i $};
\end{tikzpicture}
&
\begin{tikzpicture}
\draw[olive,thick,latex-latex] (0,0) -- (7,0);
\node[mynode,fill=blue,label=above:\textcolor{blue}{$\theta_i$}] (thi) at (2,0) {};
\node[mynode,fill=red,label=above:\textcolor{red}{$\omega_i$}] (wi) at (3,0) {};
\node[label=above:\textcolor{red}{$\dots$}] (wm) at (4,0) {};
\node[mynode,fill=red,label=above:\textcolor{red}{$\omega_n$}] (wn) at (5,0) {};
\draw [ thick, decoration={ brace, mirror, raise=0.5cm }, decorate ] (wi) -- (wn) node [pos=0.5,anchor=north,yshift=-0.55cm] {$| \{ \omega_m > \theta_i \} | \geq n-i+1$};
\end{tikzpicture}
\end{tabular}
\caption{The cardinality bound. Left: $\theta_i>\omega_i$. Right: $\theta_i<\omega_i$.}
\label{fig:elements_bound}
\end{figure}
\end{proof}

\subsection{Gradient descent}
Now, we study the convergence of gradient descent on $L$, which optimizes~$L$ through the following recurrence
\begin{align}
\label{eq:gd_polar}
\tag{gradient descent}
\theta_{i}^{(q+1)} = \theta_{i}^{(q)}- \gamma \frac{d L}{d\theta_i}(\nu_q), \quad \quad \quad \nu_q := \frac{1}{n} \sum_{i=1}^n \delta_{\theta_i^{(q)}}.
\end{align}
The next lemma establishes the global convergence of \ref{eq:gd_polar}.
\begin{lemma} \label{lemma:gd_convergence}
Suppose that $\{\omega_i \in (-\pi,\pi)\}$ holds. Starting from distinct $\{ \theta_i^{(0)} \in (-\pi,\pi) \}$, \ref{eq:gd_polar} obeys
\begin{align}
W(\nu_q, \mu) \leq \epsilon
\end{align}
for $q\geq\floor{\frac{ n^2 \pi^2}{\epsilon}}+1$, and $\gamma=\epsilon$.
\end{lemma}
\noindent The last Lemma concludes the proof of Thm.~\ref{thm:main_result}. Remarkably, the result of the last lemma is stronger than that of Theorem~\ref{thm:main_result}: the last Lemma requires $\omega_i \in (-\pi,\pi)$, while $\omega_i \in [0, \pi)$ is sufficient for Theorem~\ref{thm:main_result}.
\begin{proof}[Proof of Lemma.~\ref{lemma:gd_convergence}]
Without loss of generality, we assume that $\theta_1^{(q)}\leq \dots \leq \theta_n^{(q)}$; then, \ref{eq:gd_polar} obeys
\begin{align}
\left| \theta_i^{(q+1)} - \omega_i \right|= \left|\theta_i^{(q)} - \omega_i - \gamma \frac{d L}{d\theta_i}(\nu_q) \right|.
\end{align}
Invoking Lemma~\ref{lemma:gradient_dir} yields
\begin{align}
\left| \theta_i^{(q+1)} - \omega_i \right| & = \left|\left|\theta_i^{(q)} - \omega_i\right| - \gamma \left|\frac{d L}{d\theta_i}\right|\right|.
\end{align}
Using Lemma~\ref{lemma:optimal_transport}, we get
\begin{align}
W(\nu_{q+1},\mu) & \leq \max_{i} \left| \theta_i^{(q+1)} - \omega_i \right| \\
& \leq \max_i \left|\left|\theta_i^{(q)} - \omega_i\right| - \gamma \left|\frac{d L}{d\theta_i}\right|\right| \\
& \leq \begin{cases} \max\{ W(\nu_q,\mu) - \frac{\epsilon}{\pi n^2},\epsilon \} & W(\nu_q,\mu) \geq \epsilon \\ \epsilon & \text{otherwise} \end{cases} \label{eq:convergencebound_gd}.
\end{align}
We complete the proof by contradiction.
Suppose that $W(\nu_m,\mu) \geq \epsilon$ holds for $m=1,\dots, q'$ where $q' = \floor{(\pi n)^2/\epsilon}+1$; then, induction over $m$ yields
\begin{align}
W\left(\nu_{q'},\mu\right) < W(\nu_0,\mu) - \pi <0.
\end{align}
The above inequality contradicts $W\geq 0$. Therefore, there exists $q_*<q'$ such that ${W(\nu_{q_*},\mu) \leq \epsilon}$. According to Eq.~\eqref{eq:convergencebound_gd}, $W(\nu_q,\mu) \leq \epsilon$ holds for all $q\geq q_*$.
\end{proof}

\section{Recovery from finite moments} \label{sec:finite_moments}
So far, we have established the convergence of \ref{eq:gd_polar}. Yet, this method uses infinite moments since $\int \partial k(\theta,\omega)d\mu(\omega)$ is required for computing the gradient. Now, we show that a finite number of moments suffices for recovery. Our moment design is inspired by random Fourier features for shift-invariant kernels~\cite{rahimi2007random}. Here, we use a particular Fourier series expansion designed to approximate the gradient of $L$, in the following form
\begin{align}
\label{eq:required_moments}
\Phi \mu = \left[ \Phi_1(\mu), \Phi_1'(\mu), \dots, \Phi_m(\mu), \Phi'_m(\mu) \right],
\end{align}
where
\begin{align*}
\Phi_m(\mu) &= \left(\frac{2}{\sqrt{(2m+1)\pi}}\right) \sum_{i=1}^n \sin\left((2m+1) \omega_i\right), \\
\Phi'_m(\mu) &= \left(\frac{2 \mathcal{I}}{\sqrt{(2m+1)\pi}}\right) \sum_{i=1}^n \cos\left( (2m+1) \omega_i\right),
\end{align*}
and $\mathcal{I}$ is the imaginary unit. The next Theorem proves that Algorithm~\ref{alg:recovery} recovers $\mu$ from $\Phi\mu$.

\begin{algorithm}[t!]
\caption{Measure recovery from finite moments}\label{alg:recovery}
\begin{algorithmic}
\Require $\Phi\mu$ defined in Eq.~\eqref{eq:required_moments}; stepsize $\gamma \in {{\mathbb {R}}}_+$.
\State Let $\nu_q = \frac{1}{n}\sum_{i=1}^n \delta_{\theta_i^{(q)}}$, $K = \floor{200 \pi^2 n^2 \log(4\pi\gamma^{-1})\gamma^{-1}}+1$, $m=800 n \gamma^{-1}$, and $g(\Delta) = \sum_{r=0}^m \frac{4}{\pi (2r+1)} \sin\left((2r+1)\Delta\right)$.
\For{$i=1,\dots,n$}
\For{$q < K$}
\State $\theta_{i}^{(q+1)} = \theta_{i}^{(q)} -\frac{\gamma}{n^2 \pi} \left(\sum_{j=1}^n g(\theta_i^{(q)} - \omega_j) - \sum_{j\neq i}^n \text{sign}(\theta_i^{(q)} - \theta_j^{(q)}) \right)$, where the first sum is evaluated from the moments $\Phi\mu$.
\EndFor
\EndFor
\State \textbf{Return} $\theta_1^{(K)},\dots, \theta_{n}^{(K)}$
\end{algorithmic}
\end{algorithm}

\begin{theorem} \label{thm:finite_momments}
Suppose that $\min_{i\neq j} | \omega_i - \omega_j| = \beta$ and $\{ \omega_i \in (-\pi,\pi) \}_{i=1}^n$. Algorithm~\ref{alg:recovery} with parameter
\begin{align}
\gamma = \min \left\{ \frac{\epsilon}{2},\frac{\beta}{10} \right\},
\end{align}
and starting from distinct $\{\theta_{i}^{(0)}\}_{i=1}^n$, obtains $ W(\nu_K, \mu) \leq \epsilon $ from $m=800n(\epsilon^{-1} + \beta^{-1})$ moments with total computational complexity $\text{O}(n^4 \log(1/\epsilon)/\epsilon^2+ n^4 \log(1/\beta)/\beta^2)$.
\end{theorem}
\noindent To the best of our knowledge, Algorithm~\ref{alg:recovery} is the first polytime recovery method from a finite number of moments. As will be illustrated in experiments, this algorithm recovers the particles $\omega_1, \dots, \omega_n$ one by one.
\begin{proof}
The moments $\Phi_m$ are designed based on the Fourier series expansion of $\text{sign}$, namely the following equation:
\begin{align}
\Delta \neq 0 \text{ and } \Delta \in (-\pi, \pi): \text{sign}(\Delta) = \sum_{r=0}^\infty \left(\frac{4}{\pi (2r+1)}\right) \sin((2r+1)\Delta).
\end{align} The order-$m$ Fourier series expansion approximates $\text{sign}(\Delta)$ by the following function: \begin{align} g(\Delta) = \sum_{r=0}^m \left(\frac{4}{\pi (2r+1)}\right) \sin((2r+1)\Delta). \end{align} For $\Delta = \theta - \omega$, we get \begin{align} g(\theta- \omega) = \sum_{r=0}^m \frac{4}{\pi(2r+1)}\left(\sin((2r+1)\theta) \cos((2r+1)\omega) - \cos((2r+1)\theta)\sin((2r+1)\omega)\right). \end{align} \noindent The next lemma establishes two important approximation bounds for $g$. \begin{lemma} \label{lemma:g_analysis} For all $\Delta \in {{\mathbb {R}}}- \{0\}$, we have \begin{align} | g(\Delta) - \text{sign}(\Delta)| \leq 4 \left( \frac{1}{m |\Delta|} + \frac{1}{m}\right). \end{align} Furthermore, $|g(\Delta)|\leq 1.9$ for $|\Delta|\leq \frac{\pi}{4}$ and $m>12$. \end{lemma} \begin{proof} We prove the claim for $\Delta>0$; the proof extends easily to $\Delta<0$. According to the definition of~$g$, we have \begin{align} | g(\Delta) - \text{sign}(\Delta)| & = \left| \left(\frac{4}{\pi }\right) \underbrace{\sum_{r=m+1}^\infty \frac{\sin((2r+1)\Delta)}{2r+1}}_{S_{m+1}(\Delta)} \right|. \end{align} Define $F_m$ as \begin{align} F_m(\Delta) = \sum_{r=1}^m \frac{\sin(r\Delta)}{r}. \end{align} In terms of $F_m$, $S_{m+1}(\Delta)$ reads \begin{align} \label{eq:seq} S_{m+1}(\Delta) = F_{\infty}(\Delta) -F_{2m}(\Delta)- \frac{1}{2}\left( F_{\infty}(2\Delta)- F_{m}(2 \Delta)\right). \end{align} Using $\frac{1}{r}\sin(r\Delta) = \int_{0}^\Delta \cos(r t)dt$, it is easy to show that (see~\cite{sinseries} for a detailed derivation) \begin{align} \label{eq:fm} F_m(\Delta) = - \frac{\Delta}{2} + \underbrace{\int_{0}^\Delta \left( \frac{1}{2\sin(t/2)} - \frac{1}{t} \right) \sin\left(\left(m+\frac{1}{2}\right)t\right)dt}_{\epsilon_m(\Delta)} + \int_{0}^{\Delta(m+\frac{1}{2})} \left(\frac{\sin(t)}{t}\right) dt \end{align} holds for $\Delta>0$. Integration by parts yields \begin{align} |\epsilon_m(\Delta)| & \leq \left| \int_{0}^\Delta \left( \underbrace{\frac{1}{2\sin(t/2)}-\frac{1}{t}}_{h(t)}\right) \mathcal{I} e^{(m+\frac{1}{2})t\mathcal{I}} dt\right|\\ & = \left| \frac{1}{m+\frac{1}{2}} \left( h(\Delta) e^{(m+\frac{1}{2})\Delta \mathcal{I}} - \lim_{\epsilon \to 0} h(\epsilon) e^{\epsilon \mathcal{I}} + \int_{0}^\Delta h'(t) e^{(m+\frac{1}{2})t\mathcal{I}}dt \right) \right| \\ & \leq \frac{1}{m} \left( \int_{0}^\Delta \underbrace{|h'(t)|}_{\leq 1/\pi} dt + \underbrace{|h(\pi)|}_{\leq 1/2} \right) \\ & \leq \frac{3}{2m}. \label{eq:epsilonbound} \end{align} Replacing the above bound into Eq.~\eqref{eq:seq} yields \begin{align} \label{eq:smbound} |S_{m+1}(\Delta) | \leq \left| \int_{\Delta(2m+\frac{1}{2})}^\infty \frac{\sin(t)}{t} dt \right| + \frac{1}{2}\left| \int_{\Delta(2m+1)}^\infty \frac{\sin(t)}{t} dt \right| + \frac{3}{m}. \end{align} An application of the Laplace transform yields (see~\cite{mathover}) \begin{align} \left|\int_{x}^\infty \frac{\sin(t)}{t} dt\right| & = \left|\int_{0}^\infty \left(\frac{ \sin(x)\cos(t)+\cos(x)\sin(t) }{t}\right) dt \right| \\ & = \left|\int_{0}^\infty \left(\frac{ s\sin(x)+\cos(x) }{1+s^2}\right) \exp(- x s) ds \right| \\ & \leq \left| \int_{0}^\infty \exp(- x s) ds \right| \leq \frac{1}{|x|} \label{eq:laplace_result}. \end{align} Substituting this into Eq.~\eqref{eq:smbound} concludes the proof of the first inequality.
\noindent For the second inequality, we use Eq.~\eqref{eq:seq} together with \eqref{eq:epsilonbound}: \begin{align} | g(\Delta) | & = \left|\frac{4}{\pi} \left(F_{2(m+1)}(\Delta) - \frac{1}{2} F_{m+1}(2\Delta)\right)\right| \\ & \leq \frac{2}{\pi} \left| \int_{0}^{(2m+\frac{5}{2}) \Delta} \left(\frac{\sin(t)}{t}\right) dt \right| + \frac{4}{\pi} \left| \int_{(2m+\frac{5}{2}) \Delta}^{(2m+3)\Delta} \frac{\sin(t)}{t} dt \right| + \frac{3}{m} \label{eq:gbound}. \end{align} Since the sine integral attains its maximum at $\pi$, \begin{align} \left|\int_{0}^x \frac{\sin(t)}{t} dt\right| & \leq \left|\int_{0}^\pi \frac{\sin(t)}{t} dt\right| = \frac{\pi}{2}. \end{align} Furthermore, for $0<|\Delta|< \frac{\pi}{4}$, we get \begin{align} \left| \int_{(2m+\frac{5}{2}) \Delta}^{(2m+3)\Delta} \frac{\sin(t)}{t} dt \right| \leq \int_{(2m+\frac{5}{2}) \Delta}^{(2m+3)\Delta} \left|\frac{\sin(t)}{t}\right| dt \leq \frac{\Delta}{2} \leq \frac{\pi}{8}. \end{align} Substituting the last two bounds into Eq.~\eqref{eq:gbound} concludes \begin{align} |g(\Delta)| \leq 1.9. \end{align} \end{proof} \noindent In Algorithm~\ref{alg:recovery}, $\text{sign}$ is replaced by $g$, which leads to \begin{align} \theta_{i}^{(q+1)} = \theta_i^{(q)} - \frac{\gamma}{n^2\pi} \left(\underbrace{\sum_{j} \text{sign}(\theta_i^{(q)}-\theta_j^{(q)}) - \sum_{j} g(\theta_i^{(q)}-\omega_j)}_{\widehat{g}_i(q)} \right). \end{align} \noindent Without loss of generality, we assume $\theta_1 \leq \dots \leq \theta_n$. For ease of notation, we write $\Delta_j = \theta_j^{(q)} - \omega_j$. The approximation error for the gradient is bounded as \begin{align} \label{eq:grad_bound} \left| \frac{d L}{d \theta_i}(\nu_q)- \widehat{g}_i \right| \leq \frac{1}{\pi n^2}\sum_{r} | g(\Delta_r) - \text{sign}(\Delta_r)| \leq \begin{cases} \frac{4}{\pi n m} \left(1 + \frac{1}{\gamma} \right) & |\Delta_j | \geq \gamma \\ \frac{4}{\pi n m} \left(1 + \frac{1}{\gamma} + \frac{1.9}{n^2 \pi}\right) & \text{otherwise} \end{cases}, \end{align} where we use the last lemma to obtain the second inequality. Since $m \geq 800 n/\gamma$, we get \begin{align} \left| \frac{d L}{d \theta_i}(\nu_q)- \widehat{g}_i \right| \leq \begin{cases} 0.01/(n^2 \pi) & |\Delta_j | \geq \gamma \\ 1.91/(n^2 \pi) & \text{otherwise} \end{cases}. \end{align} If $|\Delta_j| \leq \gamma$ for $j \neq i$, then the cardinality argument in Fig.~\ref{fig:elements_bound} leads to the following inequality: \begin{align} n^2 \pi \left(\frac{d L}{d \theta_i}\right) \text{sign}(\theta_i - \omega_i) \geq 2. \end{align} Combining the above result with Eq.~\eqref{eq:grad_bound} yields \begin{align} \label{eq:grad_decay} |\theta_i^{(q+1)} - \omega_i | \leq |\theta_i^{(q)} - \omega_i | - \frac{ 0.08\gamma}{\pi n^2}, \quad |\Delta_i| \geq \gamma. \end{align} Since $|\theta_i^{(q)} - \omega_i|\leq 2\pi$ holds, we get \begin{align} |\theta_i^{(q+1)} - \omega_i | \leq \left( 1- \frac{0.02\gamma}{\pi^2 n^2}\right) | \theta_i^{(q)} - \omega_i | , \quad |\Delta_i| \geq \gamma. \end{align} Suppose that $|\theta_i^{(q)}-\omega_{i}|\geq \gamma$ for all $q = 1, \dots, K = 200 \pi^2 n^2 \log(2\pi/\gamma)/\gamma $; then, induction over $q$ yields \begin{align} |\theta_i^{(K)} - \omega_i | \leq \left(1-\frac{0.02 \gamma }{\pi^2 n^2}\right)^K |\theta_i^{(0)} - \omega_i | < \gamma. \end{align} Therefore, there exists $q_*\leq K$ such that $|\theta_{i}^{(q_*)}-\omega_i|\leq \gamma $. Lemma~\ref{lemma:g_analysis} proves that $| \widehat{g}_i| \leq 1$, hence $| \theta_{i}^{(q_*+1)} - \omega_i | \leq 2\gamma $.
Similarly, we get $| \theta_{i}^{(q)}- \omega_i | \leq 2 \gamma$ for all $q>q_*+1$. \end{proof} \section{Experimental validations} \label{sec:experiments} \subsection{A warm-up experiment} We experimentally validate the convergence of Algorithm~\ref{alg:recovery} established in Theorem~\ref{thm:finite_momments}. In particular, we run Algorithm~\ref{alg:recovery} to recover $\mu = \frac{1}{4}\sum_{i=1}^4 \delta_{\omega_i}$, where the $\omega_i$ are drawn i.i.d.\ from uniform$[0,\pi)$. Figure~\ref{fig:recovery_result} illustrates the dynamics of Algorithm~\ref{alg:recovery} by tracking all $\{\theta_{i}^{(q)}\}_{i=1}^4$, and we observe that $\theta_{1}^{(q)}, \dots, \theta_{4}^{(q)}$ (blue, purple, gray and green lines) converge to $\omega_1,\dots, \omega_4$. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{img/particle_learning.pdf} \caption{ A simulation of Algorithm~\ref{alg:recovery}; vertical axis: $\theta_i^{(q)}$, where $i$ is marked by colors; horizontal axis: $q$; red lines: $\omega_1, \dots, \omega_4$. We used $n=4$, $\gamma =.005$, $m=500$, and $K=2000$ for this experiment.} \label{fig:recovery_result} \end{figure} \subsection{Recovery of many particles} Now, we experimentally validate the recovery of measures $\mu$ with larger support, containing $10$ and $100$ particles uniformly drawn from $[0,\pi)$. Figure~\ref{fig:recoveryn} plots $W(\nu_K,\mu)$ for different values of $K$, which controls the computational complexity of Algorithm~\ref{alg:recovery}. In this figure, we observe the convergence of $\nu_K$ towards $\mu$ as $K$ increases. For more particles, a larger $K$ is required, which is also consistent with the complexity established in Theorem~\ref{thm:finite_momments}. All implementations are available at \href{https://github.com/hadidaneshmand/measure_recovery_codes}{https://github.com/hadidaneshmand/measure\_recovery\_codes}. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{img/wq.pdf} \caption{Recovery of many particles from $m=1000$ moments with $\gamma=0.005$. Mean and 90\% confidence interval of 6 independent runs. } \label{fig:recoveryn} \end{figure} \section{High dimensional recovery} \label{sec:high_dimensional_recovery} Leveraging the one-dimensional recovery established in the previous sections, we prove Theorem~\ref{thm:drecovery}. Let $\mu := \frac{1}{n} \sum_{i=1}^n \delta_{w_i}$, where $w_i$ is a $d$-dimensional vector with coordinates $[w_{i}]_1,\dots, [w_i]_d$. We first establish a recovery method for $[w_i]_q \in [-1,+1]$ under a strong condition; then we relax the condition using a randomized algorithm for $\{ w_i \in \S_{d-1} \}$. \subsection{Deterministic recovery} Algorithm~\ref{alg:recovery} recovers the individual coordinates $\{[w_i]_q \in [-\pi,\pi] \}$ from the moments of $\mu'_{q} := \frac{1}{n} \sum_{i=1}^n \delta_{[w_i]_q}$. To arrange these coordinates in vectors, we rely on the recovery of the following distributions: \begin{align} \mu_{q}''(w) = \frac{1}{n} \sum_{i=1}^n \delta_w([w_i]_1+[w_i]_q), \quad q\in \{2,\dots,d\}. \end{align} The recovery of the above distributions allows us to recover the vectors $w_1,\dots,w_n$ from the individual coordinates $[w_i]_q$. Algorithm~\ref{alg:recovery_d} outlines the details of the recovery; we study its recovery guarantees below. The following assumption significantly simplifies our theoretical analysis. Yet, this assumption is not necessary for the recovery. In the next section, we establish the recovery under a weaker assumption.
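\noindent For illustration, the following NumPy sketch implements the coordinate-matching step of Algorithm~\ref{alg:recovery_d}, under the assumption that the one-dimensional recoveries of $\mu'_q$ and $\mu''_q$ have already been computed exactly; the function name and the toy input are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def assemble_vectors(v, y, beta):
    # v[q]: recovered support of mu'_q (per-coordinate particles)
    # y[q]: recovered support of mu''_q = {[w_i]_1 + [w_i]_q}
    d, n = len(v), len(v[0])
    out = np.zeros((n, d))
    out[:, 0] = v[0]            # the first coordinate anchors each vector
    for q in range(1, d):
        for i in range(n):      # match [v_i]_1 + [v_j]_q against the sums
            for j in range(n):
                if np.min(np.abs(out[i, 0] + v[q][j] - y[q])) < beta / 5:
                    out[i, q] = v[q][j]
    return out

w = np.array([[0.3, -0.8], [-0.5, 0.6]])        # ground-truth particles
v = [np.sort(w[:, q]) for q in range(2)]        # exact 1-d recoveries
y = [np.sort(w[:, 0] + w[:, q]) for q in range(2)]
print(assemble_vectors(v, y, beta=0.2))         # recovers w up to permutation
\end{verbatim}
On this toy input the matching tolerance $\beta/5$ identifies, for each anchor coordinate, the unique second coordinate that is consistent with one of the recovered sums.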
\begin{assumptioncost}{1}{\beta} Let $\Delta_{i,q} = [w_i]_1+[w_i]_q$. We assume that there exists a positive constant $\beta>0$ such that \begin{align} \beta \leq \min \{ \min_{i\neq j,q} | [w_i]_q -[w_j]_q|, \min_{i\neq j,q} | \Delta_{i,q} - \Delta_{j,q}| \}. \end{align} \end{assumptioncost} \begin{algorithm}[h!] \caption{$d$-dimensional measure recovery} \label{alg:recovery_d} \begin{algorithmic} \Require $\Phi \mu'_{q}, \Phi \mu''_{q}$ in Eq.~\eqref{eq:required_moments}, $\epsilon$, $\beta$ in \ref{assumption}. \For{$q=1,\dots,d$} \State Run Algorithm~\ref{alg:recovery}$\left(\Phi \mu'_{q}, \gamma=\min\{\frac{1}{2}\epsilon,\frac{1}{10}\beta\}\right)$ to get $\{[v_i]_{q}\}_{i=1}^n$ \EndFor \For{$q=2,\dots,d$} \State Run Algorithm~\ref{alg:recovery}$(\Phi \mu''_{q},\gamma= \min\{\frac{1}{2} \epsilon,\frac{1}{10}\beta\})$ to get $\{[y_i]_{q}\}_{i=1}^n$ \EndFor \For{$i=1,\dots,n$} \State $[v'_{i}]_1 \leftarrow [v_i]_1 $ \EndFor \For{$i, j,k=1,\dots,n$} \For{$q=2,\dots,d$} \If{ $|[v_i]_1 + [v_j]_q - [y_k]_q| < \beta/5 $} \State $[v'_{i}]_q \leftarrow [v_j]_q$ \EndIf \EndFor \EndFor \vspace{0.2cm} \State \textbf{Return} $\{ v'_1,\dots, v_{n}'\}$ \end{algorithmic} \end{algorithm} \noindent Leveraging the above assumption, the next lemma establishes the recovery of $\mu$ with Algorithm~\ref{alg:recovery_d}. \begin{lemma} \label{lemma:recovery_d} Suppose that \ref{assumption}($\beta$) holds; then, Algorithm~\ref{alg:recovery_d} returns $\{v'_i \in {{\mathbb {R}}}^d \}_{i=1}^n $ such that \begin{align} \min_{\sigma \in \Lambda}\max_{i} \|v'_i - w_{\sigma(i)} \|_{\infty} \leq \epsilon \end{align} holds with total computational complexity $\text{O}(n^4 d \left( \log(1/\epsilon)/\epsilon^2 + \log(1/\beta)/\beta^2 + 1\right))$. \end{lemma} \begin{proof} Invoking Theorem~\ref{thm:finite_momments} yields that there exists a permutation $\sigma$ of the indices $\{1,\dots,n\}$ such that \begin{align} | [v_i]_q - [w_{\sigma(i)}]_q | \leq \min \{ \epsilon,\frac{1}{5}\beta \} \end{align} holds for all $i=1,\dots,n$. Similarly, there exists a permutation $\sigma'$ such that \begin{align} | [y_{\sigma'(i)}]_q - \left( [w_{\sigma(i)}]_1 + [w_{\sigma(i)}]_q \right) | \leq \epsilon \end{align} holds for all $i=1,\dots,n$. \ref{assumption}($\beta$) yields \begin{align} | [v_i]_q - [v_j]_q| \geq | [w_{\sigma(i)}]_q - [w_{\sigma(j)}]_q| - 2\epsilon \geq \beta/2 \quad \quad \quad (\epsilon \leq \beta/4). \end{align} Therefore, \begin{align} | [v_i]_1 + [v_i]_q - [y_{\sigma'(i)}]_q | \leq | [v_i]_1 + [v_i]_q - [w_{\sigma(i)}]_1 - [w_{\sigma(i)}]_q | + \epsilon \leq 3\epsilon. \end{align} Furthermore, for all $j \neq i$, we get \begin{align} | [v_i]_1 + [v_i]_q - [y_{\sigma'(j)}]_q | & \geq | [v_i]_1 + [v_i]_q - [w_{\sigma(j)}]_1 - [w_{\sigma(j)}]_q | - \epsilon \\ & \geq | [w_{\sigma(i)}]_1 + [w_{\sigma(i)}]_q - [w_{\sigma(j)}]_1 - [w_{\sigma(j)}]_q | - 3\epsilon \\ & \geq \beta - 3\epsilon \geq \beta/5. \end{align} Combining the last two inequalities concludes the proof. \end{proof} \subsection{Randomized recovery} We now propose a randomized method that recovers arbitrary distinct particles $\{w_i\}$. \begin{assumptioncost}{2}{\ell} \label{assume:distinct} We assume that the particles $\left\{w_i \in {{\mathbb {R}}}^d\right\}_{i=1}^n$ are distinct, namely \begin{align} \| w_i - w_j \| \geq \ell \end{align} holds for $i\neq j \in \{ 1, \dots, n \}$. \end{assumptioncost} Suppose that $Z \in {{\mathbb {R}}}^{d\times d}$ is a matrix whose elements are i.i.d. Gaussian random variables, and let $Z' = Z/\| Z\|$.
We define the empirical distributions \begin{align} \widehat{\mu}_q = \frac{1}{n} \sum_{i=1}^n \delta( [Z' w_i]_q ), \quad \quad \Tilde{\mu}_r = \frac{1}{n} \sum_{i=1}^n \delta( [Z' w_i]_1 +[Z' w_i]_r ) \end{align} for $q\in \{1,\dots,d\}$ and $r \in \{2,\dots, d\}$. The next lemma proves that Algorithm~\ref{alg:recovery_d_rand} recovers the particles $\{ w_i \}_{i=1}^n$ from the moments of the above empirical distributions. \begin{lemma} Suppose that the $\{ w_i \in \S_{d-1} \}_{i=1}^n$ obey \ref{assume:distinct}($\ell$). With probability at least $1-1/\kappa$, Algorithm~\ref{alg:recovery_d_rand} returns $\{v_1, \dots, v_n \in {{\mathbb {R}}}^d \}$ for which \begin{align} \min _{\sigma \in \Lambda}\max_{i} \| v_i - w_{\sigma(i)}\| = \text{O}( \epsilon) \end{align} holds with the total computational complexity \[\text{O}\left(n^{8}d^{4}\kappa^4\log\left(\frac{nd}{\ell}\right)\left(\frac{1}{\ell^2}\right) + n^4 d^3\left( 1 + \log\left(\frac{d}{\epsilon}\right)\frac{1}{\epsilon^2} \right) \right).\] \end{lemma} \begin{algorithm}[h!] \caption{$d$-dimensional randomized recovery} \label{alg:recovery_d_rand} \begin{algorithmic} \Require The random matrix $Z' \in {{\mathbb {R}}}^{d\times d}$, the moments $\{\Phi \widehat{\mu}_q\}_{q=1}^n$ and $\{\Phi \Tilde{\mu}_r \}_{r=2}^n$ in Eq.~\eqref{eq:required_moments}, the constant $\ell$ in \ref{assume:distinct}($\ell$), and $\epsilon'>0$. \State Run Algorithm~\ref{alg:recovery_d}$\left(\{\Phi \widehat{\mu}_q\}_{q=1}^n,\{\Phi \Tilde{\mu}_r \}_{r=2}^n, \beta = \Omega\left(\frac{\ell}{\kappa^2 d^{3/2} n^{2}}\right) , \epsilon = \frac{\epsilon'}{d}\right)$ and store outputs $\{v'_i\}_{i=1}^n$. \State \textbf{Return} $\{ (Z')^{-1} v'_1,\dots, (Z')^{-1} v_{n}'\}$ \end{algorithmic} \end{algorithm} \begin{proof} We will repeatedly use the following concentration bound for the condition number of the random matrix $Z$. \begin{lemma}[ \cite{szarek1991condition}] \label{lemma:condition} There are universal constants $c_1$ and $c_2$ such that \begin{align} P \left( \| Z \| \geq \alpha \sqrt{d}\right) & \leq (c_1 \alpha)^{d^2} \\ P\left( \| Z^{-1} \| \leq \alpha^{-1} \sqrt{d}\right) & \leq \exp(-c_2 \alpha^2) \end{align} hold for $\alpha>0$. \end{lemma} \noindent The idea is simple: the projected particles $\{Z' w_i\}$ obey \ref{assumption}$(\beta)$ for a small $\beta$ with high probability. Therefore, we can leverage Algorithm~\ref{alg:recovery_d} for the recovery, together with its theoretical guarantees established in Lemma~\ref{lemma:recovery_d}. \begin{lemma} Under \ref{assume:distinct}($\ell$), $\{ Z' w_i \}_{i=1}^n$ obey \ref{assumption} for $\beta =\Omega(\ell/(\kappa^2 d^{3/2} n^{2}))$ with probability at least $1-\frac{1}{\kappa}$. \end{lemma} \begin{proof} Let $v_{ij} := w_i - w_j$, and let $z_q \in {{\mathbb {R}}}^d$ denote the rows of $Z$. Then, $ \zeta_q(ij) = (z^\top_q v_{ij})^2/\| v_{ij} \|^2 $ is a $\chi^2$ random variable for which the following bound holds: \begin{align} P\left( \| v_{ij} \|^2 \zeta_q(ij) \leq \kappa^2 \|v_{ij}\|^2 \right) \leq \kappa. \end{align} Setting $\kappa =1/(10dn^2)$ and taking a union bound over all $i,j,$ and $q$ concludes \begin{align} P\left( \left\{\exists i,j,q: |z^\top_{q} (w_i-w_j)|\leq \frac{\ell}{2\kappa d n^2} \right\}\right) \leq \frac{1}{2\kappa}. \end{align} Therefore, \begin{align} |(z_1+z_{r})^\top (w_i-w_j)| \leq | z_1^\top (w_i-w_j) | + | z_r^\top (w_i-w_j)| \leq \ell/(\kappa d n^2 ) \end{align} holds with probability $\kappa^{-1}$.
According to Lemma~\ref{lemma:condition}, the spectral norm of the random matrix $Z$ is $\text{O}(\kappa \sqrt{d})$ with probability at least $1-1/\kappa$. Combining the spectral bound and the last inequality completes the proof. \end{proof} \noindent Define $\widehat{w}_i := Z w_i/\|Z\|$. According to Lemma~\ref{lemma:recovery_d}, Algorithm~\ref{alg:recovery_d} returns $\{ v'_i \}_{i=1}^n$ for which \begin{align} \max_{i} \min_{\sigma \in \Lambda} \|v'_i - \widehat{w}_{\sigma(i)} \|_{\infty} \leq \frac{\epsilon}{d} \end{align} holds with probability at least $1-1/\kappa$ for some permutation $\sigma$. According to Lemma~\ref{lemma:condition}, there exists a universal constant $c$ such that \begin{align} P( \| Z^{-1} \| \geq \kappa c \sqrt{d}) \leq 1/\kappa. \end{align} Recall the output of Algorithm~\ref{alg:recovery_d_rand}: $v_i = \|Z \| Z^{-1} v'_i$. Using the above bound, we complete the proof: \begin{align} \| v_i - w_{\sigma(i)}\| & = \|\| Z\| Z^{-1} v'_i - \|Z \| Z^{-1} \underbrace{ \left(Z w_{\sigma(i)}/\|Z\|\right)}_{\widehat{w}_{\sigma(i)}} \| \\ & \leq \|Z\|\| Z^{-1} \| \|v'_i - \widehat{w}_{\sigma(i)}\| \\ & \leq \text{O}(d) \|v'_i - \widehat{w}_{\sigma(i)}\|_\infty\\ & \leq \text{O}(\epsilon). \end{align} \end{proof} \section{The index distribution} \label{sec:indix_distribution} In this section, we extend Theorem~\ref{thm:main_result} to a broad family of random vectors $x$. In particular, we assume $x = s(\alpha)$ where $\alpha$ has the density $p(\alpha)$ obeying the next assumption. \begin{assumptioncost}{3}{b,B} \label{assumption} Let $x = s(\alpha)$ where $\alpha \sim p$. We assume: \begin{itemize} \item[i.] ${p(\alpha) = p(\alpha+\pi/2) = p(\alpha+\pi)= p(\alpha+3\pi/2)}$ for $\alpha \in [0,\pi/2)$. \item[ii.] $p(\alpha)>b$ for all $\alpha \in[0,2\pi)$. \item[iii.] $p(\alpha) \leq B$ for all $\alpha \in[0,2\pi)$. \end{itemize} \end{assumptioncost} \noindent The next proposition characterizes the kernel of \ref{eq:neural_measurment} with a random index $x$ obeying \ref{assumption}. \begin{proposition} \label{prop:kernelp} Consider \ref{eq:neural_measurment} $\Phi_x$ with random index $x \sim p$ obeying \ref{assumption}.i; then, \begin{align} k(\theta,\omega) = \int \Phi_{s(\alpha)}(s(\theta))\Phi_{s(\alpha)}(s(\omega))\, dp(\alpha)= 1 - 2\left| P(\theta) - P(\omega)\right| \end{align} holds for $P(q) = \int_{0}^q p(\alpha)d\alpha$ and $\theta, \omega \in [0, \pi)$. \end{proposition} \begin{proof} According to the definition, we get \begin{align} k(\theta,\omega) & = \textbf{E} \left[ \varphi(\cos(\theta - \alpha)) \varphi(\cos(\omega - \alpha)) \right] \\ &= \left( 2 \left( \int_{0}^{a} + \int_{b}^{2 \pi}\right) p(\alpha)d\alpha \right), \end{align} where $a = \pi/2 + \min\{ \theta, \omega\}$ and $b=3\pi/2 + \max\{ \theta, \omega\}$. \ref{assumption} concludes \begin{align} k(\theta,\omega)/2 = P(a) + P(2\pi) - P(b)= \frac{1}{2} - |P(\theta) - P(\omega)|. \end{align} \end{proof} \noindent Substituting the above result into the \ref{eq:mmd} objective leads to the following recovery objective: \begin{align*} \left(\frac{n^2}{2} \right)L(\theta) =2 \sum_{i,j=1}^n |P(\theta_i) - P(\omega_j)| - \sum_{i,j=1}^n |P(\theta_i) - P(\theta_j)| - \sum_{i,j=1}^n |P(\omega_i) - P(\omega_j)|.
\end{align*} \subsection{A polynomial-time recovery with gradient descent} We study the convergence of gradient descent: \begin{align} \label{eq:gd_lp} \tag{GD} \theta_{i}^{(q+1)} = \theta_{i}^{(q)}- \gamma \frac{d L}{d\theta_i} \left(\nu_q\right), \quad \nu_q := \frac{1}{n} \sum_{j=1}^n \delta_{\theta_j^{(q)}}. \end{align} The next lemma establishes the convergence of the above algorithm. \begin{lemma} \label{lemma:gdlp_convergence} Under~\ref{assumption}$(b,B)$, \ref{eq:gd_lp} obeys \begin{align} W(\nu_q, \mu) \leq \gamma B \end{align} for $q\geq\floor{\frac{\pi n^2}{b \gamma}}+1$, as long as the $\theta_k^{(0)}, \omega_k \in [0,\pi)$ are all distinct. \end{lemma} \begin{proof} \noindent The proof incorporates the change of variables $\Theta_i^{(q)} = P(\theta_i^{(q)})$ into the proof of Lemma~\ref{lemma:gd_convergence}. Without loss of generality, we assume that $\theta_1^{(q)}\leq \dots \leq \theta_n^{(q)}$; then, the \ref{eq:gd_lp} method obeys \begin{align} \left| \theta_i^{(q+1)} - \omega_i \right|= \left|\theta_i^{(q)} - \omega_i - \gamma \left(\frac{d L}{d\Theta_i}\right) p(\theta_i^{(q)}) \right|. \end{align} Since $P$ is monotonically increasing, $\text{sign}(\theta_i^{(q)}- \omega_i) = \text{sign}(\Theta_i^{(q)}- P(\omega_i))$. Using the above result together with Lemma~\ref{lemma:gradient_dir}, we get \begin{align} \left| \theta_i^{(q+1)} - \omega_i \right| & = \left|\left|\theta_i^{(q)} - \omega_i\right| - \gamma p(\theta_i^{(q)})\left|\frac{d L}{d\Theta_i} \right|\right|. \end{align} Then, invoking Lemma~\ref{lemma:optimal_transport} yields \begin{align} W(\nu_{q+1},\mu) & \leq \max_i \left|\left|\theta_i^{(q)} - \omega_i\right| - \gamma p(\theta_i^{(q)})\left|\frac{d L}{d\Theta_i} \right|\right| \\ & \leq \begin{cases} \max\{W(\nu_q,\mu) - \frac{\gamma b}{\pi n^2},\gamma B\} & W(\nu_q,\mu) \geq B\gamma \\ \gamma B & \text{otherwise}. \end{cases} \label{eq:convergencebound_gdlp} \end{align} The rest of the proof is the same as the proof of Lemma~\ref{lemma:gd_convergence}. \end{proof} \section*{Acknowledgments and Disclosure of Funding} This work was partially funded by the French government under the management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute). We also acknowledge support from the European Research Council (grant SEQUOIA 724063) and the Swiss National Science Foundation. \section{Appendix} \subsection{A note on the necessary assumption} \label{sec:assumption} Theorem~\ref{thm:main_result} and Lemmas~\ref{lemma:gd_convergence} and~\ref{lemma:gdlp_convergence} assume that the $\{w_i\}$ lie on the upper half of the unit circle. The next lemma proves that this assumption is necessary for the recovery. \begin{lemma} \label{lemma:assumption} For the general case $\{w_i \in \S_1\}$, \ref{eq:neuralmm} admits global minima that are not equivalent to $\mu$ if $\varphi$ is a step function. \end{lemma} \begin{proof} Suppose that for all $i \in \{1,\dots, n\}$ there exists an $i^* \in \{1,\dots, n\}$ such that $w_i = -w_{i^*}$. Consider $\nu = \frac{1}{n} \sum_{i=1}^n \delta_{v_i}$ where $v_i = -v_{i^*}$ also holds for $i,i^* \in \{1,\dots, n\}$. This symmetry leads to \begin{align} \Phi_x \nu = \Phi_x \mu = 0 \implies L(\nu) = 0. \end{align} \end{proof} \bibliographystyle{amsplain}
\renewcommand{\bibsection}{\subsubsection*{\bibname}} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{mathtools} \usepackage{bm} \usepackage[round]{natbib} \usepackage[inline]{enumitem} \usepackage{tikz} \usepackage{booktabs} \usepackage{multirow} \usepackage{graphicx} \usepackage{subcaption} \usepackage{mwe} \usepackage{array} \newcommand{\PreserveBackslash}[1]{\let\temp=\\#1\let\\=\temp} \newcolumntype{C}[1]{>{\PreserveBackslash\centering}p{#1}} \newcolumntype{R}[1]{>{\PreserveBackslash\raggedleft}p{#1}} \newcolumntype{L}[1]{>{\PreserveBackslash\raggedright}p{#1}} \usepackage{color} \usepackage{colortbl} \definecolor{deepblue}{rgb}{0,0,0.5} \definecolor{deepred}{rgb}{0.6,0,0} \definecolor{deepgreen}{rgb}{0,0.5,0} \definecolor{gray}{rgb}{0.7,0.7,0.7} \usepackage{hyperref} \hypersetup{ colorlinks = true, urlcolor = black, linkcolor = blue, citecolor = blue } \newtheorem{assumption}{Assumption} \newtheorem{problem}{Problem} \newtheorem{defn}{Definition} \newtheorem{lemma}{Lemma} \newtheorem{theorem}{Theorem} \newtheorem{corollary}{Corollary} \newtheorem{fact}{Fact} \DeclareMathOperator{\vcdim}{VCdim} \DeclareMathOperator{\ddim}{c_{\text{dd}}} \DeclareMathOperator{\E}{\mathbb E} \DeclareMathOperator{\nnz}{nnz} \DeclareMathOperator{\determinant}{det} \DeclareMathOperator{\Var}{Var} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\softmax}{softmax} \newcommand{\parent}[1]{\texttt{parent}({#1})} \renewcommand{\star}[1]{{#1}^{*}} \newcommand{\bad}[1]{{#1}^{\textit{bad}}} \newcommand{\trans}[1]{{#1}^{T}} \newcommand{\lone}[1]{{\lVert {#1} \rVert}_1} \newcommand{\ltwo}[1]{{\lVert {#1} \rVert}_2} \newcommand{\lp}[1]{{\lVert {#1} \rVert}_p} \newcommand{\linf}[1]{{\lVert {#1} \rVert}_\infty} \newcommand{\lF}[1]{{\lVert {#1} \rVert}_F} \newcommand{\dist}[2]{d_{{#1},{#2}}} \newcommand{\level}[1]{\texttt{level}({#1})} \newcommand{\depth}[1]{\texttt{depth}({#1})} \DeclareMathOperator*{\erm}{ERM} \newcommand{\fixme}[1]{\noindent{\color{red}\textbf{FIXME:} {#1}}} \newcommand{\fixmemike}[1]{\noindent{\color{blue}\textbf{FIXME (Mike):} {#1}}} \newcommand{\ignore}[1]{} \begin{document} \twocolumn[ \aistatstitle{The Tree Loss: Improving Generalization with Many Classes} \aistatsauthor{ Yujie Wang \And Mike Izbicki } \aistatsaddress{ Claremont Graduate University \And Claremont McKenna College } ] \begin{abstract} Multi-class classification problems often have many semantically similar classes. For example, 90 of ImageNet's 1000 classes are for different breeds of dog. We should expect that these semantically similar classes will have similar parameter vectors, but the standard cross entropy loss does not enforce this constraint. We introduce the tree loss as a drop-in replacement for the cross entropy loss.
The tree loss re-parameterizes the parameter matrix in order to guarantee that semantically similar classes will have similar parameter vectors. Using simple properties of stochastic gradient descent, we show that the tree loss's generalization error is asymptotically better than the cross entropy loss's. We then validate these theoretical results on synthetic data, image data (CIFAR100, ImageNet), and text data (Twitter). \end{abstract} \section{Introduction} The cross entropy loss is the most widely used loss function for multi-class classification problems, and stochastic gradient descent (SGD) is the most common algorithm for optimizing this loss. Standard results show that the generalization error decays at a rate of $O(\sqrt{k/n})$, where $k$ is the number of class labels and $n$ is the number of data points. These results (which we review later) make no assumptions about the underlying distribution of classes. In this paper, we assume that the class labels have an underlying metric structure, and we introduce the tree loss for exploiting this structure. The doubling dimension $c$ of a metric space is a common measure of the complexity of the metric, and we show that SGD applied to the tree loss converges at a rate of $O(\sqrt{\log k/n})$ when $c\le 1$ and $O(\sqrt{k^{1-1/c}/n})$ when $c\ge 1$. These improvements are most dramatic for small $c$, and it is in this regime that the tree loss improves most over the cross entropy loss. The tree loss is the first multi-class loss function with provably better generalization error than the cross entropy loss in any regime. Our paper is organized as follows. We begin in Section \ref{sec:related} by discussing the limitations of related loss functions and how the tree loss addresses those limitations. Section \ref{sec:problem} then formally defines the problem setting, and Section \ref{sec:tree} formally defines the tree loss. We emphasize that the tree loss is simply a reparameterization of the standard cross entropy loss, and so it is easy to implement in common machine learning libraries. Section \ref{sec:theory} reviews standard results on the convergence of stochastic gradient descent and uses those results to prove the convergence bounds for the tree loss. Finally, Section \ref{sec:experiment} conducts experiments on synthetic data, real world image data (CIFAR100, ImageNet), and text data (Twitter). We show that in practice, the tree loss essentially always outperforms other multi-class loss functions.
\begin{figure*} \resizebox{\columnwidth}{!}{ \begin{tikzpicture} [ level distance=1.5cm , level 1/.style={sibling distance=3cm} , level 2/.style={sibling distance=1.5cm} ] \node[draw, rounded corners=0.1in, inner sep=0.1in] at (-1.5in,0){\textbf{$U$-Tree}}; \node {\texttt{truck}} child {node {\texttt{boxer}} child {node {\texttt{tiger}}} child {node {\texttt{skunk}}} child {node {\texttt{boxer}} child {edge from parent[draw=none]} child {node {\texttt{bulldog}}} child {node {\texttt{boxer}}} child {node {\texttt{husky}}} child {node {\texttt{sheepdog}}} } child {node {\texttt{bear}}} child {edge from parent[draw=none]} } child {node {\texttt{truck}} child {edge from parent[draw=none]} child {node {\texttt{truck}}} child {node {\texttt{bus}}} } child {node {\texttt{toaster}}} ; \end{tikzpicture} } ~~~ \resizebox{\columnwidth}{!}{ \begin{tikzpicture} [ level distance=1.5cm , level 1/.style={sibling distance=3cm} , level 2/.style={sibling distance=1.5cm} ] \node[draw, rounded corners=0.1in, inner sep=0.1in] at (-1.5in,0){\textbf{$V$-Tree}}; \node {\textit{pseudo1}} child {node {\textit{pseudo2}} child {node {\texttt{tiger}}} child {node {\texttt{skunk}}} child {node {\textit{pseudo3}} child {edge from parent[draw=none]} child {node {\texttt{bulldog}}} child {node {\texttt{boxer}}} child {node {\texttt{husky}}} child {node {\texttt{sheepdog}}} } child {node {\texttt{bear}}} child {edge from parent[draw=none]} } child {node {\textit{pseudo4}} child {edge from parent[draw=none]} child {node {\texttt{truck}}} child {node {\texttt{bus}}} } child {node {\texttt{toaster}}} ; \end{tikzpicture} } \caption{ Example label tree structures for a subset of 10 ImageNet classes. The $U$-tree has class labels that repeat at multiple levels, and the $V$-tree introduces ``pseudoclasses''. The pseudoclass \textit{pseudo3} represents the class of ``dogs'', and the pseudoclass \textit{pseudo2} represents the class of ``animals''. } \label{fig:labeltree} \end{figure*} \section{Related Work} \label{sec:related} Previous work on multi-class loss functions has focused on improving either the statistical or computational performance. Statistical work includes loss functions designed for hierarchically structured labels \citep{cesa2006incremental,wu2017hierarchical,bertinetto2020making}, loss functions for improving top-$k$ accuracy \citep{lapin2016loss}, and loss functions for noisy labels \citep{sukhbaatar2014training,zhang2018generalized}. The work most similar to ours is the SimLoss \citep{Kobs2020SimLossCS}, which also makes a metric assumption about the class label structure. Our work improves on all of this statistical work by being the first to provide convergence bounds that improve on the bounds of the cross entropy loss. Other work focuses on improving the speed of multiclass classification. The hierarchical softmax \citep{morin2005hierarchical} is a particularly well studied modification of the cross entropy loss with many variations \citep[e.g.][]{Peng2017IncrementallyLT,Jiang2017ExplorationOT,Yang2017OptimizeHS,Mohammed2018EffectivenessOH}. It is easily confused with our tree loss because both loss functions involve a tree structure. The difference, however, is that the hierarchical softmax focuses on improving runtime performance, and most variants actually sacrifice statistical performance to do so. The tree loss, in contrast, maintains the runtime performance of the standard cross entropy loss but improves the statistical performance.
\section{Problem Setting} \label{sec:problem} We consider the multi-class classification setting with $k$ classes and $d$ input feature dimensions. The cross entropy loss is the standard loss function for this setting. It is defined to be \begin{equation} \label{eq:xentropy} \ell(W;(\mathbf x,y)) = - \log \frac {\exp(-\trans\mathbf w_y \mathbf x)}{\sum_{j=1}^k \exp(-\trans \mathbf w_j \mathbf x)} \end{equation} where for each class $i\in[k]$, $\mathbf w_i : \mathbb R^d$ is the parameter vector associated with class $i$; the variable $W : \mathbb R^{k \times d} = (\mathbf w_1; \mathbf w_2; ...; \mathbf w_k)$ is the full parameter matrix; $\mathbf x : \mathbb R^d$ is the input feature vector; and $y \in [k]$ is the input class label. The cross entropy loss has no constraints on the weight matrix that would cause similar classes to have similar parameter vectors. The tree loss adds these constraints, resulting in faster convergence and better generalization. \section{The Tree Loss} \label{sec:tree} The main idea of the tree loss is that similar classes should be forced to ``share'' some of their parameters. The tree loss refactors the cross entropy loss's weight matrix in order to enforce this sharing. We propose two variants of the tree loss which enforce this sharing in different ways. We begin by introducing the $U$-tree loss, which is simpler to explain and analyze. Then, we introduce the $V$-tree loss, which improves on the $U$-tree loss and is the loss we suggest using in practice. Whenever we refer to the tree loss without a qualifier, we mean the $V$-tree loss. \subsection{The $U$-Tree Loss} Explaining our new $U$-tree parameterization requires introducing some notation. We define a $U$-tree over the class labels to be a tree where each leaf node is represented by a label, and each internal node shares a label with a child node. Figure \ref{fig:labeltree} (left) shows an example $U$-tree structure. The definition of a $U$-tree is motivated by the cover tree data structure \citep{beygelzimer2006cover,izbicki2015faster}, which generates $U$-tree structures given a metric over the class labels. For each class $i$, we let the sequence $P_i$ denote the path from the leaf node to the root with duplicates removed. For example, using the $U$-tree in Figure \ref{fig:labeltree}, we have \begin{equation*} \begin{split} P_{\texttt{bear}} &= (\texttt{bear}, \texttt{boxer}, \texttt{truck}) \\ P_{\texttt{sheepdog}} &= (\texttt{sheepdog}, \texttt{boxer}, \texttt{truck}) \\ P_{\texttt{truck}} &= (\texttt{truck}) \end{split} \end{equation*} For each class $i$, we define $\parent{i}$ to be the first node in $P_i$ not equal to $i$. In other words, $\parent{i}$ is the parent of the upper-most node for class $i$. We are now ready to introduce the $U$-tree parameterization. We associate with each class $i$ a new parameter vector $\mathbf u_i$ defined to be \begin{equation} \mathbf u_i = \begin{cases} \mathbf w_i & \text{if $i$ is the root} \\ \mathbf w_i - \mathbf w_{\parent{i}} & \text{otherwise} \end{cases} , \end{equation} and we define the parameter matrix $U : \mathbb R^{k\times d}$ to be $(\mathbf u_1;\mathbf u_2;...;\mathbf u_k)$. We can recursively rewrite the original parameter vectors in terms of this new parameterization as the telescoping sum \begin{equation} \mathbf w_i = \sum_{j\in P_i} \mathbf u_j .
\label{eq:uu} \end{equation} The $U$-tree loss is then defined by substituting \eqref{eq:uu} into \eqref{eq:xentropy} to get \begin{equation*} \ell(U;(\mathbf x,y)) = - \log \frac {\exp(-\sum_{j\in P_y}\trans\mathbf u_j \mathbf x)}{\sum_{i=1}^k \exp(- \sum_{j\in P_i}\trans\mathbf u_j \mathbf x)} \end{equation*} We use the same $\ell$ notation for the standard cross entropy loss function and the $U$-tree loss because the function is exactly the same; the only difference is how we represent the parameters during the training procedure. \subsection{The $V$-Tree Loss} The $V$-tree is constructed from the $U$-tree by replacing all non-leaf nodes with ``pseudoclasses''. Let $k'$ denote the total number of classes plus the total number of pseudoclasses. We let $P_i$ be defined as the path from leaf node $i$ to the root as before, but note that there are now no duplicates to remove and so the paths are longer. For example, \begin{equation*} \begin{split} P_{\texttt{bear}} &= (\texttt{bear}, \textit{pseudo2}, \textit{pseudo1}) \\ P_{\texttt{sheepdog}} &= (\texttt{sheepdog}, \textit{pseudo3}, \textit{pseudo2}, \textit{pseudo1} ) \\ P_{\texttt{truck}} &= (\texttt{truck}, \textit{pseudo4}, \textit{pseudo1} ) \end{split} \end{equation*} The $\mathbf v_i$ and $V$ parameters are now defined analogously to the $\mathbf u_i$ and $U$ parameters, but over the $V$-tree structure instead of the $U$-tree structure. An important distinction between the $V$ and $U$ parameter matrices is that they have different shapes. The $U$ matrix has shape $k \times d$, which is the same as the $W$ matrix, but the $V$ matrix has shape $k' \times d$, which is potentially a factor of up to 2 times larger. The $V$-tree loss is defined by using the $V$ matrix to parameterize the cross entropy loss, giving \begin{equation} \ell(V;(\mathbf x,y)) = - \log \frac {\exp(-\sum_{r\in P_y}\trans\mathbf v_r \mathbf x)}{\sum_{j=1}^k \exp(- \sum_{r\in P_j}\trans\mathbf v_r \mathbf x)} . \end{equation} We continue to use the $\ell$ function to represent both the standard cross entropy loss and the $V$-tree loss function to emphasize that these are the same loss functions, just with different parameterizations of the weight matrix. \subsection{Intuition} The intuition behind the $V$-tree reparameterization is that whenever we perform a gradient update on a data point with class $i$, we will also ``automatically'' update the parameter vectors of similar classes. To see this, note that when we have a \texttt{sheepdog} data point, we will perform gradient updates on all $\mathbf v_j$ with $j \in P_{\texttt{sheepdog}}$, i.e.\ $\mathbf v_{\texttt{sheepdog}}$, $\mathbf v_{\textit{pseudo3}}$, $\mathbf v_{\textit{pseudo2}}$, and $\mathbf v_{\textit{pseudo1}}$. This will cause the $\mathbf w_{\texttt{husky}}$ and $\mathbf w_{\texttt{bear}}$ parameters (among many others) to update because they also depend on the pseudoclass vectors $\mathbf v_{\textit{pseudo3}}$ and $\mathbf v_{\textit{pseudo2}}$. Because $P_{\texttt{husky}}$ has a larger overlap with $P_{\texttt{sheepdog}}$ than $P_{\texttt{bear}}$ does, the parameters of this more similar class will be updated more heavily. This reparameterization is reminiscent of the way that fastText \citep{bojanowski2017enriching} reparameterizes the word2vec \citep{Mikolov2013EfficientEO} model to improve statistical efficiency. Two notable differences, however, are that fastText is a domain-specific adaptation and provides no theoretical guarantees; our tree loss works on any domain and provides theoretical guarantees.
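\noindent This parameter sharing is straightforward to express in code. The following PyTorch sketch is ours (it is not the TreeLoss library described below, and the tiny hard-coded tree, dimensions, and sign convention are illustrative only): it encodes each path $P_i$ as a row of a 0/1 matrix $M$, so that $W = MV$ and the built-in cross entropy can be applied unchanged.
\begin{verbatim}
import torch
import torch.nn.functional as F

# M[i, j] = 1 iff node j lies on the path P_i from leaf class i to the
# root of the V-tree.  Toy tree: root -> {pseudo -> {class0, class1},
# class2}, with columns ordered (class0, class1, class2, pseudo, root).
M = torch.tensor([[1., 0., 0., 1., 1.],
                  [0., 1., 0., 1., 1.],
                  [0., 0., 1., 0., 1.]])

d, k_prime = 8, 5
V = torch.randn(k_prime, d, requires_grad=True)   # the V parameter matrix

x = torch.randn(16, d)                            # minibatch of features
y = torch.randint(0, 3, (16,))                    # class labels

W = M @ V                           # w_i = sum of v_j along the path P_i
loss = F.cross_entropy(x @ W.t(), y)
loss.backward()     # a sheepdog-style update also moves the shared v_j's
\end{verbatim}
Because the pseudoclass rows of $V$ appear in several rows of $M$, a gradient step on one class automatically moves the parameter vectors of every class that shares those pseudoclasses.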
\subsection{Implementation Notes} Both the $U$-tree and $V$-tree losses are easy to implement in practice using the built-in cross entropy loss function in libraries like PyTorch \citep{NEURIPS2019_9015} or Tensorflow \citep{tensorflow2015-whitepaper}. The only change needed is to represent the $W$ parameter matrix in your code as an appropriate sum over the $U$ or $V$ parameter matrices. The magic of automatic differentiation will then take care of the rest and ensure that the underlying $U$ or $V$ matrix is optimized. Our TreeLoss library\footnote{\url{https://github.com/cora1021/TreeLoss}} provides easy-to-use functions for generating these matrices, and so modifying code to work with the tree loss is a one-line code change. In practice, the tree loss is slightly slower than the standard cross entropy loss due to the additional summation over the paths. In our experiments, we observed an approximately 2x slowdown. Modifications to the cross entropy loss that improve runtime (such as the hierarchical softmax or negative sampling) could also be used to improve the runtime of the tree loss, but we do not explore this possibility in depth in this paper. \section{Theoretical Results} \label{sec:theory} We use standard properties of stochastic gradient descent to bound the generalization error of the tree loss. In this section we formally state our main results, but we do not review the details of stochastic gradient descent or prove the results. The Appendix reviews stochastic gradient descent in detail and uses those results to prove the theorems below. To state our results, we first must introduce some learning theory notation. Define the true loss of our learning problem as \begin{equation} L_D(A) = \E_{(\mathbf x,y)\sim D} \ell(A; (\mathbf x, y)) \end{equation} where $A \in \{W, U, V\}$ is a parameterization of the cross entropy loss. We define $\bar A$ to be the result of running SGD on $n$ data points to optimize parameterization $A$, and \begin{equation} \star A = \argmin_{A} L_D(A) \end{equation} to be the optimal parameterization. Our goal is to bound the generalization error \begin{equation} \E L_D(\bar A) - L_D(\star A). \end{equation} In order to bound this error, we make the following standard assumption about our data. \begin{assumption} \label{ass:lip} For each feature vector $\mathbf x$ in the data set, $\ltwo{\mathbf x} \le \rho$. \end{assumption} This assumption is equivalent to stating that $L_D$ (or $\ell$) is $\rho$-Lipschitz. Now our key observation is that $\lF{\star A}$ bounds the generalization error, as formalized in the following lemma. \begin{lemma} \label{ref:cor:A} Under Assumption \ref{ass:lip}, we have that for any parameterization $A$ of the cross entropy loss, \begin{equation} \E L_D(\bar A) - L_D(\star A) \le \frac{\lF{\star A}\rho}{\sqrt n}. \label{eq:Aconv} \end{equation} \end{lemma} We will next show how to use Lemma \ref{ref:cor:A} to recover the standard convergence rate of multi-class classification by bounding $\lF{\star W}$. We make the following assumption. \begin{assumption} \label{ass:B} For each class $i$, the optimal parameter vector $\star\mathbf w_i$ satisfies $\ltwo{\star\mathbf w_i} \le B$. \end{assumption} It follows that \begin{equation} \lF{\star W}^2 = \sum_{i=1}^k \ltwo{\star\mathbf w_i}^2 \le kB^2. \label{eq:starW} \end{equation} Substituting Eq \eqref{eq:starW} into \eqref{eq:Aconv} gives the following bound.
\begin{corollary} \label{theorem:xentropy} Under Assumptions \ref{ass:lip} and \ref{ass:B}, the generalization error of the standard cross entropy parameterization when trained with SGD satisfies \begin{equation} \E L_D(\bar W) - L_D(\star W) \le \frac {\sqrt kB\rho}{\sqrt n} . \end{equation} \end{corollary} Next, we bound $\lF{\star U}$ and $\lF{\star V}$ in order to bound the generalization error of the $U$/$V$ parameterizations. The analysis is divided into two parts. First, we make no assumption that the $U$-tree structure is good, and we show that even in the worst case $\lF{\star U} = O(\lF{\star W})$. This implies that using the tree loss cannot significantly hurt our performance. \begin{lemma} \label{lemma:starU} Under Assumption \ref{ass:B}, the following bound holds for all $U$/$V$-tree structures: \begin{equation} \lF{\star V} \le \lF{\star U} \le 2\sqrt{k}B. \end{equation} \end{lemma} Now we consider the more important case when we have a tree structure that meaningfully captures the similarity between classes. This idea is captured in our final assumption. \begin{assumption} \label{ass:metric} Let $\lambda \ge 1$, and let $d$ be a distance metric over the labels such that for all labels $i$ and $j$, \begin{equation} \frac 1 \lambda d(i,j) \le \ltwo{\star \mathbf w_i - \star \mathbf w_j} \le \lambda d(i, j). \end{equation} We denote by $c$ the doubling dimension of the metric $d$, and we assume that the $U$-tree structure is built using a cover tree \citep{beygelzimer2006cover}. \end{assumption} The $\lambda$ parameter above measures the quality of our metric. When $\lambda=1$, the metric $d$ is good and perfectly predicts the distance between parameter vectors; when $\lambda$ is large, the metric $d$ is bad. The cover tree was originally designed for speeding up nearest neighbor queries in arbitrary metric spaces. The definition is rather technical, so we do not restate it here. Instead, we mention only two important properties of the cover tree. First, it can be constructed in time $O(k)$. This is independent of the number of training examples $n$, so building the $U$-tree structure is a cheap operation that has no meaningful impact on training performance. Second, the cover tree has a hyperparameter which we call \texttt{base}. This hyperparameter controls the fanout and depth of the tree because at any node $i$ at depth $\depth{i}$ in the tree, the cover tree maintains the invariant that $d(i, \parent{i}) \le \texttt{base}^{-\depth{i}}$. Increasing the \texttt{base} hyperparameter results in shorter, fatter trees, and decreasing it results in taller, narrower trees. Our analysis follows the convention of setting $\texttt{base}=2$, but we show in the experiments below that good performance is achieved for a wide range of \texttt{base} values. The following lemma uses Assumption \ref{ass:metric} and properties of the cover tree to bound the norm $\lF{\star U}$. It is our most difficult technical result. \begin{lemma} \label{lemma:main} Under Assumptions \ref{ass:B} and \ref{ass:metric}, when $c\le1$, we have that \begin{equation} \lF{\star U} \le \tfrac{1}{\sqrt2}\lambda B \sqrt{\log_2 k}, \label{eq:c<=1} \end{equation} and when $c>1$, we have that \begin{equation} \lF{\star U} \le \sqrt{5}\lambda B \sqrt{k^{(1-1/c)}}. \label{eq:c>1} \end{equation} \end{lemma} We note that embedding techniques can be used to reduce the intrinsic dimension of the metric ($c$) at the expense of increasing the metric's distortion ($\lambda$), but we make no attempt to optimize this tradeoff in this paper.
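\noindent As a quick numerical sanity check of Lemma~\ref{lemma:starU} (a sketch under our own toy setup, with a deliberately arbitrary tree rather than a cover tree), the reparameterized norm stays below the worst-case bound $2\sqrt{k}B$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k, d, B = 64, 16, 1.0
W = rng.standard_normal((k, d))
W *= B / np.linalg.norm(W, axis=1, keepdims=True)  # enforce ||w_i|| = B

# A deliberately arbitrary tree: node 0 is the root, parent(i) = i // 2.
U = W.copy()
for i in range(1, k):
    U[i] = W[i] - W[i // 2]       # u_i = w_i - w_{parent(i)}

print(np.linalg.norm(U))          # Frobenius norm of U
print(2 * np.sqrt(k) * B)         # worst-case bound from the lemma
\end{verbatim}
With a tree that respects the metric of Assumption~\ref{ass:metric}, one would expect the printed norm to fall well below this worst-case value, in line with Lemma~\ref{lemma:main}.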
We now state our main result. It is an immediate consequence of Lemmas \ref{ref:cor:A}, \ref{lemma:starU}, and \ref{lemma:main}. \begin{corollary} \label{cor:main} Under Assumptions \ref{ass:lip}, \ref{ass:B}, and \ref{ass:metric}, when $c\le1$, the generalization error of the tree loss is bounded by \begin{equation} \E L_D(\bar V) - L_D(\star V) \le \frac {\lambda B\rho \sqrt{\log_2 k}}{\sqrt{2n}} . \end{equation} When $c>1$, the generalization error of the tree loss is bounded by \begin{equation} \E L_D(\bar V) - L_D(\star V) \le \frac {\lambda B\rho \sqrt{5 k^{(1-1/c)}}}{\sqrt n} . \end{equation} \end{corollary} These convergence rates are asymptotically better than the convergence rates for the standard parameterization of the cross entropy loss. \begin{figure*} \centering \includegraphics[width=\columnwidth,height=1.45in]{fig/images/accuracy_vs_n.png} \includegraphics[width=\columnwidth,height=1.45in]{fig/images/accuracy_vs_d.png} \includegraphics[width=\columnwidth,height=1.45in]{fig/images/accuracy_vs_class.png} \includegraphics[width=\columnwidth,height=1.45in]{fig/images/accuracy_vs_sigma.png} \caption{ Results of Synthetic Experiment I. The x-axis of each plot shows which problem parameter is being varied. As our theory predicts, the tree loss outperforms the baseline loss functions in all data regimes. } \label{fig:synth:1} \end{figure*} \section{Experiments} \label{sec:experiment} We evaluate the tree loss on 4 synthetic and 3 real world experiments. We compare the tree loss to the standard cross entropy loss, the recently proposed SimLoss \citep{Kobs2020SimLossCS}, and the hierarchical softmax \citep{morin2005hierarchical}. The results confirm our theoretical findings from Section \ref{sec:theory} and demonstrate that the tree loss significantly outperforms these baseline losses in a wide range of scenarios. \subsection{Synthetic data} Experiments on synthetic data let us control various hyperparameters of the dataset in order to see how the tree loss behaves in a wide range of scenarios. The next subsection introduces our data generation procedure, and the subsequent subsections describe 4 different experiments based on this procedure. \subsubsection{Data Generation Procedure} \label{sec:exp:synth:problem} Our data generation procedure has 4 key hyperparameters: the number of data points $n$, the number of feature dimensions $d$, the number of classes $k$, and a randomness parameter $\sigma$. Let $\mathcal N$ be the standard normal distribution. Then sample the true parameter matrix as \begin{align} \star W \sim \mathcal N^{k\times d} . \end{align} Standard results on random matrices show that $\lF{\star W} = O(\sqrt{kd})$ with high probability, as assumed by our theory. For each data point $i\in[n]$, we sample the data point according to the following rules: \begin{align} y_i &\sim \text{Uniform}([k]), \text{and} \\ \mathbf x_{i} &\sim \mathcal N(\mathbf w^*_{y_i}; \sigma). \end{align} Observe that larger $\sigma$ values result in more noise in the data points, making the classes harder to distinguish and increasing the Bayes error of the problem. Also observe that for any two classes $i$ and $j$, the distance between the two classes satisfies $\ltwo{\star\mathbf w_i - \star\mathbf w_j} = O(\sqrt{d})$. This implies that as $d$ increases, the separation between data points from different classes also increases, reducing the Bayes error of the problem.
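\noindent For concreteness, this generation procedure amounts to a few lines of NumPy (a sketch; the function name and the seed handling are ours):
\begin{verbatim}
import numpy as np

def make_dataset(n, d, k, sigma, seed=0):
    rng = np.random.default_rng(seed)
    W_star = rng.standard_normal((k, d))    # true parameter matrix
    y = rng.integers(0, k, size=n)          # y_i ~ Uniform([k])
    x = W_star[y] + sigma * rng.standard_normal((n, d))
    return W_star, x, y                     # x_i ~ N(w*_{y_i}, sigma^2 I)

W_star, x, y = make_dataset(n=100, d=64, k=10, sigma=1.0)
\end{verbatim}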
\subsubsection{Experiment I: Data Regimes} \label{sec:synth:1} Our first experiment investigates the tree loss's performance in a wide range of statistical settings controlled by the dataset hyperparameters ($n$, $d$, $k$, and $\sigma$). We use a baseline experimental setting with $n=100$, $d=64$, $k=10$, and $\sigma=1$. For each of the hyperparameters, we investigate its effect on performance by varying it over a large range. For each value in the range, we: (1) randomly sample 50 different $\star W$ using the procedure described in Section \ref{sec:exp:synth:problem}; (2) for each $\star W$, we: (2a) sample a training set with $n$ data points and a test set with 10000 data points\footnote{The size of the test set is fixed and large to ensure that our results have high statistical significance.}; (2b) train a separate model for each of our loss functions; and (2c) report the average test accuracy across the 50 samples from step (1). Figure \ref{fig:synth:1} shows the results of these experiments. As our theory suggests, the tree loss outperforms all other losses in all regimes. Consider the top-left plot. This plot has a low Bayes error, and so all the variation in performance we see is due to the generalization error of the models. As the number of data points $n$ grows large, its influence on the standard cross entropy's convergence rate of $O(\sqrt{k/n})$ dominates, and the model generalizes well. When the number of data points is small, the dependence on $k$ becomes more important. The tree loss's dependence of $O(\sqrt{k^{1-1/c}/n})$ is strictly better than the cross entropy loss's, explaining the observed improved performance. A similar effect explains our model's observed improved performance in the bottom-left plot as $k$ varies. Now consider the top-right plot. The Bayes error of our problem setup is inversely related to the problem dimension $d$ (see the observations in Section \ref{sec:exp:synth:problem}), so this plot compares performance on different problem difficulties. On maximally easy (large $d$) and maximally hard (small $d$) problems, the tree loss performs similarly to the other loss functions because the performance is dominated by the Bayes error and no improvement is possible. The main advantage of the tree loss is in the mid-difficulty problems. In this regime, performance is dominated by the generalization ability of the different losses, and the tree loss's improved generalization results in noticeable improvements. The bottom-right plot tells a similar story but controls the difficulty of the problem directly by tuning the randomness $\sigma$. \subsubsection{Experiment II: Parameter Norms} Lemma \ref{ref:cor:A} suggests that the norm of the parameter matrix is what controls the convergence rate of SGD, and the proof of our main result in Corollary \ref{cor:main} relies on bounding this norm. In this experiment, we directly measure the norm of the parameter matrices and show that $\lF{V}$ is significantly less than $\lF{W}$, justifying the good convergence rates we observed in Synthetic Experiment I above. We set $n=1000$, $d=10$, and $\sigma=1$, and vary the number of classes $k$. Figure \ref{fig:synth:norm} shows that $\lF{\star V}$ grows much more slowly than $\lF{\star W}$. Notice in particular that $\lF{\star W}$ grows at the rate $O(\sqrt{k})$ while $\lF{\star V}$ grows at a rate $o(\sqrt{k})$, as our theory predicts.
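\noindent A condensed sketch of the training harness used in these synthetic experiments is given below (our simplification: full-batch SGD on a linear model, with the path matrix $M$ from the earlier sketch; setting $M$ to the identity recovers the standard cross entropy baseline):
\begin{verbatim}
import torch
import torch.nn.functional as F

def train(x, y, M, epochs=100, lr=0.1):
    # M: (k x k') path matrix; M = torch.eye(k) gives plain cross entropy
    V = torch.zeros(M.shape[1], x.shape[1], requires_grad=True)
    opt = torch.optim.SGD([V], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(x @ (M @ V).t(), y)
        loss.backward()
        opt.step()
    return (M @ V).detach()       # the learned W matrix

# with x, y generated as in the previous sketch:
# W_hat = train(torch.as_tensor(x, dtype=torch.float32),
#               torch.as_tensor(y), M=torch.eye(10))
\end{verbatim}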
\begin{figure} \includegraphics[width=\columnwidth,height=1.5in]{fig/images/class_v_norm.png} \caption{ Results of Synthetic Experiment II. As we increase $k$, $\lF{\star V}$ grows significantly more slowly than $\lF{\star W}$. Since the convergence rate of SGD is proportional to $\lF{\cdot}$ (by Lemma \ref{ref:cor:A}), the tree loss converges faster than the standard cross entropy loss. } \label{fig:synth:norm} \end{figure} \subsubsection{Experiment III: Tree Shapes} In this experiment, we study how the shape of our tree structure impacts performance. We set $n=1000$, $d=10$, $k=100$, and $\sigma=1$. We use a relatively large number of classes $k$ compared to the standard problem in Section \ref{sec:synth:1} above in order to ensure that we have enough classes to generate meaningfully complex tree structures. The basic trends we observe are consistent for all other values of $n$, $d$, and $\sigma$. Recall that the cover tree has a parameter \texttt{base} which controls the rate of expansion between layers of the tree. Reducing \texttt{base} increases the height of the tree, and increasing it reduces the height. This affects the performance of our tree loss because taller trees result in more parameter sharing. Figure \ref{fig:ct:acc} plots the accuracy of the tree loss as a function of the \texttt{base} hyperparameter. Across the entire range, the tree loss outperforms the cross entropy loss. Interestingly, the tree loss's accuracy is maximized when $\texttt{base}\approx1.3$. The original cover tree paper \citep{beygelzimer2006cover} also found that a \texttt{base} of 1.3 resulted in the fastest nearest neighbor queries. We do not know of a good theoretical explanation for this phenomenon. \begin{figure} \includegraphics[width=\columnwidth,height=1.5in]{fig/new_img/accuracy_vs_base.png} \caption { Results of Synthetic Experiment III. The tree loss outperforms the standard cross entropy loss for all values of \texttt{base}. } \label{fig:ct:acc} \end{figure} \subsubsection{Experiment IV: Metric Quality} \label{sec:synth:eps} Recall that our theoretical results state that the convergence rate of SGD depends on a factor $\lambda$ that measures how well our metric over the class labels predicts the distances between the true parameter vectors (see Assumption \ref{ass:metric}). In this experiment, we study the effect of this $\lambda$ parameter on prediction accuracy. We fix the following problem hyperparameters: $n=100$, $d=64$, $k=10$, and $\sigma=1$. Then we construct a bad parameter matrix $\bad W$ using the same procedure we used to construct the optimal parameter matrix; that is, \begin{equation} \bad W \sim \mathcal N ^ {k\times d} . \end{equation} Each row $\bad \mathbf w_i$ of $\bad W$ is now a $d$-dimensional random vector that has absolutely no relation to the true parameter vector. Next, we define a family of ``$\epsilon$-bad'' parameter vectors that mix between the bad and optimal parameter vectors: \begin{equation} \mathbf w^\epsilon_i = (1-\epsilon) \mathbf w^*_i + \epsilon \bad\mathbf w_i. \end{equation} Finally, we define our $\epsilon$-bad distance metric as \begin{equation} d_\epsilon(i,j) = \ltwo{\mathbf w_i^\epsilon - \mathbf w_j^\epsilon} \end{equation} and build our cover tree structure using $d_\epsilon$. When $\epsilon=0$, this cover tree structure is the ideal structure and $\lambda$ equals 1; when $\epsilon=1$, this cover tree structure is maximally bad, and $\lambda$ is large.
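\noindent The $\epsilon$-bad construction is again only a few lines (our sketch; the helper name is illustrative):
\begin{verbatim}
import numpy as np

def epsilon_bad_metric(W_star, eps, seed=0):
    rng = np.random.default_rng(seed)
    W_bad = rng.standard_normal(W_star.shape)   # unrelated to W_star
    W_eps = (1 - eps) * W_star + eps * W_bad    # eps-bad parameters
    diff = W_eps[:, None, :] - W_eps[None, :, :]
    return np.linalg.norm(diff, axis=-1)        # d_eps as a k x k matrix
\end{verbatim}
The resulting matrix of pairwise distances is what the cover tree construction consumes.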
Figure \ref{fig:synth:eps} shows the performance of the tree loss as a function of $\epsilon$. Remarkably, the tree loss outperforms the standard cross entropy loss even when using a perfectly bad tree structure (i.e. when $\epsilon=1$). This surprising empirical finding actually agrees with our theoretical results in two ways. First, Lemma \ref{lemma:starU} implies that $\lF{\star V} = O(\lF{\star W})$, and so no tree structure can perform asymptotically worse than the standard cross entropy loss. Second, when $\epsilon=1$ and we have a perfectly random tree structure, the distance $d_\epsilon$ can still be embedded into the ideal metric $d$ with some large (but finite!) $\lambda$. Asymptotically, Corollary \ref{cor:main} then implies that SGD converges at the rate $O(\sqrt{k^{1-1/c}})$, which is still better than the convergence rate of the standard cross entropy loss. \begin{figure} \includegraphics[width=\columnwidth,height=1.5in]{fig/new_img/loss_vs_structure.png} \caption{ Results for Synthetic Experiment IV. As $\epsilon\to1$, the tree structure becomes perfectly random. The tree loss still outperforms the cross entropy loss in this worst-case scenario. } \label{fig:synth:eps} \end{figure} \subsection{Real World Data} \begin{table*} \centering \input{fig/real_world.tex} \caption{Experimental results on real world datasets. For all performance measures, larger numbers are better. The tree loss achieves the best results in all cases.} \label{table:results} \end{table*} We now validate the tree loss on real world image (CIFAR100, ImageNet) and text (Twitter) data. For each dataset, we report standard top-1 accuracy scores and the less well known \emph{similarity accuracy} (SA) score. SA is a variant of top-1 accuracy that accounts for the similarity between classes. The idea is that misclassifying a \texttt{sheepdog} as a \texttt{husky} should be penalized less than misclassifying a \texttt{sheepdog} as a \texttt{bus}, because the \texttt{sheepdog} and \texttt{husky} classes are more similar to each other. Table \ref{table:results} shows that on each of these datasets and for each metric, the tree loss outperforms the baseline cross entropy loss and SimLoss.\footnote{We do not evaluate against the hierarchical softmax due to limited computational resources. The results on the synthetic data, however, suggest that the hierarchical softmax would have performed poorly.} The remainder of this section describes the experimental procedures for each dataset in detail. \subsubsection{CIFAR100} CIFAR100 is a standard image dataset with 100 classes \citep{krizhevsky2009learning}. It is more difficult than MNIST and CIFAR10, but small and therefore less computationally demanding than ImageNet. In this experiment, we exactly follow the procedure used in the SimLoss paper \citep{Kobs2020SimLossCS}, and find that under their experimental conditions, the tree loss has the best performance. First, we generate our distance metric over the class labels using a word2vec model \citep{Mikolov2013EfficientEO} pretrained on GoogleNews. The distance between class labels is defined to be the distance between their corresponding word2vec embeddings. There are 4 class labels with no corresponding word2vec embedding, and so following the SimLoss paper we discard these classes. We train three ResNet20 models on CIFAR100 \citep{He2016DeepRL} using the hyperparameters specified in the original paper. The only difference between the three models is the final layer: one model uses the standard cross entropy loss, one uses the tree loss, and one uses the SimLoss.
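For illustration, a label metric of this kind could be computed as in the following hypothetical sketch. It assumes the \texttt{gensim} library and its pretrained GoogleNews word2vec model; it is not the exact code used in our experiments.
\begin{verbatim}
import numpy as np
import gensim.downloader

# pretrained 300-dimensional GoogleNews embeddings
model = gensim.downloader.load("word2vec-google-news-300")

def label_distance(label_i, label_j):
    # distance between two class labels = Euclidean distance
    # between their word2vec embeddings; labels missing from
    # the vocabulary are discarded upstream
    return np.linalg.norm(model[label_i] - model[label_j])

print(label_distance("apple", "pear"))
\end{verbatim}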
The results shown in Table \ref{table:results} show that the tree loss significantly outperforms the other losses. \subsubsection{ImageNet} ImageNet is the gold-standard dataset for image classification tasks \citep{Russakovsky2015ImageNetLS}. It has 1000 class labels and 1.2 million images. We generate our distance metric over the image labels using a pretrained fastText model \citep{bojanowski2017enriching}. We use fastText rather than word2vec because many of the class labels contain words not in the word2vec model's vocabulary; since fastText is naturally able to handle out-of-vocabulary words, there is no need to discard class labels as we did for CIFAR100. Many ImageNet class label names contain multiple words, and for these classes we generate the embedding as a simple average of the fastText embeddings over all words. We again train a ResNet50 model \citep{He2016DeepRL} using standard hyperparameters, replacing only the last layer of the model. As with CIFAR100, Table \ref{table:results} shows that the tree loss significantly outperforms the other losses. Since the publication of ResNet50 in 2016, there have been many newer network architectures with better performance on ImageNet \citep[e.g.][]{howard2017mobilenets,huang2017densely,pmlr-v97-tan19a}. We unfortunately did not have sufficient computational resources to train these more modern models with the tree loss, and we emphasize that we are not trying to compete directly with these network architectures to achieve state-of-the-art performance. Instead, our goal is to show that for a given network architecture, replacing the cross entropy loss with the tree loss results in improved performance. \subsubsection{Twitter Data} We now consider the problem of predicting the emotions of Twitter text. This is a very different domain from the image problems considered above, and it demonstrates that the tree loss works across many domains. We use the dataset collected by \citet{izbicki2019geolocating} and \citet{stoikos2020multilingual}. This dataset contains all geolocated tweets written in more than 100 languages sent over the 4-year period between October 2017 and October 2021---approximately $5.5\times10^9$ tweets in total. Tweets in the dataset are preprocessed to have all emojis, usernames, and URLs removed from the text, and then the goal is to predict the emoji given only the remaining text. Since most emojis represent emotions, the task of emoji prediction serves as a proxy for emotion prediction. We generate our distance metric over the class labels using the pretrained emoji2vec model \citep{Eisner2016emoji2vecLE}, which associates each emoji with a vector embedding. We then follow the procedure in \citet{stoikos2020multilingual} to train multilingual BERT models \citep{Feng2020LanguageagnosticBS} with the last layers replaced by the three different loss functions. Table \ref{table:results} shows the tree loss significantly outperforming the baseline models. \section{Conclusion} The tree loss is a drop-in replacement for the cross entropy loss for multiclass classification problems. It can take advantage of background knowledge about the underlying class structure in order to improve the convergence rate of SGD. Both theoretical and empirical results suggest there is no disadvantage to using the tree loss, but there are potentially large advantages. \bibliographystyle{plainnat} \section{Introduction} This note introduces \emph{tree regularization}.
For multi-class learning problems, tree regularization achieves a generalization error of $O(\sqrt{\log k / m})$, which is comparable to the tree loss's error of $O(\sqrt{1/m})$. Tree regularization has several advantages over the tree loss, however: \begin{enumerate} \item It can work with any loss function. For example, we can apply tree regularization to multiple regression, multi-label prediction, and object detection problems. \item The optimal tree structure can be learned at training time and the problem remains convex. \item The analysis works for all optimization algorithms, not just stochastic gradient descent. \end{enumerate} \section{Notation} Many regularized learning problems take the form \begin{equation} \argmin_{W} \mathcal{L}(Z ; W) + \lambda R(W)^2 \end{equation} where $Z = \{z_1, ..., z_m\}$ is the training set, $W$ is the model parameters, \begin{equation} \mathcal{L}(Z ; W) = \frac 1m \sum_{i=1}^m \ell(z_i; W) \end{equation} and $R : W \to \mathbb R^+$ is a regularization function. We assume that the parameters $W$ are given as a $k \times d$ matrix. In the multi-class prediction problem, each row $i$ is a $d$-dimensional vector that represents the parameters for class $i$. \section{Tree Regularization} The Tree Regularizer is defined to be \begin{equation} R(W, V, A) = \sum_{i} \left\lVert \mathbf w_i - \sum_{j}A_{i,j} \mathbf v_j \right\rVert = \sum_{i} \left\lVert \mathbf w_i - A_i V \right\rVert \end{equation} The idea of the convergence proof is to show that any loss function that is $\rho$-Lipschitz with respect to the $L_2$ norm is $\rho/k$-Lipschitz with respect to the tree regularizer (for fixed $V$, $A$ and near the optimal solution). Then applying the exercise at the end of the chapter solves the problem.
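As a concrete illustration, a hypothetical sketch of the regularizer computation (the shapes and variable names are our own assumptions about how $W$, $V$, and $A$ might be stored):
\begin{verbatim}
import numpy as np

def tree_regularizer(W, V, A):
    # R(W, V, A) = sum_i || w_i - A_i V ||
    # W: (k, d) per-class parameter vectors
    # V: (p, d) shared internal-node parameter vectors
    # A: (k, p) mixing weights selecting the ancestors of
    #           each class in the tree
    residual = W - A @ V
    return np.linalg.norm(residual, axis=1).sum()

k, p, d = 100, 20, 10
W = np.random.randn(k, d)
V = np.random.randn(p, d)
A = np.random.rand(k, p)
print(tree_regularizer(W, V, A))
\end{verbatim}

\end{document}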
\section*{Acknowledgment} We are very grateful to the organizers of the DiCOVA 2021 Challenge for their efforts in providing the participants with data and a platform for the competition. This research is partially supported by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: [AISG-100E-2020-055 and AISG-GC-2019-001-2A]). \input{EMBC_2022.bbl} \bibliographystyle{IEEEtran} \end{document} \section{Conclusion} In order to identify patients infected with COVID-19 more accurately using audio data, we proposed a unified framework for reliable COVID-19 detection that incorporates multiple useful technologies. First, Gaussian noise-based data augmentation and Focal Loss were introduced to deal with imbalanced data. Based on a ResNet-50 pre-trained on ImageNet, we integrated fine-tuning techniques with transfer learning to adjust the weights of the deep neural network for COVID-19 detection. In addition, in order to make our proposed model more robust and generalizable, we adopted ensemble learning and uncertainty estimation to integrate the predictions from multiple base models. Our experimental results show that the proposed method can effectively identify persons infected with COVID-19 and is superior to other state-of-the-art methods. The fast diagnosis of other respiratory diseases might also benefit from this unified framework, and we leave this challenging extension for future work. \section{Introduction} By September 22nd 2021, the total number of Coronavirus disease 2019 (COVID-19) confirmed cases had reached 230 million worldwide, and unfortunately, the pandemic is still ongoing. The Reverse Transcription Polymerase Chain Reaction (RT-PCR) is the current gold standard for COVID-19 screening. However, RT-PCR is costly in terms of time, manpower and resources \cite{cevik2020virology, vogels2020analytical}. For remote and less developed regions, it can be difficult for people to afford large-scale RT-PCR tests to detect all cases. Even for developed regions, the costly nature of RT-PCR tests can lead to considerable delays in diagnosis when facing a large number of suspicious cases. Therefore, researchers are constantly searching for more cost-effective and easy-to-access test methods that ideally can identify infected individuals on the spot. It has come to researchers' attention that cough is one of the most common respiratory symptoms in the early stage of infection, and some studies have shown that cough audio can potentially be used for quick diagnosis of COVID-19~\cite{laguarta2020covid,coppock2021covid,xia2021uncertainty}. Since respiratory audio is relatively easy to obtain at low cost, we envision that the respiratory audio-based diagnosis approach can be a fast and cheap COVID-19 detection solution for rural and underdeveloped areas \cite{vogels2020analytical}. At present, researchers have developed machine learning \cite{pahar2021covid} and deep learning \cite{laguarta2020covid,pal2021pay,casanova2021deep} algorithms to diagnose COVID-19 through respiratory audio. Their success, to a certain degree, has demonstrated the feasibility of detecting COVID-19 through audio.
Although the strong performance of these algorithms demonstrates that audio-based detection of COVID-19 can be an effective method, the datasets used to develop them were collected by individual research groups or institutions, and might not be easily accessed by other researchers to investigate and validate new methodologies. Unlike the clinical datasets above, crowdsourced data is a new type of data that uses the existing web environment to collect data from volunteers in different regions, allowing a large amount of experimental data to be obtained at the beginning of a respiratory epidemic such as COVID-19. These data can be used as a form of open-source data to facilitate a wide range of research for a variety of tasks. However, these data also have several serious drawbacks. The first is data imbalance. Since crowdsourced data relies on voluntary contributors, it is difficult to keep a balance between positive and negative samples, and balancing the data by randomly removing negative samples or replicating positive samples makes it difficult to train a robust model. In addition, crowdsourced data is collected on online platforms, leading to discrepancies in data quality across recording conditions. In this study, we propose a unified framework for rapidly diagnosing COVID-19 using crowdsourced data. The key components of the framework (Figure \ref{fig:framework_1}) include: (A) \textit{Data Augmentation.} Data augmentation is used in our framework by adding random noise to the audio to produce new data with the same label; this helps when the dataset contains a limited number of positive samples. (B) \textit{ImageNet-pretrained ResNet-50.} Even though there are significant differences between Mel spectrograms and natural images, much research has explored fine-tuning ImageNet-pretrained ResNet-50 models for audio tasks \cite{gong2021psla,palanisamy2020rethinking}. We experimentally verify that pre-trained ResNet-50 weights lead to a significant improvement in our crowdsourced COVID-19 detection framework compared to randomly initialised parameters. (C) \textit{Cost-sensitive Loss.} Cost-sensitive learning places more weight on the costs of prediction errors from minority classes when training a machine learning model. (D) \textit{Deep Ensemble Learning.} Deep ensemble learning is deployed to integrate different base classifiers for better model generalizability. (E) \textit{Uncertainty Estimation.} The quantified uncertainty can be used for selective prediction: keeping low-uncertainty outputs but referring high-uncertainty (unsafe) predictions to doctors for external checks, which helps to improve the robustness of the system. \section{Methods} In this section, we illustrate the key components of our general framework for the COVID-19 detection task. \subsection{Gaussian Noise-based Data Augmentation} To increase the diversity of the training datasets, data augmentation is frequently required. Furthermore, data augmentation can reduce the domain mismatch between the enrolled and test data, according to \cite{Jiang2020TheXS}. Flip, rotation, scale, crop, translation, and other data augmentation techniques are common. In our investigation, we chose to add Gaussian noise, generated by a pseudorandom number generator with a mean of zero and a standard deviation of one, to the raw data. Gaussian noise was used to produce additional synthetic minority samples for our proposed model, which helps to lower the incidence of overfitting in DNNs.
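A minimal sketch of this augmentation step (hypothetical code, assuming the waveform is available as a \texttt{numpy} array; the noise scale \texttt{sigma} and the number of copies are illustrative parameters, not the exact values used in our pipeline):
\begin{verbatim}
import numpy as np

def augment_with_gaussian_noise(audio, sigma=0.005, copies=4):
    # produce `copies` noisy variants of a minority-class
    # waveform; the added noise is zero-mean Gaussian
    return [audio + sigma * np.random.randn(len(audio))
            for _ in range(copies)]
\end{verbatim}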
\subsection{Finetuning on ImageNet-Pretrained ResNet-50} ResNet has proven to be a powerful backbone in the field of audio classification. However, due to the model's large number of parameters, training on small datasets from randomly initialised parameters is insufficient. As a result, much audio classification research has attempted to finetune parameters from an ImageNet-pretrained ResNet-50, and this method has been shown to deliver significant improvements in audio tagging \cite{gong2021psla}, audio classification \cite{palanisamy2020rethinking}, and environmental audio classification \cite{9533654}. Inspired by this previous work, we fine-tuned an ImageNet-pretrained ResNet-50 backbone on the DiCOVA dataset. \subsection{Cost-sensitive Loss (Focal Loss)} In the training of deep learning models, the loss function measures the degree of difference between the predicted values and the ground truth values. The loss function plays the role of a ``supervisor'' in DNNs, guiding the model training towards the global minimum. Typically, Cross Entropy (CE) is used as the loss function, calculated as in Eq.~(\ref{eq5}): \begin{equation} L_{CE} = -\sum_{i=1}^{m}{y_i\cdot{log(p_i)}} \label{eq5} \end{equation} where $y_i$ represents the label of sample $i$, and $p_i$ is the probability that sample $i$ is predicted to be positive. Cross Entropy can produce a good model when the class sizes do not differ much, but it is no longer effective when the data is imbalanced. To address this, Lin et al.~\cite{lin2017focal} proposed a new loss function called Focal Loss, calculated as in Eq.~(\ref{eq6}): \begin{equation} L_{FL} = -\sum_{i=1}^{m}{\alpha_i(1-p_i)^\gamma {log(p_i)}} \label{eq6} \end{equation} where $\gamma$ $(\gamma \geq 0)$ is the focusing parameter, which adjusts the weights of difficult and easy samples, and $(1-p_i )^\gamma$ is called the modulating factor. In addition, $\alpha_i$ adjusts the weights of positive and negative samples. Eq.~(\ref{eq6}) has two important properties: (1) When $p_i\rightarrow1$, $(1-p_i )^\gamma \rightarrow0$: the model's prediction is accurate and the contribution of these easy samples to the loss is quite small. (2) The focusing parameter $\gamma$ smoothly adjusts the rate at which easy examples are down-weighted. When $\gamma=0$, Focal Loss is equivalent to CE, and as $\gamma$ increases, the weights of easy samples are further reduced. Therefore, using Eq.~(\ref{eq6}) as the loss function of our model greatly alleviates the data imbalance problem.
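The following sketch renders Eq.~(\ref{eq6}) in PyTorch for the multi-class case (hypothetical illustration code, not our exact training implementation):
\begin{verbatim}
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # log p_i of the true class for each sample
    log_p = F.log_softmax(logits, dim=-1)
    log_p_true = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    p_true = log_p_true.exp()
    # (1 - p_i)^gamma down-weights easy samples; gamma = 0
    # recovers the cross entropy of Eq. (5)
    loss = -alpha * (1 - p_true) ** gamma * log_p_true
    return loss.mean()
\end{verbatim}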
\subsection{Deep Ensemble Learning} To enhance predictive performance, we introduce an ensemble of independently trained identical models, exploiting the diversity introduced by differences in initialization and mini-batch orderings. It appears that a random initialization of the NN parameters as well as random shuffling of the data points is sufficient for obtaining good performance \cite{lakshminarayanan2016simple,chang2021dicova}. \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{figure/DeepEnsmeble.png} \caption{The models used in the deep ensemble all share the same architecture; the variability comes from the order in which the training data is presented during training. At prediction time the models run in parallel and their outputs are averaged to produce the final prediction.} \label{fig:framework_2} \end{figure} Figure \ref{fig:framework_2} illustrates our ensemble learning technique; we use the average bagging method to integrate base learners for model calibration. To predict audio classification scores, we trained $M$ independent models using the same architecture, hyperparameter settings, and training procedures. The final probability in Eq.~(\ref{eq7}) is the average of the soft-max outputs of these $M$ individually trained models: \begin{equation} y_{final} = \frac{1}{M}\sum_{m=1}^{M}{y_m} \label{eq7} \end{equation} where $y_m$ is the soft-max output of the $m$-th individual model. \subsection{Uncertainty Estimation} We identified the level of disagreement across models within the ensemble suite as the measure of uncertainty, because the softmax probability alone is unable to convey model confidence. Deep ensemble uncertainty has also been found to be more effective than other estimating methodologies~\cite{lakshminarayanan2016simple}. To be more exact, we measured uncertainty using the standard deviation across all $M$ models as follows: \vspace{-0.06in} \begin{equation} \small \sigma(y) = \sqrt{\frac{1}{M}\sum_{m=1}^{M}(P(y=1|x,X_m,Y_m,\theta_m)-\mu)^2}, \vspace{-0.06in} \end{equation} where $\mu$ is the averaged probability. If the uncertainty $\sigma(y)$ is below a predetermined threshold, the model's prediction during digital pre-screening is considered reliable.
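A minimal sketch of the ensemble average of Eq.~(\ref{eq7}) and the disagreement-based uncertainty (hypothetical code; it assumes the per-model positive-class probabilities are stacked in a \texttt{numpy} array):
\begin{verbatim}
import numpy as np

def ensemble_predict(probs):
    # probs: (M, n_samples) soft-max outputs for the positive
    # class from the M individually trained models
    mu = probs.mean(axis=0)    # Eq. (7): averaged probability
    sigma = probs.std(axis=0)  # disagreement across models
    return mu, sigma

mu, sigma = ensemble_predict(np.random.rand(5, 218))
reliable = sigma < 0.1  # keep only low-uncertainty predictions
\end{verbatim}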
\section{Experiments and Results} In this section, we aim to address the following questions: \noindent$\bullet$ \textbf{Q1:} Do data balancing techniques and deep ensemble learning contribute to model performance improvement? \noindent$\bullet$ \textbf{Q2:} Are ImageNet-pretrained ResNet-50 weights better than random initialisation? \noindent$\bullet$ \textbf{Q3:} Is the uncertainty of the proposed models reliable? \noindent\textbf{Dataset} To be objective, we evaluated our model on the DiCOVA2021 Task 1 dataset\footnote{\url{https://dicova2021.github.io/#home}}. The challenge organizers have already split the dataset into a train set and a validation set. The train set contains 822 samples (772 non-COVID-19 and 50 COVID-19). The validation set contains 218 samples (193 non-COVID-19 and 25 COVID-19).\\ \noindent\textbf{Evaluation Metrics} The area under the receiver operating characteristic curve, or \textit{ROC-AUC}, was used to analyze the effectiveness of our approach. In addition, the mean and standard deviation of this metric were computed across 5-fold runs. \noindent\textbf{Hyperparameters} For training, we set the number of epochs to \textit{20} and used \textit{Adam} as the optimizer. The batch size is \textit{16}. \subsection{Results and Discussions} Three sets of experiments were conducted to evaluate the performance of the proposed model. Firstly, to demonstrate the usefulness of the data balancing techniques in our model, we exhibit the results of the various components of the proposed model on the test set. Secondly, we evaluated the differences in model performance between random initial weights and ImageNet-pretrained weights. Lastly, we investigated how the deep ensemble methods help to calibrate the prediction results. \subsubsection{Data Balancing Techniques and Deep Ensemble Learning for Model Performance} We compared the performance of the ImageNet-pretrained ResNet-50-based model using two different loss functions (Cross Entropy (CE) and Focal Loss (FL)) and two different data augmentation methods (Simple Duplication of the Minority Class (Dul) and Gaussian Noise (Gua)). In addition, we also added three baselines given by DiCOVA2021 into the comparison, all of which are based on traditional classifiers (SVM, Random Forest and Multilayer Perceptron) using traditional acoustic MFCC features. \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{figure/DiCOVA2021.png} \caption{DiCOVA2021 Test Result} \label{fig:framework_3} \end{figure} As shown in Figure~\ref{fig:framework_3}, the performance of all variants of our proposed ImageNet-pretrained ResNet-50-based model is better than the challenge's official baselines. In terms of the loss function, the model with CE performs better than that with FL on the test set, and overfitting occurs when FL is used for training on the original training set. However, after using Dul or Gua for data augmentation, the model with FL performs better than that with CE. This suggests that combining multiple data balancing techniques contributes more to model performance. Moreover, it can be observed that using Gua to process imbalanced data is better than directly replicating minority samples. Deep ensemble learning was also applied with FL and Gua to generate the calibrated model. The results in Figure \ref{fig:framework_3} show the contribution of the deep ensemble approach to the performance improvement. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Results on DiCOVA2021 Validation dataset} \label{tab:trainingSummary} \centering \begin{tabular}{cc} \hline Methods & Test AUC \\ \hline Random\_initial & 43.48\% \\ Imagenet\_initial & 64.50\% \\ \hline \end{tabular} \end{table} \subsubsection{Random Initial Weights vs ImageNet-pretrained Weights} In this section, we further compared the performance of the ResNet-50-based model with random initial weights against the same model with ImageNet-pretrained initial weights, where we randomly selected 20\% of the entire dataset (train+val) as the test set, 15\% of the dataset as the validation set, and the rest as the training set. The performance on the test dataset shows that ImageNet-pretrained initial weights substantially improve the performance of the model. Additionally, the curves of the models with the two different weight initializations (Figure \ref{fig:framework_4}) show that although the small training dataset allows both initialization methods to achieve an AUC of 1 in less than 10 epochs, the ResNet-50-based model with ImageNet-pretrained initial weights converges faster. Thus, it can be concluded that ImageNet-pretrained initialization has a large advantage in generalization and convergence time over random initialization of weights.
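A minimal sketch of this initialisation scheme (hypothetical code, assuming \texttt{torchvision} and a two-class output head; not our exact training script):
\begin{verbatim}
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone ...
model = models.resnet50(pretrained=True)
# ... with the final layer replaced for binary COVID-19
# detection; all layers are then fine-tuned on the
# DiCOVA spectrograms
model.fc = nn.Linear(model.fc.in_features, 2)
\end{verbatim}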
\begin{figure}[h] \centering \includegraphics[scale=0.5]{figure/train_auc.png} \caption{Training AUC at each epoch} \label{fig:framework_4} \end{figure} \subsubsection{Estimation and Application of Uncertainty} In this section, we estimated the uncertainty of the predictions of our proposed framework. As shown in Table~\ref{tab:reliable}, we divided the test dataset into two subsets, one with low uncertainty and one with high uncertainty, according to the distribution of the uncertainty of the proposed model's predictions on the test set. We found that the prediction accuracy on the low-uncertainty subset was 0.91, compared to 0.86 on the high-uncertainty subset. In practice, predictions can be selected based on the uncertainty produced by our proposed model in order to perform reliable COVID-19 diagnoses. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Uncertainty Comparison Result} \label{tab:reliable} \centering \begin{tabular}{ccc} \hline & Low Uncertainty & High Uncertainty \\ \hline Case Number & 119 & 99 \\ Accuracy & 0.91 & 0.86 \\ \hline \end{tabular} \end{table}
\section{Introduction} Hochschild cohomology is a well-known mathematical structure (see \cite{9}), and there is a large number of papers studying it: see \cite{2, 5, 7, 8, 11, 18}, including the author's paper \cite{6}, where one can find a full list of articles about Hochschild cohomology. Tradler was the first to introduce and develop the $BV$-algebra structure on the Hochschild cohomology $HH^*(R)$, where $R$ is a finite dimensional symmetric algebra (see \cite{16}). According to his works, it is possible to define a Gerstenhaber bracket (and a Lie algebra structure) on $HH^*(R)$ (see \cite{4} for example). There are plenty of papers developing the notion of $BV$-structure: see the articles of Menichi \cite{14, 15}, Yang \cite{19}, Tradler \cite{16} and Ivanov \cite{12}, as well as Volkov's paper about the $BV$-structure of Frobenius algebras (see \cite{17}). \par There is one big issue for computations: the $BV$-structure is described in terms of the bar-resolution, which makes it almost impossible to compute this structure for concrete examples, because the dimensions of the terms of the resolution grow exponentially. In order to avoid this we use the method of comparison morphisms for resolutions.\par This article is the second paper in a cycle of articles developing the $BV$-structure on Hochschild cohomology for algebras of quaternion type (according to Erdmann's classification in \cite{3}). We consider a family of algebras $\{R(k,0,d)\}_{d \in K}$ over an algebraically closed field of characteristic 2 in the case $k=2$. It should be mentioned that the crucial particular case $R(2,0,0)$ was calculated in \cite{10}, so our main Theorem generalizes the result of \cite{10}. One should note that the $BV$-structure on Hochschild cohomology for the algebras $R(k,0,d)$ with even parameter $k>2$ was studied in the first paper of this cycle: see \cite{20}.\par \section{Main definitions} \subsection{Hochschild (co)homology} Consider an associative algebra $A$ over the field $K$ of characteristic 2. Define its universal enveloping algebra as $A^e= A \otimes A^{op}$. Also we consider the free bar-resolution of $A$ $$\CD A @<{\mu}<< A^{\otimes 2} @<{d_1}<< A^{\otimes 3} @<{d_2}<< ... @<{d_n}<< A^{\otimes n+2} @<{d_{n+1}}<< A^{\otimes n+3} ...\endCD$$ with differentials defined by the rule $$d_n(a_0 \otimes \dots \otimes a_{n+1}) = \sum \limits_{i=0}^{n} (-1)^i a_0 \otimes \dots \otimes a_i a_{i+1} \otimes \dots \otimes a_{n+1}.$$ One can construct the {\it normalized bar-resolution} by setting $\overline{Bar}(A)_n = A \otimes \overline{A}^{\otimes n} \otimes A$, where $\overline{A} = A/ \langle 1_A \rangle$, and the differentials are induced by those of the bar-resolution. \begin{definition} The $n$-th Hochschild {\it cohomology} is the space $HH^n(A) = \Ext^n_{A^e} (A,A)$ for any natural $n \ge 0$. \end{definition} \begin{definition} The $n$-th Hochschild {\it homology} $HH_n(A)$ is the $n$-th homology of the complex $A \otimes_{A^e} Bar_{\bullet}(A) \simeq A^{\bullet+1}$, where the map $\partial_{n+1} : A^{\otimes (n+1)} \longrightarrow A^{\otimes n}$ sends $a_0 \otimes \dots \otimes a_n$ to $\sum_{i=0}^{n-1} (-1)^i a_0 \otimes \dots \otimes a_{i} a_{i+1} \otimes \dots \otimes a_n + (-1)^{n} a_na_0 \otimes \dots \otimes a_{n-1}$ and the differential is induced by that map. \end{definition}
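For orientation, we recall the standard low-degree descriptions (well-known facts, not specific to our setting): identifying $n$-cochains with $\Hom_K(A^{\otimes n}, A)$, a $0$-cocycle is an element of the centre, and a $1$-cochain $f$ is a cocycle exactly when $$a f(b) - f(ab) + f(a) b = 0,$$ i.e.\ when $f$ is a derivation; the $1$-coboundaries are the inner derivations. Hence $HH^0(A) = Z(A)$ and $HH^1(A)$ is the space of outer derivations of $A$.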
One can define a cup-product on the Hochschild cohomology. For any classes $a \in HH^n(A)$ and $b \in HH^m(A)$ we define their cup-product $a \smile b \in HH^{n+m}(A)$ as the class of the cup-product of their representatives $a \in \Hom_k(A^{\otimes n}, A)$ and $b \in \Hom_k(A^{\otimes m}, A)$, which on representatives is given by the standard formula $$(f \smile g)(a_1 \otimes \dots \otimes a_{n+m}) = f(a_1 \otimes \dots \otimes a_n)\, g(a_{n+1} \otimes \dots \otimes a_{n+m}).$$ So by linear extension $$\smile : HH^n(A) \times HH^m(A) \longrightarrow HH^{n+m}(A)$$ the Hochschild cohomology space $HH^{\bullet}(A) = \bigoplus \limits_{n \ge 0} HH^n(A)$ becomes a graded-commutative algebra. \subsection{Gerstenhaber bracket} For any $f \in \Hom_k(A^{\otimes n}, A)$ and $g \in \Hom_k(A^{\otimes m}, A)$ define $f \circ_i g \in \Hom_k(A^{\otimes n+m-1}, A)$ by the following rules: \begin{enumerate} \item if $n \ge 1$ and $m \ge 1$, then let $f \circ_i g (a_1 \otimes \dots \otimes a_{n+m-1})$ be given by the formula $f(a_1 \otimes \dots \otimes a_{i-1} \otimes g(a_i \otimes \dots \otimes a_{i+m-1}) \otimes \dots \otimes a_{n+m-1})$; \item if $n \ge 1$ and $m=0$, then let $f \circ_i g (a_1 \otimes \dots \otimes a_{n-1})$ be given by the formula $f(a_1 \otimes \dots \otimes a_{i-1} \otimes g \otimes a_i \otimes \dots \otimes a_{n-1})$; \item otherwise let $f \circ_i g$ be equal to zero. \end{enumerate} \begin{definition} Define the {\it Gerstenhaber bracket} of $f \in \Hom_k(A^{\otimes n}, A)$ and $g \in \Hom_k(A^{\otimes m}, A)$ by the formula \[[f,g] = f \circ g - (-1)^{(n-1)(m-1)} g \circ f, \] where $a \circ b = \sum \limits_{i=1}^n (-1)^{(m-1)(i-1)} a \circ_i b$. \end{definition} Obviously we have $[f,g] \in \Hom_k(A^{\otimes n+m-1}, A)$. Now for any $a \in HH^n(A)$ and $b \in HH^m(A)$ one can define $[a,b] \in HH^{n+m-1}(A)$ as the class of the Gerstenhaber bracket of representatives of $a$ and $b$. So there exists a correctly defined map $$ [-,-]:HH^{*} (A) \times HH^{*} (A) \longrightarrow HH^{*} (A),$$ which defines a structure of a graded Lie algebra on $HH^*(R)$. This map will also be called the {\it Gerstenhaber bracket}, and it is not hard to see that $(HH^{*} (A), \smile, [-,-])$ is a Gerstenhaber algebra (see \cite{4}). \subsection{$BV$-structure} \begin{definition} By a {\it Batalin-Vilkovisky algebra} (or $BV$-algebra for short) we mean a Gerstenhaber algebra $(A^{\bullet}, \smile, [-,-])$ together with an operator $\Delta^{\bullet}$ of degree $-1$, such that $\Delta \circ \Delta = 0$ and $$[a,b] = - (-1)^{(|a|-1)|b|} (\Delta(a \smile b) - \Delta(a) \smile b - (-1)^{|a|} a \smile \Delta(b))$$ for all homogeneous $a, b \in A^{\bullet}$. \end{definition} For $a_0 \otimes \dots \otimes a_n \in A^{\otimes (n+1)}$ define $\mathfrak{B}(a_0 \otimes \dots \otimes a_n)$ by the formula \[\mathfrak{B}(a_0 \otimes \dots \otimes a_n)= \sum_{i=0}^n (-1)^{in} 1 \otimes a_i \otimes \dots \otimes a_n \otimes a_0 \otimes \dots \otimes a_{i-1} +\] \[ + \sum_{i=0}^n (-1)^{in} a_i \otimes 1 \otimes a_{i+1} \otimes \dots \otimes a_n \otimes a_0 \otimes \dots \otimes a_{i-1}. \] It is easy to see that $\mathfrak{B}(a_0 \otimes \dots \otimes a_n) \in A^{\otimes (n+2)} \simeq A \otimes_{A^e} A^{\otimes (n+2)}$, so it can be lifted to a chain complex map such that $\mathfrak{B} \circ \mathfrak{B} = 0$. Hence it induces a correctly defined map on Hochschild homology. \begin{definition} The map $\mathfrak{B}: HH_{\bullet}(A) \longrightarrow HH_{\bullet+1}(A)$ is called Connes' $\mathfrak{B}$-operator. \end{definition} \begin{definition} An algebra $A$ is called a symmetric algebra if it is isomorphic (as an $A^e$-module) to its dual $DA = \Hom_K(A,K)$.
\end{definition} For a symmetric algebra $A$ one can always find a non-degenerate symmetric bilinear form $\langle -,- \rangle : A \times A \longrightarrow K$, and, obviously, the reversed statement holds: the existence of any such form implies that the algebra $A$ is symmetric. So in the case of symmetric algebras Hochschild homology and cohomology are dual: $$\Hom_K(A \otimes_{A^e} Bar_{\bullet}(A),K) \simeq \Hom_{A^e} (Bar_{\bullet}(A), \Hom_K(A,K)) \simeq \Hom_{A^e}(Bar_{\bullet}(A), A),$$ and one can define $\Delta: HH^n(A) \longrightarrow HH^{n-1}(A)$ as the dual of Connes' $\mathfrak{B}$-operator. \par Hence the Hochschild cohomology of a symmetric algebra $A$ is a $BV$-algebra (see \cite{16}), and Connes' $\mathfrak{B}$-operator on homology corresponds to $\Delta$ on cohomology. \begin{theorem}[Theorem 1, see \cite{13}] The cup-product, Gerstenhaber bracket and operator $\Delta$ defined above induce a structure of a $BV$-algebra on $HH^{*}(A)$. Moreover, for any $f \in \Hom_K(A^{\otimes n}, A)$ the element $\Delta(f) \in \Hom_K(A^{\otimes (n-1)}, A)$ is correctly defined by the formula $$\langle \Delta(f)(a_1 \otimes \dots \otimes a_{n-1}),a_n \rangle = \sum_{i=1}^n (-1)^{i(n-1)} \langle f(a_i \otimes \dots \otimes a_{n-1}\otimes a_n \otimes a_1 \otimes \dots \otimes a_{i-1}), 1 \rangle$$ for any $a_i \in A$. \end{theorem} \begin{remark} All constructions here can be defined and used in terms of the normalized bar-resolution. \end{remark} \section{Weak self-homotopy} \subsection{Resolution} Let $K$ be an algebraically closed field of characteristic 2 and fix $d \in K$. Consider the algebra $R(2, 0, d) = K \langle X,Y \rangle /I$, where $I = \langle X^2+YXY, Y^2+XYX + d(XY)^2, X(YX)^2, Y(XY)^2\rangle$ is an ideal in $K \langle X,Y \rangle$ (and so one can easily check that $(XY)^2 + (YX)^2 \in I$). Let $B$ be the standard basis of the algebra $R=R(2, 0, d)$, so that the set $B_1 = \{u \otimes v \mid u, v \in B\}$ is a basis for the enveloping algebra $\Lambda = R \otimes R^{op}$.\par It should be mentioned that all the algebras $R(2,0,d)$ are symmetric. This can be checked immediately by using the following symmetric non-degenerate bilinear form: $$\langle b_1, b_2\rangle = \begin{cases} 1, & b_1 b_2 \in Soc(R) \\ 0, & \text{otherwise.} \end{cases}$$ So in order to define the graded Lie algebra structure on $HH^*(R)$ one only needs to know how $\Delta$ (see definition 4) acts on the Hochschild cohomology. \par Note that right multiplication by $\lambda \in \Lambda$ induces an endomorphism $\lambda^{*}$ of the left $\Lambda$-module $\Lambda$; we will denote it simply by $\lambda$ as well. Sometimes we consider an endomorphism of the right $\Lambda$-module $\Lambda$ induced by left multiplication by $\lambda$. Let us denote such an endomorphism by ${}^{*}\lambda$.
\par Now we construct a $4$-periodic resolution in the category of (left) $\Lambda$-modules $$ \CD P_0 @<{d_0}<< P_1 @<{d_1}<< P_2 @<{d_2}<< P_3 @<{d_3}<< P_4 @<{d_4}<< \dots\\ \endCD$$ where $P_0=P_3=\Lambda$, $P_1 = P_2 = \Lambda^2$, and the differentials are defined by the formulae $$d_0 = \begin{pmatrix} x \otimes 1 + 1\otimes x & y \otimes 1 + 1 \otimes y \end{pmatrix}, $$ $$d_1=\begin{pmatrix} x \otimes 1 + 1\otimes x + y \otimes y, & 1 \otimes yx + xy \otimes 1 + d \otimes yxy + dxy \otimes y \\ 1 \otimes xy + yx \otimes 1, & y \otimes 1 + 1\otimes y + x \otimes x + d x \otimes xy + dxyx \otimes 1 \end{pmatrix},$$ $$d_2 = \begin{pmatrix} x \otimes 1 + 1\otimes x \\ y \otimes 1 + 1\otimes y + dy\otimes y + 1 \otimes dxyx+d^2y\otimes xyx\\ \end{pmatrix}, \quad d_3 = \lambda^{*},$$ where $ \lambda = \sum \limits_{i=0}^{2}(xy)^i \otimes (xy)^{2-i} + yx \otimes yx +\sum \limits_{i=0}^{1}y(xy)^i \otimes x(yx)^{1-i} +\sum \limits_{i=0}^{1}x(yx)^i \otimes y(xy)^{1-i} + dxyx \otimes xyx .$ Consider the multiplication map $\mu:\Lambda \longrightarrow R$ such that $\mu(a \otimes b) = ab$. \begin{theorem}[Proposition 3.1, \cite{6}] The complex $P_{\bullet}$ equipped with the map $\mu$ forms the minimal $\Lambda$-projective resolution of $R$. \end{theorem} We can represent the resolution $P_{\bullet}$ using the path algebra of $R$. One can define the modules $KQ_1 = \langle x,y\rangle$ and $KQ_1^* = \langle r_x,r_y\rangle$, where $r_x = x^2+yxy$ and $r_y = y^2 + xyx+d(xy)^2$. It is easy to see that $$R \otimes KQ_1 \otimes R = R \otimes \langle x \rangle \otimes R \oplus R \otimes \langle y \rangle \otimes R \simeq R \otimes R^{op} \oplus R \otimes R^{op} = \Lambda \oplus \Lambda,$$ so one obtains a resolution $\{ P_n \}_{n=0}^{+\infty}$ of bimodules $$ \CD R @<{\mu}<< R \otimes R @<{d_0}<< R\otimes KQ_1 \otimes R @<{d_1}<< R\otimes KQ_1^* \otimes R @<{d_2}<< R\otimes R @<{d_3}<< \dots \endCD,$$ where $P_{n+4} = P_n$ for $n \in \mathbb{N}$. The differentials are defined by the formulae: \begin{itemize} \item $d_0(1 \otimes x \otimes 1) = x \otimes 1 + 1 \otimes x$, $d_0(1 \otimes y \otimes 1) = y \otimes 1 + 1 \otimes y$; \item $d_1(1 \otimes r_x \otimes 1) = 1\otimes x \otimes x + x \otimes x \otimes 1 + y \otimes x \otimes y + 1 \otimes y \otimes xy + yx \otimes y \otimes 1$, \\ $d_1(1 \otimes r_y \otimes 1) = 1\otimes y \otimes y + y \otimes y \otimes 1 +x \otimes y \otimes x + dx \otimes y \otimes xy + dxyx \otimes y \otimes 1 + 1 \otimes x \otimes yx + xy \otimes x \otimes 1 + d xy \otimes x \otimes y + d \otimes x \otimes yxy$; \item $d_2(1\otimes 1) = x \otimes r_x \otimes 1 + 1\otimes r_x \otimes x+ y\otimes r_y \otimes 1 + 1\otimes r_y \otimes y + d y\otimes r_y \otimes y + d \otimes r_y \otimes xyx + d^2 y \otimes r_y \otimes xyx$; \item $d_3 = \rho \mu$, where $\rho (1) = \sum \limits_{b \in B} b^*\otimes b + d xyx \otimes xyx$. \end{itemize} \subsection{Construction} \begin{definition} For the complex $$\CD 0 @<{}<< N @<{d_0}<< Q_0 @<{d_1}<< Q_1 @<{d_2}<< Q_2 \dots \endCD$$ define a {\it weak self-homotopy} to be a collection of $K$-homomorphisms $t_{n} : Q_n \longrightarrow Q_{n+1}$, $n \geqslant 0$, together with $t_{-1} : N \longrightarrow Q_0$ such that $t_{n-1}d_n + d_{n+1}t_{n} = id_{Q_n}$ for all $n \geqslant 0$ and $d_0t_{-1} = id_N$. \end{definition}
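Let us note (a standard observation, not specific to our resolution) that these identities certify the exactness of the complex: if $z \in Q_n$ satisfies $d_n z = 0$, then $$z = t_{n-1}d_n(z) + d_{n+1}t_{n}(z) = d_{n+1}\big(t_{n}(z)\big),$$ so every cycle is a boundary. Thus a weak self-homotopy is a contracting homotopy consisting of $K$-linear (but not necessarily $\Lambda$-linear) maps.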
We need to construct a weak self-homotopy $\{t_i :P_i \longrightarrow P_{i+1} \}_{i \geqslant -1}$ (here $P_{-1} = R$) for this projective resolution, as in \cite{1}. Consider the bimodule derivation $C : KQ \longrightarrow KQ \otimes KQ_1 \otimes KQ$ defined by sending the path $\alpha_1...\alpha_n$ to $\sum \limits_{i=1}^n \alpha_1 ... \alpha_{i-1} \otimes \alpha_i \otimes \alpha_{i+1} ... \alpha_n$, and consider the induced map $C: R \longrightarrow R \otimes KQ_1 \otimes R$. So one can define $t_{-1} (1) = 1\otimes 1$ and $t_0 (b \otimes 1) = C(b)$ for $b \in B$. Now construct $t_1 : P_1 \longrightarrow P_2$ by the following rules: for $b\in B$ let $$t_1(b \otimes x \otimes 1) = $$ $$= \begin{cases} 0, & bx \in B\setminus\{yxy\}\\ 1 \otimes r_x \otimes 1, & b=x\\ 1 \otimes r_x \otimes x^2 + x \otimes r_x \otimes x + x^2 \otimes r_x \otimes 1 + yx \otimes r_y \otimes xy, & b = (xy)^2\\ T\Big(y \otimes r_x \otimes 1 + xy \otimes r_x \otimes y + 1 \otimes r_y \otimes xy +dx \otimes r_x \otimes xy + dyxy \otimes r_x \otimes y \Big), & b = Tyx\\ y \otimes r_y \otimes 1+1 \otimes r_y \otimes y + dy \otimes r_y \otimes y + d\otimes r_y \otimes xyx+ d^2 y\otimes r_y \otimes xyx, & b = yxy, \end{cases} $$ $$t_1(b \otimes y \otimes 1) = $$ $$ = \begin{cases} 0, & by \in B \\ 1 \otimes r_y \otimes 1, & b=y\\ T \Big(x \otimes r_y \otimes 1 + yx \otimes r_y \otimes x+ 1 \otimes r_x \otimes yx + d \otimes r_x \otimes yxy + \\ + dyx \otimes r_y \otimes xy + dy \otimes r_x \otimes (xy)^2 \Big), & b = Txy.\\ \end{cases} $$ In order to define $t_2 : P_2 \longrightarrow P_3$ use the following rules: \begin{itemize} \item $t_2(x \otimes r_x \otimes 1) = 1 \otimes 1$, \item $t_2(y \otimes r_x \otimes 1) = d yxy\otimes y^2 + d yx \otimes (xy)^2$, \item $t_2(xy \otimes r_x \otimes 1) = dyxy \otimes y + dyx \otimes y^2$, \item $t_2(yx \otimes r_x \otimes 1) = y\otimes 1 + d xyx \otimes 1 + dxy \otimes x + d x\otimes yx$, \item $t_2(yxy \otimes r_x \otimes 1) = 1 \otimes x$, \item $t_2 (xyx \otimes r_x \otimes 1) = xy \otimes 1 + x\otimes y + dyxy \otimes yx$, \item $t_2((xy)^2 \otimes r_x \otimes 1) = 1 \otimes yxy + yx \otimes y + y \otimes xy + yxy \otimes 1 +dxyx \otimes xy $, \end{itemize} and \begin{itemize} \item $t_2(b \otimes r_y \otimes 1) = 0$ for $b \in \{x, y, xyx\}$, \item $t_2(xy \otimes r_y \otimes 1) = x\otimes 1 + dx \otimes y$, \item $t_2(yx \otimes r_y \otimes 1) = dy^2 \otimes yxy + d(xy)^2 \otimes xy$, \item $t_2(yxy \otimes r_y \otimes 1) = yx \otimes 1 + y \otimes x + dyx \otimes y+ d y \otimes xy + d^2 xyx \otimes xy + d xyx \otimes x + dxy \otimes yxy$, \item $t_2((xy)^2 \otimes r_y \otimes 1) = xy \otimes x + x \otimes yx + xyx \otimes 1 + dx \otimes yxy + d xyx \otimes y+ dxy \otimes xy$. \end{itemize} Finally, for $t_3 : R \otimes R \longrightarrow R \otimes R$ put $t_3((xy)^2 \otimes 1) = 1 \otimes 1$ and $t_3(b \otimes 1)=0$ otherwise. Now let $t_{n+4} = t_n$ for all $n \ge 0$.\par \begin{theorem} The above-defined family of maps $\{t_i : P_{i} \longrightarrow P_{i+1}\}_{i \ge 0}$ together with $t_{-1}:R \longrightarrow P_0$ forms a weak self-homotopy for the resolution $P_{\bullet}$. \end{theorem} \begin{proof} For any $n \in \mathbb{N}$ it remains to verify the commutativity of the required diagrams, which follows directly from the definitions of $t_n$ for $n \leq 3$ and from the periodicity for $n \ge 4$. \end{proof} \section{Comparison morphisms} Consider the normalized bar-resolution $ \overline{{Bar}}_{\bullet} (R) = R \otimes \overline{{R}}^{\otimes \bullet } \otimes R$, where $ \overline{R} = R / (K\cdot 1_R)$.
We now need to construct comparison morphisms between $P_{\bullet}$ and $\overline{Bar}_{\bullet}(R)$: $$\Phi : P_{\bullet} \longrightarrow \overline{{Bar}}_{\bullet}(R) \text{ and } \Psi : \overline{{Bar}}_{\bullet}(R) \longrightarrow P_{\bullet}.$$ Note that there exists a weak self-homotopy $s_n(a_0 \otimes ... \otimes a_n \otimes 1) = 1 \otimes a_0 \otimes ... \otimes a_n \otimes 1$ of $ \overline{{Bar}}_{\bullet} (R)$, so put $\Phi_n = s_{n-1} \Phi_{n-1} d^P_{n-1}$ and $\Phi_0 = id_{R \otimes R}$. \begin{lemma} If $\Psi : \overline{{Bar}}_{\bullet}(R) \longrightarrow P_{\bullet}$ is the chain map constructed using $t_{\bullet}$, then for any $n \in \mathbb{N}$ and any $a_i \in R$ the following formula holds: $$\Psi_n (1 \otimes a_1 \otimes ... \otimes a_n \otimes 1) = t_{n-1} (a_1 \Psi_{n-1} (1 \otimes a_2 \otimes ... \otimes a_n \otimes 1)).$$ \end{lemma} \begin{proof} The proof is an immediate consequence of Lemma 2.5 in \cite{10}. \end{proof} In order to define the $BV$-structure on Hochschild cohomology one needs to compute $\Delta : HH^n(R) \longrightarrow HH^{n-1}(R)$. By the Poisson rule we have $$[a \smile b, c] = [a,c] \smile b + (-1)^{|a| (|c|-1)}(a \smile [b,c]),$$ and since $\mathrm{char}\, K = 2$ it is easy to see that $$\Delta(abc) = \Delta(ab)c + \Delta(ac)b + \Delta(bc)a + \Delta(a)bc +\Delta(b)ac + \Delta(c)ab.$$ So one needs to compute $\Delta$ only on the generating elements and on all cup-products of those elements. Furthermore, for any $\alpha \in HH^n(R)$ there exists a cocycle $f \in \Hom (P_n,R)$ such that the following equality holds: $$\Delta(\alpha) = \Delta(f \Psi_n) \Phi_{n-1}.$$ Hence we have $$\Delta(\alpha) (a_1 \otimes ... \otimes a_{n-1}) = \sum_{b \in B\setminus \{1\}} \sum \limits_{i=1}^n \langle (-1)^{i(n-1)} \alpha(a_i \otimes ... \otimes a_{n-1} \otimes b \otimes a_1 \otimes ...\otimes a_{i-1}), 1\rangle b^*,$$ where $\langle -,- \rangle$ is the bilinear form defined above. \section{$BV$-structure} Let $K$ be an arbitrary algebraically closed field of characteristic two. Consider the set $$\mathcal{X} = \{p_1, p_2,p_3,p_4,q_1,q_2,w_1,w_2,w_3,e\},$$ where the degrees of these elements are as follows: $$ |p_1| = |p_2| = |p_3| = |p_4| = 0, \ |q_1| = |q_2| = 1, \ |w_1| = |w_2| =|w_3|=2, \ |e|=4. $$ Consider the ideal $\mathcal{I}$ in $K[\mathcal{X}]$ generated by the following elements: \begin{itemize} \item degree 0: $p_ip_j$ for all $i,j \in \{1, 2, 3, 4\}$; \item degree 1: $p_3q_1+p_2q_2$, $p_1q_1+dp_3q_1 + p_3q_2$, $p_1q_2+p_2q_1$; \item degree 2: $p_2w_1$, $p_4w_1$, $p_3w_2$, $p_4w_2$, $p_4w_3$, $p_1w_1 + p_2w_2$, $p_1w_1 + p_3w_3$, $p_1w_1 + p_4q_1^2$, $p_3w_1+p_1w_2$, $p_3w_1 + p_2w_3$, $p_3w_1+p_4q_2^2$, $q_1q_2$, $p_1w_3+dp_2w_2$; \item degree 3: $q_1w_1+q_2w_2$, $q_1^3+q_2^3$, $q_1w_1 + q_2 w_1 + q_1w_3 + p_1q_1w_1$, $q_1w_1 + q_1w_2 + q_2 w_3 + p_1q_1w_1$; \item degree 4: $w_iw_j$ for all $i,j \in \{1, 2, 3\}$. \end{itemize} \begin{theorem}[Theorem 2.1 in \cite{6}] There exists a $K$-algebra isomorphism $HH^*(R) \simeq \mathcal{A} = K[\mathcal{X}]/\mathcal{I}$. \end{theorem} Let $P$ be a term of the minimal projective resolution of $R$. If $P= R \otimes R$, then denote by $f$ the homomorphism in $\Hom_{R^e}(P,R)$ which sends $1 \otimes 1$ to $f$. If $P= R \otimes KQ_1\otimes R$ (or $P= R \otimes KQ_1^*\otimes R$), then denote by $(f,g)$ the homomorphism in $\Hom_{R^e}(P,R)$ which sends $1\otimes x \otimes 1$ (or $1 \otimes r_x \otimes 1$) to $f$ and $1 \otimes y \otimes 1$ (or $1 \otimes r_y \otimes 1$) to $g$.
So one can rewrite the generating elements as in \cite{6}: $$\begin{cases} \text{elements of degree 0:} & p_1 = xy+yx, \ p_2 = xyx, \ p_3 = yxy, \ p_4 = (xy)^2, \\ \text{elements of degree 1:} & q_1 = (y, \ 1+dy + xy),\ q_2 =(1 + yx, \ dxy +x), \\ \text{elements of degree 2:} & w_1 = ( x, \ 0),\ w_2 = (0, \ y ),\ w_3= ( y,\ x + dxy ),\\ \text{elements of degree 4:} & e=1. \end{cases}$$ \subsection{Technical lemmas} It is clear that $\Delta$ vanishes on all elements of degree 0. \begin{lemma}[Elements of degree 1] We have $\Delta(q_1) = dp_1$, $\Delta(q_2) = 0$, $\Delta(p_1q_1) = p_2+dp_1$, $\Delta(p_1q_2) = dp_2+p_3$, $\Delta(p_2q_1) = p_3+dp_2$, $\Delta(p_3q_2) = p_2$, $\Delta(p_3q_1) = \Delta(p_2q_2) = p_1$, $\Delta(p_4q_1) = p_2$, $\Delta(p_4q_2) = p_3$. \end{lemma} \begin{proof} One needs to compute $\langle a C(b), 1\rangle$ for all elements of degree 1. It is easy to see that $$\langle a C(b), 1\rangle = \begin{cases} 1, & a \in \{ p_1q_1, p_3q_2, p_4q_1 \}, b = y \text{ or } \\ & a \in \{ p_1q_2, p_2q_1, p_4q_2 \}, b = x \text{ or } a \in \{p_3q_1, p_2q_2\}, b \in \{xy,yx\},\\ d, & a \in \{q_1, p_1q_1\}, b \in \{xy, yx\} \text{ or } a \in \{p_1q_2, p_2q_1\}, b = y,\\ 0, & \text{otherwise.} \end{cases}$$ Hence we only need to note that $\Delta(a) = \sum \limits_{b \in B} \langle a C(b), 1\rangle b^*$, so the required statement holds. \end{proof} \begin{lemma}[Elements of degree 2] For any combination $a \in HH^2(R)$ of generating elements we have $\Delta(a)=0$. \end{lemma} \begin{proof} It is easy to see that for any such $a$ of degree two the following formulae hold: $$\Delta(a)(1 \otimes x \otimes 1) = \Delta(a \Psi_2) \Phi_1(1\otimes x \otimes 1) = \sum \limits_{b \ne 1} \langle a t_1 (b \otimes x \otimes 1 + x C(b)), 1\rangle b^*,$$ $$\Delta(a)(1 \otimes y \otimes 1) = \Delta(a \Psi_2) \Phi_1(1\otimes y \otimes 1) = \sum \limits_{b \ne 1} \langle a t_1 (b \otimes y \otimes 1 + y C(b)), 1\rangle b^*.$$ So we need to compute $t_1 (b \otimes x \otimes 1 + x C(b))$. Denote this formula by $\Psi_2(b,x)$: \begin{enumerate} \item $\Psi_2(xy,x) = 1 \otimes r_x \otimes y + yx \otimes r_y \otimes 1 + y \otimes r_x \otimes yx + dy \otimes r_x \otimes yxy + dy^2 \otimes r_x \otimes (xy)^2$, \item $\Psi_2(yx,x) = y \otimes r_x \otimes 1 + xy \otimes r_x \otimes y + 1 \otimes r_y \otimes xy + dx \otimes r_x \otimes xy + d yxy \otimes r_x \otimes y$, \item $\Psi_2(xyx,x) = xy \otimes r_x \otimes 1 + x \otimes r_y \otimes xy + dx^2 \otimes r_x \otimes xy + d(xy)^2 \otimes r_x \otimes y + 1 \otimes r_x \otimes yx + yx \otimes r_y \otimes x + dy \otimes r_x \otimes (xy)^2$, \item $\Psi_2(yxy,x)= y \otimes r_y \otimes 1 + 1\otimes r_y \otimes y + dy \otimes r_y \otimes y + d \otimes r_y \otimes xyx +d^2 y \otimes r_y \otimes xyx $, \item $\Psi_2(b,x)=0$ for $b \in \{x, y, (xy)^2\}$. \end{enumerate} We also need to compute $t_1 (b \otimes y \otimes 1 + y C(b))$. Denote this by $\Psi_2(b,y)$.
\begin{enumerate} \item $\Psi_2(xy,y) = x \otimes r_y \otimes 1 + yx \otimes r_y \otimes x + 1 \otimes r_x \otimes yx + d\otimes r_x \otimes yxy + dyx\otimes r_y \otimes xy + dy \otimes r_x \otimes (xy)^2$, \item $\Psi_2(yx,y) = 1 \otimes r_y \otimes x + xy \otimes r_x \otimes 1 + x \otimes r_y \otimes xy + dx^2 \otimes r_x \otimes xy + d(xy)^2 \otimes r_x \otimes y + d \otimes r_x \otimes x^2 + dx \otimes r_x \otimes x + dx^2 \otimes r_x \otimes 1 + dyx \otimes r_y \otimes xy$, \item $\Psi_2(xyx,y) = y \otimes r_y \otimes 1 + 1\otimes r_y \otimes y + dy \otimes r_y \otimes y + d \otimes r_y \otimes xyx +d^2 y \otimes r_y \otimes xyx$, \item $\Psi_2(yxy,y) = yx \otimes r_y \otimes 1 + y \otimes r_x \otimes yx + dy \otimes r_x \otimes yxy + dy^2 \otimes r_x \otimes (xy)^2 + 1 \otimes r_y \otimes xy + xy \otimes r_x \otimes y + d (xy)^2 \otimes r_x \otimes y^2 + d yxy \otimes r_x \otimes y $, \item $\Psi_2((xy)^2,y) = y \otimes r_y \otimes y + 1 \otimes r_y \otimes xyx + dy \otimes r_y \otimes xyx$. \item $\Psi_2(b,y) = 0$ for $b \in \{x, y\}$. \end{enumerate} Finally, one should note that \[q_1q_2 = (0,\ 0), \ q_1^2 = (x, \ 1), \ q_2^2 = (1, \ y), \ p_4q_1^2 = (0, \ (xy)^2), \ p_4q_2^2 = ((xy)^2, \ 0), \] hence the lemma holds by the computations above. \end{proof} \begin{lemma}[Elements of degree 3] We have $\Delta(q_1w_1) = \Delta(q_2w_2) = w_3, \ \Delta(q_2w_1) = q_2^2 + w_2, \ \Delta(q_1w_2) = \Delta(q_2w_3) = q_1^2 + w_1 + d(p_1+1)w_2, \ \Delta(q_1w_3) = q_2^2 + w_2 + dw_3$. \end{lemma} \begin{proof} Using the identification $a_1 \otimes a_2 \otimes a_3 \equiv 1 \otimes a_1 \otimes a_2 \otimes a_3 \otimes 1$, one can show that $$\Delta(a)(1 \otimes r_x \otimes 1) = \Delta(a\Psi_3) \Phi_2 (1\otimes r_x \otimes 1) = \sum \limits_{b \not=1} \langle a\Psi_3(b \otimes x \otimes x + x \otimes x \otimes b + x \otimes b \otimes x), 1 \rangle b^* +$$ $$+ \sum \limits_{b \not=1} \langle a\Psi_3 (b \otimes y \otimes x + x \otimes b \otimes y + y\otimes x \otimes b), 1 \rangle b^* y + \sum \limits_{b \not=1} \langle a\Psi_3 (b \otimes yx \otimes y + yx \otimes y \otimes b + y \otimes b \otimes yx), 1 \rangle b^* $$ for all $a \in HH^3(R)$. Note that $\Psi_3(a_1 \otimes a_2 \otimes a_3) = t_2 (a_1 t_1 (a_2 C(a_3)))$, so we can compute it directly.\par First of all, $\Psi_3 \big(b \otimes x \otimes x + x \otimes x \otimes b + x \otimes b \otimes x \big) = t_2 \big(b t_1 (x \otimes x \otimes 1) + xt_1 (b \otimes x \otimes 1) + xt_1(xC(b)) \big).$ Denote this formula by $\Psi_3(b,x)$. \begin{enumerate} \item $\Psi_3(x,x) = 1 \otimes 1$, \item $\Psi_3(y,x) = d yxy \otimes y^2 + dyx \otimes (xy)^2,$ \item $\Psi_3(xy,x) = dyxy \otimes y + dyx \otimes y^2 + 1\otimes y$, \item $\Psi_3(yx,x) = y \otimes 1 + dxyx \otimes 1 + dxy \otimes x + dx \otimes yx$, \item $\Psi_3(xyx,x) = xy \otimes 1 + 1\otimes yx+ x \otimes y + dyxy \otimes yx + yx \otimes xy + dyxy \otimes xy$, \item $\Psi_3(yxy,x) = 1 \otimes x + x\otimes 1$, \item $\Psi_3((xy)^2,x) = 1\otimes yxy$. \end{enumerate} \par Secondly, $\Psi_3 (b \otimes y \otimes x + y \otimes x \otimes b + x \otimes b \otimes y)$ can be rewritten as $t_2 (xt_1(b \otimes y \otimes 1) + yt_1(xC(b)))$. Denote this formula by $\Psi_3(b,x, 2)$.
\begin{enumerate} \item $\Psi_3(x ,x, 2) = dyxy\otimes y^2 + dyx \otimes (xy)^2$, \item $\Psi_3(xy,x, 2) = yx \otimes 1 + y\otimes x + dyx \otimes y + dy\otimes xy + d^2 xyx \otimes xy + dxyx \otimes x + dxy \otimes yxy + 1 \otimes yx + d\otimes yxy + xy \otimes yx + dxy \otimes x^2 + dyxy \otimes (xy)^2+ d yxy \otimes yx +d^2yxy \otimes yxy$, \item $\Psi_3(xyx,x, 2) = dxy \otimes (xy)^2 + d^2 yxy \otimes (xy)^2$, \item $\Psi_3(b, x, 2) = 0$ otherwise. \end{enumerate} \par Finally, $\Psi_3 (b \otimes yx \otimes y + yx \otimes y \otimes b + y \otimes b \otimes yx) = t_2 (yx t_1(yC(b)) + yt_1(b \otimes y \otimes x + by \otimes x \otimes 1))$. Denote this formula by $\Psi_3(b,x, 3)$. \begin{enumerate} \item $\Psi_3(x,x, 3) = 0$, \item $\Psi_3(y,x, 3) = d^2xyx\otimes x + d^2 xy \otimes x^2 + dy^2 \otimes yxy + d (xy)^2 \otimes xy + 1 \otimes x + d (xy)^2 \otimes (xy)^2 + d y\otimes x$, \item $\Psi_3(xy,x, 3) = dy^2 \otimes (xy)^2 + d(xy)^2 \otimes xyx$, \item $\Psi_3(yx,x, 3) = dy^2 \otimes (xy)^2 + d(xy)^2 \otimes xyx + dy\otimes yxy + dxyx \otimes xy + dxy \otimes x + dx\otimes yx + dxyx\otimes 1 $, \item $\Psi_3(xyx,x, 3) = yx \otimes 1$, \item $\Psi_3(yxy,x, 3) = d(xy)^2 \otimes (xy)^2 + dxy\otimes (xy)^2+ d^2 yxy \otimes (xy)^2$, \item $\Psi_3((xy)^2,x, 3) = xy \otimes x^2 + xyx \otimes x + dx\otimes (xy)^2 + dxyx \otimes yx +dxy\otimes xyx$. \end{enumerate} \par Further, we have $$\Delta(a)(1 \otimes r_y \otimes 1) = \Delta(a\Psi_3) \Phi_2 (1\otimes r_y \otimes 1) = \sum \limits_{b \not=1} \langle a\Psi_3(b \otimes y \otimes y + y \otimes y \otimes b + y \otimes b \otimes y), 1 \rangle b^* +$$ $$ \sum \limits_{b \not=1} \langle a\Psi_3 (b \otimes x \otimes y + y \otimes b \otimes x + x \otimes y \otimes b), 1 \rangle b^* (x + d xy) + d\sum \limits_{b \not=1} \langle a\Psi_3 (b \otimes xyx \otimes y + xyx \otimes y \otimes b + y \otimes b \otimes xyx), 1 \rangle b^* $$$$+ \sum \limits_{b \not=1} \langle a\Psi_3 (b \otimes xy \otimes x + xy \otimes x \otimes b + x \otimes b \otimes xy), 1 \rangle b^* (1 + dy).$$ Note that $\Psi_3 (b \otimes y \otimes y + y \otimes y \otimes b + y \otimes b \otimes y) = t_2 (yt_1(b\otimes y \otimes 1) + yt_1(y C(b)) + b \otimes r_y \otimes 1)$. Denote this formula by $\Psi_3(b, y)$. So \begin{enumerate} \item $\Psi_3(b, y) = 0$ for $b \in \{x, y\}$, \item $\Psi_3(xy, y) = x\otimes 1+dx\otimes y + dy^2 \otimes yxy + d(xy)^2 \otimes xy + d^2 yxy \otimes (xy)^2 + dxy \otimes (xy)^2 $, \item $\Psi_3(yx, y) = 1 \otimes x + d(xy)^2 \otimes (xy)^2 + d y \otimes x + d^2 xyx \otimes x + d^2 xy \otimes x^2 + dy^2 \otimes yxy + d(xy)^2 \otimes xy$, \item $\Psi_3(xyx, y) = dxy \otimes x + dx\otimes yx + dxyx \otimes 1$, \item $\Psi_3(yxy, y) = 1\otimes xy +yx\otimes 1 +y\otimes x + xy \otimes yx + dyxy \otimes yx + dxy \otimes x^2 + d^2 yxy \otimes yxy + dyx \otimes y + dxyx \otimes x + dxy \otimes yxy$, \item $\Psi_3((xy)^2, y) = xy \otimes x + x \otimes yx + xyx \otimes 1$. \end{enumerate} \par Observe that $\Psi_3 (x \otimes y \otimes b + y \otimes b \otimes x + b \otimes x \otimes y)$ is equal to $t_2 (xt_1(yC(b)) +yt_1(b\otimes x \otimes 1))$. Denote this formula by $\Psi_3(b,y, 2)$.
\begin{enumerate} \item $\Psi_3(x,y, 2)= dyxy \otimes y^2 + d yx \otimes (xy)^2$, \item $\Psi_3(yx,y, 2) = yx \otimes xy + dyxy \otimes xy + xy \otimes 1 + x \otimes y + dyxy \otimes yx + 1 \otimes xy + d^2xyx \otimes xy + dy \otimes xy$, \item $\Psi_3(xyx,y, 2) = x\otimes 1+ 1\otimes x + d(xy)^2 \otimes (xy)^2$, \item $\Psi_3(yxy,y, 2) = dyx \otimes y^2 + dyxy \otimes y + dxy \otimes x + dx \otimes yx + dxyx\otimes 1 $, \item $\Psi_3((xy)^2,y, 2) = x \otimes y + y\otimes x + dxyx \otimes x + dxy \otimes x^2$, \item $\Psi_3(b,y, 2) = 0$ for any $b \in \{y, xy \}$. \end{enumerate} \par Rewrite $\Psi_3 (xyx \otimes y \otimes b + y \otimes b \otimes xyx + b \otimes xyx \otimes y)$ as $t_2 (xyx t_1(yC(b)) + yt_1(b \otimes x \otimes yx + bx \otimes y \otimes x + bxy \otimes x \otimes 1))$. Denote this formula by $\Psi_3(b,y, 3)$. \begin{enumerate} \item $\Psi_3(x,y, 3) = dxy \otimes (xy)^2 + d^2 yxy \otimes (xy)^2$, \item $\Psi_3(y,y, 3) = dxy \otimes x + dx\otimes yx + dxyx \otimes 1 $, \item $\Psi_3(xy,y, 3) = y\otimes x + dxyx \otimes x + dxy \otimes x^2$, \item $\Psi_3(yx,y, 3) = dxy \otimes yxy + xy \otimes yx + d yxy \otimes (xy)^2 + dyxy \otimes yx$, \item $\Psi_3(xyx,y,3) = xy \otimes x + x \otimes yx + xyx \otimes 1 + 1 \otimes xyx $, \item $\Psi_3(yxy,y, 3) = xy \otimes x^2 + xyx \otimes x$, \item $\Psi_3((xy)^2,y, 3) = xy \otimes xy + x \otimes yxy + xyx\otimes y + dxyx \otimes xyx + y \otimes xyx $. \end{enumerate} \par Finally, denote $\Psi_3 (xy \otimes x \otimes b + x \otimes b \otimes xy + b \otimes xy \otimes x) = t_2 (xy t_1 (xC(b)) + xt_1(b \otimes x \otimes y + bx \otimes y \otimes 1))$ by $\Psi_3(b,y, 4)$. \begin{enumerate} \item $\Psi_3(x,y, 4) = 1\otimes y + dyxy \otimes y + dyx \otimes y^2$, \item $\Psi_3(xy,y, 4) = dyxy \otimes y^2 + dyx \otimes (xy)^2 $, \item $ \Psi_3(yxy,y, 4) = x \otimes y $, \item $\Psi_3((xy)^2,y, 4) = yx \otimes y^2 + yxy \otimes y $, \item $\Psi_3(b,y, 4) = 0$ for $b \in \{y, yx, xyx \}$. \end{enumerate} \par We now only need to note that $$q_1w_1 = xy, \ q_1w_2 = y = q_2w_3,\ q_2w_1 = x, \ q_2w_2 = yx, \ q_1w_3 = x+dyx +d(xy)^2,$$ hence the required formulae hold. \end{proof} \begin{lemma}[Elements of degree 4] We have $\Delta(e) = \Delta(p_4 e) = d^3 p_1 q_1 w_1$, $\Delta(p_1e) = \Delta(p_2e) = \Delta(p_3e) = 0$. \end{lemma} \begin{proof} By $circ(a_1 \otimes ...\otimes a_n)$ we denote $\sum a_{i_1} \otimes ... \otimes a_{i_n}$, where the sum is taken over all cyclic permutations $(i_1, \dots , i_n)$ of $(1,2, \dots ,n)$. So for any $a \in HH^4 (R)$ we have $$\Delta(a) (1 \otimes 1) = \Delta(a \Psi_4) \Phi_3 (1\otimes 1) = \sum \limits_{b \not=1} \langle a \Psi_4 (circ(b \otimes x \otimes x \otimes x)), 1 \rangle b^* $$ $$+ \sum \limits_{b \not=1} \langle a \Psi_4 (circ(b \otimes x \otimes y \otimes x)), 1 \rangle b^* y + \sum \limits_{b \not=1} \langle a \Psi_4 (circ(b \otimes x \otimes yx \otimes y)), 1 \rangle b^* +$$ $$\sum \limits_{b \not=1} \langle a \Psi_4 (circ(b \otimes y \otimes y \otimes y)), 1 \rangle b^* (1 +dy +d^2 xyx) + \sum \limits_{b \not=1} \langle a \Psi_4 (circ(b \otimes y \otimes x \otimes y)), 1 \rangle b^* x +$$ $$\sum \limits_{b \not=1} \langle a \Psi_4 (circ(b \otimes y \otimes xy \otimes x)), 1 \rangle b^* + d\sum \limits_{b \not=1} \langle a \Psi_4 (circ(b \otimes y \otimes xyx \otimes y)), 1 \rangle b^* (1 +dy +d^2 xyx).$$ Denote by $\Psi_4(b,i)$ the value of the $b$-th summand in the $i$-th sum above.
Direct computations show that \[ \Psi_4(b, 1) = \begin{cases} d \otimes y^2, & b = y\\ d \otimes y, & b = xy\\ d \otimes yx + d \otimes xy, & b = xyx \\ 1 \otimes 1, & b = (xy)^2 \\ 0, & \text{otherwise} \end{cases} \ \text{ and } \ \Psi_4(b, 2) = \begin{cases} d \otimes y^2, & b = x\\ d \otimes (xy)^2 + d \otimes yx + d^2 \otimes yxy, & b = xy\\ d^2 \otimes (xy)^2, & b = xyx \\ 0, & \text{otherwise,} \end{cases} \] \[ \Psi_4(b, 3) = \begin{cases} d \otimes (xy)^2, & b = x\\ d \otimes y, & b = yx\\ 0, & \text{otherwise} \end{cases} \ \text{ and } \ \Psi_4(b, 4)+ \Psi_4(b, 7)= \begin{cases} d^2 \otimes 1 , & b = y\\ d \otimes x^2, & b \in \{xy, yx\}\\ d \otimes 1, & b = xyx \\ 1 \otimes 1, & b = (xy)^2 \\ 0, & \text{otherwise,} \end{cases} \] \[ \Psi_4(b,5) = \begin{cases} d^2 \otimes (xy)^2, & b = xy\\ d \otimes yx + d^2 \otimes yxy, & b = yxy\\ 0, & \text{otherwise} \end{cases} \ \text{ and } \ \Psi_4(b,6) = \begin{cases} d \otimes yxy, & b = xy\\ 0, & \text{otherwise.} \end{cases} \] The required formulae can now be deduced from the computations above. \end{proof} \begin{remark} It is useful to know the explicit form of the map $\Phi_4$: $$\Phi_4 (1 \otimes 1) = \sum_b 1 \otimes b \otimes x \otimes x \otimes x \otimes b^* + \sum_b 1 \otimes b \otimes x \otimes y \otimes x \otimes yb^*+ $$ $$\sum_b 1 \otimes b \otimes x \otimes yx \otimes y \otimes b^* + \sum_b 1 \otimes b \otimes y \otimes y \otimes y \otimes (1+dy + d^2 xyx)b^* + \sum_b 1 \otimes b \otimes y \otimes x \otimes y \otimes xb^* + $$ $$ \sum_b 1 \otimes b \otimes y \otimes xy \otimes x \otimes b^* + \sum_b 1 \otimes b \otimes y \otimes xyx \otimes y \otimes (d+d^2y + d^3 xyx)b^* +$$ $$d \otimes xyx \otimes x \otimes x \otimes x \otimes xyx+ d \otimes xyx \otimes x \otimes y \otimes x \otimes (xy)^2 + d \otimes xyx \otimes x \otimes yx \otimes y \otimes xyx +$$ $$d \otimes xyx \otimes y \otimes y \otimes y \otimes y^2+d \otimes xyx \otimes y \otimes xy \otimes x \otimes xyx + d^2 \otimes xyx \otimes y \otimes xyx \otimes y \otimes y^2.$$ We denote by $\Phi_4^i$ the $i$-th summand of this expression for $ 1 \leq i \leq 13$ (here we use the order of summands fixed in the formula above). \end{remark} \subsection{Gerstenhaber brackets} \begin{lemma} We have $[q_1,e] =0$ and $[q_2, e] = dp_2e$. \end{lemma} \begin{proof} It is not hard to show that for any $a \in HH^1(R), e \in HH^4(R)$ we have $$[a,e] (1 \otimes 1) = (a \Psi_1 \circ e\Psi_4) \Phi_4 (1\otimes 1) + (e \Psi_4 \circ a \Psi_1) \Phi_4 (1\otimes 1).$$ Now observe that $\Phi_4 (1\otimes 1) = \sum \limits_{b \in B} 1 \otimes b \Phi_3 (1 \otimes 1) b^* + d \otimes xyx \Phi_3 (1 \otimes 1)xyx$. If we apply $d_3^{Bar}$ to $1 \otimes b \Phi_3 (1\otimes 1) b^*$, and then apply $t_3 \Psi_3$ to the resulting formula, then the only non-zero summand will be $b \Phi_3 (1 \otimes 1) b^*$: indeed, $t_3( 1 \cdot \Psi_3(s)) = t_3t_2 (s) = 0$ for any $s$ from the domain of $\Psi_3$, so the required equality holds. Hence we have $$\Psi_4 \Phi_4 = t_3 \Psi_3 d_3^{Bar} \Phi_4 = \sum_b t_3 (b \Psi_3 \Phi_3 )b^*+ dt_3 (xyx \Psi_3 \Phi_3 )xyx,$$ and $\Psi_3 \Phi_3 (1 \otimes 1) = t_2 (xt_1(x \cdot x \cdot 1)) + t_2(yt_1( y \cdot y \cdot 1))(1 + dy +d^2 xyx) = t_2 (x \cdot r_x \cdot 1) = 1 \otimes 1$, so $$(a \Psi_1 \circ e\Psi_4) \Phi_4 (1\otimes 1) = (a \Psi_1) (e \Psi_4 \Phi_4 (1\otimes 1)) = (a \Psi_1) (1) = 0.$$ Now we only need to calculate the second summand from the expression given above for $[a,e] (1 \otimes 1)$.
By definition, the function $e \Psi_4 \circ a \Psi_1$ is equal to the sum of four functions $F_i^a$, $1 \leq i \leq 4$, for any generating $a \in HH^1(R)$. For the calculations one needs to know the values of $a \Psi_1 (b)$ for any $b \in B$: $$q_1 \Psi_1 (b) = \begin{cases} y, & b = x \\ 1 + xy + dy, & b = y \\ x + dxy + y^2 , & b = xy \\ x + dyx + d(xy)^2, & b = yx \\ yxy+ dxyx, & b = xyx \\ xy + yx, & b = yxy \\ xyx, & b = (xy)^2, \end{cases} \quad q_2 \Psi_1 (b) = \begin{cases} 1 + yx, & b = x \\ d xy + x, & b = y \\ y, & b = xy \\ y + x^2 + dxyx, & b = yx \\ xy + yx, & b = xyx \\ xyx, & b = yxy\\ yxy, & b = (xy)^2. \end{cases} $$ It is easy to see that all summands from $\Phi_4 (1\otimes 1)$ have the form $1 \otimes a_1 \otimes \dots \otimes a_5 \otimes a_6$, and if $a_4a_5 \in B$, then these summands give zero after applying $F_1^a$ or $F_2^a$ to them. \par 1) Denote by $f^a_i(b)$ the value of $(e \Psi_4 \circ_1 q_a\Psi_1) (1 \otimes b \otimes a_1 \otimes a_2 \otimes a_3 \otimes a_4)$, where $1 \otimes b \otimes a_1 \otimes a_2 \otimes a_3 \otimes a_4$ is a summand in $\Phi_4^i$. So $$f^1_1 (b) = \begin{cases} d xy, & b = xy\\ dyx, & b= yx\\ 0, & \text{otherwise} \end{cases} \text{ and } \quad f^2_1 (b) = 0. $$ Now observe that $$f^2_{8} (xyx) = d \cdot e t_3 (a \Psi_1 (xyx) \otimes 1) xyx = 0 \text{ and } f^1_{8} (xyx) =0,$$ so $$F_1^{q_1} = d(xy+yx) \text{ and } F_1^{q_2} =0.$$ 2) By $g^a_i(b)$ we denote the value of $(e \Psi_4 \circ_2 q_a \Psi_1) (1 \otimes b \otimes a_1 \otimes a_2 \otimes a_3 \otimes a_4)$, where $1 \otimes b \otimes a_1 \otimes a_2 \otimes a_3 \otimes a_4$ is a summand of $\Phi_4^i$. We obtain $$g_1^1 (b) = 0, \quad g_1^2 (b) =\begin{cases} y, & b = xyx \\ 0, & \text{otherwise,} \end{cases} \quad g_4^1 (b) = \begin{cases} x , & b = yxy\\ 0, & \text{otherwise,} \end{cases}$$ and so $g^2_{8} (xyx) = dxyx$, $g^1_{8} (xyx) =0$. It is not hard to show that $\sum_b g_4^2 (b) = d \sum_b g_4^1(b) + \sum_b et_3 (b t_2 (x\otimes r_y \otimes 1))(1+dy+d^2xyx)b^*$. Now observe that the summands of the second sum give zero for all $b \in B$. Finally, one has $g_{11}^a(xyx) = 0$ for any generating $a \in HH^1(R)$. To sum up, we conclude that $$F_2^{q_1} = x \text{ and } F_2^{q_2} = y + dx + d xyx.$$ 3) Denote by $h^a_i(b)$ the value of $(e \Psi_4 \circ_3 q_a \Psi_1) (1 \otimes b \otimes a_1 \otimes a_2 \otimes a_3 \otimes a_4)$, where $1 \otimes b \otimes a_1 \otimes a_2 \otimes a_3 \otimes a_4$ is a summand of $\Phi_4^i$. Then we have $h_1^2(b) = 0$ and $h_1^1(b) = 0$ for any $ b \in B$ by the definitions of $t_i$. Hence $h_{8}^1(xyx) = h_{8}^2(xyx) = 0$. So, $$h_2^a(b) = \begin{cases} y, & b = (xy)^2 \text{ and } a= q_2\\ 0, & \text{otherwise} \end{cases} \text{ and } h_3^a(b) = 0 \text{ for any } a \in HH^1(R),$$ hence $h_3^a(b) = h_{13}^a(b) = 0$ for any $a$. It is easy to see that $h_4^1(b) = 0$ and $h_4^2(b) = 0$ for any $b$ because $t_1(x \otimes y \otimes 1) = 0$. Analogously, we have $ h_{11}^a (b) = 0$ for any $a$. Moreover, since $t_2(y t_1 (a\Psi_1 (x) \otimes y \otimes 1)) = 0$ we have $h_5^a (b) = 0$. Furthermore, $$h_7^1(b) = \begin{cases} d xy, & b = xyx\\ x , & b = (xy)^2\\ 0, & \text{ otherwise} \end{cases} \text{ and } h_7^2 (b) = 0.$$ It is easy to see that $h^a_{12} (xyx) = 0$ and that the remaining $h^a_i(b)$ vanish for any $b \in B$ and any $a \in HH^1(R)$; in particular $ h_{13}^a(xyx) = 0$. Hence $$F_3^{q_1} = x + dxy \text{ and } F_3^{q_2} = y.$$ 4) It remains to describe $F_4^{q_a}$ for $a \in \{1, 2\}$.
Denote by $k^a_i(b)$ the value of $(e \Psi_4 \circ_4 q_a \Psi_1) (1 \otimes b \otimes a_1 \otimes a_2 \otimes a_3 \otimes a_4)$, where $1 \otimes b \otimes a_1 \otimes a_2 \otimes a_3 \otimes a_4$ is a summand of $\Phi_4^i$. For the calculation we need the following formulae: $$C(a \Psi_1 (x)) = \begin{cases} 1 \otimes y \otimes 1, & a = q_1\\ y \otimes x \otimes 1 + 1 \otimes y \otimes x, & a = q_2, \end{cases}$$ $$C(a \Psi_1 (y)) = \begin{cases} 1 \otimes x \otimes y + x \otimes y \otimes 1 + d\otimes y \otimes 1, & a = q_1\\ d \otimes x \otimes y + dx \otimes y \otimes 1 + 1 \otimes x \otimes 1, & a = q_2. \end{cases}$$ It follows from the definitions of $t_i$ and $\Phi_4$ that $k_7^1((xy)^2) = dxy$, $k^2_7((xy)^2)= dx$ and $k_i^j(b) = 0$ for any $(i,b) \not= (7,(xy)^2)$ and any $j \in \{1,2\}$. So we have \[F_4^{q_1} = dxy \text{ and } F_4^{q_2} = \sum \limits_{b,i} k_i^2(b) = dx. \] It remains to compute the Gerstenhaber brackets: \[[q_1,e] = (q_1 \Psi_1 \circ e\Psi_4) \Phi_4 (1\otimes 1) + (e \Psi_4 \circ q_1 \Psi_1) \Phi_4 (1\otimes 1) = \sum_{i=1}^4 F_i^{q_1} = dxy+dyx \equiv 0,\] \[[q_2,e] =(q_2 \Psi_1 \circ e\Psi_4) \Phi_4 (1\otimes 1) + (e \Psi_4 \circ q_2 \Psi_1) \Phi_4 (1\otimes 1) = \sum_{i=1}^4 F_i^{q_2} = dxyx.\] \end{proof} \begin{lemma} $[v,e] = 0$ for any $v \in \{w_1, w_2, w_3\}$. \end{lemma} \begin{proof} For any $v \in HH^2(R)$ and any $e \in HH^4(R)$ we have $$[v,e] (1 \otimes a \otimes 1) = \big( (v \Psi_2) \circ (e\Psi_4) \big) \Phi_5 (1 \otimes a \otimes 1) + \big( (e\Psi_4)\circ (v \Psi_2) \big) \Phi_5 (1 \otimes a \otimes 1).$$ We now need to calculate $\big( (v \Psi_2) \circ (e\Psi_4) \big) \Phi_5( 1 \otimes a \otimes 1)$: \[ \big( (v \Psi_2) \circ (e\Psi_4) \big) \Phi_5( 1 \otimes a \otimes 1) = \sum_{i=1}^2 \big( (v \Psi_2) \circ_i (e\Psi_4) \big) \Phi_5( 1 \otimes a \otimes 1), \] and we denote the summands of this sum by $S_1^v$ and $S_2^v$ respectively. It is easy to see that $S_2^v = 0$ for any $v \in \{ w_1, w_2, w_3\}$: indeed we have \[ S_2 (a_1 \otimes \dots \otimes a_5 \otimes a_6) = v t_1 (a_1 C(et_3 (a_2 t_2 (a_3 t_1( a_4 C(a_5) ))))) \cdot a_6, \] and this formula equals zero on all summands of the form $1 \otimes a_1 \dots a_5 \otimes a_6$ from the definition of $\Phi_5$ because $C(1) = 0$, while $et_3(b \otimes 1) =1$ for $ b = (xy)^2 $ and equals zero otherwise. Let us prove that $S_1^v$ gives zero on all summands of $\Phi_5 (1\otimes a \otimes 1)$ except possibly the first, fourth, seventh and eleventh summands: \[ S_1^{v} (a \otimes xyx \otimes x \otimes x \otimes x \otimes y) = \begin{cases} dyxy, & a = x \text{ and } v = w_1 \\ d(xy)^2, & a = x \text{ and } v = w_3 \\ 0, & \text{otherwise,} \end{cases} \] \[ S_1^{v} \big(a \otimes (xy)^2 \otimes y \otimes y \otimes y \otimes (1 +dy + d^2 xyx) \big) = \begin{cases} dy + dxyx, & a = y \text{ and } v = w_2 \\ dx, & a = y \text{ and } v = w_3 \\ 0, & \text{otherwise,} \end{cases} \] \[ S_1^{v} \big(a \otimes (xy)^2 \otimes y \otimes xyx \otimes y \otimes (d +d^2y + d^3 xyx) \big) = \begin{cases} dy + dxyx, & a = y \text{ and } v = w_2 \\ dx, & a = y \text{ and } v = w_3 \\ 0, & \text{otherwise,} \end{cases} \] and $S_1^v$ gives zero on all other combinations of the elements $a_i$.
So we have \[ \big( (v \Psi_2) \circ (e\Psi_4) \big) \Phi_5( 1 \otimes a \otimes 1) = \begin{cases} dyxy, & a = x \text{ and } v = w_1 \\ d(xy)^2, & a = x \text{ and } v = w_3 \\ 0, & \text{otherwise.} \end{cases} \] Now we need to compute $\big( (e\Psi_4)\circ (v \Psi_2) \big) \Phi_5 (1 \otimes a \otimes 1) = \sum \limits_{i=1}^4 \big( (e\Psi_4)\circ_i (v \Psi_2) \big) \Phi_5 (1 \otimes a \otimes 1)$. We denote by $F^v_i$ the summands of this sum for $1 \leq i \leq 4$. It is easy to see that $F^v_1$, $F^v_2$ and $F_4^v$ may be non-zero only for combinations of elements from the first, fourth, seventh and eleventh summands of $\Phi_5 (1 \otimes a \otimes 1)$, because any other summand has the form $a_1 \dots a_5 \otimes a_6$, where $t_1 (a_4 C(a_5) ) = 0$. \par 1) Consider the function $F_1^v$. Obviously $t_2 \big(y t_1 (y \otimes y \otimes 1) \big) = 0$, so $F_1^v$ equals zero on the fourth and eleventh summands. It remains to show that \[ et_3 \Big(vt_1 \big(aC(b) \big) t_2 \big(x t_1 (x \otimes x \otimes 1)\big) \Big) b^* = et_3\big(vt_1 (aC(b)) \otimes 1 \big) b^* \] and for the first summand \[ F_1^v (a \otimes b \otimes x \otimes x \otimes x \otimes b^*) = et_3\big(vt_1 (aC(b)) \otimes 1 \big) b^* = \] \[ = \begin{cases} xy, & a =x, \text{ } b = xy \text{ and } v = w_1 \\ dxy, & a =x, \text{ } b = xy \text{ and } v = w_3 \\ dyx, & a =y, \text{ } b = yx \text{ and } v = w_1 \\ yx, & a =y, \text{ } b = yx \text{ and } v = w_2 \\ y, & a = x, \text{ } b = xyx \text{ and } v = w_2 \\ x, & a = y, \text{ } b = yxy \text{ and } v = w_1 \\ 1, & b = (xy)^2 \text{ and } (a,v) = (x, w_1) \text{ or } (a,v) = (y, w_2)\\ 0, & \text{otherwise.} \end{cases} \] So for the eighth summand we have \[ d F_1^v (a \otimes xyx \otimes x \otimes x \otimes x \otimes xyx) = \begin{cases} dxyx, & a = x \text{ and } v = w_2 \\ 0, & \text{otherwise.} \end{cases} \] From the definitions of $t_i$ one checks that any other combination $a_1 \otimes \dots \otimes a_6$ from the summands of $\Phi_5 (1 \otimes a \otimes 1)$ gives zero. So we have \[ F_1^{w_1} = \begin{cases} 1+xy, & a =x \\ x + dyx, & a = y, \end{cases} \quad F_1^{w_2} = \begin{cases} y+dxyx, & a = x \\ 1+yx, & a = y, \end{cases} \] \[ F_1^{w_3} = \begin{cases} dxy, & a = x \\ 0, & a = y \end{cases}\]
2) In the case of $F_2^v$, for the first summand we have \[ F_2^{v}(1 \otimes a \otimes b \otimes x \otimes x \otimes x \otimes b^*) = \begin{cases} et_3\big(at_2 (yx \otimes r_x \otimes 1 + (xy)^2 \otimes r_x \otimes 1)\big)yx, & b = yx \text{ and } v = w_1 \\ et_3\big(at_2 (y^2 \otimes r_x \otimes 1)\big)yx, & b = yx \text{ and } v = w_3 \\ et_3\big(at_2 (xyx \otimes r_x \otimes 1)\big)y, & b = xyx \text{ and } v = w_1 \\ et_3\big(at_2 ((xy)^2 \otimes r_x \otimes 1)\big)y, & b = xyx \text{ and } v = w_2 \\ et_3\Big(at_2 \big((yx + xy + 2d yxy)\otimes r_x \otimes 1\big)\Big)x, & b = yxy \text{ and } v = w_3 \\ et_3\big(at_2 ((xy)^2\otimes r_x \otimes 1)\big), & b = (xy)^2 \text{ and } v = w_1 \\ et_3\big(at_2 (xyx\otimes r_x \otimes 1)\big), & b = (xy)^2 \text{ and } v = w_3 \\ 0, & \text{otherwise,} \end{cases} = \] \[ = \begin{cases} yx, & a= x, \text{ } b = yx \text{ and } v = w_1 \\ d(xy)^2 + dyx, & a= x, \text{ } b = yx \text{ and } v = w_3 \\ dyxy, & a= x, \text{ } b = xyx \text{ and } v = w_1 \\ y, & a= x, \text{ } b = xyx \text{ and } v = w_2 \\ dyx, &a= x, \text{ } b = yxy \text{ and } v = w_3 \\ 1, & a= x, \text{ } b = (xy)^2 \text{ and } v = w_1 \\ dyx, & a= x, \text{ } b = (xy)^2 \text{ and } v = w_3 \\ 0, & \text{otherwise.} \end{cases} \] So for the eighth summand we obtain \[ d F_2^v(1 \otimes a \otimes xyx \otimes x \otimes x \otimes x \otimes xyx) = \begin{cases} dxyx, & a = x \text{ and } v = w_2\\ 0, & \text{otherwise.} \end{cases} \] Now in the case of the fourth and eleventh summands $$F_2^v\big(a \otimes b \otimes y \otimes y \otimes y \otimes (1+dy+d^2xyx)b^*\big) = et_3\Big( at_2 \big( vt_1(b \otimes y \otimes 1) \otimes r_y \otimes 1\big) \Big)(1+dy+d^2xyx)b^*,$$ so \[F_2^v\big(a \otimes b \otimes y \otimes y \otimes y \otimes 1 \big) =\] \[= \begin{cases} et_3\big(at_2(xyx \otimes r_y \otimes 1 + d(xy)^2 \otimes r_y \otimes 1) \big), & b = xy \text{ and } v = w_1\\ et_3\big(at_2(xy \otimes r_y \otimes 1 + (xy)^2 \otimes r_y \otimes 1) \big), & b = xy \text{ and } v = w_2\\ et_3\big(at_2((xy)^2 \otimes r_y \otimes 1) \big), & (b, v) = (yxy, w_1) \text{ or } (b, v) = ((xy)^2, w_2) \\ \end{cases} \] and $F_2^v$ gives zero for any other combination of elements after right multiplication by $(1+dy+d^2xyx)b^*$. So we only need to check that \[F_2^v\big(a \otimes b \otimes y \otimes y \otimes y \otimes (1 +dy + d^2xyx)b^* \big) = \begin{cases} dxy, & a=y, \text{ } b = xy \text{ and } v = w_1\\ xy, & a=y, \text{ } b = xy \text{ and } v = w_2\\ x, & a=y, \text{ } b = yxy \text{ and } v = w_1 \\ 1, & a=y, \text{ } b = (xy)^2 \text{ and } v = w_2 \\ 0, & \text{otherwise.} \end{cases} \] So on the eleventh summand $F_2^v$ gives zero and hence \[ F_2^{w_1} = \begin{cases} 1+yx + dyxy, & a =x \\ x + dxy, & a = y, \end{cases} \quad F_2^{w_2} = \begin{cases} y+dxyx, & a = x \\ 1+xy, & a = y, \end{cases}\] \[ F_2^{w_3} = \begin{cases} dyx + d(xy)^2, & a = x \\ 0, & a = y. \end{cases}\] 3) In order to calculate $F_3^v$ we need to note that if $t_1 \big(a_3 C(a_4) \big)=0$ for a summand of the form $a_1 \otimes \dots \otimes a_5 \otimes a_6$, then this summand gives zero after applying $F_3^v$.
If $(a_3,a_4,a_5) = (x,x,x)$, then \[ t_1 \big(vt_1(x \otimes x \otimes 1) \otimes x \otimes 1 \big) = \begin{cases} 1\otimes r_x \otimes 1, & v = w_1 \\ 0, & v \not= w_1, \end{cases}\] and so \[ F_3^{v} (1 \otimes a \otimes b \otimes x \otimes x \otimes x \otimes 1) = \begin{cases} 1, & a = x, \text{ } b = (xy)^2 \text{ and } v = w_1, \\ dyxy, & a = x, \text{ } b \in \{xy, xyx\} \text{ and } v = w_1, \\ 0, & \text{otherwise.} \end{cases} \] Now it is easy to see that $t_1(v t_1 (y \otimes y \otimes 1) \otimes y \otimes 1) = t_1(v (1 \otimes r_y \otimes 1) \otimes y \otimes 1)$, so $F_3^{w_1}=0$ on the fourth summand and \[ F_3^{w_2} (a \otimes b \otimes y \otimes y \otimes y \otimes 1 ) = \begin{cases} dyxy, & a = y \text{ and } b = yx \\ 1+dy, & a = y \text{ and } b = (xy)^2. \end{cases} \] So $F_3^{w_2} = 1$ on the fourth summand in the case $a = y$, and it gives zero otherwise. Observe that \begin{itemize} \item $F_3^{w_3}\big(a \otimes x \otimes y \otimes y \otimes y \otimes 1 \big) = et_3\big(at_2(dx^2 \otimes r_y \otimes 1 + dxyx \otimes r_y \otimes x + dx \otimes r_x \otimes yx + d^2 x \otimes r_x \otimes yxy + d^2 xyx \otimes r_y \otimes xy + d^2 xy \otimes r_x \otimes (xy)^2)\big)$, \item $F_3^{w_3}\big(a \otimes y \otimes y \otimes y \otimes y \otimes 1 \big) = et_3\big(at_2(dyx \otimes r_y \otimes 1 + dy \otimes r_x \otimes yx + d^2 y \otimes r_x \otimes yxy + d^2 y^2 \otimes r_x \otimes (xy)^2)\big)$, \item $F_3^{w_3}\big(a \otimes xy \otimes y \otimes y \otimes y \otimes 1 \big) = et_3\big(at_2(dxyx \otimes r_y \otimes 1 + dxy \otimes r_x \otimes yx + d^2 xy \otimes r_x \otimes yxy)\big)$, \item $F_3^{w_3}\big(a \otimes yx \otimes y \otimes y \otimes y \otimes 1 \big) = et_3\big(at_2(d(xy)^2 \otimes r_y \otimes x + dyx \otimes r_x \otimes yx + d^2 yx \otimes r_x \otimes yxy + d^2 (yx)^2 \otimes r_y \otimes xy + d^2 yxy \otimes r_x \otimes (xy)^2)\big)$, \item $F_3^{w_3}\big(a \otimes xyx \otimes y \otimes y \otimes y \otimes 1 \big) = et_3\big(at_2(dxyx \otimes r_x \otimes yx + d^2 xyx \otimes r_x \otimes yxy + d^2 (xy)^2 \otimes r_x \otimes (xy)^2)\big)$, \item $F_3^{w_3}\big(a \otimes yxy \otimes y \otimes y \otimes y \otimes 1 \big) = et_3\big(at_2(d(xy)^2 \otimes r_y \otimes 1 + dyxy \otimes r_x \otimes yx + d^2 yxy \otimes r_x \otimes yxy)\big)$, \item $F_3^{w_3}\big(a \otimes (xy)^2 \otimes y \otimes y \otimes y \otimes 1 \big) = et_3\big(at_2(d(xy)^2 \otimes r_x \otimes yx + d^2 (xy)^2 \otimes r_x \otimes yxy)\big)$, \end{itemize} and it is now not hard to prove that $F_3^{w_3}$ gives zero for any $b \in \{x, y, xy, xyx\}$ after right multiplication by $(1+dy+d^2xyx)b^*$. So we have $$F_3^{w_3}\big(a \otimes b \otimes y \otimes y \otimes y \otimes (1+dy+d^2 xyx)b^* \big) = \begin{cases} dxyx + d^2(xy)^2, & a = y \text{ and } b = yx\\ dx, & a = y \text{ and } b = yxy \\ dyx, & a = x \text{ and } b = (xy)^2. \end{cases}$$ Now consider the seventh summand of $\Phi_5 (1 \otimes a \otimes 1)$. Obviously $$F_3^v(a \otimes b \otimes y \otimes xyx \otimes y \otimes 1) = $$ $$=et_3\bigg(at_2\Big( bt_1 \big(v(y \otimes r_y \otimes 1 + 1 \otimes r_y \otimes y + dy \otimes r_y \otimes y + d\otimes r_y \otimes xyx + d^2 y\otimes r_y \otimes xyx) \otimes y \otimes 1\big) \Big) \bigg),$$ and this is non-zero only for $v = w_3$.
In this case $d\, F_3^v(a \otimes b \otimes y \otimes xyx \otimes y \otimes 1) = F_3^v(a \otimes b \otimes y \otimes y \otimes y \otimes 1) $, so \[F_3^{w_3}\big(a \otimes b \otimes y \otimes y \otimes y \otimes (d+d^2y+d^3 xyx)b^* \big) = F_3^{w_3}\big(a \otimes b \otimes y \otimes y \otimes y \otimes (1+dy+d^2 xyx)b^* \big) \] and $F_3^v = 0$ on the eighth and eleventh summands by the computations given above. Hence we have \[ F_3^{w_1} = \begin{cases} 1, & a= x \\ 0, & a = y,\end{cases} \quad F_3^{w_2} = \begin{cases} 0, & a= x \\ 1, & a = y,\end{cases} \text{ and } F_3^{w_3} = 0. \] 4) Finally we need to compute $F_4^v$. Since $v t_1 (x \otimes x \otimes 1) = v( 1 \otimes r_x \otimes 1) \not=0$ only for $v = w_1$, we have $F_4^v = 0$ on the first summand for any $v \in \{w_2, w_3\}$. In the case $v = w_1$ we obtain \[ \sum \limits_{b \in B^*}F_4^{w_1}(a \otimes b \otimes x \otimes x \otimes x \otimes b^*) = et_3 \Big(a t_2 \big(b t_1 (x \otimes x \otimes 1) \big) \Big) = \begin{cases} 1, & a =x \\ 0, & a = y \end{cases} \] as described in the cases above. Now it is obvious that $F_4^v$ gives zero on the eighth summand and \[\sum \limits_{b \in B^*} F_4^v \big(a \otimes b \otimes y \otimes y \otimes y \otimes (1+dy+d^2xyx)b^* \big) = \] \[ = \begin{cases} 0, & v = w_1 \\ \sum \limits_{b \in B^*} et_3\Big(at_2\big(bt_1(y \otimes y\otimes 1)\big)\Big)(1+ dy+d^2xyx)b^*, & v = w_2\\ \sum \limits_{b \in B^*} et_3\Big(at_2\big(bt_1(y \otimes x\otimes 1 + dy \otimes x \otimes y + dyx \otimes y \otimes 1)\big)\Big)(1+ dy+d^2xyx)b^*, & v = w_3\\ \end{cases} \] \[ = \begin{cases} 1, & v = w_2 \text{ and } a = y\\ 0, & \text{otherwise.}\\ \end{cases} \] So $F_4^v$ gives zero on the eleventh summand and hence \[ F_4^{w_1} = \begin{cases} 1, & a= x \\ 0, & a = y,\end{cases} \quad F_4^{w_2} = \begin{cases} 0, & a= x \\ 1, & a = y, \end{cases} \text{ and } F_4^{w_3} = 0. \] According to the computations given above, we have proved that \[\sum \limits_{i=1}^4 F_i^v = \begin{cases} d yxy, & a = x \text{ and } v = w_1\\ d(xy)^2, & a = x \text{ and } v = w_3 \\ 0, & \text{otherwise.} \end{cases} \] Finally, for any $v \in \{w_1, w_2,w_3\}$ we have \[ [v,e] =\sum \limits_{i=1}^2 S_i^v + \sum \limits_{i=1}^4 F_i^v = 0. \] \end{proof} \begin{corollary} The following formulae hold: \begin{enumerate} \item $\Delta(q_1e) = dp_1e$ and $\Delta(q_2e) = dp_2e$, \item $\Delta(ve) = 0$ for any $v \in \{w_1, w_2, w_3\}$. \end{enumerate} \end{corollary} \begin{proof} First observe that $\Delta(q_1)=dp_1$, $\Delta(q_2) = \Delta(v)=0$ for any $v \in \{w_1,w_2,w_3\}$, and $\Delta(e)= d^3p_1q_1w_1$ according to Lemmas 2, 3 and 5. So by the Tradler equation $\Delta(ab) = \Delta(a)b + a \Delta(b) + [a,b]$ we have \begin{align*} &\Delta(q_1e) = dp_1e + d^3p_1q_1^2w_1 + 0 = dp_1e + d^3 q_1q_2w_2 = dp_1e,\\ &\Delta(q_2e) = 0 + d^3p_1q_1q_2w_1 + dp_2e = dp_2e,\\ &\Delta(ve) = 0 \cdot e + d^3 p_1q_1 v w_1 + 0 = 0 \end{align*} for any $v \in \{w_1,w_2,w_3\}$. \end{proof} \section{Main Theorem} Let $K$ be an algebraically closed field of characteristic 2, let $d \in K$ be a scalar and let $R(2,0,d)$ be the algebra described in item~3.1. The $BV$-structure on the Hochschild cohomology algebra $HH^{*}(R)$ can then be described in terms of the map $\Delta: HH^*(R) \longrightarrow HH^{*}(R)$ of degree $-1$.
\begin{theorem} The map $\Delta$ is completely defined by the following equalities: \begin{itemize} \item of degree 1: $\begin{cases} \Delta(q_1) = dp_1, \ \Delta(p_1q_1) = p_2+dp_1, \ \Delta(p_1q_2) = dp_2+p_3, \\ \Delta(p_2q_1) = p_3+dp_2, \ \Delta(p_3q_2) = p_2, \Delta(p_4q_1) = p_2 , \\ \Delta(p_3q_1) = \Delta(p_2q_2) = p_1, \ \Delta(p_4q_2) = p_3, \end{cases}$ \item of degree 3: $\begin{cases} \Delta(q_1w_1) = \Delta(q_2w_2) = w_3, \ \Delta(q_2w_1) = q_2^2 + w_2, \\ \Delta(q_1w_2) = \Delta(q_2w_3) = q_1^2 + w_1 + d(p_1+1)w_2, \\ \Delta(q_1w_3) = q_2^2 + w_2 + dw_3, \end{cases}$ \item of degree 4: $ \Delta(e) = \Delta(p_4e) = d^3 p_1q_1w_1,$ \item of degree 5: $ \Delta(q_1e) = dp_1e, \ \Delta(q_2e) = dp_2e$, \item $\Delta(ab) = 0$ for any other combinations of generating elements $a, b \in \mathcal{X} \cup \{1\} $. \end{itemize} \end{theorem} \begin{proof} For the map $\Delta$ defined above, the equalities in degrees 1, 2, 3 and 4 hold by Lemmas 2, 3, 4 and 5 respectively, and the equalities in higher degrees were given in Corollary 5. \end{proof}
\section{Introduction}\label{s:Introduction} Innovative product requirements are evolving rapidly, reflecting the technological advances in many engineering disciplines. The accelerating nature of this change is accompanied by the growth in product performance, complexity, and cost. To meet emerging requirements, faster design processes are thus required to: thoroughly and accurately explore design spaces of increased size, leverage potentially complex physical interactions for performance benefit, and avoid deleterious interactions that may greatly increase product cost through late defect discovery \cite{beran2020comparison}. Nowadays, there are design benefits to coupling more disciplines at higher levels of fidelity earlier in the development process. But there is no mathematical framework to determine which disciplines, level of coupling, or level of fidelity is required to capture the physics most critical to a particular system’s design, where the design space data is best collected, or how to make the best possible design decision with constrained computing resources. Currently, these decisions are based solely on engineering experience. This approach works reasonably well for systems that are similar to previous designs, but can fail for unique and innovative vehicles and technologies. In this regard, one of the long-term challenges of multidisciplinary design optimization (MDO) is the efficient increase of modeling fidelity, when it is needed, to capture the critical physics that constrain or enable particular product concepts. Relying on low-fidelity models for the analysis throughout the entire design space may lead to designs that are infeasible, or significantly sub-optimal, when the physics is not sufficiently modeled or resolved. Simply replacing these models with higher fidelity models during optimization is often not a practical strategy, because of the higher computational cost associated with these more informative techniques. Multifidelity methods offer the conceptual framework to efficiently optimize products by judiciously using a limited number of high-fidelity analyses while leveraging the information provided by low-fidelity methods. Multifidelity approaches are considered here to fall into a larger class of methods that manipulate a set of information sources to accelerate the computational task. These information sources quantify the system response using computational approaches (i.e., a mathematical description and the concomitant numerical analysis) and/or non-computational approaches (e.g., physical experiments, analytical solutions, and expert analysis). Despite the development of quite a large number of multifidelity methods, their capabilities are still under discussion and their potential is still under-explored \cite{peherstorfer2018survey,giselle2019issues}. This motivates the interest in benchmark problems that could support the comparative and rigorous assessment of these methods. Beran et al. \cite{beran2020comparison} propose to classify use cases and test problems into three classes: L1 problems, computationally cheap analytical functions with exact solutions; L2 problems, simplified engineering application problems that can be executed with a reduced computational expense; and L3 problems, more complex engineering use cases, usually including multiphysics couplings.
The NATO AVT-331 research task group on ``Goal-Driven, Multifidelity Approaches for Military Vehicle System-Level Design,'' has been conducting a coordinated activity to collect and study benchmarks for these three classes. This paper provides an overview of the L1 benchmarks, which are analytical problems with no explicit resemblance to actual engineering problems but which support cross-domain investigations. A large number of L1 benchmark problems have been proposed in the literature, mostly in conjunction with the presentation of a novel multifidelity method \cite{2022-CMAME-Guo_etal,2021-KBS-Liu_etal,2021-JMLR-Moss_etal,2021-CMAME-Zhang_etal,2020-ASOC-Li_etal,2020-SMO-Yi_etal,2019-IJCFD-Serani_etal,2019-IISE-Song_etal,2018-IEEE-Wang_etal,2018-AAAI-Hoag_Doppa,2018-JMeST-Li_etal,2017-SMO-Durantin_etal,2017-AIAA-Cai_etal,2016-JCS-Liu_etal,rumpfkeil2020-AIAA,grassi2021resource,park2017remarks,bryson2018-AIAAJ,bryson2016-AIAA}. However, a comprehensive framework of computationally efficient benchmarks is not yet available. The objective of this work is to propose and discuss a suite of analytical benchmark problems specifically formulated and selected to stress-test and assess the capabilities of a broad spectrum of multifidelity methods. The framework is intended to provide a set of standard problems, recommended experimental setups and performance assessment metrics to support the rigorous test and comparison of different computational methods. The benchmarks are selected to exemplify mathematical characteristics and behaviors that are often encountered in simulation-based optimization problems and that can challenge the successful search and identification of the optimal solutions for real-world engineering applications. Those challenges include: (i) addressing the curse of dimensionality \cite{bellman1957dynamic} and the scalability associated with multifidelity methods; (ii) handling localized, multimodal, and discontinuous behaviors of the objective functions; and (iii) handling the possible presence of noise in the objective functions. The benchmarks are designed and selected to be of simple implementation while making it possible to isolate and investigate different mathematical characteristics to gain insights about the performance of different multifidelity approaches to modeling, design, and optimization. The selected test set is composed of: the Forrester function (continuous and discontinuous), the Rosenbrock function, the Rastrigin function (shifted and rotated), the Heterogeneous function, a coupled spring-mass system, and the Paciorek function (affected by noise). The suite of analytical L1 benchmarks is designed to assess weaknesses and strengths of multifidelity methods in the face of all these mathematical characteristics. This paper also presents the metrics to compute and compare the global and optimization accuracy of the methods. Global accuracy metrics provide a measure of the ability to approximate the highest fidelity function, also considered the ground-truth source of information. The optimization accuracy is a goal-oriented metric that measures the efficiency and effectiveness of the method in searching and finding the global optimum. The remainder of the paper is organized as follows. Section~\ref{sec:problems} illustrates the individual benchmark problems including their formulations and their distinguishing mathematical features.
Section~\ref{sec:setup} presents recommendations on the setup of the benchmark experiments for a fair and meaningful comparison of the methods. Section~\ref{sec:metrics} discusses the different metrics and criteria to assess and compare the performance of multifidelity modelling and optimization strategies. Finally, concluding remarks are discussed in Section~\ref{sec:conclusion}. \section{Analytical Benchmarks for Multifidelity Optimization} \label{sec:problems} The proposed analytical benchmarks exemplify potential objective functions, representative of those arising in system-level design of complex industrial/military applications, to be addressed with goal-driven multifidelity methods. Specifically, we will consider a box-constrained optimization problem in the form: \begin{equation} \label{e:OptimizationProblem} \min_{{\bf x} \in \mathcal{A}} f({\bf x}), \qquad \mathrm{with} \qquad \mathbf{l}\leq\mathbf{x}\leq\mathbf{u}, \end{equation} where $\mathbf{x} \in {\mathbb R}^D$ is a design point in the feasible domain $\mathcal{A}$ bounded by $\bf l$ (lower bound) and $\bf u$ (upper bound), $\{x_k\}$ are the elements of $\mathbf{x}$ ($k$ is an integer satisfying $1\leq k\leq D$ or $k\in[1,D]$), $D$ is the dimensionality of the parameter space, $f(\mathbf{x}) \in {\mathbb R}$ is the objective function, and $\mathbf{x}^\star$ is the optimum design point satisfying: \begin{equation} \label{e:DesignPoint} \mathbf{x}^\star={\underset{\mathbf{x} \in \mathcal{A}}{\rm argmin}}f(\mathbf{x}), \end{equation} where $f^{\star}\equiv f(\mathbf{x}^\star)$. Within the multifidelity setting of the benchmarks, the $f({\bf x})$ to be minimized is the highest-fidelity function $f_1({\bf x})$, while all the other possible $L$ representations are considered cheaper-to-evaluate approximations of the objective function, thus providing a fidelity spectrum from $f_1(\mathbf{x})$ down to $f_L(\mathbf{x})$, where the latter is the lowest fidelity level available. \begin{table}[!b] \caption{Analytical benchmarks main features} \footnotesize \centering \begin{tabular}{cp{2.5cm}p{2.5cm}p{2.5cm}p{2.5cm}p{1.5cm}} \toprule \textbf{ID} & {\bf Name} & {\bf Behaviors} & {\bf Scalability} & {\bf Discrepancy} & {\bf Noise}\\ \midrule MF1 & Forrester & {Local / \newline (Dis)continuous} & - & (non)linear & no\\ MF2 & Rosenbrock & Local & Parametric & nonlinear & no\\ MF3 & {Shifted-rotated \newline Rastrigin} & Multi-modal & Parametric / \newline Fidelity & nonlinear & no \\ MF4 & Heterogeneous & Local / \newline Multi-modal & Parametric & nonlinear & no \\ MF5 & Spring-Mass \newline system & Multi-modal & Parametric / \newline Fidelity & nonlinear & no \\ MF6 & Paciorek & Multi-modal & Fidelity & nonlinear & yes\\ \bottomrule \end{tabular} \label{t:Benchmarks} \end{table} The multifidelity benchmark problems are selected to capture fundamental mathematical characteristics and properties which also mimic real-world engineering problems. The distinguishing mathematical features of the selected benchmark problems can be listed as: behaviors, scalability, discrepancy type, and noise. Function behaviors include multi-modality, discontinuities, and atypical local behaviors, which cannot be neglected a priori, especially for real-world problems where it could be important to represent the whole variable domain. In particular, multi-modality and discontinuities are challenging from both a modelling and optimization viewpoint.
Scalability takes into consideration both the function parameterization and the fidelity spectrum. The former is one of the important criteria for a comprehensive assessment of the performance of multifidelity methods, since it enables representing the same parametric function with different input dimensions, whereas the latter is useful to demonstrate how the modelling process can be improved depending on the fidelity levels available. This last point is particularly relevant because of its relation to the discrepancy type, which describes the relation among fidelities. In general, a linear discrepancy is simpler to model than a nonlinear one. For this reason the discrepancy type allows for a deeper assessment of the multifidelity methods, since the number of fidelities available has to be correlated with the associated discrepancy type. Finally, in real-world engineering problems noise may exist; it is undesired but inescapable in the overall response of a system, which may exhibit abrupt changes within the solution domain. It is difficult or even impossible to distinguish the individual impact of the noise in the overall response and eliminate it. Thus, the user has to deal with a function with some embedded noise, and it is important to assess the ability of multifidelity methods to model a noisy function. Considering all these mathematical characteristics over the benchmark problems will allow evaluating the performance metrics to assess the strengths and weaknesses of the multifidelity methods employed. The mathematical characteristics which are considered in this work are briefly summarized in Tab. \ref{t:Benchmarks}. The following subsections present each benchmark formulation in turn. \subsection{Forrester function} The proposed Forrester \cite{forrester2007-PRSA} multifidelity benchmark (MF1.1) is a well-known one-dimensional benchmark for multifidelity methods, described by the following equations (from the highest fidelity level, $f_1$, to the lowest, $f_4$) and shown in Fig.~\ref{fig:forrester}. \begin{align} f_1(x) &= \left(6x-2\right)^2\sin(12x-4) \\ f_2(x) &= \left(5.5x-2.5\right)^2\sin(12x-4) \\ f_3(x) &= 0.75f_1(x)+5(x-0.5)-2 \\ f_4(x) &= 0.5f_1(x)+10(x-0.5)-5 \end{align} The function is defined in the domain $0\leq x\leq 1$ and the minimum is located at $x^\star=0.75724876$ and given by $f(x^\star)= -6.020740$. In order to observe the performance of the multifidelity methods in problems with discontinuous behaviour, the discontinuous Forrester function \cite{DiscForr2018} is also selected as one of the benchmarks. The benchmark function (MF1.2) is derived from a revision of Forrester's function and is also called the Forrester function with jump \cite{JumpForr}. The discontinuous Forrester function is described by the following equations \begin{equation} f_{1}(x)=\left\{\begin{array}{lc} (6x-2)^2 \sin(12x-4), & \ 0 \leq x \leq 0.5 \\ (6x-2)^2 \sin(12x-4)+10, & \ 0.5<x \leq 1 \end{array}\right. \end{equation} \begin{equation} f_{2}(x)=\left\{\begin{array}{lc} 0.5 f_1(x) + 10 (x-0.5) -5, & \ 0 \leq x \leq 0.5 \\ 0.5 f_1(x) + 10 (x-0.5) -2, & \ 0.5<x \leq 1 \end{array}\right. \end{equation} \begin{figure}[!b] \centering \includegraphics[width=0.45\textwidth]{figures/Forrester.pdf} \includegraphics[width=0.45\textwidth]{figures/JumpForrester.pdf} \caption{Forrester function (left) and discontinuous variant (right)} \label{fig:forrester} \end{figure} The function is defined in the domain $0 \leq x \leq 1$ and the minimum is located at $x^\star = 0.1426$ and given by $f(x^\star)=-0.9863$.
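For concreteness, a minimal Python sketch of the MF1.1 and MF1.2 fidelity levels is reported below; the function names and the NumPy-based style are illustrative choices of ours and are not part of the benchmark specification.
\begin{verbatim}
import numpy as np

def forrester_mf(x, level=1):
    # Multifidelity Forrester benchmark MF1.1 on [0, 1];
    # level 1 is the highest fidelity, level 4 the lowest.
    f1 = (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)
    if level == 1:
        return f1
    if level == 2:
        return (5.5 * x - 2.5) ** 2 * np.sin(12.0 * x - 4.0)
    if level == 3:
        return 0.75 * f1 + 5.0 * (x - 0.5) - 2.0
    if level == 4:
        return 0.5 * f1 + 10.0 * (x - 0.5) - 5.0
    raise ValueError("level must be in {1, 2, 3, 4}")

def jump_forrester_mf(x, level=1):
    # Discontinuous (jump) Forrester benchmark MF1.2 on [0, 1]; the +10
    # (high fidelity) and +3 (low fidelity) offsets reproduce the jump
    # at x = 0.5 in the piecewise definitions above.
    jump = (x > 0.5)
    f1 = (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0) + 10.0 * jump
    if level == 1:
        return f1
    return 0.5 * f1 + 10.0 * (x - 0.5) - 5.0 + 3.0 * jump
\end{verbatim}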
\subsection{Rosenbrock function} The Rosenbrock function, also referred to as the Valley or Banana function, is a well-known $D$-dimensional optimization benchmark problem described by the following equation \begin{equation} f_1(\mathbf{x})=\sum_{i=1}^{D-1} 100\left(x_{i+1}-x_i^2\right)^2 + \left(1-x_i\right)^2 \end{equation} The global minimum is inside a long, narrow, parabolic shaped flat valley. The function is unimodal, and the global minimum is located at $\mathbf{x}^\star=\{1,\dots,1\}^{\sf T}$ and equal to $f(\mathbf{x}^\star)=0$. However, even though this valley is easy to find, convergence to the minimum is difficult. Note that there is also a local minimum at $\{-1,1,\dots,1\}^{\sf T}$ for $4 \le D \le 7$. For the present problem the variable domain is defined as $-2\leq x_i \leq 2$ for $i=1,\dots,D$. The extension to multifidelity purposes is described by the following equations, where $f_2$ can be considered as a medium-fidelity level \cite{ficini2021-AIAA} and $f_3$ as the lowest fidelity \cite{bryson2016-AIAA}. \begin{equation} f_2(\mathbf{x})=\sum_{i=1}^{D-1} 50\left(x_{i+1}-x_i^2\right)^2 + \left(-2-x_i\right)^2 - \sum_{i=1}^D 0.5x_i \end{equation} \begin{equation} f_3(\mathbf{x})= \dfrac{f_1(\mathbf{x})-4-\sum_{i=1}^D 0.5x_i}{10+\sum_{i=1}^D 0.25x_i} \end{equation} The three fidelity levels are shown for two dimensions in Figure \ref{fig:rosenbrock}. \begin{figure}[!h] \centering \includegraphics[width=0.32\textwidth]{figures/Rosen_f1.pdf} \includegraphics[width=0.32\textwidth]{figures/Rosen_f2.pdf} \includegraphics[width=0.32\textwidth]{figures/Rosen_f3.pdf} \caption{Rosenbrock Function: from left to right, $f_1$ (highest-fidelity), $f_2$ , and $f_3$ (lowest-fidelity)} \label{fig:rosenbrock} \end{figure}
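Along the same lines, a sketch of the three Rosenbrock fidelity levels can be written as follows; this is again our illustrative implementation, with the level-3 denominator following the equation above.
\begin{verbatim}
def rosenbrock_mf(x, level=1):
    # Multifidelity Rosenbrock benchmark MF2 on [-2, 2]^D; x is a 1-D array.
    x = np.asarray(x, dtype=float)
    f1 = np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)
    if level == 1:
        return f1
    if level == 2:
        return (np.sum(50.0 * (x[1:] - x[:-1] ** 2) ** 2
                       + (-2.0 - x[:-1]) ** 2) - np.sum(0.5 * x))
    if level == 3:
        return (f1 - 4.0 - np.sum(0.5 * x)) / (10.0 + np.sum(0.25 * x))
    raise ValueError("level must be 1, 2 or 3")
\end{verbatim}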
\subsection{Shifted-rotated Rastrigin function} To address real-world optimization problems, where the landscape of the objective function is usually multi-modal, the Rastrigin function is selected as a benchmark. The function is shifted to change the position of the minimum and rotated to change the properties of the function itself within the variable space. The equation of the shifted-rotated Rastrigin function reads as follows \begin{equation} f_1(\mathbf{z})=\sum_{i=1}^D \left(z_i^2 +1 - \cos(10\pi z_i)\right) \end{equation} with \begin{equation} \mathbf{z} = R(\theta)(\mathbf{x}-\mathbf{x}^\star) \,\,\,\,\, \mathrm{with} \,\,\,\,\, R(\theta)= \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \\ \end{bmatrix} \end{equation} where $R$ is the rotation matrix in two dimensions, which can be extended to arbitrary dimension by using the Aguilera-Perez algorithm \cite{aguilera2004general}. The variable ranges are defined as $-0.1 \leq x_i \leq 0.2$ for $i=1,\dots,D$, with rotation angle $\theta=0.2$ and optimum equal to $f(\mathbf{x}^\star)=0$ at $\mathbf{x^\star}=\{0.1,\dots,0.1\}^{\sf T}$. The fidelity levels can be defined following the work of Wang et al. \cite{2018-IEEE-Wang_etal}, where a resolution error is defined as follows \begin{equation} e_r(\mathbf{z},\phi)=\sum_{i=1}^D a(\phi)\cos^2(w(\phi)z_i+b(\phi)+\pi) \end{equation} with $a(\phi)=\Theta(\phi)$, $w(\phi)=10\pi\Theta(\phi)$, $b(\phi)=0.5\pi\Theta(\phi)$, and $\Theta(\phi)= 1-0.0001\phi$. The fidelity levels are thus described as follows and depicted in Fig. \ref{fig:srrastrigin}. \begin{equation} f_{i}(\mathbf{z},\phi_i)=f_1(\mathbf{z})+e_r(\mathbf{z},\phi_i) ~~~ \mathrm{for} ~~~ \, i=1,2,3 \end{equation} with $\phi_1=10000$ (high-fidelity), $\phi_2=5000$ (medium-fidelity), and $\phi_3=2500$ (low-fidelity). \begin{figure}[!h] \centering \includegraphics[width=0.32\textwidth]{figures/SRRas_f1.pdf} \includegraphics[width=0.32\textwidth]{figures/SRRas_f2.pdf} \includegraphics[width=0.32\textwidth]{figures/SRRas_f3.pdf} \caption{Shifted-rotated Rastrigin Function: from left to right, $f_1$ (highest-fidelity), $f_2$, and $f_3$ (lowest-fidelity)} \label{fig:srrastrigin} \end{figure}
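The two-dimensional case of this benchmark can be sketched in a few lines of Python; for $D>2$ the rotation matrix would instead be built with the Aguilera-Perez algorithm. The code below is our illustrative reading of the equations above, not a prescribed implementation.
\begin{verbatim}
def rastrigin_mf(x, phi=10000.0, theta=0.2):
    # Shifted-rotated Rastrigin benchmark MF3 in two dimensions.
    # phi selects the fidelity: 10000 (high), 5000 (medium), 2500 (low);
    # phi = 10000 gives Theta = 0, i.e., a vanishing resolution error.
    x = np.asarray(x, dtype=float)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    z = rot @ (x - 0.1)                 # shift to x* = (0.1, 0.1)
    f1 = np.sum(z ** 2 + 1.0 - np.cos(10.0 * np.pi * z))
    big_theta = 1.0 - 0.0001 * phi
    a = big_theta
    w = 10.0 * np.pi * big_theta
    b = 0.5 * np.pi * big_theta
    return f1 + np.sum(a * np.cos(w * z + b + np.pi) ** 2)
\end{verbatim}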
The equations of motion are given by the following system \begin{equation}\begin{cases}\label{eqofmot} m_1 \ddot{x}_1(t) &= -k_1 x_1 (t) + k_2 [x_2 (t) - x_1 (t)] \\ m_2 \ddot{x}_2(t) &= -k_2 [x_2 (t) - x_1 (t)] - k_3 x_2 (t) \end{cases} \end{equation} The equations are justified in the case of all positive variables by observing that the first two springs are elongated by $x_1$ and $x_2 - x_1$, respectively. The last spring is compressed by $x_2$, which accounts for the minus sign. Eq.~(\ref{eqofmot}) can be written as a second-order vector-matrix system \begin{equation} {\bf M} \ddot{{\bf x}} (t) = {\bf Kx}(t) \label{secorderODE} \end{equation} where the displacement ${\bf x}$, mass matrix ${\bf M}$ and stiffness matrix $\bf K$ are defined by the following formulae \begin{displaymath} {\bf x} = \left\{ \begin{array}{c} x_1 \\ x_2 \end{array} \right \} \quad {\bf M} = \left[ \begin{array}{cc} m_1 & 0 \\ 0 & m_2 \end{array} \right] \quad {\bf K} = \left[ \begin{array}{cc} -k_1-k_2 & k_2 \\ k_2 & -k_2-k_3 \end{array} \right] \end{displaymath} This is a constant-coefficient homogeneous system of second-order ODEs the solution of which is given by \begin{equation} \label{solutionODE} {\bf x} (t)= \sum_{i=1}^2 [a_i \cos(\omega_i t) + b_i \sin(\omega_i t)] {\bf z}_i \end{equation} where $\omega_i = \sqrt{-\lambda_i}$ and $\lambda_i$ are the eigenvalues of the matrix ${\bf M}^{-1} {\bf K}$ and ${\bf z}_i$ are the corresponding eigenvectors. The constants $a_i$ and $b_i$ are determined by the initial conditions ${\bf x} (t=0) = {\bf x}_0$ and $\dot{\bf x} (t=0) = \dot{\bf x}_0$ Converting Eq. \ref{secorderODE} into a system of first-order ODEs and using the fourth-order accurate Runge-Kutta time-marching method yields a multifidelity analysis problem by varying the time-step size $\Delta t$. The proposed benchmark uses the initial conditions ${\bf x}_0 = \{1 \; 0\}^{\sf T}$ and $\dot{\bf x}_0 = \{0 \; 0\}^{\sf T}$ with two fidelity levels, defined by the time-step size, specifically equal to $\Delta t = 0.01$ and 0.6. Two test are proposed, considering the position of the first mass $x_1$ at the time $t=6$ as the objective function: (MF5.1) springs $k_1$ and $k_2$ are the independent input variables with $1 \le (k_1,k_2) \le 4$ and $k_3=k_1$, while the masses are constant ($m_1=m_2=1$), the optimum is equal to $f(\mathbf{x}^\star)=-1$ at $\mathbf{x^\star}=\{2.467401, 2.193245\}^{\sf T}$; (MF5.2) springs $k_1$ and $k_2$ and masses $m_1$ and $m_2$ are the independent input variables with $1 \le (k_1,k_2,m_1,m_2) \le 4$ and $k_3=k_1$, the optimum is equal to $f(\mathbf{x}^\star)=-1$ at $\mathbf{x^\star}=\{1.000000, 3.946018, 4.000000, 3.286277\}^{\sf T}$. The two-dimensional (springs only) problem is shown in Fig. \ref{fig:spring} \begin{figure}[!b] \centering \includegraphics[width=0.32\textwidth]{figures/Pacioreck_f1.pdf} \includegraphics[width=0.32\textwidth]{figures/Pacioreck_f2.pdf} \caption{Paciorek function with noise: $f_1$ (left) and $f_2$ (right)} \label{fig:PaciorekPlot} \end{figure} \subsection{Paciorek Function} The Paciorek equation \cite{Toal2014}, which has localized and multi-modality properties, is considered and a normally distributed random noise parameter is added to the high- and low-fidelity equation to model the noise. 
\begin{figure}[!b] \centering \includegraphics[width=0.32\textwidth]{figures/Pacioreck_f1.pdf} \includegraphics[width=0.32\textwidth]{figures/Pacioreck_f2.pdf} \caption{Paciorek function with noise: $f_1$ (left) and $f_2$ (right)} \label{fig:PaciorekPlot} \end{figure}
\subsection{Paciorek Function}
The Paciorek equation \cite{Toal2014}, which has localized and multimodal behavior, is considered, and a normally distributed random noise term is added to both the high- and low-fidelity equations to model noise. The Paciorek function with noise term is defined as follows
\begin{equation} f_1(\mathbf{x})=\sin\left[\left(\prod_{i=1}^D x_i\right)^{-1}\right] + \mathrm{randn}(0,\alpha_1) \end{equation}
\begin{equation} f_2(\mathbf{x})=f_1(\mathbf{x})-9A^2\cos\left[\left(\prod_{i=1}^D x_i\right)^{-1}\right] + \mathrm{randn}(0,\alpha_2) \end{equation}
where $\mathrm{randn}(0,\alpha_i)$ denotes a normally distributed random number with zero mean and noise level $\alpha_i$. $A$ is a parameter that models the error among the fidelities and can vary between 0 and 1: when $A=0$, the low- and high-fidelity equations coincide, while the discrepancy between the low- and high-fidelity models increases as the value of $A$ increases. In this study it is set to $A=0.5$. The benchmark is defined considering the variable ranges $0.3 \leq x_i \leq 1.0$ for all $i$. For the high-fidelity case, a noise level corresponding to approximately 5\% of the response interval is added to the equation, $\alpha_1=0.0125$. Accordingly, for the low-fidelity case a much higher noise level is added, corresponding to 10\% of the response interval, $\alpha_2=0.075$. The resulting response surfaces are shown in Fig.~\ref{fig:PaciorekPlot}.
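A possible transcription into code (a sketch; the function names are ours, and the reuse of $f_1$ inside $f_2$ follows the definition above literally, with noise drawn through numpy) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)      # fixed seed for reproducibility
A = 0.5                             # fidelity-error parameter
alpha1, alpha2 = 0.0125, 0.075      # noise levels

def f1(x):
    # High-fidelity Paciorek function with additive Gaussian noise
    u = 1.0 / np.prod(x)
    return np.sin(u) + rng.normal(0.0, alpha1)

def f2(x):
    # Low-fidelity counterpart; the 9 A^2 cos term models the
    # fidelity discrepancy, plus the larger noise level alpha2
    u = 1.0 / np.prod(x)
    return f1(x) - 9.0 * A**2 * np.cos(u) + rng.normal(0.0, alpha2)

x = np.array([0.5, 0.5])            # any point with 0.3 <= x_i <= 1.0
print(f1(x), f2(x))
\end{verbatim}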
\section{Setup of the Numerical Experiments}\label{sec:setup}
Table \ref{t:ExSetup} summarizes the recommended setup for the assessment and comparison of the performance of multifidelity methods. The following subsections provide an overview of the rationale and the criteria motivating the recommendations, along with additional complementary suggestions. The overview encompasses criteria for the assignment of the evaluation costs to the different fidelity levels as a fraction of the computational expense of the highest-fidelity function, criteria for the initialization of the search, and criteria to terminate the search.
\begin{table}[!h] \caption{Experiments setup summary} \footnotesize \centering
\begin{tabular}{lccccccc} \toprule
\multirow{3}{*}{\bf Function} & \multirow{3}{*}{\bf Benchmark ID} & \multirow{3}{*}{$D$} & \multirow{3}{*}{\bf Budget} & \multicolumn{4}{c}{\bf Fidelity cost} \\ \cmidrule{5-8}
& & & & $f_1$ & $f_2$ & $f_3$ & $f_4$ \\ \midrule
Forrester & MF1.1 & 1 & 100 & 1.00000E-0 & 5.00000E-1 & 1.00000E-1 & 5.00000E-2 \\
Jump Forrester & MF1.2 & 1 & 100 & 1.00000E-0 & 2.00000E-1 & - & - \\ \midrule
& MF2.1 & 2 & 200 & 1.00000E-0 & 5.00000E-1 & 1.00000E-1 & - \\
Rosenbrock & MF2.2 & 5 & 500 & 1.00000E-0 & 5.00000E-1 & 1.00000E-1 & - \\
& MF2.3 & 10 & 1000 & 1.00000E-0 & 5.00000E-1 & 1.00000E-1 & - \\ \midrule
\multirow{3}{*}{Shifted-rotated Rastrigin} & MF3.1 & 2 & 200 & 1.00000E-0 & 6.25000E-2 & 3.90625E-3 & - \\
& MF3.2 & 5 & 500 & 1.00000E-0 & 6.25000E-2 & 3.90625E-3 & - \\
& MF3.3 & 10 & 1000 & 1.00000E-0 & 6.25000E-2 & 3.90625E-3 & - \\ \midrule
& MF4.1 & 1 & 100 & 1.00000E-0 & 2.00000E-1 & - & - \\
Heterogeneous & MF4.2 & 2 & 200 & 1.00000E-0 & 2.00000E-1 & - & - \\
& MF4.3 & 3 & 300 & 1.00000E-0 & 2.00000E-1 & - & - \\ \midrule
Springs & MF5.1 & 2 & 200 & 1.00000E-0 & 1.66667E-2 & - & - \\
Springs-masses & MF5.2 & 4 & 400 & 1.00000E-0 & 1.66667E-2 & - & - \\ \midrule
Paciorek & MF6 & 2 & 200 & 1.00000E-0 & 2.00000E-1 & - & - \\ \bottomrule
\end{tabular} \label{t:ExSetup} \end{table}
\subsection{Fidelity cost assignment criteria}\label{s:FidelityCost}
The $f_1({\bf x})$ functions are given the unit cost $\lambda_1=1$, while their lower-fidelity representations are assigned cost values as fractions of the $f_1({\bf x})$ cost. Table \ref{t:ExSetup} proposes a set of cost assignments for the different fidelity representations of each of the benchmark problems. The values indicated for the shifted-rotated Rastrigin are determined according to the non-linear function proposed by Wang et al. \cite{2018-IEEE-Wang_etal} for the allocation of cost values to an arbitrary number of fidelities:
\begin{equation} \label{e:CostF1} \lambda_l = \left(1/2^{l-1}\right)^4 \qquad \forall \ l\geq 1 \end{equation}
where higher values of the integer $l$ indicate representations of the objective function at progressively lower levels of fidelity. In contrast, the values indicated for the Forrester, the Rosenbrock, the Heterogeneous, and the Spring-Mass problems are driven by the shared experience within the AVT-331 research task group.
\subsection{Initialization criteria} \label{s:Init}
To assure a fair comparison across the different families of multifidelity methods, statistics over a set of different starting samples/points are recommended for all those methods that are not driven by an infill criterion. The cardinality and composition of the initial sample are not constrained, but are determined by (or for) the specific multifidelity method. This approach is preferred over the definition of specific initial samples to be used. The performance assessment would then consider statistics over the set of all the convergence histories.
\subsection{Termination criteria} \label{s:Term}
Real-world design problems are constrained by the limited time and computing resources available to conduct analysis, search, and optimization of the alternative candidate solutions. This motivates the choice to recommend conducting the experiments at given computational budgets assigned to the overall modelling and optimization task, rather than prescribing the maximum number of allowed iterations. The termination condition is reached when no computational budget is left. Table \ref{t:ExSetup} indicates the computational budget assigned to each experiment in terms of the equivalent number of evaluations of the high-fidelity representation.
\section{Performance Assessment Metrics}\label{sec:metrics}
Many different multifidelity approaches and original strategies have been developed, which may or may not rely on the use of a surrogate model combining the information from the different sources. The proposed metrics are selected to offer a comprehensive framework and to enable the comparison of a broad spectrum of different goal-driven multifidelity methods regardless of their specific features. Considering the initialization criteria recommended in Subsection~\ref{s:Init}, the performance of the different multifidelity methods will be assessed through statistics over the set of all the convergence histories. Two types of metrics are defined: goal sensitive and goal insensitive. Goal-sensitive metrics evaluate the accuracy of the optima $\mathbf{x}^\star$ computed with MF approximations, whereas goal-insensitive metrics address the global accuracy of MF approximations over the design space. The ability to compute these metrics depends strongly on the nature of the design space, the benchmark complexity, and the methods employed for optimization. Construction and use of these metrics thus involves compromises, which are hopefully struck in a manner that hits a sweet spot of generality, usefulness, and feasibility. With regard to goal-insensitive metrics, the global accuracy of MF approximations can be well interrogated for small $D$ and low benchmark complexity.
As $D$ and benchmark complexity increase, the quantification of global accuracy rapidly becomes untenable through the effects of the \textit{curse of dimensionality} (CoD) \cite{bellman1957dynamic}. In contrast, global accuracy is not explicitly measured by goal-sensitive metrics, which simply assess computed optima. However, computing optima with global methods is also afflicted by the CoD, as is the global approximation of the functions that are optimized (whose cost can be mitigated by reduced sampling in unproductive areas). Central to quantifying accuracy is a scaling of the design space. By scaling variables to equivalent ranges of variation (on the unit hypercube), the influence of parameter sensitivities can be balanced and the relative accuracy of computed optima better characterized. Unless stated otherwise, scaling of each design parameter is performed linearly between the lower and upper limits {\it supplied with the benchmark definition}:
\begin{equation} \tilde{x}_k \equiv {x_k-l_k \over u_k-l_k}\ \ \ (k=1,...,D), \end{equation}
or $\tilde{\mathbf{x}} = \mathbf{S}\left(\mathbf{x}-{\bf l}\right)$, where $\mathbf{S}$ is a diagonal scaling matrix and $\bf l$ is the lowest-valued corner of the un-scaled design space. For notational convenience, the tilde is dropped unless needed for clarity. Lower and upper limits are provided for scaling and simplification, even when a parameter might potentially be unbounded. Scaling of the design space is independent of constraint surfaces existing in the benchmark problem. Goal-insensitive metrics, which do not require knowledge of $\mathbf{x}^\star$, are defined first. There are several competing goals for defining these metrics:
\begin{itemize}
\item {\it High consistency}: all methods should be evaluated in the same way;
\item {\it Low bias}: the evaluation strategy should not be biased;
\item {\it High utility}: the evaluation of metrics should be highly accurate and informative;
\item {\it Affordable cost}: computing metrics should be affordable.
\end{itemize}
The goal-insensitive metric $\mathcal{E}_{\rm RMSE}$ records the root-mean-squared error between the highest-fidelity model of the objective, $f$, and its approximation $\widehat{f}$ over the design space:
\begin{equation} \mathcal{E}_{\rm RMSE} \equiv {1\over f_{\max}-f_{\min}} {\sqrt{\frac{1}{S}{\sum_{i=1}^{S}}(f(\mathbf{x}_i)-\widehat{f}(\mathbf{x}_i))^2}}, \label{e:RMSE} \end{equation}
where $i$ is the sample index, $S$ is the number of samples, and $f_{\min}$ and $f_{\max}$ are the minimum and maximum values of $f$ observed over the training data:
\begin{equation} \label{e:fminmax} f_{\min} \equiv \min_{\mathbf{x}_i} f(\mathbf{x}_i), \ \ \ f_{\max} \equiv \max_{\mathbf{x}_i} f(\mathbf{x}_i). \end{equation}
The sampling plan attempts to achieve two compromises: balancing computational cost with accuracy, and balancing consistency with low bias. The design spaces corresponding to the (inexpensive) analytical benchmarks are exhaustively sampled to rigorously assess method accuracy and performance for different functions. As benchmark complexity is increased, the amount of sampling can be reduced. Global accuracy cannot be computed for high-fidelity-based computational solutions, owing to the obvious high cost of sampling, but it can be expected that MF-method strengths and weaknesses, in the context of global accuracy, can be extensively addressed with the analytical benchmarks proposed.
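For instance, Eqs.~(\ref{e:RMSE}) and (\ref{e:fminmax}) can be evaluated with a few lines of Python (a sketch; the sampling plan and function handles are left to the user):
\begin{verbatim}
import numpy as np

def rmse_metric(f, f_hat, X):
    # Normalized RMSE metric defined above: f is the highest-fidelity
    # model, f_hat its MF approximation, X an (S, D) array of samples
    fx = np.array([f(x) for x in X])
    fhx = np.array([f_hat(x) for x in X])
    f_min, f_max = fx.min(), fx.max()
    return np.sqrt(np.mean((fx - fhx)**2)) / (f_max - f_min)
\end{verbatim}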
Goal-sensitive metrics are divided between metrics for the location of the optimum point (design accuracy) and metrics for the objective value (goal accuracy). The metrics are expressed as errors, which incorporate information about the optima assumed to be known {\it a priori}. Three error metrics are defined for the benchmark problems, where optima are typically known analytically. These characterize the normalized error in the design space, the objective function, and the Euclidean distance in the normalized $\mathbf{x}$-$f$ hyperspace, respectively:
\begin{equation} \label{e:error_x} \mathcal{E}_{x} \equiv \dfrac{\|\hat{\mathbf{x}}^\star-\mathbf{x}^\star\|}{\sqrt{D}}, \end{equation}
\begin{equation} \label{e:error_f} \mathcal{E}_{f} \equiv {f(\hat{\mathbf{x}}^\star)-f_{\min} \over f_{\max}-f_{\min}}, \end{equation}
\begin{equation} \label{e:error_t} \mathcal{E}_{t} \equiv \sqrt{\frac{\mathcal{E}_{x}^{2}+\mathcal{E}_{f}^{2}}{2}}, \end{equation}
where $\mathbf{x}$ is the {\it scaled} array of design variables on the unit hypercube, $\hat{\mathbf{x}}^\star$ is the location of the optimum of the approximation to $f$ (in scaled parameters), and $f_{\min}$ and $f_{\max}$ are the minimum and maximum of $f$ over the design space. The error metrics $\mathcal{E}_{x}$ and $\mathcal{E}_{f}$ evaluate design and goal accuracy, whereas the aggregated metric $\mathcal{E}_{t}$ introduced by Serani et al.~\cite{serani2016-ASOC} evaluates accuracy in both senses, which can be of heightened significance when optima are in very flat or very peaky portions of the design space. Like the limits on the design variables, $f_{\min}$ and $f_{\max}$ are provided with the benchmark to ensure consistent application by different MF methods. Reference values for the evaluation of the goal-sensitive metrics are summarized in Tab.~\ref{t:referv}. It should be emphasized that Eq.~(\ref{e:error_f}) utilizes an evaluation of the model of the highest fidelity level at $\hat{\mathbf{x}}^\star$, a quantity computed with the MF approximation to $f$. Evaluation of the full-fidelity model is favored over evaluation of the approximation, to better characterize the true objective realized through the MF approximation.
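The three goal-sensitive errors are equally direct to compute once the optima and reference values of Tab.~\ref{t:referv} are available (a sketch; inputs are assumed to be already scaled to the unit hypercube):
\begin{verbatim}
import numpy as np

def goal_metrics(x_hat, x_star, f_at_x_hat, f_min, f_max):
    # E_x, E_f, and E_t as defined above; x_hat and x_star are the
    # computed and true optima, scaled to the unit hypercube, and
    # f_at_x_hat is the highest-fidelity model evaluated at x_hat
    x_hat, x_star = np.asarray(x_hat), np.asarray(x_star)
    D = x_star.size
    E_x = np.linalg.norm(x_hat - x_star) / np.sqrt(D)
    E_f = (f_at_x_hat - f_min) / (f_max - f_min)
    E_t = np.sqrt((E_x**2 + E_f**2) / 2.0)
    return E_x, E_f, E_t
\end{verbatim}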
\begin{table}[!t] \caption{Summary of multifidelity benchmark values for the evaluation of the goal-sensitive metrics}\label{t:referv} \footnotesize \centering
\begin{tabular}{lccccc} \toprule
\bf Function & \bf Benchmark ID & $D$ & $\mathbf{x}^\star$ & $f_{\min}$ & $f_{\max}$ \\ \midrule
Forrester & MF1.1 & 1 & 7.5752E-1 & -6.0207E-0 & 1.5830E+1 \\
Jump Forrester & MF1.2 & 1 & 1.4260E-1 & -9.8630E-1 & 2.5830E+1 \\ \midrule
& MF2.1 & 2 & \{1.0000, 1.0000\}$^\mathsf{T}$ & 0.000E-0 & 3.6090E+3 \\
Rosenbrock & MF2.2 & 5 & \{1.0000, \dots, 1.0000\}$^\mathsf{T}$ & 0.000E-0 & 1.4436E+4 \\
& MF2.3 & 10 & \{1.0000, \dots, 1.0000\}$^\mathsf{T}$ & 0.000E-0 & 3.2481E+4 \\ \midrule
\multirow{3}{*}{Shifted-rotated Rastrigin} & MF3.1 & 2 & \{0.1000, 0.1000\}$^\mathsf{T}$ & 0.000E-0 & 4.0200E-0 \\
& MF3.2 & 5 & \{0.1000, \dots, 0.1000\}$^\mathsf{T}$ & 0.000E-0 & 1.0050E+1 \\
& MF3.3 & 10 & \{0.1000, \dots, 0.1000\}$^\mathsf{T}$ & 0.000E-0 & 2.0100E+1 \\ \midrule
& MF4.1 & 1 & 2.7550E-1 & -6.2500E-1 & 3.6151E-1\\
Heterogeneous & MF4.2 & 2 & \{0.0000, 0.0000\}$^\mathsf{T}$ & -5.6271E-1 & 1.8350E-0\\
& MF4.3 & 3 & \{0.0000, \dots, 0.0000\}$^\mathsf{T}$ & -5.6271E-1 & 4.3594E-0\\ \midrule
Springs & MF5.1 & 2 & \{2.4674, 2.1932\}$^\mathsf{T}$ & -1.0000E-0 & 1.0000E-0 \\
Springs-masses & MF5.2 & 4 & \{1.0000, 3.9460, 4.0000, 3.2863\}$^\mathsf{T}$ & -1.0000E-0 & 1.0000E-0 \\ \midrule
Paciorek & MF6 & 2 & $x_1 x_2=2/[(3+j)\pi]$ with $j=0,4$ & -1.0000E-0 & 1.0000E-0 \\ \bottomrule
\end{tabular} \end{table}
\section{Concluding Remarks}\label{sec:conclusion}
A benchmark suite of analytical test functions has been proposed to assess the efficiency and effectiveness of multifidelity optimization methods. The proposed benchmarks are meant to stress multifidelity optimization methods with various challenges typical of complex real-world optimization problems, such as localized, multimodal, and discontinuous behaviors of the objective functions, as well as the possible presence of noise in the objective functions. The paper provides a set of standard problems, as well as recommended experimental setups and performance assessment metrics, to support the rigorous testing and comparison of different computational methods. Future work will include the performance assessment of a Gaussian process model and a trust region model to demonstrate the use cases.
\section*{Acknowledgements}
The work is conducted in collaboration with the NATO task group AVT-331 on ``Goal-driven, multifidelity approaches for military vehicle system-level design''. Distribution A: Approved for public release, distribution unlimited. Case number AFRL-2022-1596.
\bibliographystyle{abbrv}
\section{Introduction}
Closed quantum systems are described by the Schrödinger equation. If we take a pure initial state, it will always stay pure during the evolution of the system under the unitary symmetry transformation. This equation can be put in another form, using a density matrix---the Liouville equation. When we proceed to open quantum systems, we find that environmental influences may transform an initial pure-state density matrix into a mixed-state density matrix \cite{breuer, schlosshauer, weiss}. Then, the Schrödinger equation can no longer describe this transition; the density matrix evolves under transformations belonging to a dynamical semigroup, and the standard equation for this case is the Franke--Gorini--Kossakowski--Lindblad--Sudarshan (FGKLS) equation~\cite{franke, gorini, lindblad}:
\begin{equation} \label{lindblad}\dot{\rho} = -i[H,\rho]+\sum_a L^{(a)} \rho L^{(a),\dag} - \frac{1}{2} \left\{\sum_a L^{(a),\dag} L^{(a)},\rho\right\} \end{equation}
Here, $\rho$ is the density matrix of the system, $H$ is the Hamiltonian of that system, and $L^{(a)}$ are a set of operators that incorporate the interaction of the system with an environment. The latter Lindblad operators generate deviations from purely unitary evolution, and the entire evolution operators belong to a dynamical semigroup. This approach has quite a few applications in the analysis of decoherence in nonrelativistic quantum systems \cite{breuer, schlosshauer, weiss}. We notice also that recently the FGKLS approach has been found to be fruitful in high-energy physics \cite{akamatsu, Blaizot, hep}, condensed matter \cite{cond1,cond3,cond4} and quantum biology \cite{bio2,bio3}. This equation indeed describes open quantum systems well. We can see this if we take the system of interest and an environment together as a large closed system that is described by the Liouville equation. Then, provided that the current understanding of quantum theory remains valid on the fundamental level, the FGKLS equation should arise as an effective description of a subsystem of the larger system, which includes environmental degrees of freedom and is assumed to undergo unitary evolution driven by a self-adjoint Hamiltonian. The effective equation will be of the FGKLS form if one ensures the positivity and normalization (trace equal to $1$) of the density matrix, as well as the complete positivity of the evolution \cite{benatti_floreanini}, which delivers exactly this form of the equation. It is linear and supports the superposition of quantum states. Thus, it may substitute the Schrödinger dynamics of wave functions by the extended fundamental dynamics of density matrices \cite{weinberg_2014}. The Lindblad operators $L^{(a)}$ may be associated with elements of a measuring apparatus \cite{weinberg_2014} or with an external noise \cite{andrianovtarrach}. This kind of quantum dynamics may be referred to as non-Hamiltonian \cite{tarasov}. As the system interacts with an environment, the process of \textit{decoherence} takes over. During this process, pure quantum states transform into mixed ones, the information about the initial state partially becomes lost, and the system becomes more classical. As expected, the system may reach a certain final state, which is the same for all initial states. Such final states are called \textit{pointers}~\cite{zurek}.
The fact that they are robust and do not change over time is expressed by the following simple equation:
\begin{equation} \label{statpointer} \dot\rho = 0,\quad t \geq t_{final}. \end{equation}
The goal of this work is to study in detail the decoherence process for systems with a Hilbert space of small dimension, equal to $2$. We exhaustively consider all possible forms of the Lindblad operator: diagonal and of Jordan block type. In the second section, we find pointers for these two cases. In the third section, we obtain a general solution of the FGKLS equation and confirm that it indeed converges to a pointer with time. In the fourth section, we check that the obtained solution has physical meaning, i.e., that the density matrix is Hermitian, positive, and has trace equal to $1$. In the fifth section, we explore how the solution behaves for weak interaction with an environment. In the sixth section, we make conclusions about the existence of oscillating solutions, resembling the solutions of a closed system. In the seventh section, we summarize and make final remarks. This work is partially based on our previous study in \cite{pertalg}.
\section{Pointers for the FGKLS Equation in Two Dimensions}\label{sec2}
Let us consider the FGKLS equation for systems in a two-dimensional Hilbert space: both the Hamiltonian $H$ and the Lindblad operator $L$ describing the interaction with the environment are $2 \times 2$ operators. (For simplicity, the case of one Lindblad operator $L$ will be considered; generalization to systems with many operators $L^{(a)}$ is straightforward.) We build the Hilbert space spanned by the two-level energy orthonormal basis of $H \ket{E_l}=E_l\ket{E_l}; \, l=1,2$: an arbitrary orthonormal basis in this space is $\ket{\psi_i} = \sum_{k=1}^{k=2}u_{ik} \ket{E_k}$ with coefficients forming a unitary matrix $U$. The Hamiltonian, generated by a Hermitian matrix of size $2\times 2$, takes the form
\begin{equation} \label{h2} H= \sum_{i,k=1}^{i,k=2}\varepsilon_{ik}\ket{\psi_i} \bra{\psi_k} . \end{equation}
On the same basis, the Lindblad operator is defined as:
\begin{equation} \label{l2} L = c \sum_{i,k=1}^{i,k=2}l_{ik} \ket{\psi_i} \bra{\psi_k} \end{equation}
with arbitrary complex coefficients $l_{ik}$ and a real coefficient $c$, which will be useful further on to control the behavior of solutions for small $L.$ The density matrix
\begin{equation} \label{rho2} \rho = \sum_{i,k=1}^{i,k=2}f_{ik}(t)\ket{\psi_i} \bra{\psi_k} \end{equation}
has to satisfy the well-known FGKLS equation
\begin{equation} \label{lindblad1}\dot{\rho} = -i[H,\rho]+ L \rho L^\dagger - \frac{1}{2} \left\{ L^\dagger L,\, \rho\right\} \end{equation}
and to obey the following properties:
1. $\rho$ is Hermitian: $\rho=\rho^\dag$, i.e.,
\begin{equation} \label{herm2} f_{11}, f_{22} \in \mathbb{R}; \;\;\; f_{21} = f_{12}^* \end{equation}
2. $\Tr \rho = 1$:
\begin{equation} \label{trace2} f_{11} + f_{22} = 1 \end{equation}
3. $\rho$ is non-negative.
Further on, the first two properties will be taken into account directly in the equations, while the last one will be checked for the obtained solutions of (\ref{lindblad}) post factum. By default, the basis elements $\ket{\psi_i}$ as well as the coefficients $\varepsilon_{ik}$ and $l_{ik}$ are assumed to be time-independent.
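The algebraic manipulations below can also be cross-checked numerically. A minimal Python sketch of the right-hand side of (\ref{lindblad1}) for given $2\times 2$ matrices (our own conventions, used only for verification) reads:
\begin{verbatim}
import numpy as np

def fgkls_rhs(rho, H, L):
    # Right-hand side of the FGKLS equation with one Lindblad operator
    LdL = L.conj().T @ L
    return (-1j * (H @ rho - rho @ H)
            + L @ rho @ L.conj().T
            - 0.5 * (LdL @ rho + rho @ LdL))
\end{verbatim}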
Substituting the expressions (\ref{h2})--(\ref{rho2}) into the FGKLS equation (\ref{lindblad}) and taking into account the first two properties of $\rho$ above, we obtain:
\begin{equation} \boldsymbol{\dot f_{11}} = A \boldsymbol{f_{11}} + B \boldsymbol{f_{22}} + E \boldsymbol{f_{12}} + E^* \boldsymbol{f_{21}} \end{equation}
\begin{equation} \boldsymbol{\dot f_{22}} = -A \boldsymbol{f_{11}} - B \boldsymbol{f_{22}} - E \boldsymbol{f_{12}} - E^* \boldsymbol{f_{21}} \end{equation}
\begin{equation} \boldsymbol{\dot f_{12}} = G \boldsymbol{f_{11}} + H \boldsymbol{f_{22}} + J \boldsymbol{f_{12}} + K \boldsymbol{f_{21}} \end{equation}
\begin{equation} \boldsymbol{\dot f_{21}} = G^* \boldsymbol{f_{11}} + H^* \boldsymbol{f_{22}} + K^* \boldsymbol{f_{12}} + J^* \boldsymbol{f_{21}}, \end{equation}
where
\begin{equation} \label{avar} A = -c^2 |l_{21}|^2 \end{equation}
\begin{equation} \label{bvar} B = c^2 |l_{12}|^2 \end{equation}
\begin{equation} \label{evar} E = i\varepsilon_{21} + \frac{1}{2} c^2 (l_{11} l^*_{12} - l^*_{22} l_{21}) \end{equation}
\begin{equation} \label{gvar} G = i\varepsilon_{12} + c^2( l_{11} l^*_{21} - \frac{1}{2}l^*_{11} l_{12} - \frac{1}{2} l^*_{21}l_{22}) \end{equation}
\begin{equation} \label{hvar} H = -i\varepsilon_{12} + c^2( l^*_{22} l_{12} - \frac{1}{2} l^*_{11} l_{12} - \frac{1}{2} l^*_{21} l_{22} ) \end{equation}
\begin{equation} \label{jvar} J = -i \Delta\varepsilon + c^2( l_{11} l^*_{22} - \frac{1}{2} |l_{11}|^2 - \frac{1}{2} |l_{22}|^2 - \frac{1}{2} |l_{12}|^2 - \frac{1}{2} |l_{21}|^2) \end{equation}
\begin{equation} \label{kvar} K = c^2 l_{12} l^*_{21} \end{equation}
and $\Delta\varepsilon \equiv \varepsilon_{11} - \varepsilon_{22}$. By definition, the pointers of the FGKLS equation are solutions $\rho^{(p)}$ of (\ref{lindblad}) which become stable asymptotically for large $t \to\infty$, i.e., $\dot{f}^{(p)}_{ik}(t\to\infty)=0.$ Thus, we have a system of three linear equations for the independent variables $f^{(p)}_{11}, f^{(p)}_{12}, f^{(p)\,\star}_{12}$:
\begin{equation} (A-B) \boldsymbol{f^{(p)}_{11}} + E \boldsymbol{f^{(p)}_{12}} + E^* \boldsymbol{f^{(p)\,\star}_{12}} = -B \end{equation}
\begin{equation} (G-H) \boldsymbol{f^{(p)}_{11}} + J \boldsymbol{f^{(p)}_{12}} + K \boldsymbol{f^{(p)\,\star}_{12}} = -H \end{equation}
\begin{equation} (G^* - H^*) \boldsymbol{f^{(p)}_{11}} + K^* \boldsymbol{f^{(p)}_{12}} + J^* \boldsymbol{f^{(p)\,\star}_{12}} = -H^* \end{equation}
or, in matrix form:
\begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\!\!\begin{pmatrix} - |l_{12}|^2 - |l_{21}|^2 & i e_{21} + \frac{1}{2} (l_{11}l^*_{12} - l^*_{22} l_{21}) & -ie_{12} + \frac{1}{2} (l^*_{11} l_{12} - l_{22} l^*_{21}) \\ 2ie_{12} + l_{11} l^*_{21} - l^*_{22} l_{12} & \frac{J}{c^2} & l_{12}l^*_{21} \\ -2ie_{21} + l^*_{11} l_{21} - l_{22} l^*_{12} & l^*_{12}l_{21} & \frac{J^*}{c^2} \\ \end{pmatrix} \nonumber \\ && \times\begin{pmatrix} \boldsymbol{f^{(p)}_{11}} \\ \boldsymbol{f^{(p)}_{12}} \\ \boldsymbol{f^{(p)\,\star}_{12}}\\ \end{pmatrix} = \begin{pmatrix} -|l_{12}|^2 \\ - \frac{H}{c^2}\\ - \frac{H^*}{c^2} \\ \end{pmatrix}, \label{matreq1} \end{eqnarray}
where $J$ and $H$ are given by (\ref{jvar}) and (\ref{hvar}), and $e_{ij}$ are defined as $e_{ij} \equiv \frac{\varepsilon_{ij}}{c^2}$. The existence of pointers depends on the determinant of the matrix in (\ref{matreq1}). After choosing an appropriate basis, the Lindblad operator $L$ can be reduced to one of two possible forms:
\begin{itemize}
\item The Lindblad operator is diagonal;
\item The Lindblad operator is a Jordan block.
\end{itemize}
\subsection{Diagonal Lindblad Operator}
For diagonal operators $L$,
\begin{equation} \label{ldiag} L = c \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2\\ \end{pmatrix} \end{equation}
with complex values $\lambda_i$, Equation (\ref{matreq1}) for the pointers becomes:
\begin{equation} \label{matreq1abdiag} \begin{pmatrix} 0 & a & a^* \\ -2 a^* & b & 0 \\ -2a & 0 & b^* \\ \end{pmatrix}\times \begin{pmatrix} \boldsymbol{f^{(p)}_{11}} \\ \boldsymbol{f^{(p)}_{12}} \\ \boldsymbol{f^{(p)\,\star}_{12}} \\ \end{pmatrix} = \begin{pmatrix} 0 \\ -a^*\\ -a \\ \end{pmatrix} \end{equation}
where $a = i e_{21}$ and $b = -i (e_{11} - e_{22}) + \lambda_1 \lambda^*_2 - \frac{1}{2} |\lambda_1|^2 - \frac{1}{2} |\lambda_2|^2$. The determinant of the matrix in (\ref{matreq1abdiag}) is $2|a|^2 (b+b^*)=-2|e_{21}|^2 |\lambda_1 - \lambda_2|^2$, and several cases have to be considered:
\begin{itemize}
\item The determinant of the matrix in (\ref{matreq1abdiag}) does not vanish: $$|e_{21}|^2 |\lambda_1 - \lambda_2|^2 \neq 0,$$ and the solution is: \begin{equation} \rho^{(p)} = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2}\\ \end{pmatrix}. \end{equation} This pointer does not depend on the parameters $c,\lambda_1,\lambda_2$ in $L$, being a maximally mixed state.
\item The determinant vanishes due to $e_{21} = 0$. Two options exist in this case:
\begin{enumerate}
\item If, additionally, $b$ vanishes, i.e., $-i (e_{11} - e_{22}) + \lambda_1 \lambda^*_2 - \frac{1}{2} |\lambda_1|^2 - \frac{1}{2} |\lambda_2|^2 = 0,$ the pointer is an arbitrary (Hermitian, unit-trace) matrix: \begin{equation} \label{pointerdiag1} \rho^{(p)} = \begin{pmatrix} f^{(p)}_{11} & f^{(p)}_{12} \\ f^{(p)}_{21} & 1 - f^{(p)}_{11}\\ \end{pmatrix}. \end{equation}
\item If, additionally, $b$ does not vanish, i.e., $-i (e_{11} - e_{22}) + \lambda_1 \lambda^*_2 - \frac{1}{2} |\lambda_1|^2 - \frac{1}{2} |\lambda_2|^2 \neq 0,$ the pointer is an arbitrary diagonal matrix with unit trace: \begin{equation} \label{pointerdiag2} \rho^{(p)} = \begin{pmatrix} f^{(p)}_{11} & 0 \\ 0 & 1 - f^{(p)}_{11}\\ \end{pmatrix} \end{equation}
\end{enumerate}
\item The determinant vanishes due to $\lambda_1 = \lambda_2$. Then, the pointer is expressed via one arbitrary real parameter $x\equiv f^{(p)}_{11}$: \begin{equation} \rho^{(p)} = \begin{pmatrix} x & e_{12} \frac{2x-1}{e_{11}-e_{22}} \\ e_{21} \frac{2x-1}{e_{11}-e_{22}} & 1 - x\\ \end{pmatrix}. \end{equation}
\end{itemize}
\subsection{Lindblad Operator of a Jordan Block Form}
Let us consider the operator $L$ of the Jordan block form:
\begin{equation} \label{ljordan} L = c(\lambda \hat I + \sigma_+ ) = c \begin{pmatrix} \lambda & 1 \\ 0 & \lambda\\ \end{pmatrix}; \end{equation}
with complex $\lambda$ and real parameter $c$, which will allow us to make the operator $L$ small if necessary. A remark on the form (\ref{ljordan}) is appropriate here. The FGKLS equation with such $L, \, L^{\dag}$ can be transformed as follows:
\begin{eqnarray} \dot \rho &=& - i[H, \rho] + c^2( \lambda\hat I + \sigma_+) \rho ( \lambda^{\star}\hat I + \sigma_-)- \nonumber \\ & & \frac12 c^2 \rho( \lambda^{\star}\hat I + \sigma_-) ( \lambda\hat I + \sigma_+) - \frac12 c^2( \lambda^{\star}\hat I + \sigma_-) ( \lambda\hat I + \sigma_+)\rho \nonumber \\ &=& - i[\widetilde H, \rho] + c^2 \sigma_+\rho \sigma_- -\frac12 c^2( \sigma_- \sigma_+\rho + \rho\sigma_- \sigma_+).
\end{eqnarray}
This means the invariance of our model under the simultaneous replacements:
\begin{eqnarray} L = c ( \lambda\hat I + \sigma_+) &\to & \widetilde L = c\sigma_+, \label{r1}\\ H &\to & \widetilde H = H - \frac{ic^2}{2}(\lambda\sigma_- - \lambda^{\star}\sigma_+). \label{r2} \end{eqnarray}
Thus, instead of the Jordan block form (\ref{ljordan}) of the Lindblad operator, the nilpotent operators $\sigma_{\pm}$ might be used, up to a shift of the Hamiltonian by a fixed Hermitian matrix. These two models are equivalent, and below, the form (\ref{ljordan}) will be used. A possible simulation of environmental effects leading to a Lindblad operator of a Jordan block form is presented in the example in Appendix \ref{appc}. Now, we come back to solving the FGKLS equation for the general case of a Lindblad operator of a Jordan block form. The matrix equation (\ref{matreq1}) is simplified as:
\begin{equation} \label{matreq1ab} \begin{pmatrix} -1 & a & a^* \\ -2a^* & b & 0 \\ -2a & 0 & b^* \\ \end{pmatrix} \begin{pmatrix} \boldsymbol{f^{(p)}_{11}} \\ \boldsymbol{f^{(p)}_{12}} \\ \boldsymbol{f^{(p)\star}_{12}} \\ \end{pmatrix} = \begin{pmatrix} -1 \\ -a^* \\ -a \\ \end{pmatrix} \end{equation}
where $a = ie_{21} + \frac{1}{2} \lambda$ and $b = -\frac{1}{2} - i \Delta e$, with $\Delta e \equiv e_{11}-e_{22}$. The determinant of the matrix in (\ref{matreq1ab}) is manifestly negative,
$$-|b|^2 + 2|a|^2(b+b^*) = - \frac{1}{4} - (\Delta e)^2 - 2\left|e_{21}-\frac{1}{2}i\lambda\right|^2 < 0,$$
and the solution $\{f^{(p)}_{11}, f^{(p)}_{12}, f^{(p)\,\star}_{12}\}$ of Equation (\ref{matreq1ab}) exists:
\begin{eqnarray} &&\rho_p =\begin{pmatrix} f^{(p)}_{11} & f^{(p)}_{12} \\ f^{(p)}_{21} & f^{(p)}_{22} \\ \end{pmatrix} = \frac{1}{2 |a|^2 + |b|^2} \begin{pmatrix} |a|^2+|b|^2 & a^* b^* \\ ab & |a|^2\\ \end{pmatrix} = \nonumber \\ &&= \gamma \begin{pmatrix} \frac{1}{4} + (\Delta e)^2 + |e_{21}-\frac{1}{2}i\lambda|^2 & (- ie_{12} + \frac{1}{2} \lambda^*)(-\frac{1}{2} + i \Delta e ) \\ ( ie_{21} + \frac{1}{2} \lambda)(-\frac{1}{2} - i \Delta e ) & |e_{21}-\frac{1}{2}i\lambda|^2 \\ \end{pmatrix}, \label{pointer2}\\ &&\gamma = \frac{1}{ \frac{1}{4} +(\Delta e)^2 + 2|e_{21}-\frac{1}{2}i\lambda|^2} \end{eqnarray}
This solution is physically reasonable, since $\Tr \rho_p = 1$, and the positivity of $\rho_p$ is provided by the inequality $\det \rho_p > 0$.
\begin{itemize}
\item For the special case of degenerate $H$, when $\varepsilon_1 = \varepsilon_2 = \varepsilon$, i.e., $H = \varepsilon \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix}$, the pointer is \begin{equation} \rho_{p, \text{deg}} = \frac{1}{ 2|\lambda|^2 + 1} \begin{pmatrix} |\lambda|^2 + 1 & - \lambda^*\\ - \lambda & |\lambda|^2 \\ \end{pmatrix} \end{equation} It does not depend on the parameter $c$. This is because, for such $H$, the first term of the FGKLS equation (\ref{lindblad}) with the pointer condition (\ref{statpointer}) disappears, and the parameter $c$, after substituting $L$ (\ref{ljordan}), falls away.
\end{itemize}
In Appendix \ref{appa}, one can find the expression for a pointer for weak interaction with the environment.
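The pointer (\ref{pointer2}) can also be verified independently of the algebra above: vectorizing the FGKLS generator and extracting its null eigenvector reproduces the stationary state. A Python sketch (the sample parameter values are ours) is:
\begin{verbatim}
import numpy as np

def fgkls_pointer(H, L):
    # Stationary state as the null eigenvector of the vectorized
    # FGKLS generator, using vec(A rho B) = (B^T kron A) vec(rho)
    I = np.eye(H.shape[0])
    LdL = L.conj().T @ L
    G = (-1j * (np.kron(I, H) - np.kron(H.T, I))
         + np.kron(L.conj(), L)
         - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I)))
    w, V = np.linalg.eig(G)
    v = V[:, np.argmin(np.abs(w))]    # eigenvalue closest to zero
    rho = v.reshape(2, 2, order='F')  # undo column-stacking
    return rho / np.trace(rho)        # normalize the trace to 1

eps1, eps2, c, lam = 1.0, 0.3, 0.5, 0.2 + 0.1j  # sample parameters
H = np.diag([eps1, eps2]).astype(complex)
L = c * np.array([[lam, 1.0], [0.0, lam]])
print(fgkls_pointer(H, L))  # agrees with the pointer formula above
\end{verbatim}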
\section{The General Solution of the FGKLS Equation in Two-Dimensional Hilbert Space}\label{sec3}
\subsection{The Case of a Diagonal Lindblad Operator}
To find the general solution of the FGKLS equation (\ref{lindblad}) with the Lindblad operator of diagonal form (\ref{ldiag}), one has to solve the system of three linear equations:
\begin{equation} \boldsymbol{\dot f_{11}} = i \varepsilon_{21} \boldsymbol{f_{12}} -i\varepsilon_{12} \boldsymbol{f_{21}} \end{equation}
\begin{equation} \boldsymbol{\dot f_{12}} = 2i \varepsilon_{12} \boldsymbol{f_{11}} + \left( -i \Delta\varepsilon - \frac{1}{2} c^2|\lambda_1|^2 - \frac{1}{2} c^2|\lambda_2|^2 + c^2\lambda_1 \lambda^*_2 \right) \boldsymbol{f_{12}} -i\varepsilon_{12} \end{equation}
\begin{equation} \boldsymbol{\dot f_{21}} = -2i \varepsilon_{21} \boldsymbol{f_{11}} + \left( i \Delta\varepsilon - \frac{1}{2} c^2|\lambda_1|^2 - \frac{1}{2} c^2|\lambda_2|^2 + c^2\lambda^*_1 \lambda_2 \right) \boldsymbol{f_{21}} +i\varepsilon_{21} \end{equation}
The trace condition (\ref{trace2}) then gives the solution for the variable $f_{22}$ as well. The solution of this system of differential equations is the sum of a partial solution of the non-homogeneous system and the general solution of the homogeneous system. Since only the corresponding pointer of Section~\ref{sec2} can obviously play the role of a partial solution of this system, the problem left is to find the general solution of the homogeneous system of equations, which in matrix form is:
\begin{eqnarray} &&\begin{pmatrix} \dot f_{11} \\ \dot f_{12} \\ \dot f_{21} \\ \end{pmatrix} = \nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\begin{pmatrix} 0 & i \varepsilon_{21} & -i \varepsilon_{12} \\ 2i \varepsilon_{12}&-i \Delta\varepsilon - \frac{1}{2} c^2|\lambda_1|^2 - \frac{1}{2} c^2|\lambda_2|^2 + c^2\lambda_1 \lambda^*_2 & 0 \\ -2i \varepsilon_{21}& 0 & \!\!\!\!\!\!\!\!\!\!i \Delta\varepsilon - \frac{1}{2} c^2|\lambda_1|^2 - \frac{1}{2} c^2|\lambda_2|^2 + c^2\lambda^*_1 \lambda_2 \\ \end{pmatrix} \nonumber\\ &&\times\begin{pmatrix} f_{11} \\ f_{12} \\ f_{21} \\ \end{pmatrix} \equiv A \begin{pmatrix} f_{11} \\ f_{12} \\ f_{21} \\ \end{pmatrix}\label{homdiffeq} \end{eqnarray}
We will look for solutions in the form
\begin{equation} \label{x} \begin{pmatrix} f_{11} (t) \\ f_{12} (t) \\ f_{21} (t) \\ \end{pmatrix} \equiv \vec X(t) = e^{\Lambda t} \vec V \end{equation}
where $\vec V = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \end{pmatrix}$ is a constant vector and $\Lambda$ is a c-number. Thus, the general solution of the non-homogeneous system of equations can be written as (concerning some exclusions, see the important remark at the end of this subsection)
\begin{equation} \label{gensol} \begin{pmatrix} f_{11} (t) \\ f_{12} (t) \\ f_{21} (t) \\ \end{pmatrix} = \begin{pmatrix} f_{p11} \\ f_{p12} \\ f_{p21} \\ \end{pmatrix}+ c_1 e^{\Lambda_1 t} \begin{pmatrix} v^{(1)}_1 \\ v^{(1)}_2 \\ v^{(1)}_3 \\ \end{pmatrix} + c_2 e^{\Lambda_2 t} \begin{pmatrix} v^{(2)}_1 \\ v^{(2)}_2 \\ v^{(2)}_3 \\ \end{pmatrix} + c_3 e^{\Lambda_3 t} \begin{pmatrix} v^{(3)}_1 \\ v^{(3)}_2 \\ v^{(3)}_3 \\ \end{pmatrix} \end{equation}
where $c_1, c_2, c_3$ are arbitrary complex constants. Substituting (\ref{x}) into (\ref{homdiffeq}), one gets
\begin{equation} \label{eig0} \Lambda e^{\Lambda t} \vec V = A e^{\Lambda t} \vec V \;\Rightarrow\; A \vec V = \Lambda \vec V \end{equation}
Thus, we need the eigenvalues and eigenvectors of the matrix $A$ (defined in (\ref{homdiffeq})).
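For any given parameter values, this spectrum is immediate to obtain numerically; a Python sketch for the diagonal case (\ref{ldiag}) (parameter values are illustrative only) reads:
\begin{verbatim}
import numpy as np

def A_matrix(eps, c, lam1, lam2):
    # Matrix A of the homogeneous system for diagonal L; eps is the
    # 2x2 array of the epsilon_{ik} coefficients of the Hamiltonian
    d = -0.5 * c**2 * (abs(lam1)**2 + abs(lam2)**2)
    de = eps[0, 0] - eps[1, 1]
    return np.array([
        [0.0, 1j * eps[1, 0], -1j * eps[0, 1]],
        [2j * eps[0, 1], -1j * de + d + c**2 * lam1 * np.conj(lam2), 0.0],
        [-2j * eps[1, 0], 0.0, 1j * de + d + c**2 * np.conj(lam1) * lam2],
    ])

eps = np.array([[1.0, 0.2], [0.2, 0.4]])         # sample parameters
Lam, V = np.linalg.eig(A_matrix(eps, c=0.5, lam1=1.0, lam2=0.3j))
print(Lam)   # the real parts are non-positive, as proved below
\end{verbatim}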
Equation (\ref{eig0}) can be represented in the following form:
{\footnotesize \begin{eqnarray} &&\!\!\!\!\!\!\!\!\begin{pmatrix} 0 & i e_{21} & -i e_{12} \\ 2i e_{12}&-i \Delta e - \frac{1}{2} |\lambda_1|^2 - \frac{1}{2} |\lambda_2|^2 + \lambda_1 \lambda^*_2 & 0 \\ -2i e_{21}& 0 & i\Delta e - \frac{1}{2} |\lambda_1|^2 - \frac{1}{2} |\lambda_2|^2 + \lambda^*_1 \lambda_2 \\ \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \end{pmatrix} = \nonumber\\ &&= s \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \end{pmatrix}, \label{eigdiag} \end{eqnarray} }
where we have introduced the notation $s = \frac{\Lambda}{c^2}$ and $e_{ij} = \frac{\varepsilon_{ij}}{c^2}$. The characteristic polynomial of the matrix in Equation (\ref{eigdiag}) is
\begin{eqnarray} &&s^3 + s^2 |\lambda_1 - \lambda_2|^2\nonumber\\&& + s \left[ 4 |e_{12}|^2 + \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2 + \frac{1}{4} |\lambda_1 - \lambda_2|^4 \right] +\nonumber \\ &&+ 2 |e_{12}|^2 |\lambda_1 - \lambda_2|^2 = 0 \label{cubicdiag} \end{eqnarray}
Next, we have to consider two cases: when the last term vanishes and when it does not. These two cases give completely different dynamics of the system.
\begin{enumerate}
\item $\boldsymbol{e_{12} = e_{21} = 0}$ \textbf{or} $\boldsymbol{\lambda_1 = \lambda_2}$:\\
Solving Equation (\ref{cubicdiag}), we obtain
\begin{equation} s_1 = 0 \end{equation}
\begin{equation} s_{2,3} = -\frac{1}{2} |\lambda_1 - \lambda_2|^2 \pm i \sqrt{4|e_{12}|^2 + \left|e_{11} - e_{22} + \frac{1}{2} i (\lambda_1 \lambda_2^* - \lambda_1^* \lambda_2)\right|^2} \end{equation}
We see that, when $\lambda_1 \neq \lambda_2$, according to (\ref{gensol}), the last two exponents decay, while the first one reduces to a constant, which merges with the pointer. Therefore, we conclude that, during the evolution, such a system approaches a constant that is not constrained by the interaction with an environment, the parameters of the Hamiltonian, etc. When $\lambda_1 = \lambda_2$, the real part of $s_{2,3}$ vanishes, and we have never-ending oscillations of the solution.
\item $\boldsymbol{|e_{12}|^2 |\lambda_1 - \lambda_2|^2 \neq 0}$:\\
Below, we will prove that, in this case, $\operatorname{Re} s < 0$ for each root $s$ of this equation, i.e., according to (\ref{gensol}), $\rho(t)$ converges to the pointer $\rho_p$ for $t\rightarrow \infty$. In general, two options are possible in this case.
\begin{enumerate}
\item All three roots of (\ref{cubicdiag}), $s_1, s_2, s_3$, are real.\\
The form of the l.h.s. of (\ref{cubicdiag}) is such that it is strictly positive for $s \geq 0$, and therefore, $s_1, s_2, s_3$ have to be negative.
\item Equation (\ref{cubicdiag}) has two complex roots, $s, s^*$, and one real root $t$.\\
To start with, we write Vieta's formulas for Equation (\ref{cubicdiag}):
\begin{equation} \label{sys1diag} 2 \operatorname{Re} (s) + t = -|\lambda_1 - \lambda_2|^2 \end{equation}
\begin{equation} \label{sys2diag} |s|^2 + 2 \operatorname{Re} (s) t =4 |e_{12}|^2 + \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2 + \frac{1}{4} |\lambda_1 - \lambda_2|^4 \end{equation}
\begin{equation} \label{sys3diag} |s|^2 t = - 2 |e_{12}|^2 |\lambda_1 - \lambda_2|^2 \end{equation}
$|s|^2 \neq 0$, since $s=0$ cannot be a root of (\ref{cubicdiag}). Therefore, from the last equation, it is obvious that $t<0$. We are left to prove that $\operatorname{Re} (s) < 0$.
Expressing $|s|^2$ and $t$ from the system of Equations (\ref{sys1diag})--(\ref{sys3diag}), we get the equation for $\operatorname{Re} s$:
\begin{eqnarray} && \operatorname{Re} (s)^3 + |\lambda_1 - \lambda_2|^2 \operatorname{Re} (s)^2 \nonumber \\ && + \left( |e_{12}|^2 + \frac{1}{4} \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2 + \frac{5}{16} |\lambda_1 - \lambda_2|^4 \right) \operatorname{Re} (s) \nonumber \\ && + \left(\frac{1}{4} |e_{12}|^2 + \frac{1}{8}\left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2 + \frac{1}{32} |\lambda_1 - \lambda_2|^4 \right)|\lambda_1 - \lambda_2|^2 = 0 \end{eqnarray}
As before, we notice that $\operatorname{Re} s \geq 0$ cannot be a solution of this equation; therefore, $\operatorname{Re} s < 0$.
\end{enumerate}
The fact that $\operatorname{Re} s_1, \operatorname{Re} s_2, \operatorname{Re} s_3 < 0$ is very important. It signifies the vanishing of the exponents in the general solution (\ref{gensol}) of the Lindblad equation over time (according to the definition, $\Lambda_1 = c^2 s$, $\Lambda_2 = c^2 s^*$, $\Lambda_3 = c^2 t$), which, finally, gives that the density matrix for any values of the parameters in $H$ and $L$ (of the form (\ref{ldiag})) converges to the corresponding pointer of Section~\ref{sec2}.
\end{enumerate}
The other forms of the solution, not of the type (\ref{gensol}), can be found in Appendix \ref{appb}, where we analyze the coinciding roots of Equation (\ref{cubicdiag}).
\subsection{The Case of the Lindblad Operator of the Jordan Block Form}
For the case of the Lindblad operator in the Jordan form (\ref{ljordan}), the FGKLS equation (\ref{lindblad}) can also be rewritten in matrix form:
\begin{equation} \label{homdiffeq++} \begin{pmatrix} \dot f_{11} \\ \dot f_{12} \\ \dot f_{21} \\ \end{pmatrix} = c^2 \begin{pmatrix} -1 & i e_{21}+ \frac{1}{2} \lambda & -ie_{12} + \frac{1}{2} \lambda^* \\ 2i e_{12} - \lambda^* & -i(e_{11}-e_{22}) - \frac{1}{2} & 0 \\ -2ie_{21} - \lambda & 0 & i(e_{11} - e_{22}) - \frac{1}{2} \\ \end{pmatrix} \begin{pmatrix} f_{11} \\ f_{12} \\ f_{21} \\ \end{pmatrix} \end{equation}
and $f_{22}$ can be found from the trace condition. Equation (\ref{eig0}) can be represented in the following form:
\begin{equation} \label{eig}\begin{pmatrix} -1 & i e_{21}+ \frac{1}{2} \lambda & -ie_{12} + \frac{1}{2} \lambda^* \\ 2i e_{12} - \lambda^* & -i(e_{11}-e_{22}) - \frac{1}{2} & 0 \\ -2ie_{21} - \lambda & 0 & i(e_{11} - e_{22}) - \frac{1}{2} \\ \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \end{pmatrix} = s \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \end{pmatrix} \end{equation}
where $s$ is defined as before: $s = \frac{\Lambda}{c^2}$. The characteristic polynomial of the matrix in Equation (\ref{eig}) is
\begin{eqnarray}&& \label{cubic} s^3 + 2s^2 + \left( \frac{5}{4} + (e_{11}-e_{22})^2 + 4 \left| \frac{1}{2}\lambda + ie_{21}\right|^2\right) s\nonumber\\&& + \left( \frac{1}{4} + (e_{11}-e_{22})^2 + 2\left|\frac{1}{2}\lambda + ie_{21}\right|^2 \right) = 0 \end{eqnarray}
Next, we have to prove that $\operatorname{Re} s < 0$ for each root $s$ of this equation. It means, according to (\ref{gensol}), that $\rho(t)$ converges to the pointer $\rho_p$ for $t\rightarrow \infty$. Two options exist for the cubic equation (\ref{cubic}):
\begin{enumerate}
\item If the roots of (\ref{cubic}), $s_1, s_2, s_3$, are real, they are negative.
Otherwise, the left-hand side of (\ref{cubic}) would be strictly positive, not $0.$
\item If Equation (\ref{cubic}) has two complex roots $s, s^*$ and one real root $t,$ all of them have negative real parts. This important fact is derived by means of the well-known Vieta's formulas. To start with, we write Vieta's formulas for Equation (\ref{cubic}):
\begin{equation} \label{sys1} 2 \operatorname{Re} (s) + t = -2 \end{equation}
\begin{equation} \label{sys2} |s|^2 + 2 \operatorname{Re} (s) t =\frac{5}{4} + (e_{11}-e_{22})^2 + 4 \left| \frac{1}{2}\lambda + ie_{21}\right|^2 \end{equation}
\begin{equation} \label{sys3} |s|^2 t = - \left(\frac{1}{4} + (e_{11}-e_{22})^2 + 2\left|\frac{1}{2}\lambda + ie_{21}\right|^2 \right) \end{equation}
$|s|^2 \neq 0$, since $s=0$ cannot be a root of (\ref{cubic}). Therefore, from the last equation, it is obvious that $t<0$. We are left to prove that $\operatorname{Re} (s) < 0$. Expressing $|s|^2$ and $t$ from the system of Equations (\ref{sys1})--(\ref{sys3}), we get the cubic equation for $\operatorname{Re} s$:
\begin{eqnarray} && \operatorname{Re} (s)^3 + 2 \operatorname{Re} (s)^2 + \operatorname{Re} (s) \left(\frac{21}{16} + \frac{1}{4} (e_{11} - e_{22})^2 +\left|\frac{1}{2}\lambda + ie_{21}\right|^2\right) + \nonumber \\ && + \left(\frac{9}{32} + \frac{1}{8}(e_{11} - e_{22})^2 + \frac{3}{4}\left|\frac{1}{2}\lambda + ie_{21}\right|^2 \right)= 0. \end{eqnarray}
As before, we conclude that $\operatorname{Re} s \geq 0$ cannot be a solution of this equation; therefore, $\operatorname{Re} s < 0$.
\end{enumerate}
The fact that $\operatorname{Re} s_1, \operatorname{Re} s_2, \operatorname{Re} s_3 < 0$ is very important. It signifies the vanishing of the exponents in the general solution (\ref{gensol}) of the FGKLS equation over time (according to the definition, $\Lambda_1 = c^2 s$, $\Lambda_2 = c^2 s^*$, $\Lambda_3 = c^2 t$), which, finally, gives that the density matrix for any values of the parameters in $H$ and $L$ (of the form (\ref{ljordan})) converges to the pointer (\ref{pointer2}). For the Lindblad operator of the Jordan block form, we have also found the other forms of the solution, not of the type (\ref{gensol}), corresponding to coinciding roots of (\ref{cubic}). One can find the complete analysis in Appendix \ref{appb}.
\section{Positivity of the Solution of the FGKLS Equation}
Next, we have to make sure that the remaining property of the density matrix $\rho(t)$ is satisfied---namely, that it is positive. Otherwise, it does not have physical meaning.
For this purpose, we shall explicitly take into account in (\ref{gensol}) that the condition $\Tr \rho(t) = 1$ (\ref{trace2}) has to be satisfied at any moment of time, including asymptotically for $t\to\infty$. Therefore, the solution~(\ref{gensol}) can be rewritten in matrix form (we consider only the case of non-coinciding roots $s_i$):
\begin{eqnarray} \label{rhot0} \rho(t) &=& \begin{pmatrix} f_{p11} & f_{p12} \\ f_{p21} & f_{p22} \\ \end{pmatrix} + c_1 e^{s_1 c^2 t} \begin{pmatrix} v^{(1)}_1 & v^{(1)}_2 \\ v^{(1)}_3 & -v^{(1)}_1 \\ \end{pmatrix} \nonumber \\ && + c_2 e^{s_2 c^2 t} \begin{pmatrix} v^{(2)}_1 & v^{(2)}_2 \\ v^{(2)}_3 & -v^{(2)}_1 \\ \end{pmatrix}+ c_3 e^{s_3 c^2 t} \begin{pmatrix} v^{(3)}_1 & v^{(3)}_2 \\ v^{(3)}_3 & -v^{(3)}_1 \\ \end{pmatrix} \end{eqnarray}
After this, we shall parameterize it conveniently, representing all the complex numbers in polar form:
\begin{equation} c_k = r_k e^{i \phi_k}, \;\;\; v_l^{(k)} = \alpha_l^{(k)} e^{i \beta^{(k)}}, \;\;\;\;\; k,l = 1,2,3\end{equation}
and then constrain the expression by the hermiticity condition (\ref{herm2}). We obtain the following parametrization:
\begin{itemize}
\item For the case of two complex roots and one real root (${s_1,s_2 = s_1^* \in \mathbb{C}}, {s_3 \in \mathbb{R}}$):
\begin{eqnarray} \label{rhoparam1} \rho(t) &=& \begin{pmatrix} f_{p11} & f_{p12} \\ f_{p21} & f_{p22} \\ \end{pmatrix} + u e^{-i \phi} e^{s_1 c^2 t} \begin{pmatrix} d e^{-i\delta} & b e^{-i\gamma} \\ a e^{-i\alpha} & -d e^{-i\delta} \end{pmatrix} \nonumber \\ && + u e^{i \phi} e^{s^*_1 c^2 t} \begin{pmatrix} d e^{i\delta} & a e^{i\alpha} \\ b e^{i\gamma} & -d e^{i\delta} \end{pmatrix} \pm w e^{s_3 c^2 t} \begin{pmatrix} h & p e^{i \frac{\beta}{2}} \\ p e^{-i \frac{\beta}{2}} & -h \end{pmatrix} \end{eqnarray}
where $a,b,d,p,h,u,w\in \mathbb{R}_+$ and $\alpha,\beta,\gamma,\delta, \phi \in \mathbb{R}$.\\
Since $(v^{(k)}_1,v^{(k)}_2,v^{(k)}_3)$ is an eigenvector of the matrix $A$ from (\ref{homdiffeq}), the $\{v_l^{(k)}\}$ are completely determined by the particular form of the Hamiltonian $H$ and the Lindblad operator $L$, i.e., by the values of the parameters $\varepsilon_1, \varepsilon_2, c, \lambda$. Therefore, $\{a,b,d,p,h,\alpha,\beta,\gamma,\delta\}$ are also completely determined by $H$ and $L$. As for $u, w \in \mathbb{R}_+$, $\phi \in \mathbb{R}$, and the ``$\pm$'' sign, these are free parameters. They depend on the choice of the initial state $\rho(0)$.
\item For the case of three real roots ($s_1, s_2, s_3 \in \mathbb{R}$):
\begin{eqnarray} \label{rhoparam2} \rho(t) &=& \begin{pmatrix} f_{p11} & f_{p12} \\ f_{p21} & f_{p22} \\ \end{pmatrix} \pm w^{(1)} e^{s_1 c^2 t} \begin{pmatrix} h^{(1)} & p^{(1)} e^{i \frac{\beta^{(1)}}{2}} \\ p^{(1)} e^{-i \frac{\beta^{(1)}}{2}} & -h^{(1)} \end{pmatrix} \pm w^{(2)} e^{s_2 c^2 t} \begin{pmatrix} h^{(2)} & p^{(2)} e^{i \frac{\beta^{(2)}}{2}} \\ p^{(2)} e^{-i \frac{\beta^{(2)}}{2}} & -h^{(2)} \end{pmatrix} \nonumber \\ && \pm w^{(3)} e^{s_3 c^2 t} \begin{pmatrix} h^{(3)} & p^{(3)} e^{i \frac{\beta^{(3)}}{2}} \\ p^{(3)} e^{-i \frac{\beta^{(3)}}{2}} & -h^{(3)} \end{pmatrix} \end{eqnarray}
where $\{ p^{(k)}, h^{(k)} \in \mathbb{R}_+, \beta^{(k)} \in \mathbb{R}, \; k=1,2,3 \}$ are determined by $H$ and $L$, while the $\pm$ signs and $\{w^{(k)} \in \mathbb{R}_+, \; k=1,2,3\}$ depend on the choice of the initial state $\rho(0)$.
\end{itemize}
Interestingly, the initial state has exactly three real parameters: $x,y,z \in \mathbb{R}$,
\begin{equation} \rho (0) = \begin{pmatrix} z & x+ iy \\ x-iy & 1-z \\ \end{pmatrix} \nonumber \end{equation}
The correspondence between $\{x,y,z \in \mathbb{R}\}$ and $\{u,w \in \mathbb{R}_+;\phi \in \mathbb{R};\pm\}$ or $\{\pm$ signs, $w^{(k)} \in \mathbb{R}_+, \; k=1,2,3\}$ is a problem to be solved. The choice of the ``$\pm$'' signs could signify several different paths that the system may choose to take. In order to check the positivity of (\ref{rhoparam1}) and (\ref{rhoparam2}), we need to take the determinant of the matrix $\rho(t)$ and find the conditions (i.e., the moments of time $t$) for which the determinant $\det{\rho(t)}$ is non-negative. The resulting inequalities are not solvable explicitly in a general form. Therefore, we will restrict ourselves to the particular case when only the last exponent, with real value $s_3$, is present in (\ref{rhoparam1}) and (\ref{rhoparam2}):
\begin{equation} \label{redsol}\rho(t) = \begin{pmatrix} f_{p11} & f_{p12} \\ f_{p21} & f_{p22} \\ \end{pmatrix} \pm w e^{s_3 c^2 t} \begin{pmatrix} h & p e^{i \frac{\beta}{2}} \\ p e^{-i \frac{\beta}{2}} & -h \\ \end{pmatrix} \end{equation}
We will denote $\pm w e^{s_3 c^2 t}$ as $x$, and the positivity of the density matrix means:
\begin{eqnarray} \det{\rho(t)} &=& \left| \begin{matrix} f_{p11} + x h & f_{p12} + x p e^{i \frac{\beta}{2}} \\ f_{p21} + x p e^{-i \frac{\beta}{2}} & f_{p22} - x h \\ \end{matrix} \right| \nonumber \\ &=& - x^2 \left(p^2 +h^2\right) + x \left(h f_{p22} - h f_{p11} - p e^{i \frac{\beta}{2}} f_{p21} - p e^{-i \frac{\beta}{2}} f_{p12}\right) + f_{p11} f_{p22} - f_{p21} f_{p12} \nonumber \\ &=& -\left(p^2+ h^2 \right) (x-x_1) (x-x_2) \geq 0,\end{eqnarray}
or,
\begin{equation} \label{ineqexp} x_1 \leq \pm w e^{s_3 c^2 t} \leq x_2 \end{equation}
where $x_1, x_2, \;x_1 \leq x_2$, are the roots of the quadratic polynomial above. One of Vieta's formulas gives
\begin{equation} -(p^2+h^2) x_1 x_2 = f_{p11} f_{p22} - f_{p21} f_{p12} \geq 0 \end{equation}
The latter inequality follows from the positivity of the pointers in Section \ref{sec2}. It means that $x_1 x_2 \leq 0 \Rightarrow x_1 \leq 0, x_2 \geq 0$. There are three options to fulfill the inequality (\ref{ineqexp}):
\begin{itemize}
\item $w=0$\\
This is the trivial case: $\rho(t)$ is just a pointer at any moment of time $t$. It is positive, as we checked earlier, and the inequality (\ref{ineqexp}) is obviously correct.
\item The ``$+$'' sign, $w>0$ \\
The inequalities (\ref{ineqexp}) reduce to:
\begin{equation} s_3 c^2 t \leq \ln{x_2} - \ln{w}. \end{equation}
Thus, the solution $\rho(t)$ (\ref{redsol}) of the FGKLS equation is physically meaningful only for
\begin{equation} t \geq \frac{\ln{w} - \ln{x_2}}{-s_3 c^2} \end{equation}
\item The ``$-$'' sign, $w>0$ \\
In a similar way, we obtain:
\begin{equation} s_3 c^2 t \leq \ln{(-x_1)} - \ln{w}, \end{equation}
and the admissible times are:
\begin{equation} t\geq \frac{ \ln{w} -\ln{(-x_1)}}{-s_3 c^2}. \end{equation}
\end{itemize}
For $w\neq 0,$ only these time intervals provide a physically reasonable solution of the FGKLS equation.
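These bounds are easy to evaluate in practice. A Python sketch (our own parameter names, mirroring the symbols above; recall $s_3<0$) is:
\begin{verbatim}
import numpy as np

def positivity_threshold(fp, h, p, beta, w, s3, c, sign=+1):
    # Smallest time t for which det rho(t) >= 0 in the reduced
    # solution; fp is the 2x2 pointer matrix, sign selects "+"/"-"
    a = -(p**2 + h**2)
    b = np.real(h * (fp[1, 1] - fp[0, 0])
                - 2.0 * p * np.exp(-0.5j * beta) * fp[0, 1])
    d = np.real(np.linalg.det(fp))
    x1, x2 = np.sort(np.roots([a, b, d]).real)   # x1 <= 0 <= x2
    bound = x2 if sign > 0 else -x1
    return (np.log(w) - np.log(bound)) / (-s3 * c**2)
\end{verbatim}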
\section{The Behavior of Solutions for Weak Interaction with the Environment}
In this section, we examine the behavior of the solution (\ref{rhot0}) when the interaction with an environment disappears, i.e., for $c \rightarrow 0$. We start from the Lindblad operator $L$ of diagonal form (\ref{ldiag}), solving Equations (\ref{eigdiag}) and (\ref{cubicdiag}) by means of perturbation theory with parameter $c^2$. For simplicity, we take the case of a diagonal Hamiltonian, $\varepsilon_{12} = \varepsilon_{21} = 0$. Since, in all the equations, we have integer powers of $c^2$, we expand the parameters $\Lambda$ of the general solution (\ref{gensol}) in a series:
\begin{equation} \Lambda = c^2 s = a_0 + a_1 c^2 + \dots \end{equation}
In the leading order, Equation (\ref{cubicdiag}) reads:
\begin{equation} a_0^3 + a_0 (\varepsilon_{11}-\varepsilon_{22})^2= 0 \end{equation}
with three solutions: $a_0 = 0$ and $a_0 = \pm i (\varepsilon_{11}-\varepsilon_{22})$. Considering the next-to-leading order of Equation (\ref{cubicdiag}), we obtain for $a_1$:
\begin{equation} a_1 = -\frac{|\lambda_1 - \lambda_2|^2 a_0^2 + a_0 i (\lambda_1 \lambda_2^* - \lambda_1^* \lambda_2)\Delta\varepsilon}{3 a_0^2 + (\varepsilon_{11}-\varepsilon_{22})^2} \end{equation}
which gives three options:
\begin{itemize}
\item $a_0 = 0$ and $a_1 = 0;$
\item $a_0 = i (\varepsilon_{11}-\varepsilon_{22})$ and $a_1 = -\frac{1}{2}(|\lambda_1|^2 + |\lambda_2|^2 - 2 \lambda_1^* \lambda_2);$
\item $a_0 = - i (\varepsilon_{11}-\varepsilon_{22})$ and $a_1 = -\frac{1}{2}(|\lambda_1|^2 + |\lambda_2|^2 - 2 \lambda_1 \lambda_2^*).$
\end{itemize}
Then, the general solution (\ref{gensol}) with diagonal $L$ for small $c^2$ looks like:
\begin{equation} \rho(t) = v_{const} + e^{i\Delta\varepsilon t -\frac{1}{2}(|\lambda_1|^2 + |\lambda_2|^2 - 2 \lambda_1^* \lambda_2) c^2 t + \dots} v_1+ e^{-i\Delta\varepsilon t -\frac{1}{2}(|\lambda_1|^2 + |\lambda_2|^2 - 2 \lambda_1 \lambda_2^*) c^2 t + \dots} v_1^\dag \end{equation}
where $v_{const}$ and $v_1$ are time-independent matrices. One can check explicitly that the real parts in both exponents of this expression are negative (for $\lambda_1 \neq \lambda_2$, they are equal to $-\frac{1}{2}|\lambda_1 - \lambda_2|^2 c^2$).
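These expansions are easily checked against the exact roots of the characteristic cubic; a Python sketch for the diagonal case (sample values ours) is:
\begin{verbatim}
import numpy as np

eps11, eps22 = 1.0, 0.4                 # diagonal Hamiltonian
lam1, lam2 = 1.0, 0.5 + 0.2j
c = 0.05                                # weak coupling
de = eps11 - eps22

# Exact: roots s of the characteristic cubic with e_12 = 0,
# rescaled to Lambda = c^2 s
dl2 = abs(lam1 - lam2)**2
mix = de / c**2 + 0.5j * (lam1 * np.conj(lam2) - np.conj(lam1) * lam2)
Lam_exact = c**2 * np.roots([1.0, dl2, abs(mix)**2 + 0.25 * dl2**2, 0.0])

# Next-to-leading-order prediction Lambda = a_0 + a_1 c^2
a0 = 1j * de
a1 = -0.5 * (abs(lam1)**2 + abs(lam2)**2 - 2.0 * np.conj(lam1) * lam2)
print(Lam_exact)                        # 0 and a complex-conjugate pair
print(a0 + a1 * c**2, np.conj(a0 + a1 * c**2))
\end{verbatim}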
For the case of the Lindblad operator with Jordan block structure (\ref{ljordan}) and a diagonal Hamiltonian $H$, for small values of $c^2$, the behavior of the solution (\ref{rhot0}) can be obtained in a similar way:
\begin{equation} \Lambda = c^2 s = a_0 + a_1 c^2 + \dots \end{equation}
In the leading order, Equation (\ref{cubic}) reads:
\begin{equation} a_0^3 + a_0 (\varepsilon_{11}-\varepsilon_{22})^2 = 0, \end{equation}
providing again three solutions: $a_0 = 0$ and $a_0 = \pm i (\varepsilon_{11}-\varepsilon_{22})$. Next, we find $a_1$, considering the next-to-leading order of Equation (\ref{cubic}):
\begin{equation} a_1 = -\frac{2 a_0^2 + (\varepsilon_{11}-\varepsilon_{22})^2}{3 a_0^2 + (\varepsilon_{11}-\varepsilon_{22})^2} \end{equation}
which gives:
\begin{itemize}
\item $a_0 = 0;\,\, a_1 = -1;$
\item $a_0 = i (\varepsilon_{11}-\varepsilon_{22}); \,\, a_1 = -\frac{1}{2};$
\item $a_0 = - i (\varepsilon_{11}-\varepsilon_{22}); \,\, a_1 = -\frac{1}{2}$
\end{itemize}
Finally, solving the eigenvector Equation (\ref{eig}) in the leading and next-to-leading orders, we get for the density matrix:
\begin{eqnarray} \rho(t) &=& \begin{pmatrix} f^{(p)}_{11} & f^{(p)}_{12} \\ f^{(p)}_{21} & f^{(p)}_{22} \\ \end{pmatrix} + c_1 e^{- c^2 t + \dots} \begin{pmatrix} 1 + x c^2 + \dots & i \frac{\lambda^* c^2}{\Delta\varepsilon} +\dots \\ - i \frac{\lambda c^2}{\Delta\varepsilon} +\dots & -1 - x c^2 - \dots \\ \end{pmatrix} + \nonumber \\ & & c_2 e^{i\Delta\varepsilon t - \frac{1}{2} c^2 t + \dots} \begin{pmatrix} -\frac{1}{2} i \frac{\lambda^* c^2}{\Delta\varepsilon} + \dots & 0 + \dots \\ 1+ y c^2 + \dots & \frac{1}{2} i \frac{\lambda^* c^2}{\Delta\varepsilon} + \dots \\ \end{pmatrix}+\nonumber \\ & & c_3 e^{-i\Delta\varepsilon t - \frac{1}{2} c^2 t + \dots} \begin{pmatrix} \frac{1}{2} i \frac{\lambda c^2}{\Delta\varepsilon} + \dots & 1+ z c^2 + \dots \\ 0 + \dots & -\frac{1}{2} i \frac{\lambda c^2}{\Delta\varepsilon} + \dots \\ \end{pmatrix}, \end{eqnarray}
where $c_1, c_2, c_3, x, y, z$ are arbitrary complex numbers. Applying the hermiticity condition (\ref{herm2}), we further constrain the solution:
\begin{eqnarray} \rho(t) &=& \begin{pmatrix} f^{(p)}_{11} & f^{(p)}_{12} \\ f^{(p)}_{21} & f^{(p)}_{22} \\ \end{pmatrix} + p e^{- c^2 t + \dots} \begin{pmatrix} 1 + a c^2 + \dots & i \frac{\lambda^* c^2}{\Delta\varepsilon} +\dots \\ - i \frac{\lambda c^2}{\Delta\varepsilon} +\dots & -1 - a c^2 - \dots \\ \end{pmatrix} + \nonumber \\ & & q e^{i\Delta\varepsilon t - \frac{1}{2} c^2 t + \dots} \begin{pmatrix} -\frac{1}{2} i \frac{\lambda^* c^2}{\Delta\varepsilon} + \dots& 0 + \dots \\ 1+ y c^2 + \dots & \frac{1}{2} i \frac{\lambda^* c^2}{\Delta\varepsilon} + \dots \\ \end{pmatrix}+ \nonumber \\ & &q^* e^{-i\Delta\varepsilon t - \frac{1}{2} c^2 t + \dots} \begin{pmatrix} \frac{1}{2} i \frac{\lambda c^2}{\Delta\varepsilon} + \dots & 1+ y^* c^2 + \dots \\ 0 + \dots & -\frac{1}{2} i \frac{\lambda c^2}{\Delta\varepsilon} + \dots \\ \end{pmatrix} \end{eqnarray}
where $p, a \in \mathbb{R}$ and $q, y \in \mathbb{C}$. We see that if the Lindblad operator $L \neq 0$ has the form (\ref{ljordan}), then $$\rho(t) \rightarrow \begin{pmatrix} f^{(p)}_{11} & f^{(p)}_{12} \\ f^{(p)}_{21} & f^{(p)}_{22} \\ \end{pmatrix}$$ as $t \rightarrow \infty$, i.e., this system has a pointer (the late-time state of the system) of a very specific form.
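This convergence is easy to see numerically: integrating the FGKLS equation from different initial states yields the same late-time matrix. A self-contained Python sketch (sample parameters ours) is:
\begin{verbatim}
import numpy as np

def evolve(rho, H, L, t, steps=20000):
    # Fixed-step RK4 integration of the FGKLS equation
    LdL = L.conj().T @ L
    rhs = lambda r: (-1j * (H @ r - r @ H) + L @ r @ L.conj().T
                     - 0.5 * (LdL @ r + r @ LdL))
    dt = t / steps
    for _ in range(steps):
        k1 = rhs(rho)
        k2 = rhs(rho + 0.5 * dt * k1)
        k3 = rhs(rho + 0.5 * dt * k2)
        k4 = rhs(rho + dt * k3)
        rho = rho + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return rho

c, lam = 0.3, 0.4 + 0.2j
H = np.diag([1.0, 0.4]).astype(complex)
L = c * np.array([[lam, 1.0], [0.0, lam]])
for z in (0.0, 0.5, 1.0):               # several initial states
    rho0 = np.array([[z, 0.1], [0.1, 1.0 - z]], dtype=complex)
    print(np.round(evolve(rho0, H, L, t=400.0), 4))  # same pointer
\end{verbatim}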
This is the effect of decoherence---the loss of information in the course of the evolution of an open system: the final state retains no information about the initial state. However, if the Lindblad operator $L=0$, or $c=0$, then we obtain the ordinary solution of the Liouville equation with oscillating exponents. It contains arbitrary parameters, which can be fixed by initial conditions. In other words, it does not approach the Lindblad pointer $\begin{pmatrix} f^{(p)}_{11} & f^{(p)}_{12} \\ f^{(p)}_{21} & f^{(p)}_{22} \\ \end{pmatrix}$ in the late-time limit. The evolution of the system fully depends on the initial state, and information about the initial state is contained in the final state. We have seen the same situation in our previous work \cite{pertalg}, where an arbitrarily small $L \neq 0$ produced a very specific form of the pointer, while $L=0$ implied that the pointer is a matrix with arbitrary elements (if the Hamiltonian is non-degenerate, it is a diagonal matrix with arbitrary elements; if the Hamiltonian is degenerate, it is a matrix with arbitrary diagonal elements and arbitrary elements corresponding to the degenerate indices). \section{Unitons} In this section, we consider a special kind of open system---unitons---whose density matrix evolves unitarily despite the openness. In other words, the density matrix $\rho_u(t)$ of the system interacting with the environment still obeys the FGKLS equation with a single Lindblad operator $L$, but simultaneously its Lindbladian part vanishes: \begin{equation} \label{uniton1} L \rho_u L^\dag - \frac{1}{2} \left\{L^\dag L,\rho_u \right\} = 0\end{equation} \begin{equation} \label{uniton2} \dot{\rho_u} = -i[H,\rho_u] \;\; \Leftrightarrow \;\; \rho_u (t) = e^{-i H t} \rho_u (0) e^{i H t}\end{equation} Keeping the decompositions of $H$ and $L$ as in (\ref{h2}) and (\ref{l2}), the uniton's density matrix is now: \begin{equation} \rho_u = \sum_{ij} f^{(u)}_{ij} \ket{\psi_i} \bra{\psi_j}. \end{equation} The first relation (\ref{uniton1}) is written in this basis as follows: \begin{equation} \sum_{k,l}l_{mk}f^{(u)}_{kl}l^*_{nl} - \frac{1}{2}\sum_{k,l}l^*_{km}l_{kl}f^{(u)}_{ln}-\frac{1}{2}\sum_{k,l}f^{(u)}_{mk}l^*_{lk}l_{ln} = 0. \end{equation} It can be transformed into an explicit form of a matrix with four indices multiplied by a vector with two indices: \begin{equation} \sum_{kl} A_{mn,kl} f^{(u)}_{kl} = 0, \end{equation} where \begin{equation} A_{mn,kl} \equiv l_{mk} l^*_{nl} - \frac{1}{2} \sum_s l^*_{sm} l_{sk} \delta_{ln} - \frac{1}{2} \sum_s \delta_{km} l^*_{sl} l_{sn}. \end{equation} Thus, the existence of unitons depends crucially on the determinant of this matrix. \begin{itemize} \item If the determinant does not vanish, the only solution is $f^{(u)}_{mn} = 0 \;\forall m,n$, and no unitons exist in this case. \item If the determinant vanishes, three options appear: \begin{itemize} \item The solution $\{f^{(u)}_{mn}\}$ has one free parameter.\\ This parameter is eliminated by the trace condition $\sum_k f^{(u)}_{kk} =1$. We have a constant density matrix that has to satisfy (\ref{uniton2}). Thus, unitons exist only if they commute with the Hamiltonian: $[H,\rho_u] = 0$. As a result, the only uniton does not depend on time: $\rho_u (t) = \rho_u (0)$. Actually, it can be considered as a pointer.
\item The solution $\{f^{(u)}_{mn}\}$ has two free parameters.\\ One of the parameters is eliminated by the trace condition ${\sum_k f^{(u)}_{kk} =1}$, and the time dependence of the other is found from Equation (\ref{uniton2}). We have a solution with no free parameters, depending on time in some way. \item The solution $\{f^{(u)}_{mn}\}$ has three or more free parameters.\\ The same as in the previous case, but the solution, besides depending on time in some way, also contains one or more free parameters. Their dependence on time can be chosen in any manner. \end{itemize} \end{itemize} We are not able to calculate the determinant for an arbitrary dimension of the operators, so we shall again consider the simpler task of dimension two. Note that, for this problem, we do not need perturbation theory: Equation (\ref{uniton1}) contains no part related to the Hamiltonian, and all the terms in it are of the same order. Solving Equation (\ref{uniton1}) in two-dimensional Hilbert space is actually the same as finding pointers in two dimensions for a vanishing Hamiltonian. This means that we can use our previous results of Section \ref{sec2} with $\varepsilon_{ij}=0$. The matrix equation that we arrive at is Equation (\ref{matreq1}) with $J, H$ given by Formulas (\ref{jvar}) and (\ref{hvar}) with $\varepsilon_{ij} = 0$. As before, we consider two types of Lindblad operator: \begin{enumerate} \item Diagonal Lindblad operator (\ref{ldiag}). The uniton for this case is easily found from the previous results for the pointer, (\ref{pointerdiag1}) and (\ref{pointerdiag2}): \begin{enumerate} \item $\lambda_1 = \lambda_2$; then\\ \begin{equation} \rho_u = \begin{pmatrix} f_{u11} & f_{u12} \\ f_{u21} & 1 - f_{u11}\\ \end{pmatrix} \end{equation} If this condition on $L$ is satisfied, every density matrix turns out to be a uniton, obeying Equation (\ref{uniton2}). \item $\lambda_1 \neq \lambda_2$; then\\ \begin{equation} \rho_u = \begin{pmatrix} f_{u11} & 0 \\ 0 & 1 - f_{u11}\\ \end{pmatrix} \end{equation} Solving (\ref{uniton2}) for this diagonal density matrix, we see that it reduces to a constant matrix: no non-trivial time dependence is compatible with Equation (\ref{uniton2}). Unitons do not exist in this case. \end{enumerate} \item Lindblad operator of the Jordan block form (\ref{ljordan}). The uniton is found from the previous result for the pointer (\ref{pointer2}): \begin{equation} \rho_u = \begin{pmatrix} \frac{|\lambda|^2 + 1}{2|\lambda|^2 + 1} & -\frac{\lambda^*}{2|\lambda|^2 + 1}\\ -\frac{\lambda}{2|\lambda|^2 + 1} & \frac{|\lambda|^2}{2|\lambda|^2 + 1}\\ \end{pmatrix} \end{equation} Since this solution of Equation (\ref{uniton1}) does not contain any free parameters, we cannot construct a uniton that evolves non-trivially in time according to (\ref{uniton2}). Unitons do not exist in this case either. \end{enumerate} We conclude that the only case when unitons exist is when the Lindblad operator is proportional to the identity matrix; it then simply drops out of the FGKLS equation. This is a trivial case that reduces to the Liouville equation, i.e., to the absence of interaction with an environment. Therefore, the FGKLS equation does not have oscillating Liouville solutions for the Hilbert space of dimension $2$. A numerical illustration of this case counting is given below.
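The case counting above can be checked numerically: the number of free parameters equals the dimension of the kernel of the matrix $A_{mn,kl}$ introduced above. The sketch below is illustrative and assumes the same representations as before, namely a diagonal $L=\mathrm{diag}(\lambda_1,\lambda_2)$ for (\ref{ldiag}) and $L = \begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}$ for (\ref{ljordan}); the condition $\sum_{kl}A_{mn,kl}f^{(u)}_{kl}=0$ is encoded as the vanishing of the vectorized dissipator.

\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

def dissipator_matrix(L):
    # vectorized form of L rho L^+ - (1/2){L^+ L, rho} (column-stacked vec)
    I = np.eye(2, dtype=complex)
    M = L.conj().T @ L
    return (np.kron(L.conj(), L)
            - 0.5 * np.kron(I, M)
            - 0.5 * np.kron(M.T, I))

cases = {
    "diagonal, l1 = l2":  np.diag([0.5 + 0.2j, 0.5 + 0.2j]),
    "diagonal, l1 != l2": np.diag([0.5 + 0.2j, -0.3j]),
    "Jordan block":       np.array([[0.4 - 0.7j, 1.0],
                                    [0.0, 0.4 - 0.7j]]),
}
for name, L in cases.items():
    print(name, "-> kernel dimension:",
          null_space(dissipator_matrix(L)).shape[1])
# expected kernel dimensions: 4 (every rho is a uniton), 2 (diagonal rho only),
# 1 (the single isolated solution rho_u, i.e., a pointer)
\end{verbatim}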
\section{Conclusions} Throughout the paper, we have exhaustively studied the evolution of an open quantum system for a Hilbert space of dimension $2$. We obtained the final fixed states of the evolution of an open system (called pointers), and then we found the solution of the FGKLS equation and proved that it always converges to a pointer. This signifies a decoherence process: information about the initial state is lost during the evolution as a result of the interaction with an environment. After this, we checked that the solution has a physical meaning, i.e., that the density matrix is Hermitian, positive and has trace equal to $1$, and we found the moment of time starting from which the density matrix is positive, so that the Lindblad equation can be used. Next, we studied the behavior of the solution when the interaction with an environment is weak. When the interaction is on, the general solution of the FGKLS equation has a special form, leading over time to a pointer. When it is off, the solution is a standard oscillating one, provided by the Liouville equation. Finally, we found that the FGKLS equation does not have oscillating solutions, of the same form as the solutions of the Liouville equation, for the Hilbert space of dimension $2$. \vspace{6pt} \newpage \section*{Appendix A. Expression for a Pointer for Weak Interaction with Environment}\label{appa} For the special case of a diagonal but non-degenerate Hamiltonian, $\varepsilon_1 \neq \varepsilon_2, \varepsilon_{12}=\varepsilon_{21}=0$, and a Lindblad operator of the Jordan block form (\ref{ljordan}), the pointer (\ref{pointer2}) can be expanded in a series in the parameter $c$, assuming this parameter is small (weak interaction with an environment): \vspace{6pt} \begin{eqnarray} &&f^{(p)}_{11} = 1 + \frac{|\lambda|^2}{4} \sum_{k=1}^{\infty} c^{4k} \frac{(-1)^k (\frac{1}{2}|\lambda|^2 + \frac{1}{4})^{k-1}}{(\varepsilon_1-\varepsilon_2)^{2k}} \label{fp11row}\\ &&f^{(p)}_{22} = \frac{|\lambda|^2}{4} \sum_{k=1}^{\infty} c^{4k} \frac{(-1)^{k+1} (\frac{1}{2}|\lambda|^2 + \frac{1}{4})^{k-1}}{(\varepsilon_1-\varepsilon_2)^{2k}} \\ &&f^{(p)}_{12} = \frac{1}{2} i \lambda^* \sum_{k=1; k \;\text{is odd}}^{\infty} c^{2k} \frac{(-1)^{\frac{1}{2}(k-1)}(\frac{1}{2} |\lambda|^2 +\frac{1}{4})^{\frac{1}{2}(k-1)}}{(\varepsilon_1 - \varepsilon_2)^k} + \nonumber\\ && \;\;\;\;\;\;\;\;\;\;\; \frac{1}{4} \lambda^* \sum_{k=2; k \;\text{is even}}^{\infty} c^{2k} \frac{(-1)^{\frac{1}{2}k}(\frac{1}{2} |\lambda|^2 +\frac{1}{4})^{\frac{1}{2}k-1}}{(\varepsilon_1 - \varepsilon_2)^k}\\ &&f^{(p)}_{21} = -\frac{1}{2} i \lambda \sum_{k=1; k \;\text{is odd}}^{\infty} c^{2k} \frac{(-1)^{\frac{1}{2}(k-1)}(\frac{1}{2} |\lambda|^2 +\frac{1}{4})^{\frac{1}{2}(k-1)}}{(\varepsilon_1 - \varepsilon_2)^k} + \label{fp21row}\nonumber\\ && \;\;\;\;\;\;\;\;\;\;\; \frac{1}{4} \lambda \sum_{k=2; k \;\text{is even}}^{\infty} c^{2k} \frac{(-1)^{\frac{1}{2}k}(\frac{1}{2} |\lambda|^2 +\frac{1}{4})^{\frac{1}{2}k-1}}{(\varepsilon_1 - \varepsilon_2)^k} \end{eqnarray} or, more explicitly, \begin{eqnarray} &&f^{(p)}_{11} = 1 - c^4 \frac{|\lambda|^2}{4(\varepsilon_1-\varepsilon_2)^2} + c^8 \frac{|\lambda|^2 (\frac{1}{2} |\lambda|^2 + \frac{1}{4})}{4(\varepsilon_1 - \varepsilon_2)^4} + \dots \\ &&f^{(p)}_{22} = c^4 \frac{|\lambda|^2}{4(\varepsilon_1-\varepsilon_2)^2} - c^8 \frac{|\lambda|^2 (\frac{1}{2} |\lambda|^2 + \frac{1}{4})}{4(\varepsilon_1 - \varepsilon_2)^4} + \dots \\ &&f^{(p)}_{12} = c^2 \frac{i \lambda^*}{2(\varepsilon_1 - \varepsilon_2)} - c^4 \frac{\lambda^*}{4(\varepsilon_1 - \varepsilon_2)^2} - \nonumber\\ && \;\;\;\;\;\;\;\;\;\;\; c^6 \frac{i\lambda^*(\frac{1}{2}|\lambda|^2+\frac{1}{4})}{2(\varepsilon_1 - \varepsilon_2)^3} + c^8
\frac{\lambda^*(\frac{1}{2}|\lambda|^2+\frac{1}{4})}{4(\varepsilon_1-\varepsilon_2)^4} + \dots \\ &&f^{(p)}_{21} = - c^2 \frac{i \lambda}{2(\varepsilon_1 - \varepsilon_2)} - c^4 \frac{\lambda}{4(\varepsilon_1 - \varepsilon_2)^2} + \nonumber\\ && \;\;\;\;\;\;\;\;\;\;\; c^6 \frac{i\lambda(\frac{1}{2}|\lambda|^2+\frac{1}{4})}{2(\varepsilon_1 - \varepsilon_2)^3} + c^8 \frac{\lambda(\frac{1}{2}|\lambda|^2+\frac{1}{4})}{4(\varepsilon_1-\varepsilon_2)^4} + \dots \end{eqnarray} \section*{Appendix B. Other Forms of the Solution of the FGKLS Equation}\label{appb} The construction of the general solution described above has some exclusions, which correspond to coinciding roots of Equation~(\ref{cubicdiag}) or (\ref{cubic}). In this case, the solution (\ref{gensol}) has to be modified: the exponents corresponding to coinciding roots acquire polynomial multipliers. \begin{itemize} \item \textbf{Diagonal Lindblad operator.} The analysis of Equation (\ref{cubicdiag}) shows that the roots coincide in the following cases. \begin{enumerate} \item If $|e_{12}|^2 = \frac{1}{54} |\lambda_1 - \lambda_2|^4$\\ and\\ $\left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2 = \frac{1}{108} |\lambda_1 - \lambda_2|^4$,\\ then all three roots coincide: \begin{equation} s_1 = s_2 = s_3 = -\frac{|\lambda_1 - \lambda_2|^2}{3} \nonumber \end{equation} The solution is of the form: \begin{equation} \rho(t) = \rho_p + e^{-\frac{|\lambda_1 - \lambda_2|^2}{3} c^2 t} v_1 + t e^{-\frac{|\lambda_1 - \lambda_2|^2}{3} c^2 t} v_2 + t^2 e^{-\frac{|\lambda_1 - \lambda_2|^2}{3} c^2 t} v_3, \end{equation} where $v_1, v_2, v_3$ are time-independent matrices. \item Provided\\ $|\lambda_1 - \lambda_2|^4 > 48|e_{12}|^2 + 12 \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2$\\ and\\ $\left[ \frac{1}{4} |\lambda_1 - \lambda_2|^4 - 12 |e_{12}|^2 - 3 \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2 \right]^3 = $\\ $= |\lambda_1 - \lambda_2|^4 \left[ -\frac{1}{8} |\lambda_1 - \lambda_2|^4 + 9 |e_{12}|^2 - \frac{9}{2} \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2 \right]^2$, two of the three roots coincide: \begin{enumerate} \item If \\ $|\lambda_1 - \lambda_2|^2 \Big[ -\frac{1}{8} |\lambda_1 - \lambda_2|^4 + 9 |e_{12}|^2$ \\$- \frac{9}{2} \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2 \Big] > 0$,\\ then \begin{eqnarray} &&s_1 = s_2 = -\frac{|\lambda_1 - \lambda_2|^2}{3} \nonumber \\&& + \frac{1}{3} \sqrt{\frac{1}{4} |\lambda_1 - \lambda_2|^4 - 12 |e_{12}|^2 - 3 \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2} \nonumber\end{eqnarray} \begin{eqnarray} && s_3 = -\frac{|\lambda_1 - \lambda_2|^2}{3} \nonumber\\&&- \frac{2}{3} \sqrt{\frac{1}{4} |\lambda_1 - \lambda_2|^4 - 12 |e_{12}|^2 - 3 \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2} \nonumber\end{eqnarray} \item If\\ $|\lambda_1 - \lambda_2|^2 \Big[ -\frac{1}{8} |\lambda_1 - \lambda_2|^4 + 9 |e_{12}|^2 $ \\$- \frac{9}{2} \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2 \Big] < 0$,\\ then $s_1 = s_2 = -\frac{|\lambda_1 - \lambda_2|^2}{3}$\\$ - \frac{1}{3} \sqrt{\frac{1}{4} |\lambda_1 - \lambda_2|^4 - 12 |e_{12}|^2 - 3 \left|\Delta e + \frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2}$\\ $s_3 = -\frac{|\lambda_1 - \lambda_2|^2}{3} $\\$+ \frac{2}{3} \sqrt{\frac{1}{4} |\lambda_1 - \lambda_2|^4 - 12 |e_{12}|^2 - 3 \left|\Delta e +
\frac{1}{2} i (\lambda_1 \lambda^*_2 - \lambda^*_1 \lambda_2)\right|^2} $ \end{enumerate} The solution is of the form: \begin{equation} \rho(t) = \rho_p + e^{s_1 c^2 t} \tilde v_1 + t e^{s_1 c^2 t} \tilde v_2 + e^{s_3 c^2 t} \tilde v_3 \end{equation} where $\tilde v_1, \tilde v_2, \tilde v_3$ are time-independent matrices. \end{enumerate} \item \textbf{Lindblad operator of a Jordan block form.} In the following, we will use the notation: \\ $E\equiv (e_{11} - e_{22})^2$, $K \equiv \left(\frac{1}{2} \lambda + i e_{21}\right)\left(\frac{1}{2} \lambda^* - i e_{12}\right)$. \begin{enumerate} \item The analysis of Equation (\ref{cubic}) shows that all three roots coincide, \begin{equation} s_1 = s_2 = s_3 = -\frac{2}{3} \nonumber \end{equation} when $K = \frac{1}{54}$ and $E = \frac{1}{108}$. In this case, the solution is of the form: \begin{equation} \rho(t) = \rho_p + e^{-\frac{2}{3} c^2 t} v_1 + t e^{-\frac{2}{3} c^2 t} v_2 + t^2 e^{-\frac{2}{3} c^2 t} v_3 \end{equation} where $v_1, v_2, v_3$ are time-independent matrices. \item Two of the three roots coincide if $E + 4K < \frac{1}{12}$ and $\left(\frac{1}{4} - 3 E - 12 K\right)^3 =\linebreak \left( \frac{1}{8} + \frac{9}{2} E - 9 K\right)^2$. Then, two situations are possible: \begin{enumerate} \item $\frac{1}{36} + E - 2 K > 0$, then \begin{equation} s_1 = s_2 = -\frac{2}{3} + \frac{1}{3} \sqrt{\frac{1}{4} - 3E -12K} \nonumber \end{equation} \begin{equation} s_3 = -\frac{2}{3} - \frac{2}{3} \sqrt{\frac{1}{4} - 3E -12K} \nonumber \end{equation} \item $\frac{1}{36} + E - 2 K < 0$, then \begin{equation} s_1 = s_2 = -\frac{2}{3} - \frac{1}{3} \sqrt{\frac{1}{4} - 3E -12K} \nonumber \end{equation} \begin{equation} s_3 = -\frac{2}{3} + \frac{2}{3} \sqrt{\frac{1}{4} - 3E -12K} \nonumber \end{equation} \end{enumerate} In both situations, the solution has the form: \begin{equation} \rho(t) = \rho_p + e^{s_1 c^2 t} \tilde v_1 + t e^{s_1 c^2 t} \tilde v_2 + e^{s_3 c^2 t} \tilde v_3 \end{equation} where $\tilde v_1, \tilde v_2, \tilde v_3$ are time-independent matrices. \end{enumerate} \end{itemize} \section*{Appendix C. Lindblad Operator of a Jordan Block Form: An Example}\label{appc} Provided that the current understanding of quantum theory remains valid at the fundamental level, the Lindblad equation should arise as an effective description of a subsystem of a larger system. The larger system includes the environmental degrees of freedom and is assumed to undergo unitary evolution driven by a self-adjoint Hamiltonian, \begin{equation} H_{full}=H_S\otimes I+I\otimes H_{env}+H_{int}. \end{equation} The interaction Hamiltonian $H_{int}$ can always be decomposed into a superposition of tensor products of operators acting solely on the system and on the environment, \begin{equation} H_{int}=\sum_{k} A_{S,k}\otimes B_{k}. \end{equation} To describe all the measurements performed on the subsystem of interest without affecting the environmental degrees of freedom, the reduced density matrix can be used: \begin{equation} \rho_S=\Tr_{env}\rho . \end{equation} To derive the Lindblad equation, one has to apply certain approximations \cite{breuer} (p. 130). \begin{enumerate} \item \textit{Born approximation}: we assume that, due to the weak coupling between the system and the environment, the total density matrix is close to a factorized one: \begin{equation} \rho\simeq \rho_S\otimes \rho_{env}. \end{equation} The environmental density matrix is usually assumed to be stationary.
\item \textit{Born--Markov approximation}: we assume that the environmental correlation functions, \begin{equation} \langle B_i^\dagger(t) B_j(\tau)\rangle_{env}=\Tr_{env}\Big(\rho_{env}B_i^\dagger(t)B_j(\tau)\Big), \end{equation} where the time-dependent operators $A_{S,k}$ and $B_k$ are defined in the interaction picture, decay sufficiently fast compared to the relaxation time scale on which the system approaches equilibrium with the environment. This means that, for the typical values of $t$, we can apply the following approximation: \begin{align} &\int_0^t d\tau \langle B_i^\dagger(t) B_j(\tau)\rangle_{env} A_{S,l}(t)A_{S,k}(\tau) \rho_S(\tau)\simeq \nonumber\\ &\left[\int_0^{+\infty} d\tau \langle B_i^\dagger(\tau) B_j(0)\rangle_{env} A_{S,l}(t)A_{S,k}(t-\tau)\right]\rho_S(t) \end{align} \item \textit{Rotating wave approximation}: we assume that the intrinsic time scale of the system is much smaller than the relaxation time scale on which the system approaches equilibrium with the environment. We introduce the basis of the system Hamiltonian eigenoperators defined by \begin{equation} [H_S,A(\omega)]=-\omega A(\omega), \end{equation} so that we can take $A_{S,\omega}(t)=A(\omega)e^{-i\omega t}$. In the rotating wave approximation, the terms with different $\omega$ correspond to rapid oscillations that can be neglected on the time scales under consideration. \end{enumerate} Applying this to a qubit interacting with the environment, we can choose the eigenbasis of the free Hamiltonian, \begin{equation} H_S=\begin{pmatrix}E_\uparrow&0\\0&E_\downarrow\end{pmatrix}. \end{equation} There is a single eigenoperator for each of the non-zero frequencies $\pm\omega=\pm(E_\downarrow-E_\uparrow)$, \begin{equation} A(\omega)=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad A(-\omega)=\begin{pmatrix}0&0\\1&0\end{pmatrix} \end{equation} and two eigenoperators for zero frequency: \begin{equation} A_\uparrow(0)=\begin{pmatrix}1&0\\0&0\end{pmatrix},\quad A_\downarrow(0)=\begin{pmatrix}0&0\\0&1\end{pmatrix} \end{equation} The rotating wave approximation enforces the following form of the non-unitary part of the master equation: \begin{align} \Gamma_{++}\Big(2A(\omega)\rho A(\omega)^\dagger-\{\rho,A(\omega)^\dagger A(\omega)\}\Big)+\nonumber\\ \Gamma_{--}\Big(2A(-\omega)\rho A(-\omega)^\dagger-\{\rho,A(-\omega)^\dagger A(-\omega)\}\Big)+\nonumber\\ \sum_{a,b=\uparrow,\downarrow}\Gamma_{ab}\Big(2A_a(0)\rho A_b(0)^\dagger-\{\rho,A_b(0)^\dagger A_a(0)\}\Big) \end{align} where the $\Gamma$-parameters are determined by the environmental correlation functions: \begin{align} \Gamma_{++}=\int_0^\infty dt \cos{\omega t} \langle B_{+}^\dagger(t) B_{+}(0)\rangle_{env}, \nonumber \\ \Gamma_{--}=\int_0^\infty dt \cos{\omega t} \langle B_{-}^\dagger(t) B_{-}(0)\rangle_{env}\\ \Gamma_{ab}=\int_0^{+\infty}dt \langle B^\dagger_a(t) B_b(0)\rangle_{env}. \nonumber \end{align} If we require that this part be equivalent to the Lindbladian with a single Lindblad operator, \begin{equation} L\rho L^\dagger - \frac{1}{2}L^\dagger L\rho - \frac{1}{2}\rho L^\dagger L \nonumber \end{equation} there are three possible options: \begin{enumerate} \item $\Gamma_{++}=\Gamma_{--}=0$, and the matrix $\Gamma_{ab}$ is arbitrary. This is equivalent to the Lindbladian with \begin{equation} L=\begin{pmatrix}0&c\\\bar{c}&0\end{pmatrix}, \quad L=L^\dagger, \nonumber \end{equation} where the parameter $c$ is determined by the matrix $\Gamma_{ab}$. \item $\Gamma_{++}\neq 0,\Gamma_{--}=0$, and the contribution of the matrix $\Gamma_{ab}$ vanishes.
This is equivalent to the Lindbladian with \begin{equation} L=\begin{pmatrix}0&2c\\0&0\end{pmatrix},\quad |c|^2=\frac{\Gamma_{++}}{2} \nonumber \end{equation} \item $\Gamma_{--}\neq 0,\Gamma_{++}=0$, and the contribution of the matrix $\Gamma_{ab}$ vanishes. This is equivalent to the Lindbladian with \begin{equation} L=\begin{pmatrix}0&0\\2c&0\end{pmatrix},\quad |c|^2=\frac{\Gamma_{--}}{2}. \nonumber \end{equation} \end{enumerate} The second option, which results in the Jordan form of the Lindblad operator, may be achieved, e.g., for a reservoir of oscillators in the squeezed vacuum state with $B_{+}=\sum_k c_k a_k^\dagger$, $B_{-}=\sum_k c_k^\ast a_k$, resulting in $\Gamma_{--}=0$ but $\Gamma_{++}\neq 0$ \cite{breuer} (p. 149). A less restricted form of the non-diagonalizable Lindblad operator may imply a deviation from the rotating wave approximation. \newpage
\section{Introduction} The spectrum-rich millimeter-wave (mmWave) frequencies between $30-300$ GHz have the potential to alleviate the spectrum crunch in the sub-6 GHz bands that service providers are already experiencing. This major potential of the mmWave band has made it one of the most important components of future mobile cellular and emerging WiFi networks. However, due to significant differences between systems operating in mmWave and legacy sub-6 GHz bands, providing reliable and low-delay communication in the mmWave bands is extremely challenging. Specifically, accurate channel state information (CSI) is the key to achieving the high spectral efficiency of mmWave communications \cite{spatially,Heath16, Hur13,ZhangSD,ZhangSequ}; acquiring it, however, is challenging due to the high dimensionality of the channel as well as the mmWave hardware constraints. Nevertheless, the mmWave multiple-input multiple-output (MIMO) channel exhibits a sparse structure \cite{5gWhite,hur2016proposal}, facilitating a sparse channel representation in terms of a small number of angles of arrival (AoAs), angles of departure (AoDs), and path gains. Typically, by approximating the AoAs and AoDs to lie on quantized angle grids, compressed sensing (CS)-based approaches transform the AoA and AoD estimation problem into a sparse signal recovery problem \cite{alk,OMPchannel}, where the transmitter sends channel sounding beams to the receiver and the receiver jointly estimates the AoAs and AoDs. We refer to this method as the one-stage channel sounding scheme. In particular, due to its easy implementation and amenability to analysis, the orthogonal matching pursuit (OMP) has been widely studied \cite{OMPchannel,MultiOMP,cstOMP,jOMP,duan}. The OMP iteratively searches for a pair of AoA and AoD over an over-complete dictionary. However, the computational complexity of OMP increases quadratically with the sizes of the dictionaries, i.e., $O(L K G_r G_t)$, where $K$ is the number of channel uses for the channel sounding, $L$ is the number of channel paths, and $G_r$ and $G_t$ are the dimensions of the angle dictionaries for the AoA and AoD, respectively. It is worth pointing out that as the dimensions of the over-complete dictionaries, i.e., $G_r$ and $G_t$, increase, the complexity of one-stage CS-based methods such as OMP becomes impractically large. The over-complete dictionary and high computational complexity issues have been addressed from an adaptive-CS point of view, with the primary focus on adapting the sensing vectors to previous observations \cite{Hur13,alk,twoStageC}. Theoretically, it has been shown that adaptive CS can be beneficial in the low-SNR regime \cite{OptimalAdaptive}. The multi-level (hierarchical) AoA and AoD search techniques \cite{Hur13,alk} leverage feedback: the receiver conveys feedback to the transmitter to guide the next-level angle dictionary design. It is worth noting that these adaptation methods \cite{Hur13,alk} need multiple feedback rounds, and their performance critically relies on the reliability of the feedback. To reduce the feedback overhead, a two-stage CS scheme was proposed in \cite{twoStageC}, where the first stage obtains a coarse estimate of the support set and the second stage refines the result of the first stage. This method \cite{twoStageC} requires only one-time feedback, yet achieves comparable estimation performance in the low-SNR regime.
\subsection{Our Contributions} We newly study a sequential, two-stage AoA and AoD estimation framework for reduced computational complexity and improved estimation performance. Specifically, in Stage I, the support set of the AoAs is recovered at the receiver by solving a multiple measurement vectors (MMV) problem. Leveraging the shared support set, it has been found that the MMV approach can provide improved estimation performance compared to the single measurement vector (SMV) approach \cite{ChenMMV,vanMMV,LeeMMV}. In Stage II, the receiver estimates the AoDs of the channel by exploiting the estimated AoAs from Stage I. Importantly, the estimated AoAs guide the design of the receive sounding signals, which saves channel use overhead and improves the accuracy of the AoD estimation. In each stage, since we only estimate AoAs or AoDs, the dimensions of the signal and angle dictionary are much smaller than those of the one-stage joint AoA and AoD estimation \cite{OMPchannel,cstOMP,jOMP}, which reduces the computational complexity substantially. This can be viewed as converting the multiplicative channel sounding overhead (e.g., $\cO(G_r G_t)$ for OMP) into an additive one. By analyzing the MMV statistics, we present a lower bound on the probability of successfully recovering the support sets. Furthermore, based on the successful recovery probability (SRP) analysis of the proposed two-stage method, a resource allocation (between Stage I and Stage II) strategy is newly proposed to improve the SRPs of both the AoA and AoD estimation. The numerical results validate the efficacy of the proposed resource allocation method. Finally, in order to address the issue of unresolvable quantization error, we extend the proposed two-stage method to one with super resolution. Specifically, in each stage of AoA or AoD estimation, we reformulate the MMV problem as an atomic norm minimization problem \cite{OffGridCS,MmvAtomic,superMM}, which is solved by using the alternating direction method of multipliers (ADMM). Compared to the dictionary-based methods, the atomic norm minimization can be thought of as the limiting case in which an infinitely large dictionary is employed. We demonstrate through simulations that the quantization error of the two-stage method with super resolution can be effectively reduced. \subsection{Paper Organization and Notations} The paper is organized as follows. In Section \ref{section model}, we introduce the signal model and the CS-based channel estimation problem. In Section \ref{section algorithm}, based on the angular-domain channel representation, the proposed sequential AoA and AoD estimation method is presented. In Section \ref{sec analyze}, we analyze the proposed method in terms of the SRP and introduce the resource allocation strategy. In Section \ref{section atomic}, the atomic norm-based design is described, which resolves the quantization error in the estimated AoAs and AoDs. The simulation results and conclusion are presented in Section \ref{section simulation} and Section \ref{section conclusion}, respectively. \emph{Notations:} A bold lower case letter $\mathbf{a}$ is a vector and a bold capital letter $\mathbf{A}$ is a matrix. ${{\mathbf{A}}^{T}}$, ${{\mathbf{A}}^{*}}$, ${{\mathbf{A}}^{H}}$, ${{\mathbf{A}}^{-1}}$, $\tr(\mathbf{A})$, $\left| \mathbf{A} \right|$, ${{\left\| \mathbf{A} \right\|}_{F}}$ and ${{\left\| \mathbf{a} \right\|}_{2}}$ are, respectively, the transpose, conjugate, Hermitian, inverse, trace, determinant, Frobenius norm of $\mathbf{A}$, and $\ell_2$-norm of $\mathbf{a}$.
$\mathbf{A}^{\dagger} = (\mathbf{A}^H \mathbf{A})^{-1}\mathbf{A}^H$ denotes the pseudo-inverse of a tall matrix $\mathbf{A}$. ${{[\mathbf{A}]}_{:,i}}$, ${{[\mathbf{A}]}_{i,:}}$, ${{[\mathbf{A}]}_{i,j}}$, and $[\mathbf{a}]_i$ are, respectively, the $i$th column, the $i$th row, the $(i,j)$th entry of $\mathbf{A}$, and the $i$th entry of the vector $\mathbf{a}$. $\mathrm{\mathop{vec}}(\mathbf{A})$ stacks the columns of $\mathbf{A}$ and forms a long column vector. $\mathrm{\mathop{diag}}(\mathbf{a})$ returns a square diagonal matrix with the vector $\mathbf{a}$ on the main diagonal. ${{\mathbf{I}}_{M}}\in {{\mathbb{R}}^{M\times M}}$ is the $M$-dimensional identity matrix. $\mathbf{1}_{M,N} \in \R^{M\times N}$ and $\mathbf{0}_{M,N} \in \R^{M\times N}$ are the all-one matrix and the all-zero matrix, respectively. $\cR(\mathbf{F})$ denotes the subspace spanned by the columns of the matrix $\mathbf{F}$. $\mathbf{A}\otimes \mathbf{B}$ and $\mathbf{A} \circ \mathbf{B}$ denote the Kronecker product and the Khatri-Rao product of $\mathbf{A}$ and $\mathbf{B}$, respectively. $\lceil x \rceil$ denotes the smallest integer greater than or equal to $x$. \section{System Model and General Statement of Techniques} \label{section model} \subsection{Channel Model} The mmWave transmitter and receiver are equipped with $N_t$ and $N_r$ antennas, respectively. Suppose that the number of separable paths between the transmitter and receiver is $L$, where $L\ll \min\{ N_r, N_t \}$. The physical mmWave channel representation based on the uniform linear array \cite{li2017millimeter,OMPchannel,wan2019compressive,zhang2020downlink} is given by\footnote{ In wideband communication systems, one can model the channel with constant AoAs/AoDs and varying path gains \cite{qin2018time,park2018spatial}. Here we assume a narrowband block-fading channel that remains static during the channel coherence time; the CSI acquisition and data transfer are framed to take place within the channel coherence time \cite{li2017millimeter,OMPchannel,wan2019compressive,zhang2020downlink}. } \begin{align} \mathbf{H}=\sqrt{\frac{N_rN_t}{L}}\sum\limits_{l=1}^{L}{{{\alpha }_{l}}}\mathbf{a}_r({f_{r,l}}){\mathbf{a}_t^H}({f_{t,l}}), \label{channel model} \end{align} where $\mathbf{a}_t(\cdot)\in \C^{N_t \times 1}$ and $\mathbf{a}_r(\cdot) \in \C^{N_r \times 1}$ are the array response vectors of the transmit and receive antenna arrays. Specifically, $\mathbf{a}_t(f)$ and $\mathbf{a}_r(f)$ are given by $\mathbf{a}_t(f)=\frac{1}{\sqrt{N_t}}{{\left[ 1,{{e}^{j2\pi f}},\ldots ,{{e}^{j2\pi (N_t-1)f}} \right]}^{T}} $ and $\mathbf{a}_r(f)=\frac{1}{\sqrt{N_r}}{{\left[ 1,{{e}^{j2\pi f}},\ldots ,{{e}^{j2\pi (N_r-1)f}} \right]}^{T}}$, where $f\in [0,1)$ is the normalized spatial angle. Here we assume that ${{f}_{r,l}} $ and ${{f}_{t,l}}$ in \eqref{channel model} are independent and uniformly distributed in $[0,1 )$, and that the gain of the $l$th path, $\alpha_l$, follows the complex Gaussian distribution, i.e., ${{\alpha }_{l}} \sim \mathcal{C}\mathcal{N}(0,\sigma_l^2)$.
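For illustration, the channel in \eqref{channel model} can be synthesized directly from its parameters. In the short Python sketch below, the array sizes, the path count, and the unit-variance path gains ($\sigma_l^2 = 1$) are illustrative assumptions rather than values used in our experiments; the code forms $\mathbf{H}$ via the equivalent compact factorization $\mathbf{A}_r\,\mathrm{diag}(\mathbf{h})\,\mathbf{A}_t^H$ used in the sequel.

\begin{verbatim}
import numpy as np

def ula_response(f, n):
    # unit-norm ULA response: [1, e^{j2*pi*f}, ..., e^{j2*pi*(n-1)f}] / sqrt(n)
    return np.exp(2j * np.pi * f * np.arange(n)) / np.sqrt(n)

def sparse_mmwave_channel(Nr=32, Nt=64, L=4, rng=np.random.default_rng(0)):
    fr = rng.uniform(0.0, 1.0, L)          # normalized AoAs f_{r,l}
    ft = rng.uniform(0.0, 1.0, L)          # normalized AoDs f_{t,l}
    alpha = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    Ar = np.stack([ula_response(f, Nr) for f in fr], axis=1)   # Nr x L
    At = np.stack([ula_response(f, Nt) for f in ft], axis=1)   # Nt x L
    h = np.sqrt(Nr * Nt / L) * alpha       # h_l = sqrt(Nr*Nt/L) * alpha_l
    H = Ar @ np.diag(h) @ At.conj().T      # equals the sum over the L paths
    return H, Ar, At, fr, ft, h

H, Ar, At, fr, ft, h = sparse_mmwave_channel()
print(H.shape)   # (32, 64)
\end{verbatim}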
The channel in \eqref{channel model} admits the compact angular-domain representation \begin{align} \mathbf{H}={{\mathbf{A}}_{r}}\diag(\mathbf{h})\mathbf{A}_{t}^H, \label{compact channel} \end{align} where ${{\mathbf{A}}_{r}}=[\mathbf{a}_r({{f}_{r,1}}),\ldots ,\mathbf{a}_r({{f}_{r,L}})]\in {\mathbf{C}^{N_r\times L}}$ , ${{\mathbf{A}}_{t}}=[\mathbf{a}_t({{f}_{t,1}}),\ldots ,\mathbf{a}_t({{f}_{t,L}})]\in {\mathbf{C}^{N_t\times L}}$, and $\mathbf{h} =[h_1,\ldots,h_L]^T\in \C^{L \times 1}$ with $h_l=\sqrt{\frac{N_rN_t}{L}}{{\alpha }_{l}}$, $l=1,\ldots ,L$. \begin{figure} \centering \includegraphics[width=.7\textwidth]{figures/diagram.eps} \caption{Conventional one-stage mmWave channel sounding} \label{system diagram} \end{figure} \subsection{Channel Sounding} \label{section channel sounding} Fig. \ref{system diagram} illustrates the conventional one-stage mmWave channel sounding operation, where the transmitter and receiver are equipped with large-dimensional hybrid analog-digital MIMO arrays that are driven by a limited number of RF chains, i.e., $N\ll \min\{ N_t, N_r \}$. In each channel use of the downlink channel sounding, the transmitter generates a beam conveying the pilot signal, and the receiver simultaneously generates $N$ separate beams, using the $N$ RF chains, to obtain an $N$-dimensional observation. We let the numbers of transmit sounding beams (TSBs) and receive sounding beams (RSBs) for channel estimation be $B_t$ and $B_r$, respectively. For convenience, we assume that $B_r$ is an integer multiple of $N$. The total number of channel uses for the conventional one-stage sounding process is then $K=B_rB_t/N$. Specifically, the RSB matrix in Fig. \ref{system diagram} is given by \begin{align} \mathbf{W}_b=[{\mathbf{W}_{1}},{\mathbf{W}_{2}},\ldots ,{\mathbf{W}_{B_r/N}}]\in {\mathbf{C}^{N_r\times {B_r}}}, \label{RSB matrix} \end{align} where $\mathbf{W}_i \in {\mathbf{C}^{N_r\times N}}$ for $i=1,2,\ldots,B_r/N$, and $\mathbf{W}_i=\mathbf{W}_{A,i}\mathbf{W}_{D,i}$ with $\mathbf{W}_{A,i}\in {\mathbf{C}^{N_r\times N}}$ and $\mathbf{W}_{D,i} \in {\mathbf{C}^{N\times N}}$ being the receive analog and digital sounders, respectively. Similarly, the TSB matrix is given by \begin{align} \mathbf{F}_b=[{{\mathbf{f}}_{1}},{{\mathbf{f}}_{2}},\cdots ,{\mathbf{f}}_{B_t}]\in {\mathbf{C}^{N_t\times {B_t}}}, \label{TSB matrix} \end{align} where $\mathbf{f}_j\in {\mathbf{C}^{N_t\times 1}}$ for $j=1,2,\cdots,B_t$ is the $j$th transmit sounder, and $\mathbf{f}_j={\mathbf{F}_{A,j}}{\mathbf{f}_{D,j}}s_j$ with ${\mathbf{F}_{A,j}}\in {\mathbf{C}^{N_t\times N}}$ and $\mathbf{f}_{D,j}\in {\mathbf{C}^{N\times 1}}$ being the transmit analog and digital sounders, respectively. Each observation $\mathbf{y}_{i,j}\in \C^{N\times 1}$ in Fig. \ref{system diagram}, associated with the $i$th RSB and $j$th TSB, $i\in \{ 1,\ldots, B_r/N \}$ and $j \in \{1,2,\ldots, B_t \}$, can be expressed as \begin{align} {\mathbf{y}}_{i,j}=\mathbf{W}_{i}^H\mathbf{H}{{\mathbf{f}}_{j}}s_j+\mathbf{W}_{i}^H{{\mathbf{n}}_{j}}. \label{single channel uses} \end{align} Here, $s_j$ denotes the training signal; without loss of generality, we let $s_j=1$. It is worth noting that only phase shifters are employed to constitute the analog arrays for power saving, so that $|{{[{\mathbf{W}_{A,i}}]}_{m,n}}|=1/\sqrt{N_r}$ and $|{{[{\mathbf{F}_{A,j}}]}_{m,n}}|=1/\sqrt{N_t}, \forall m,n$.
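To make the sounding mechanics of \eqref{single channel uses} concrete, the following sketch forms one channel use with hybrid sounders whose analog parts obey the unit-modulus constraint. The random beam entries and the dense test channel are illustrative placeholders only.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt, N, sigma = 32, 64, 4, 0.1
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# receive sounder W_i = W_{A,i} W_{D,i}: analog phases only, |[W_A]_{m,n}| = 1/sqrt(Nr)
WA = np.exp(2j * np.pi * rng.uniform(size=(Nr, N))) / np.sqrt(Nr)
WD = np.linalg.qr(rng.standard_normal((N, N)))[0].astype(complex)
W = WA @ WD

# transmit sounder f_j = F_{A,j} f_{D,j} s_j with s_j = 1 and ||f_j||^2 = p = 1
FA = np.exp(2j * np.pi * rng.uniform(size=(Nt, N))) / np.sqrt(Nt)
fD = rng.standard_normal(N) + 1j * rng.standard_normal(N)
f = FA @ fD
f /= np.linalg.norm(f)

n = sigma * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) / np.sqrt(2)
y = W.conj().T @ (H @ f + n)   # one N-dimensional observation y_{i,j}
print(y.shape)                 # (4,)
\end{verbatim}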
Moreover, the power constraint $\left\| {{\mathbf{f}}_{j}} \right\|_{2}^{2} = p$ is imposed on the transmit sounding beam at each channel use, with $p$ being the power budget, and the noise vector follows ${{\mathbf{n}}_{j}} \sim \mathcal{C}\mathcal{N}(\mathbf{0}_{N_r},{{\sigma }^{2}}{{\mathbf{I}}_{N_r}})$. Thus, the signal-to-noise ratio (SNR) is $p/{{\sigma }^{2}}$. We collect all observations in \eqref{single channel uses} by using $\mathbf{W}_b$ in \eqref{RSB matrix} and $\mathbf{F}_b$ in \eqref{TSB matrix} as \begin{align} \mathbf{Y}={\mathbf{W}_b^H}\mathbf{H} \mathbf{F}_b+{\mathbf{W}_b^H}\mathbf{N}, \label{matrix observations} \end{align} where $\mathbf{Y} \in \C^{B_r\times B_t}$ and $\mathbf{N} =[ \mathbf{n}_1, \ldots, \mathbf{n}_{B_t} ] \in \C^{N_r \times B_t}$. For example, $\mathbf{W}_b$ and $\mathbf{F}_b$ in \eqref{matrix observations} can be generated randomly \cite{ZhangSD} or designed as a partial discrete Fourier transform (DFT) matrix \cite{OMPchannel}. We assume that the number of observations is strictly lower than the dimension of the channel matrix, i.e., ${B_r}{B_t}\ll N_rN_t$. The channel estimation task is to utilize the observations in \eqref{single channel uses} (equivalently, \eqref{matrix observations}) to obtain an estimate of the channel matrix $\bold{H}$ in \eqref{compact channel}. In view of \eqref{compact channel}, the channel estimation task boils down to reconstructing $\{{{f}_{r,1}},\ldots ,{{f}_{r,L}}\}$, $\{{{f}_{t,1}},\ldots ,{{f}_{t,L}}\}$ and $\{{{h}_{1}},\ldots ,{{h}_{L}}\}$ from the observations. \subsubsection{Oracle Estimator} The oracle estimator that we will utilize as a benchmark\footnote{ Both the Cramer-Rao lower bound (CRLB) \cite{Ahmed2020VTC} and the oracle estimator \cite{OMPchannel} can be utilized to evaluate the accuracy of estimation algorithms. Since the CRLB can only be calculated for the one-stage method, in this work we use the oracle estimator as the benchmark instead.} is obtained by assuming perfect knowledge of the AoAs and AoDs in \eqref{compact channel}. The oracle estimator then only needs to estimate the path gains $\mathbf{h}$; the channel estimate is expressed as $\widehat{\mathbf{H}}= \mathbf{A}_r \diag(\widehat{\mathbf{h}}) \mathbf{A}_t^H$, where $\widehat{\mathbf{h}} \in \C^{L \times 1}$ is the solution to the following problem: \begin{align} \widehat{\mathbf{h}} = \argmin_{\mathbf{h}} \| \mathbf{Y} - {\mathbf{W}_b^H}\mathbf{A}_r \diag({\mathbf{h}}) \mathbf{A}_t^H \mathbf{F}_b\|_F^2. \label{oracal estimator} \end{align} Because \eqref{oracal estimator} is convex, the optimal solution is $\widehat{\mathbf{h}} = (\mathbf{X}^H\mathbf{X})^{-1} \mathbf{X}^H \vect(\mathbf{Y})$, where $\mathbf{X} \in \C^{B_rB_t\times L}$ is given by $\mathbf{X} = \left[\vect {([\mathbf{W}_b^H \mathbf{A}_r]_{:,1} [\mathbf{A}_t^H \mathbf{F}_b]_{1,:})}, \ldots,\vect{([\mathbf{W}_b^H \mathbf{A}_r]_{:,L} [\mathbf{A}_t^H \mathbf{F}_b]_{L,:})} \right]$. Because $B_r B_t \gg L$, $\mathbf{X}^H \mathbf{X}$ is invertible.
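A minimal numerical version of the oracle estimator \eqref{oracal estimator} is sketched below. It reuses the function sparse_mmwave_channel from the channel-generation sketch above; the random phase-shifter sounding beams, the noise level, and the normalization $p=1$ are illustrative choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
H, Ar, At, fr, ft, h = sparse_mmwave_channel()   # from the earlier sketch
Nr, Nt, L = Ar.shape[0], At.shape[0], h.size
Br, Bt, sigma = 16, 16, 0.1

Wb = np.exp(2j * np.pi * rng.uniform(size=(Nr, Br))) / np.sqrt(Nr)  # random RSB
Fb = np.exp(2j * np.pi * rng.uniform(size=(Nt, Bt))) / np.sqrt(Nt)  # random TSB, ||f_j|| = 1
Nmat = sigma * (rng.standard_normal((Nr, Bt))
                + 1j * rng.standard_normal((Nr, Bt))) / np.sqrt(2)
Y = Wb.conj().T @ H @ Fb + Wb.conj().T @ Nmat

# columns of X: vec of the rank-one terms [W^H A_r]_{:,l} [A_t^H F_b]_{l,:}
WA = Wb.conj().T @ Ar
AF = At.conj().T @ Fb
X = np.stack([np.outer(WA[:, l], AF[l, :]).ravel(order='F') for l in range(L)],
             axis=1)
h_hat = np.linalg.lstsq(X, Y.ravel(order='F'), rcond=None)[0]
print(np.linalg.norm(h_hat - h) / np.linalg.norm(h))   # small relative error
\end{verbatim}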
\subsection{Compressed Sensing-Based Channel Estimation}\label{traditional CS} Recalling the channel model in \eqref{compact channel}, a typical CS framework restricts the normalized spatial angles ${{f}_{r,l}},{{f}_{t,l}},~l=1,2,\ldots, L$, to be chosen from the discrete angle dictionaries $ {{f}_{r,l}}\in \left\{0,{1}/{G_r},\ldots, {(G_r-1)}/{G_r}\right\}$ and ${{f}_{t,l}}\in \left\{0,{1}/{G_t},\ldots, {(G_t-1)}/{G_t}\right\} $, where $G_r=\lceil sN_r \rceil$ and $G_t=\lceil s N_t \rceil$ with $s \ge 1$ are, respectively, the cardinalities of the receive and transmit spatial angle dictionaries. The transmit and receive array response dictionaries are then given by \begin{align} ~~~\bar{\mathbf{A}}_r=\left[\mathbf{a}_r(0),\mathbf{a}_r\left(\frac{1}{G_r}\right),\ldots ,\mathbf{a}_r\left(\frac{G_r-1}{G_r}\right)\right]\in {\mathbf{C}^{N_r\times G_r}}\nonumber \end{align} and \begin{align} ~~~\bar{\mathbf{A}}_t=\left[\mathbf{a}_t(0),\mathbf{a}_t\left(\frac{1}{G_t}\right),\ldots ,\mathbf{a}_t\left(\frac{G_t-1}{G_t}\right)\right]\in {\mathbf{C}^{N_t\times G_t}}. \nonumber \end{align} With these array response dictionaries, the channel model in \eqref{compact channel} can be rewritten as \begin{align} \mathbf{H}= \bar{\mathbf{A}}_r{\bar{\mathbf{H}}_a}\bar{\mathbf{A}}_t^H + \mathbf{E}, \label{redundant channel estimation} \end{align} where ${\bar{\mathbf{H}}_a}\in {\mathbf{C}^{G_r\times G_t}}$ is an $L$-sparse matrix with $L$ non-zero entries corresponding to the positions of the AoAs and AoDs on their respective angle grids, and $\mathbf{E} \in \C^{N_r \times N_t}$ denotes the quantization error. Because the dictionary matrices $\bar{\mathbf{A}}_r$ and $\bar{\mathbf{A}}_t$ are known, the channel estimation task is equivalent to estimating the non-zero entries in $\bar{\mathbf{H}}_a$. Plugging the model in \eqref{redundant channel estimation} into \eqref{matrix observations} gives \begin{align} \mathbf{Y}={\mathbf{W}_b^H}\bar{\mathbf{A}}_r({\bar{\mathbf{H}}_a}+\mathbf{E})\bar{\mathbf{A}}_t^H\mathbf{F}_b+{\mathbf{W}_b^H}\mathbf{N}, \label{bold Y} \end{align} where, with a slight abuse of notation, $\mathbf{E}$ now denotes the quantization error mapped to the angular domain. Vectorizing $\mathbf{Y}$ in \eqref{bold Y} yields \begin{align} \vect(\mathbf{Y}) & =({\mathbf{F}_b^{T}}\bar{\mathbf{A}}_t^*\otimes {\mathbf{W}_b^H}\bar{\mathbf{A}}_r)(\vect({\bar{\mathbf{H}}_a} + \mathbf{E}))+\vect({\mathbf{W}_b^H}\mathbf{N}). \label{vector cs} \end{align} Denoting $\mathbf{D}={\mathbf{F}_b^{T}}\bar{\mathbf{A}}_t^*\otimes {\mathbf{W}_b^H}\bar{\mathbf{A}}_r\in {\mathbf{C}^{{B_r}{B_t}\times G_r{{G}_{t}}}}$ and $\bar{\mathbf{n}}=\mathbf{D} \vect(\mathbf{E})+\vect({\mathbf{W}_b^H}\mathbf{N})\in {\mathbf{C}^{{B_r}{B_t}\times 1}}$ gives $ \vect(\mathbf{Y})=\mathbf{D}\vect({\bar{\mathbf{H}}_a})+\bar{\mathbf{n}}$. Hence, the estimation of $\vect({\bar{\mathbf{H}}_a})$ from \eqref{vector cs} can be stated as a sparse signal reconstruction problem: \begin{align} \min_{\bar{\mathbf{H}}_a} \| \vect(\mathbf{Y}) - \mathbf{D}\vect({\bar{\mathbf{H}}_a}) \|_2 ~\text{subject to } \| \vect( {\bar{\mathbf{H}}_a}) \|_0=L, \label{vector observations} \end{align} where $\|\cdot \|_0$ is the $\ell_0$-norm, which returns the number of non-zero coordinates of a vector. The problem in \eqref{vector observations} can be solved by using standard CS methods \cite{DonohoCS,TroppOMP}. The number of observations required to reconstruct the $L$-sparse vector $\vect(\bar\mathbf{H}_a)\in \C^{G_r G_t \times 1} $ in \eqref{vector observations} has previously been characterized as $O\left( L\cdot \log (G_rG_t) \right)$ \cite{DonohoCS}, which is much smaller than $O(N_r N_t)$.
However, the computational complexity of estimating $\vect(\bar{\mathbf{H}}_a)$ in \eqref{vector observations} by using OMP, for example, is $O(LB_rB_t G_r G_t)$. Though the quantization error associated with using dictionaries can be made small by increasing the sizes of the dictionaries, the growing computational complexity remains a critical challenge. Instead of developing another one-stage channel sounding method (as in Fig. \ref{system diagram}), we propose a new two-stage channel sounding and estimation framework to overcome the large overhead and complexity drawbacks. \begin{figure} \centering \includegraphics[width=.57\textwidth]{figures/alg_diagram_1.eps} \caption{Illustration of the proposed two-stage AoA and AoD estimation.} \label{alg diagram} \end{figure} \section{Two-stage AoA and AoD Estimation} \label{section algorithm} A conceptual diagram of the proposed two-stage AoA and AoD estimation framework is presented in Fig. \ref{alg diagram}. The proposed sequential technique consists of two channel sounding stages, each of which exploits a much lower-dimensional dictionary than the one-stage channel sounding in Fig. \ref{system diagram}. Following definitions similar to those of the one-stage method in \eqref{matrix observations}, in Stage I of the two-stage framework of Fig. \ref{alg diagram}, the transmit and receive sounding beams are represented by ${\mathbf{F}_{b,1}}\in {\mathbf{C}^{N_t\times B_{t,1}}}$ and ${\mathbf{W}_{b,1}}\in {\mathbf{C}^{N_r\times B_{r,1}}}$, respectively. The AoA estimates of Stage I produce the estimate of the array response matrix $\mathbf{A}_r$ in \eqref{compact channel}, i.e., $\widehat{\mathbf{A}}_r \in \C^{N_r \times L}$. In Stage II, the transmit and receive sounding beams are denoted by ${\mathbf{F}_{b,2}}\in {\mathbf{C}^{N_t\times B_{t,2}}}$ and ${\mathbf{W}_{b,2}}\in {\mathbf{C}^{N_r\times B_{r,2}}}$, respectively. In particular, the receive sounding beams ${\mathbf{W}_{b,2}}$ are optimized based on the estimated AoA array response matrix $\widehat{\mathbf{A}}_r$ from Stage I, which leads to improved estimation accuracy, as our analysis and simulations show. The total number of observations is given by $N_p = B_{t,1}B_{r,1}+B_{t,2}B_{r,2}.$ Accordingly, the total number of channel uses is $K=(B_{t,1}B_{r,1}+B_{t,2}B_{r,2})/N$. \subsection{Stage I: AoA Estimation} We rewrite the channel model in \eqref{redundant channel estimation} as $ \mathbf{H} =\bar{\mathbf{A}}_r\bar{\mathbf{H}}_a\bar{\mathbf{A}}_t^H +\mathbf{E} =\bar{\mathbf{A}}_r{{\mathbf{Q}}_{r}}+\mathbf{E}$, where ${{\mathbf{Q}}_{r}}\in {\mathbf{C}^{G_r\times N_t}}$ has $L$ non-zero rows, whose indices are collected into the support set $\Omega_r \subset \{1,2,\ldots,G_r \}$ with $|\Omega_r| = L$. Using $\Omega_r$, the matrix $\mathbf{A}_r$ in \eqref{compact channel} can be written in terms of the columns of $\bar{\mathbf{A}}_r$ indexed by $\Omega_r$, i.e., $[\bar{\mathbf{A}}_r]_{:,\Omega_r} = \mathbf{A}_r$. \par To estimate the AoAs, we need to recover the support set $\Omega_r$.
Similar to the one-stage sounding in \eqref{matrix observations}, at Stage I in Fig.~\ref{alg diagram}, the observations ${{\mathbf{Y}}_{1}} \in \C^{B_{r,1}\times B_{t,1}}$ are expressed as \begin{align} {{\mathbf{Y}}_{1}} &=\mathbf{W}_{b,1}^H \mathbf{H} {\mathbf{F}_{b,1}}+\mathbf{W}_{b,1}^H\mathbf{N}_1\nonumber\\ &=\mathbf{W}_{b,1}^H\bar{\mathbf{A}}_r{{\mathbf{Q}}_{r}}{\mathbf{F}_{b,1}}+ \mathbf{W}_{b,1}^H\mathbf{E}{\mathbf{F}_{b,1}} +\mathbf{W}_{b,1}^H\mathbf{N}_1\nonumber\\ &=\bm{\Phi}_1 \mathbf{C}_1+ \mathbf{W}_{b,1}^H\mathbf{E}{\mathbf{F}_{b,1}} +\mathbf{W}_{b,1}^H\mathbf{N}_1, \label{MMV observation} \end{align} where $\bm{\Phi}_1 =\mathbf{W}_{b,1}^H\bar{\mathbf{A}}_r \in \C^{B_{r,1} \times G_r}$, $\mathbf{C}_1 = {{\mathbf{Q}}_{r}}{\mathbf{F}_{b,1}} \in \C^{G_r \times B_{t,1}}$, and $\mathbf{N}_1\in {\mathbf{C}^{N_r\times B_{t,1}}}$ is the noise matrix with \gls{iid} entries according to ${{[\mathbf{N}_1]}_{i,j}}\sim \mathcal{C}\mathcal{N}(0,{{\sigma }^{2}})$, $\forall i,j$. Due to the row sparsity of $\mathbf{Q}_r$, it is clear that $\mathbf{C}_1$ also has $L$ non-zero rows, indexed by $\Omega_r$. If $B_{t,1}=1$, the recovery of $\mathbf{C}_1$ in \eqref{MMV observation} can be formulated as a common SMV CS problem. When $B_{t,1}>1$, it becomes an MMV CS problem \cite{MMV_Tropp}, where the columns of $\mathbf{C}_1$ in \eqref{MMV observation} share a common support. The optimization problem for estimating the row support of $\mathbf{C}_1$ in the MMV setting is given by \begin{align} \widehat{\mathbf{C}}_1 = \underset{\mathbf{C}_1}{\mathop{\argmin }}\,\left\| {{\mathbf{Y}}_{1}}- \bm{\Phi}_1 \mathbf{C}_1 \right\|_{F}^{2}\text{~~subject to }{{\left\|\mathbf{C}_1\right\|}_{r,0}} \le L, \label{AoA OMP} \end{align} where $\left\|\mathbf{C}_1\right\|_{r,0}$ is defined as the number of non-zero rows of $\mathbf{C}_1$. Following an approach similar to the OMP, the problem in \eqref{AoA OMP} can be solved by the simultaneous OMP (SOMP) \cite{sompJ} described in Algorithm \ref{alg_SOMP}. The output is the estimated support set $\widehat{\Omega}_r$\footnote{ Here, we assume that the number of paths is known a priori for convenience of the performance analysis in Section \ref{sec analyze}. When the number of paths is not available a priori, a threshold can be introduced and compared with the power of the residual matrix $\mathbf{R}^{(l)}$ in Step 8 at each iteration \cite{TroppGreedy,zhang2021successful}. When the power of $\mathbf{R}^{(l)}$ is less than the threshold, Algorithm \ref{alg_SOMP} terminates, which yields an estimate of the number of paths. }. For notational simplicity, we omit the subscripts of $\mathbf{Y}_1$ and $\bm{\Phi}_1$ in Algorithm \ref{alg_SOMP}. \begin{algorithm} [t] \caption{Simultaneous OMP: SOMP($\mathbf{Y},\bm{\Phi},L$)} \label{alg_SOMP} \begin{algorithmic} [1] \STATE Input: Observations $\mathbf{Y} $, measurement matrix $\bm{\Phi}$, sparsity level $L$. \STATE Initialization: Support set $\widehat{\Omega}^{(0)}= \emptyset$, residual matrix $\mathbf{R}^{(0)} = \mathbf{Y}$. \FOR{$l = 1$ to $L$} \STATE Calculate the coefficient matrix: $\mathbf{S} = \bm{\Phi}^H \mathbf{R}^{(l-1)}$. \STATE Select the index with the largest row norm: $\eta = \argmax \limits _{i=1,\cdots,G_r} \lA [\mathbf{S}]_{i,:} \rA_2$. \STATE Update the support set: $\widehat{\Omega}^{( l)}= \widehat{\Omega}^{( l-1)} \bigcup \eta$. \STATE Update the recovered coefficient matrix: $\widehat{\mathbf{C}}= ([{\bm{\Phi}}]_{:,\widehat{\Omega}^{( l)} })^{\dagger}\mathbf{Y}$.
\STATE Update the residual matrix: $\mathbf{R}^{(l)} =\mathbf{Y}- [{\bm{\Phi}}]_{:,\widehat{\Omega}^{( l)} }\widehat{\mathbf{C}} $. \ENDFOR \STATE Output: $\widehat{\Omega}^{( L)}, \widehat{\mathbf{C}}$. \end{algorithmic} \end{algorithm} It should be emphasized that the choices of the measurement matrix $\bm{\Phi}_1$ and of $\mathbf{C}_1$ have a profound impact on the recovery performance of SOMP \cite{sompJ}. Observing \eqref{MMV observation}, the TSB $\mathbf{F}_{b,1}$ is incorporated in $\mathbf{C}_1$, and the RSB $\mathbf{W}_{b,1}$ is included in the measurement matrix $\bm{\Phi}_1$. Thus, in what follows, the design of the RSB $\mathbf{W}_{b,1}$ and the TSB $\mathbf{F}_{b,1}$ is of interest. \subsubsection{RSB and TSB Design} \label{sectionStageRSB} First, we focus on the design of the TSB $\mathbf{F}_{b,1}$. Considering $\mathbf{C}_1 = \bar{\mathbf{H}}_a\bar{\mathbf{A}}_t^H\mathbf{F}_{b,1}$, in order to guarantee that $\mathbf{F}_{b,1}$ is unbiased toward each column of $\bar{\mathbf{A}}_t$, we design $\mathbf{F}_{b,1}$ by maximizing the minimum correlation between $\mathbf{F}_{b,1}$ and each column of $\bar{\mathbf{A}}_t$, which yields \begin{align} \max_{\mathbf{F}_{b,1}} \min_{i} {{\| \mathbf{F}_{b,1}^H{{\left[ \bar{\mathbf{A}}_t \right]}_{:,i}} \|}_{2}} ~~\text{subject to}~ \mathbf{F}_{b,1}^H \mathbf{F}_{b,1} =p_1 \mathbf{I}_{B_{t,1}}, \label{AoA trans beam} \end{align} where $p_1$ is the power allocation of Stage I. After taking the constraint into account, the optimal solution to the problem in \eqref{AoA trans beam} should ideally satisfy ${{\| \mathbf{F}_{b,1}^H{{\left[ \bar{\mathbf{A}}_t \right]}_{:,i}} \|}_2}=\sqrt{{p_1B_{t,1}}/{N_t}},~\text{ }i=1,\ldots, G_t$, meaning that $\mathbf{F}_{b,1}$ is isometric with respect to all columns of $\bar{\mathbf{A}}_t$. This is obtained by \begin{align} \mathbf{F}_{b,1}=\sqrt{p_1}\left[ {{\mathbf{e}}_{1}},{{\mathbf{e}}_{2}},\ldots ,{{\mathbf{e}}_{B_{t,1}}} \right],\label{F beams AoAs} \end{align} where $\mathbf{e}_i$ is the $i$th column of $\mathbf{I}_{N_t}$. The construction of $\mathbf{e}_j, ~ j=1, \ldots, B_{t,1}$ in \eqref{F beams AoAs} using the hybrid analog-digital array is possible due to the fact that any vector can be constructed by linearly combining $N (\geq 2)$ RF chains \cite{xzhang}. To be more specific, there exist $\mathbf{F}_{A,j}\in \C^{N_t \times N}$, $\mathbf{f}_{D,j}\in \C^{N \times 1}$, and $s_j=1$ such that $\mathbf{e}_{j}=\mathbf{F}_{A,j} \mathbf{f}_{D,j}s_j$, i.e., \begin{eqnarray} \mathbf{e}_{j} = \underbrace{\frac{1}{\sqrt{N_t}} [ \mathbf{1}_{N_t} ~ \tilde{\mathbf{1}}_{N_t}^{(j)} ~ \mathbf{1}_{N_t} \cdots \mathbf{1}_{N_t} ] }_{\triangleq \mathbf{F}_{A,j} } \underbrace{\frac{\sqrt{ N_t}}{2} \left[1,-1,0 ,\cdots , 0 \right]^T }_{\triangleq \mathbf{f}_{D,j}}\times 1, \label{design e1} \end{eqnarray} where $\tilde{\mathbf{1}}_{N_t}^{(j)} \in \R^{N_t \times 1}$ is defined as the all-one vector $\bm{1}_{N_t}\in \R^{N_t\times 1}$ with the $j$th entry replaced by $-1$. For the measurement matrix $\bm{\Phi}_1 = \mathbf{W}_{b,1}^H \bar{\mathbf{A}}_r$, we optimize $\mathbf{W}_{b,1}$ by incorporating the isometric CS measurement matrix design criterion \cite{sensingOPT1, sensingOPT2,hadi2015}: \begin{align} \min _{\bm{\Phi}_1} \lA \bm{\Phi}_1 ^H \bm{\Phi}_1 - \mathbf{I}_{G_r}\rA_F^2.
\label{sensing opt problem} \end{align} After performing standard algebraic manipulations and exploiting the fact that $\bar{\mathbf{A}}_r \bar{\mathbf{A}}_r^H = \frac{G_r}{N_r}\mathbf{I}_{N_r}$, the optimality condition for \eqref{sensing opt problem} is that the columns of $\mathbf{W}_{b,1}$ are orthogonal. Accounting for the analog-digital array constraint on $\mathbf{W}_{b,1}$ and setting $B_{r,1} = N_r$, we use the DFT matrix $\mathbf{S}_{N_r} \in \C^{N_r \times N_r}$ such that \begin{align} \mathbf{W}_{b,1} = \mathbf{S}_{N_r}, \label{RSB 1} \end{align} where $[\mathbf{S}_{N_r}]_{m,n}=\frac{1}{\sqrt{N_r}}e^{-j\frac{2\pi (m-1)(n-1)}{N_r}}, \forall m,n$. Based on the RSB in \eqref{RSB 1}, the distribution of the noise term in \eqref{MMV observation} is discussed in the following. \begin{Proposition}\label{noise semi} For any semi-orthogonal matrix $\mathbf{A}\in \C^{m\times n}$ with $\mathbf{A} \mathbf{A}^H = \mathbf{I}$ and any random vector $\mathbf{n}\in \C^{n \times 1}$ with \gls{iid} entries distributed as $\cC\cN(0, \sigma^2)$, the vector $\mathbf{b} = \mathbf{A}\mathbf{n}$ also has \gls{iid} $\cC\cN(0, \sigma^2)$ entries. \end{Proposition} \begin{proof} The covariance matrix of $\mathbf{b}$ is given by $\E[ \mathbf{A}\mathbf{n}\bn^H\mathbf{A}^H] = \sigma^2 \mathbf{I}$. Since $\mathbf{b}$ is complex Gaussian, its uncorrelated entries are independent, and hence the entries of $\mathbf{b}$ are \gls{iid} $\cC\cN(0, \sigma^2)$. \end{proof} \begin{Remark} \label{remark 2} Due to the semi-orthogonality of $\mathbf{W}_{b,1}$ in \eqref{RSB 1}, according to Proposition \ref{noise semi}, the effective noise matrix $\mathbf{W}_{b,1}^H\mathbf{N}_1 \in \C^{N_r \times B_{t,1}}$ in \eqref{MMV observation} has i.i.d. Gaussian entries, i.e., $[\mathbf{W}_{b,1}^H\mathbf{N}_1]_{i,j} \sim \cC\cN(0,\sigma^2),\forall i,j$. Moreover, since $\bm{\Phi}_1 =\mathbf{W}_{b,1}^H \bar{\mathbf{A}}_r$, we have $\| [\bm{\Phi}_1]_{:,i} \|_2 = 1, \forall i$. \end{Remark} The algorithmic procedure for estimating the AoAs is described in Algorithm \ref{alg_AoAs}. Given the estimated support set $\widehat{\Omega}_r$ from Algorithm \ref{alg_SOMP}, the output of Algorithm \ref{alg_AoAs} is the estimated AoA array response matrix $\widehat{\mathbf{A}}_r = [\bar{\mathbf{A}}_r]_{:,\widehat{\Omega}_r} \in \C^{N_r \times L}$. Overall, the number of channel uses for the AoA estimation is $K_1 = B_{t,1} \frac{ N_r }{N}$.
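A self-contained sketch of Algorithm \ref{alg_SOMP} and its Stage-I use is given below. To sidestep the quantization error, the true AoAs are placed on the grid; the grid size, path count, noise level, and the row-sparse surrogate for $\mathbf{Q}_r$ are illustrative assumptions.

\begin{verbatim}
import numpy as np

def somp(Y, Phi, L):
    # simultaneous OMP over the row support of C (Algorithm 1)
    support, residual = [], Y.copy()
    for _ in range(L):
        corr = np.linalg.norm(Phi.conj().T @ residual, axis=1)
        corr[support] = 0.0                      # do not reselect an index
        support.append(int(np.argmax(corr)))
        C = np.linalg.lstsq(Phi[:, support], Y, rcond=None)[0]
        residual = Y - Phi[:, support] @ C
    return support, C

rng = np.random.default_rng(3)
Nr, Nt, L, Gr, Bt1, p1, sigma = 32, 64, 4, 64, 8, 1.0, 0.05
Ar_bar = np.exp(2j * np.pi * np.outer(np.arange(Nr), np.arange(Gr) / Gr)) / np.sqrt(Nr)
omega_r = rng.choice(Gr, size=L, replace=False)  # true on-grid AoA support
Qr = np.zeros((Gr, Nt), dtype=complex)           # row-sparse surrogate for Q_r
Qr[omega_r, :] = (rng.standard_normal((L, Nt))
                  + 1j * rng.standard_normal((L, Nt))) / np.sqrt(2)
H = Ar_bar @ Qr                                  # on-grid channel (E = 0)

Wb1 = np.fft.fft(np.eye(Nr)) / np.sqrt(Nr)       # DFT RSB S_{Nr}
Fb1 = np.sqrt(p1) * np.eye(Nt)[:, :Bt1]          # TSB sqrt(p1)[e_1,...,e_{B_{t,1}}]
N1 = sigma * (rng.standard_normal((Nr, Bt1))
              + 1j * rng.standard_normal((Nr, Bt1))) / np.sqrt(2)
Y1 = Wb1.conj().T @ H @ Fb1 + Wb1.conj().T @ N1
support, _ = somp(Y1, Wb1.conj().T @ Ar_bar, L)
print(sorted(omega_r.tolist()), sorted(support))  # the two supports should match
\end{verbatim}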
\begin{algorithm} [t] \caption{AoA Estimation Algorithm} \label{alg_AoAs} \begin{algorithmic} [1] \STATE Input: Channel dimensions $N_r$, $N_t$, number of RF chains $N$, number of channel paths $L$, power allocation $p_1$, receive array response dictionary $\bar{\mathbf{A}}_r \in \C^{N_r \times G_r}$. \STATE Initialization: Generate the TSB $\mathbf{F}_{b,1} = \sqrt{p_1}[ \mathbf{e}_1,\ldots,\mathbf{e}_{B_{t,1}}]$ in \eqref{F beams AoAs} according to \eqref{design e1} and the RSB $\mathbf{W}_{b,1} = \mathbf{S}_{N_r}$ in \eqref{RSB 1}. \STATE Collect the observations $\mathbf{Y}_1 = \mathbf{W}_{b,1}^H \mathbf{H} \mathbf{F}_{b,1} + \mathbf{W}_{b,1}^H\mathbf{N}_1$. \STATE Solve the problem in \eqref{AoA OMP} by using Algorithm \ref{alg_SOMP} with the sparsity level $L$ and $\bm{\Phi}_1 = \mathbf{W}_{b,1}^H \bar{\mathbf{A}}_r$, \vspace{-0.1cm} \begin{align} (\widehat{\Omega}_r, \widehat{\mathbf{C}}_1) = \text{SOMP}(\mathbf{Y}_1, \bm{\Phi}_1, L). \nonumber \end{align} \vspace{-0.5cm} \STATE Output: Estimate of the AoA array response matrix $\widehat{\mathbf{A}}_r = [\bar{\mathbf{A}}_r]_{:,\widehat{\Omega}_r}$. \end{algorithmic} \end{algorithm} \subsection{Stage II: AoD Estimation} \label{AoDs method} To obtain the estimate of the AoDs, we utilize a method similar to that of Stage I. Similar to the one-stage sounding in \eqref{matrix observations}, the observations $\mathbf{Y}_2 \in \C^{B_{r,2} \times B_{t,2}}$ of Stage II in Fig. \ref{alg diagram} are expressed as \begin{align} {{\mathbf{Y}}_{2}}=\mathbf{W}_{b,2}^H \mathbf{H} {\mathbf{F}_{b,2}}+\mathbf{W}_{b,2}^H\mathbf{N}_2, \label{observation se} \end{align} where ${\mathbf{W}_{b,2}}\in {\mathbf{C}^{N_r\times B_{r,2}}}$ and ${\mathbf{F}_{b,2}}\in {\mathbf{C}^{N_t\times B_{t,2}}}$ are the RSB and TSB of Stage II, respectively, and $\mathbf{N}_2 \in {\mathbf{C}^{N_r\times B_{t,2}}}$ is the noise matrix with i.i.d. entries according to $\mathcal{C}\mathcal{N}(0,{{\sigma }^{2}})$. \par Recalling \eqref{compact channel} and \eqref{redundant channel estimation}, the channel matrix can be rewritten as \begin{align} \mathbf{H}=\bar{\mathbf{A}}_r{{\bar{\mathbf{H}}_a}\bar{\mathbf{A}}_t^H }+\mathbf{E}. \label{two stage channel} \end{align} One can find that $\bar{\mathbf{A}}_r\bar{\mathbf{H}}_a\in {\mathbf{C}^{N_r\times G_t}}$ has $L$ non-zero columns, indexed by $\Omega_t$ with $|\Omega_t|=L$. Then, plugging \eqref{two stage channel} into \eqref{observation se} and taking the conjugate transpose gives \begin{align} \mathbf{Y}_{2}^H &=\underbrace{\mathbf{F}_{b,2} ^H \bar{\mathbf{A}}_{t}}_{\triangleq \bm{\Phi}_2} \underbrace{ \bar{\mathbf{H}}_a^H \bar{\mathbf{A}}_r^H \mathbf{W}_{b,2} }_{\triangleq \mathbf{C}_2}+\mathbf{F}_{b,2} ^H \mathbf{E}^H \mathbf{W}_{b,2} + \mathbf{N}_2^H\mathbf{W}_{b,2} \nonumber\\ &=\bm{\Phi}_2{{\mathbf{C}}_{2}} + \mathbf{F}_{b,2} ^H \mathbf{E}^H\mathbf{W}_{b,2}+\mathbf{N}_2^H\mathbf{W}_{b,2}, \label{ob 2nd} \end{align} where $\bm{\Phi}_2 = \mathbf{F}_{b,2} ^H \bar{\mathbf{A}}_{t} \in \C^{B_{t,2}\times G_t}$ and $\mathbf{C}_2 = \bar{\mathbf{H}}_a^H \bar{\mathbf{A}}_r^H \mathbf{W}_{b,2} \in \C^{G_t \times B_{r,2}}$. It is straightforward to see that ${{\mathbf{C}}_{2}}$ has only $L$ non-zero rows, indexed by $\Omega_t$.
Similar to \eqref{AoA OMP} in Stage I, the support set $\Omega_t$ estimation problem can be formulated as \begin{eqnarray} \widehat{\mathbf{C}}_2 = \underset{\mathbf{C}_2}{\mathop{\argmin }}\,\left\| \mathbf{Y}_{2}^H-\bm{\Phi}_2{{\mathbf{C}}_{2}} \right\|_{F}^{2}\text{ subject to }{{\left\| {{\mathbf{C}}_{2}} \right\|}_{r,0}}\le L,\label{s2 formulation} \end{eqnarray} which is solved by Algorithm \ref{alg_SOMP}. In what follows, the design of the RSB ${\mathbf{W}_{b,2}}$ and the TSB ${\mathbf{F}_{b,2}}$ for Stage II is of interest. \subsubsection{RSB and TSB Design} For the design of the RSB ${\mathbf{W}_{b,2}}$, we leverage the estimated AoAs from Stage I to formulate \begin{align} \max_{\mathbf{W}_{b,2}} \min_{i} {{\| \mathbf{W}_{b,2}^H{{[\widehat{\mathbf{A}}_r ]}_{:,i}}\|}_{2}}. \label{W2 design} \end{align} If $\mathbf{W}_{b,2}$ is semi-unitary, i.e., $\mathbf{W}_{b,2}^H \mathbf{W}_{b,2} = \mathbf{I}_{B_{r,2}}$, the objective value in \eqref{W2 design} satisfies $\| \mathbf{W}_{b,2}^H{{[ \widehat{\mathbf{A}}_r ]}_{:,i}} \|_2 \le 1, \forall i$, with the equality holding if \begin{align} \cR(\mathbf{W}_{b,2}) = \cR(\widehat{\mathbf{A}}_r). \label{W2 design sub} \end{align} One can check that \eqref{W2 design sub} holds only if $B_{r,2} \ge L$. Without loss of optimality and to reduce the number of receive sounding beams, we set $B_{r,2} = L$. One solution to \eqref{W2 design sub} is attained when the columns of $\mathbf{W}_{b,2}$ form an orthonormal basis of the column space of $\widehat{\mathbf{A}}_r$. For example, we let $\mathbf{W}_{b,2}$ be the $\mathbf{Q}$-matrix of the QR decomposition\footnote{The QR decomposition is a decomposition of a matrix $\mathbf{A}\in \C^{m\times n}$ into the product $\mathbf{A} = \mathbf{Q}\mathbf{R}$ of a matrix $\mathbf{Q} \in \C^{m \times n}$ with orthonormal columns and an upper triangular matrix $\mathbf{R} \in \C^{ n\times n}$. } of $\widehat{\mathbf{A}}_r$ such that \begin{align} \mathbf{W}_{b,2} = \mathop{\mathrm{QR}}(\widehat{\mathbf{A}}_r), \label{expression Wb2} \end{align} where $\mathop{\mathrm{QR}}(\cdot)$ returns the $\mathbf{Q}$-matrix of a given matrix. \begin{Remark} \label{remark 3} Due to the semi-orthogonality of $\mathbf{W}_{b,2}$ and the conclusions in Proposition \ref{noise semi}, the effective noise matrix $\mathbf{W}_{b,2}^H\mathbf{N}_2 \in \C^{B_{r,2} \times B_{t,2}}$ in \eqref{observation se} has \gls{iid} Gaussian entries, i.e., $[\mathbf{W}_{b,2}^H\mathbf{N}_2]_{i,j} \sim \cC\cN(0,\sigma^2),\forall i,j$. \end{Remark} As for the design of ${\mathbf{F}_{b,2}}$, we exploit the isometric CS measurement matrix design criterion, \begin{align} \min _{\bm{\Phi}_2}\| \bm{\Phi}_2 ^H \bm{\Phi}_2 - \mathbf{I}_{G_t}\|_F^2. \label{design of F2} \end{align} After manipulations similar to \eqref{sensing opt problem}, the optimality condition for $\mathbf{F}_{b,2}$ in \eqref{design of F2} is that the columns of $\mathbf{F}_{b,2}$ are orthogonal.
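Before turning to the TSB construction, the RSB design in \eqref{expression Wb2} can be checked numerically; the sketch below (Python/NumPy; the unit-norm ULA response model is our assumption for illustration) verifies the semi-unitarity of $\mathbf{W}_{b,2}$ and the unit-gain condition $\|\mathbf{W}_{b,2}^H[\widehat{\mathbf{A}}_r]_{:,i}\|_2=1$.
\begin{verbatim}
import numpy as np

def array_response(N, f):
    """Unit-norm ULA response vector for normalized frequency f."""
    return np.exp(2j * np.pi * f * np.arange(N)) / np.sqrt(N)

N_r, L = 20, 4
rng = np.random.default_rng(1)
A_r_hat = np.column_stack([array_response(N_r, f)
                           for f in rng.uniform(0, 1, L)])

# RSB of Stage II: Q-factor of the estimated AoA response matrix.
W_b2, _ = np.linalg.qr(A_r_hat)              # shape (N_r, L)

print(np.allclose(W_b2.conj().T @ W_b2, np.eye(L)))      # semi-unitary
print(np.linalg.norm(W_b2.conj().T @ A_r_hat, axis=0))   # all ones
\end{verbatim}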
Then, following the same procedure as \eqref{F beams AoAs} and \eqref{design e1}, we obtain the design of the TSB $\mathbf{F}_{b,2}$ below, \begin{align} \mathbf{F}_{b,2}=\sqrt{p_2}[ {{\mathbf{e}}_1},{{\mathbf{e}}_2},\ldots ,{{\mathbf{e}}_{B_{t,2}} }], \label{expression Fb2} \end{align} where $p_2$ is the power coefficient of Stage II. The algorithmic procedure for estimating the AoDs is described in Algorithm \ref{alg_AoDs}. Provided the estimated support set $\widehat{\Omega}_t$, the output of Algorithm \ref{alg_AoDs} is the estimated AoD array response matrix $\widehat{\mathbf{A}}_t = [\bar{\mathbf{A}}_t]_{:,\widehat{\Omega}_t} \in \C^{N_t \times L}$. The number of channel uses for the AoD estimation in Stage II is $K_2 = B_{t,2}$, and the overall number of channel uses for the two stages is \begin{align} K=K_1 + K_2 = B_{t,1} \frac{ N_r }{N} + B_{t,2}. \label{number of uses} \end{align} \begin{Remark} Recall that the number of observations for the conventional one-stage channel sounding in Fig. \ref{system diagram} is $\cO(L\cdot \log(G_r G_t/L))$ \cite{DonohoCS}. As a comparison, since the proposed two-stage channel sounding in Fig. \ref{alg diagram} estimates only the AoAs in Stage I and only the AoDs in Stage II, the number of required observations is $\cO(L \cdot \log (G_r/L))$ in Stage I and $\cO(L\cdot \log (G_t/L))$ in Stage II. The total number of required observations for the proposed two-stage channel sounding is $\cO(L \cdot \log (G_r/L)) + \cO(L \cdot \log (G_t/L)) = \cO(L \cdot \log (G_t G_r/L^2 ))$, which is less than that of the conventional one-stage sounding. \end{Remark} \begin{Remark} \label{RSBHappening} Regarding when the RSB and TSB designs take place: in Stage I, the RSB in \eqref{RSB 1} and the TSB in \eqref{F beams AoAs} are designed before the channel estimation begins and are then utilized by the transmitter and receiver. Just as training pilots are known to the transmitter and receiver in advance of channel estimation, we assume here that the TSB and RSB are known a priori. In Stage II, the TSB ${\mathbf{F}}_{b,2}$ in \eqref{expression Fb2} is also designed in advance, while the RSB $\mathbf{W}_{b,2}$ in \eqref{expression Wb2} is designed and employed at the receiver side, which requires no feedback to the transmitter. Overall, the proposed method requires no feedback during the whole channel estimation procedure. \end{Remark} \begin{algorithm} [t] \caption{AoD Estimation Algorithm} \label{alg_AoDs} \begin{algorithmic} [1] \STATE Input: Channel dimensions $N_r$, $N_t$, number of RF chains $N$, channel paths $L$, power allocation $p_2$, output of AoA estimation $\widehat{\mathbf{A}}_r$, transmit array response dictionary $\bar{\mathbf{A}}_t \in \C^{N_t \times G_t}$. \STATE Initialization: Generate the TSB ${\mathbf{F}}_{b,2} = \sqrt{p_2}[ \mathbf{e}_{1},\ldots,\mathbf{e}_{B_{t,2}}]$ in \eqref{expression Fb2} and RSB $\mathbf{W}_{b,2} = \mathop{\mathrm{QR}}(\widehat{\mathbf{A}}_r)$ in \eqref{expression Wb2}. \STATE Collect the observations ${\mathbf{Y}}_2 = \mathbf{W}_{b,2}^H \mathbf{H} {\mathbf{F}}_{b,2} + \mathbf{W}_{b,2}^H {\mathbf{N}}_2$. \STATE Solve the problem in \eqref{s2 formulation} by using Algorithm \ref{alg_SOMP} with the sparsity level $L$ and $\bm{\Phi}_2 = \mathbf{F}_{b,2}^H \bar{\mathbf{A}}_t$, \vspace{-0.1cm} \begin{eqnarray} (\widehat{\Omega}_t, \widehat{\mathbf{C}}_2) = \text{SOMP}(\mathbf{Y}_2^H,\bm{\Phi}_2,L).
\nonumber \end{eqnarray} \vspace{-0.5cm} \STATE Output: Estimation of AoD array response matrix $\widehat{\mathbf{A}}_t = [\bar{\mathbf{A}}_t]_{:,\widehat{\Omega}_t}$. \end{algorithmic} \end{algorithm} \subsection{Channel Estimation} \label{R estimation} Recalling the channel representation in \eqref{compact channel} and after estimating $\widehat{\mathbf{A}}_r \in \C^{N_r \times L}$ in Algorithm \ref{alg_AoAs} and $\widehat{\mathbf{A}}_t \in \C^{N_t \times L}$ in Algorithm \ref{alg_AoDs}, we can express the channel estimate as \begin{align} \widehat{\mathbf{H}} = \widehat{\mathbf{A}}_r \widehat{\mathbf{R}} \widehat{\mathbf{A}}_t^H, \label{expression of estimation} \end{align} where $\widehat{\mathbf{R}} \in \C^{L \times L}$ denotes the estimate of $\diag(\mathbf{h})$ in \eqref{compact channel}. In the following, we discuss how to obtain the estimate $\widehat{\mathbf{R}}$. It is worth noting that, unlike \eqref{compact channel}, we do not restrict $\widehat{\mathbf{R}}$ to be a diagonal matrix because of the possible permutations in the columns of $\widehat{\mathbf{A}}_r$ and $\widehat{\mathbf{A}}_t$. Recall the observations of each stage, i.e., $\mathbf{Y}_1= \mathbf{W}_{b,1}^H \bar{\mathbf{A}}_r{\bar{\mathbf{H}}_a}\bar{\mathbf{A}}_t^H \mathbf{F}_{b,1} + \mathbf{W}_{b,1}^H \mathbf{E} \mathbf{F}_{b,1} +\mathbf{W}_{b,1}^H\mathbf{N}_1$ and $\mathbf{Y}_2= \mathbf{W}_{b,2}^H \bar{\mathbf{A}}_r{\bar{\mathbf{H}}_a}\bar{\mathbf{A}}_t^H \mathbf{F}_{b,2} + \mathbf{W}_{b,2}^H \mathbf{E} \mathbf{F}_{b,2} +\mathbf{W}_{b,2}^H\mathbf{N}_2$. Since $\mathbf{W}_{b,1}^H\mathbf{N}_1$ and $\mathbf{W}_{b,2}^H\mathbf{N}_2$ are \gls{iid} Gaussian, incorporating the expression of the channel estimate in \eqref{expression of estimation}, the estimate $\widehat{\mathbf{R}}$ is given by \begin{align} \nonumber \widehat{\mathbf{R}} = \argmin_{\mathbf{R}}\lA \begin{bmatrix} \vect( \mathbf{Y}_1 )\\ \vect( \mathbf{Y}_2) \end{bmatrix} - \begin{bmatrix} \vect( \mathbf{W}_{b,1}^H \widehat{\mathbf{A}}_r {\mathbf{R}} \widehat{\mathbf{A}}_t^H \mathbf{F}_{b,1} )\\ \vect( \mathbf{W}_{b,2}^H \widehat{\mathbf{A}}_r {\mathbf{R}} \widehat{\mathbf{A}}_t^H \mathbf{F}_{b,2}) \end{bmatrix} \rA_F^2, \end{align} where the optimal solution is given by \begin{align} \nonumber \vect(\widehat{\mathbf{R}}) = \left( \mathbf{A}_1^H \mathbf{A}_1+\mathbf{A}_2^H \mathbf{A}_2\right)^{-1}\left(\mathbf{A}_1^H \vect(\mathbf{Y}_1) +\mathbf{A}_2^H\vect(\mathbf{Y}_2)\right), \end{align} where $\mathbf{A}_1 = (\widehat{\mathbf{A}}_t^H \mathbf{F}_{b,1})^T\otimes\mathbf{W}_{b,1}^H \widehat{\mathbf{A}}_r \in \C^{N_r B_{t,1}\times L^2}$ and $\mathbf{A}_2 = (\widehat{\mathbf{A}}_t^H \mathbf{F}_{b,2})^T\otimes\mathbf{W}_{b,2}^H \widehat{\mathbf{A}}_r\in \C^{L B_{t,2}\times L^2}$. Because $N_r B_{t,1} \gg L^2$ and $B_{t, 2} \gg L$, the matrix $\mathbf{A}_1^H \mathbf{A}_1 + \mathbf{A}_2^H\mathbf{A}_2 \in \C^{L^2 \times L^2}$ is invertible almost surely.
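As an illustration, this least-squares step can be implemented in a few lines via the identity $\vect(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^T \otimes \mathbf{A})\vect(\mathbf{X})$ (Python/NumPy; the function and variable names below are ours).
\begin{verbatim}
import numpy as np

def estimate_R(Y1, Y2, W_b1, W_b2, F_b1, F_b2, Ar_hat, At_hat):
    """LS estimate of R from the stacked two-stage observations."""
    L = Ar_hat.shape[1]
    # vec(W^H Ar R At^H F) = ((At^H F)^T kron (W^H Ar)) vec(R)
    A1 = np.kron((At_hat.conj().T @ F_b1).T, W_b1.conj().T @ Ar_hat)
    A2 = np.kron((At_hat.conj().T @ F_b2).T, W_b2.conj().T @ Ar_hat)
    G = A1.conj().T @ A1 + A2.conj().T @ A2      # (L^2, L^2) Gram matrix
    b = A1.conj().T @ Y1.reshape(-1, order='F') \
        + A2.conj().T @ Y2.reshape(-1, order='F')
    # Column-major reshape matches the vec(.) convention.
    return np.linalg.solve(G, b).reshape(L, L, order='F')
\end{verbatim}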
\begin{Remark} After $\widehat{\mathbf{R}}$ is estimated, the pairing of AoAs and AoDs can be obtained by selecting the positions of the $L$ largest-magnitude entries in $\widehat{\mathbf{R}}$. Then, the path gains $h_l, l=1,2,\cdots,L,$ can be calculated by solving a problem analogous to the oracle estimator in \eqref{oracal estimator}, where the two-stage RSBs and TSBs are utilized. \end{Remark} \section{Performance Analysis and Resource Allocation} \label{sec analyze} In this section, we discuss the reconstruction probability of the AoAs and AoDs of the proposed two-stage method in Section \ref{section algorithm}. Moreover, we further enhance the reconstruction performance by optimizing the power and channel use allocation across the two stages. \subsection{Successful Recovery Probability Analysis} \subsubsection{SRP of AoA Estimation} As a starting point, we focus on the SRP of Algorithm \ref{alg_SOMP}. An SRP bound of SOMP was previously studied in \cite{NoiseSOMP}, where the analysis was based on the restricted isometry property constant of the measurement matrix $\bm{\Phi}$. In this work, we instead analyze the recovery performance of Algorithm \ref{alg_SOMP} based on the mutual incoherence property (MIP) constant\footnote{The MIP constant of matrix $\bm{\Phi}$ is quantified by a variable $\mu = \max_{i\neq j}|\langle [\bm{\Phi}]_{:,i},[\bm{\Phi}]_{:,j}\rangle|$, where $\langle \cdot, \cdot \rangle$ denotes the inner product.} \cite{DonohoMIP} of $\bm{\Phi}$. \begin{Lemma}\label{lemma SOMP} Suppose $\mathbf{C} \in \C^{N \times d}$ is a row-sparse matrix, where $L$ ($\ll N$) rows of $\mathbf{C}$, indexed by $\Omega$, are non-zero. We consider the observation $\mathbf{Y} = \bm{\Phi} \mathbf{C} + \mathbf{N}$, where $\mathbf{Y} \in \C^{M \times d}$, $\bm{\Phi} \in \C^{M \times N}$ is the measurement matrix with $L \leq M \ll N$ and $\| [\bm{\Phi}]_{:,i}\|_2=1, \forall i$, and $\mathbf{N} \in \C^{M \times d}$ is the noise matrix with entries \gls{iid} according to the complex Gaussian distribution $\cC\cN(0,\sigma^2)$. Given that the MIP constant $\mu$ of the measurement matrix $\bm{\Phi}$ satisfies $\mu< 1/ (2L-1)$, the SRP of Algorithm \ref{alg_SOMP} satisfies \begin{align} \text{Pr}(\cV_{S})\ge F_2\left(\frac{{(1-(2L-1)\mu)^2 C_{\text{min}}^2}-4\sigma^2 \mu_{M,d}}{4\sigma^2 \sigma_{M,d}}\right), \label{SOMP prob} \end{align} where $\cV_{S}$ is the event of successful reconstruction of Algorithm \ref{alg_SOMP}, $C_{\min} = \min\limits _{i\in \Omega} \lA [\mathbf{C}]_{i,:} \rA_2$, $\mu_{M,d} =(M^{1/2} + d^{1/2}) ^2$, $ \sigma_{M,d} = (M^{1/2} + d^{1/2}) (M^{-1/2} + d^{-1/2}) ^{1/3} $, and the function $F_2(\cdot)$\footnote{ The CDF of the Tracy-Widom law \cite{TW1,TW2} $F_2(\cdot)$ is expressed as \begin{align} F_2(s) = \exp\left( -\int_s^{\infty}(x-s)q^2(x)\, dx\right), \nonumber \end{align} where $q(x)$ is the solution of the Painlev\'{e} equation of type II: \begin{align} q''(x)=xq(x)+2q(x)^3,~ q(x) \sim \text{Ai}(x), x \rightarrow \infty, \nonumber \end{align} where $\text{Ai}(x)$ is the Airy function \cite{TW2, TW1}. To reduce computational complexity, we adopt the table lookup method \cite{dataTW} to obtain the value of $F_2(\cdot)$. } is the cumulative distribution function (CDF) of the Tracy-Widom law \cite{TW2, TW1}. \end{Lemma} \begin{proof} See Appendix \ref{appendix5-1}. \end{proof} \begin{Proposition} \label{with noise p} Consider the signal model provided in Lemma \ref{lemma SOMP} with the quantization error taken into account, i.e., the observation model $\mathbf{Y} = \bm{\Phi} \mathbf{C} + \tilde{\mathbf{N}}$, where the effective noise is $\tilde{\mathbf{N}} = \mathbf{E}+\mathbf{N}$, with $\mathbf{E}$ the quantization error and $\mathbf{N}$ Gaussian noise with \gls{iid} $\cC\cN(0,\sigma^2)$ entries.
If $\mu$ is the MIP constant of the measurement matrix $\bm{\Phi}$ with $\mu < 1/(2L-1)$, the SRP of Algorithm \ref{alg_SOMP} is given by \begin{eqnarray} \text{Pr}(\cV_S) \ge F_2\left(\frac{{\left((1-(2L-1)\mu) C_{\text{min}}-2\| \mathbf{E}\|_2\right)^2}-4\sigma^2 \mu_{M,d}}{4\sigma^2 \sigma_{M,d}}\right), \label{prob noise case} \end{eqnarray} where $C_{\min} = \min\limits _{i\in \Omega} \lA [\mathbf{C}]_{i,:} \rA_2$, $\mu_{M,d} =(M^{1/2} + d^{1/2}) ^2$, and $\sigma_{M,d} = (M^{1/2} + d^{1/2}) (M^{-1/2} + d^{-1/2}) ^{1/3} $. \begin{proof} See Appendix \ref{proof prob noise case}. \end{proof} \end{Proposition} As a direct consequence of Proposition \ref{with noise p}, Theorem \ref{AoAs estimation} below quantifies the SRP of AoA estimation in Algorithm \ref{alg_AoAs}. \begin{Theorem} \label{AoAs estimation} Assume the MIP constant of the measurement matrix $\bm{\Phi}_1$ in Algorithm \ref{alg_AoAs} satisfies $\mu_1 < 1/(2L-1)$. Then, the SRP of Algorithm \ref{alg_AoAs} is lower bounded by \begin{align} \text{Pr}(\cA_{S}) &\ge F_2 \left(\frac{ \left(( 1-(2L-1)\mu_1) {h_{\text{min}} \sqrt{\frac{p_1 B_{t,1}}{N_t} }}-2\|\mathbf{E}_1 \|_2\right)^2-4\sigma^2 \mu_{{N_r,B_{t,1}}}}{4\sigma^2 \sigma_{{N_r,B_{t,1}}}} \right) \nonumber \\ &\approx F_2 \left(\frac{ (1-(2L-1)\mu_1)^2 {h_{\text{min}}^2 \frac{p_1 B_{t,1}}{N_t} }-4\sigma^2 \mu_{{N_r,B_{t,1}}}}{4\sigma^2 \sigma_{{N_r,B_{t,1}}}}\right) \label{aoa prob temp} \\ &\triangleq P_{\text{I}}(p_1, B_{t,1}), \label{aoa prob} \end{align} where $\cA_{S}$ is the event of successful reconstruction of the AoAs, $h_{\min} = \min_{l\le L} |h_l|$ with $h_l$ being the $l$th entry of $\mathbf{h}$ in \eqref{compact channel}, $\mu_{{N_r,B_{t,1}}} =(N_r^{1/2} + B_{t,1}^{1/2}) ^2$, $ \sigma_{{N_r,B_{t,1}}} = (N_r^{1/2} + B_{t,1}^{1/2}) (N_r^{-1/2} + B_{t,1}^{-1/2}) ^{1/3} $, and $\mathbf{E}_1 = \mathbf{W}_{b,1}^H\mathbf{E}{\mathbf{F}_{b,1}}$. The approximation in \eqref{aoa prob temp} is obtained by neglecting the quantization term $\mathbf{E}_1$. In \eqref{aoa prob}, the SRP lower bound is denoted as a function of $(p_1, B_{t,1})$. \end{Theorem} \begin{proof} Recalling the observation model in \eqref{MMV observation} with the TSB and RSB in \eqref{F beams AoAs} and \eqref{RSB 1}, respectively, the row-sparse coefficient matrix $\mathbf{C}_1$ in \eqref{MMV observation} satisfies ${{\| {{[{{\mathbf{C}}_{1}}]}_{{{r}_{l}},:}} \|}_{2}}=\sqrt{\frac{p_1B_{t,1}}{N_t}}\left| {{h}_{l}} \right|$, where ${{r}_{l}} \in \Omega_r$ is the index of the $l$th path of $\mathbf{A}_r$ in $\bar{\mathbf{A}}_r$ such that $[\bar{\mathbf{A}}_r]_{:,r_l}=[\mathbf{A}_r]_{:,l}$, $l=1, \ldots, L$. Substituting $C_{\min} = \min \limits _{r_l \in \Omega_r}\lA [\mathbf{C}_1]_{r_l,:} \rA_2 =\sqrt{\frac{p_1B_{t,1}}{N_t}}\left| {{h}_{\min}} \right|$ into \eqref{prob noise case} results in \eqref{aoa prob}, which completes the proof. \end{proof} \begin{Remark} \label{aoa fixed p} According to Theorem \ref{AoAs estimation}, when the power $p_1$ of Stage I is fixed and the number of transmit sounding beams $B_{t,1}$ ($\ll N_r$) increases, the SRP of AoA increases accordingly. Interestingly, it is more efficient to increase the power allocation $p_1$ than the number of transmit sounding beams $B_{t,1}$ to achieve a higher SRP of AoA. This can be understood by comparing the two cases where $p_1$ or $B_{t,1}$ grows at the same rate.
Compared with increasing $p_1$, increasing $B_{t,1}$ also inflates $\mu_{N_r, B_{t,1}}$ and $\sigma_{N_r, B_{t,1}}$, which partially offsets the gain in signal power and results in a lower SRP in \eqref{aoa prob temp}. This aspect will become clearer in the next subsection when we optimize the allocation of $p_1$ and $B_{t,1}$. \end{Remark} \subsubsection{SRP of AoD Estimation} Regarding the SRP of Algorithm \ref{alg_AoDs}, we assume for tractability that the AoA estimation in Stage I was perfect. The following theorem quantifies the SRP of AoD estimation in Algorithm \ref{alg_AoDs}. \begin{Theorem} \label{AoDs Prob} Provided that perfect AoA knowledge is known a priori and that the MIP constant $\mu_2$ of the matrix $\sqrt{{N_t}/{(p_2 B_{t,2})}}\bm{\Phi}_2$ satisfies $\mu_2 < 1/(2L-1)$, the SRP of Algorithm \ref{alg_AoDs} is lower bounded by \begin{align} \text{Pr}(\cD_S) &\ge\ F_2 \left(\frac{ \left((1-(2L-1)\mu_2) h_{\text{min}} - 2\| \mathbf{E}_2\|_2\right)^2 -4\sigma^2 \frac{N_t}{p_2 B_{t,2}} N_t \mu_{{B_{t,2},L}}}{4N_t\sigma^2 \frac{N_t}{p_2 B_{t,2}} \sigma_{{B_{t,2},L}}}\right) \nonumber\\ &\approx \ F_2 \left(\frac{ (1-(2L-1)\mu_2)^2 h_{\text{min}}^2 -4\sigma^2 \frac{N_t}{p_2 B_{t,2}} N_t \mu_{{B_{t,2},L}}}{4N_t\sigma^2 \frac{N_t}{p_2 B_{t,2}} \sigma_{{B_{t,2},L}}}\right)\label{AoDs pro}\\ &\triangleq P_{\text{II}}(p_2, B_{t,2}), \label{AoDs pro temp} \end{align} where $\cD_S$ denotes the event of successful AoD reconstruction, $h_{\min} = \min _{ l\le L} |h_l|$ with $h_l$ being the $l$th entry of $\mathbf{h}$ in \eqref{compact channel}, $\mu_{{B_{t,2},L}} =(L^{1/2} + B_{t,2}^{1/2}) ^2$, $\sigma_{{B_{t,2},L}} = (L^{1/2} + B_{t,2}^{1/2}) (L^{-1/2} + B_{t,2}^{-1/2}) ^{1/3}$, and $\mathbf{E}_2 = \frac{N_t}{p_2 B_{t,2}}\mathbf{F}_{b,2} ^H \mathbf{E}\mathbf{W}_{b,2}$. In \eqref{AoDs pro temp}, the SRP lower bound is denoted as a function of $(p_2, B_{t,2})$. \end{Theorem} \begin{proof} See Appendix \ref{appendix5-3}. \end{proof}
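The bounds $P_{\text{I}}(p_1, B_{t,1})$ in \eqref{aoa prob} and $P_{\text{II}}(p_2, B_{t,2})$ in \eqref{AoDs pro temp} are straightforward to evaluate numerically. The sketch below (Python/NumPy; \texttt{tw\_cdf} is a placeholder for a Tracy-Widom $F_2$ table lookup as in the footnote to Lemma \ref{lemma SOMP}, and the function names are ours, not standard library routines) evaluates $P_{\text{I}}$ from \eqref{aoa prob temp}; $P_{\text{II}}$ follows analogously from \eqref{AoDs pro}.
\begin{verbatim}
import numpy as np

def tw_cdf(s):
    # Placeholder: Tracy-Widom F2 CDF via table lookup (see the footnote
    # to Lemma 1); substitute any tabulated implementation here.
    raise NotImplementedError

def P_I(p1, B_t1, N_r, N_t, L, mu1, h_min, sigma2):
    """SRP lower bound of AoA estimation (quantization error neglected)."""
    mu_md = (np.sqrt(N_r) + np.sqrt(B_t1)) ** 2
    sig_md = (np.sqrt(N_r) + np.sqrt(B_t1)) \
             * (1 / np.sqrt(N_r) + 1 / np.sqrt(B_t1)) ** (1 / 3)
    signal = (1 - (2 * L - 1) * mu1) ** 2 * h_min ** 2 * p1 * B_t1 / N_t
    return tw_cdf((signal - 4 * sigma2 * mu_md) / (4 * sigma2 * sig_md))
\end{verbatim}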
\subsection{Power and Channel Use Allocation} \label{section power allocation} We recall that in the proposed two-stage method, the transmit sounding beams at Stages I and II are, respectively, $\mathbf{F}_{b,1} = \sqrt{p_1}[\mathbf{e}_1,\ldots,\mathbf{e}_{B_{t,1}}]$ in \eqref{F beams AoAs} and $\mathbf{F}_{b,2} = \sqrt{p_2}[\mathbf{e}_{1},\ldots,\mathbf{e}_{B_{t,2}}]$ in \eqref{expression Fb2}.
The total power budget $E$ is therefore defined by \begin{align} E = \underbrace{ p_1 B_{t,1}{N_r}/{N}}_{\triangleq E_1} + \underbrace{ p_2 B_{t,2} }_{\triangleq E_2}, \label{power value} \end{align} where $E_1$ and $E_2$ are the power budgets at Stage I and Stage II, respectively. We let $\eta_1>0$ and $\eta_2>0$ be the target SRP values at Stage I and Stage II, respectively. The SRP-guaranteed power budget minimization problem\footnote{ In \eqref{power allocation problem}, we present the SRP-constrained power minimization problem for optimizing the power and channel use allocations. This criterion can be thought of as a prudent alternative to performance maximization subject to power constraints in the MIMO literature because it provides a guarantee on the achievable performance \cite{bengtsson2002pragmatic}. Multiple variants of the performance-guaranteed power minimization problem can be found in the context of MIMO resource allocation \cite{wiesel2005linear,dahrouj2010coordinated}. } is then formulated as \begin{subequations} \label{power allocation problem} \begin{align} &\min_{p_1,p_2,B_{t,1},B_{t,2}} E_1 + E_2 \\ &\text{subject to}~~P_{\text{I}}(p_1,B_{t,1}) \ge \eta_1, ~ P_{\text{II}}(p_2,B_{t,2}) \ge \eta_2, \\ & ~~~~~~~~~~~E_1 = p_1B_{t,1}{N_r}/{N},~ E_2 = p_2 B_{t,2}, \\ & ~~~~~~~~~~~B_{t,1}\ge \widetilde{B}_{t,1}, B_{t,2}\ge \widetilde{B}_{t,2}, \end{align} \end{subequations} where $\widetilde{B}_{t,1}$ and $\widetilde{B}_{t,2}$ are the minimum numbers of allowed transmit beams at Stage I and Stage II, respectively. The problem in \eqref{power allocation problem} optimizes the power allocations $p_1$ and $p_2$ and the numbers of transmit beams $B_{t,1}$ and $B_{t,2}$ to minimize the total power budget subject to the SRP requirements at Stage I and Stage II. It is worth noting that, because the problem in \eqref{power allocation problem} is separable, it is equivalent to the following two sub-problems, \begin{subequations} \label{power allocation problem e1} \begin{align} &\min_{p_1,B_{t,1}}E_1\\ &\text{subject to}~P_{\text{I}}(p_1,B_{t,1}) \ge \eta_1,E_1 = p_1B_{t,1}\frac{N_r}{N},B_{t,1}\ge \widetilde{B}_{t,1}, \label{power allocation problem e1 b} \end{align} \end{subequations} and \begin{subequations}\label{power allocation problem e2} \begin{align} &\min_{p_2,B_{t,2}} E_2\\ & \text{subject to}~ P_{\text{II}}(p_2,B_{t,2}) \ge \eta_2, E_2 = p_2 B_{t,2},B_{t,2}\ge \widetilde{B}_{t,2}. \end{align} \end{subequations}
First of all, we focus on the sub-problem of Stage I in \eqref{power allocation problem e1}. It is worth noting that directly solving \eqref{power allocation problem e1} is difficult due to the coupled constraints. Thus, we first maximize the SRP, i.e., $P_{\text{I}}(p_1, B_{t,1})$, with an arbitrary power budget $E_1$, \begin{subequations} \label{opt11} \begin{align} & \max_{p_1,B_{t,1}} P_{\text{I}}(p_1, B_{t,1}) \\ & \text{subject to} ~~ p_1 B_{t,1} N_r/ N = E_1, ~~ B_{t,1} \geq \widetilde{B}_{t,1}. \end{align} \end{subequations} Prior to showing how to solve the problem in \eqref{opt11}, we first elaborate the relation between the problems in \eqref{power allocation problem e1} and \eqref{opt11}. It is easy to observe that as $E_1$ increases, the achievable SRP of the objective function in \eqref{opt11} also increases. Thus, the minimum $E_1$ in \eqref{power allocation problem e1} is achieved when the SRP constraint in \eqref{power allocation problem e1 b}, i.e., $P_{\text{I}}(p_1,B_{t,1}) \ge\eta_1$, holds with equality. Moreover, given any arbitrary power budget $E_1$ in problem \eqref{opt11}, the interrelation between the power allocation $p_1$ and the number of transmit sounding beams $B_{t,1}$ points to a fundamental tradeoff between them, which is demonstrated in the following theorem. \begin{Theorem} \label{relation of bt1} Consider the following non-linear programming problem \begin{subequations} \label{opt1} \begin{align} (\hat{p}_1, \widehat{B}_{t,1} ) &= \argmax_{p_1,B_{t,1}} P_{\text{I}}(p_1, B_{t,1}) \label{theorem 3a} \\ & \text{subject to} ~~ p_1 B_{t,1} N_r/ N = E_1, ~~ B_{t,1} \geq \widetilde{B}_{t,1}, \label{theorem 3b} \end{align} \end{subequations} where $E_1$ is an arbitrary power budget. The solution to \eqref{opt1} is given by $\widehat{B}_{t,1} =\widetilde{B}_{t,1}$ and $\hat{p}_1=\frac{E_1N}{ \widetilde{B}_{t,1} N_r}$. \end{Theorem} \begin{proof} Substituting the constraint $p_1 =\frac{E_1N}{{B}_{t,1} N_r}$ in \eqref{theorem 3b} into the objective function in \eqref{theorem 3a}, we first show that $P_{\text{I}}(\frac{E_1N}{{B}_{t,1} N_r}, B_{t,1})$ in \eqref{theorem 3a} is a monotonically decreasing function of the number of transmit sounding beams $B_{t,1}$ for a fixed $E_1$. Specifically, substituting $\mu_{{N_r,B_{t,1}}} =(N_r^{1/2} + B_{t,1}^{1/2}) ^2$ and $ \sigma_{{N_r,B_{t,1}}} = (N_r^{1/2} + B_{t,1}^{1/2}) (N_r^{-1/2} + B_{t,1}^{-1/2}) ^{1/3} $ of \eqref{aoa prob temp} into $P_{\text{I}}(\frac{E_1N}{{B}_{t,1} N_r}, B_{t,1})$ gives \begin{eqnarray} P_I\left(\frac{E_1N}{{B}_{t,1} N_r}, B_{t,1}\right)=F_2\left(\frac{h_{\text{min}}^2 (1-(2L-1)\mu_1)^2 E_1N - 4N_t N_r\sigma^2 (N_r^{\frac{1}{2}} + B_{t,1}^{\frac{1}{2}}) ^2 }{4N_tN_r\sigma^2 (N_r^{\frac{1}{2}} + B_{t,1}^{\frac{1}{2}}) (N_r^{-\frac{1}{2}} + B_{t,1}^{-\frac{1}{2}}) ^{\frac{1}{3}}} \right). \label{plug mu sig} \end{eqnarray} Taking the first derivative of the argument inside $F_2(\cdot)$ in \eqref{plug mu sig} with respect to $B_{t,1}$ reveals that the argument is a decreasing function of $B_{t,1}$. This implies that $P_I(\frac{E_1N}{{B}_{t,1} N_r}, B_{t,1})$ in \eqref{plug mu sig} is a monotonically decreasing function of $B_{t,1}$. Hence, \eqref{opt1} is maximized when $B_{t,1}=\widetilde{B}_{t,1}$, which completes the proof. \end{proof} \begin{figure} \centering \includegraphics[width=.56\textwidth]{figures/AoA_Bt.eps} \caption{SRP of AoA vs.
SNR (dB) ($N_r = 20,N_t=64, L=4, N=4, s=1,E_1=10, \widetilde{B}_{t,1}=1$).} \label{AoA_Bt} \end{figure} Therefore, based on Theorem \ref{relation of bt1}, the maximum SRP of AoA estimation for a given $E_1$ is given by \begin{eqnarray} P_I\left(\frac{E_1N}{ \widetilde{B}_{t,1} N_r}, \widetilde{B}_{t,1}\right) = F_2 \left(\frac{{h_{\text{min}}^2 (1-(2L-1)\mu_1)^2E_1 N /N_r }-4\sigma^2 N_t \mu_{{N_r, \widetilde{B}_{t,1}}}}{4N_t\sigma^2 \sigma_{{N_r, \widetilde{B}_{t,1}}}}\right) . \label{function va} \end{eqnarray} We demonstrate Theorem \ref{relation of bt1} via numerical simulations in Fig.~\ref{AoA_Bt}, in which the SRP of AoA is evaluated for different numbers of channel uses $B_{t,1}\in\{1,3,5,9,11\}$. The simulation parameters $N_r =20$, $N_t=64$, $L=4$, $N=4$, $s=1$, $E_1=10$, and $\widetilde{B}_{t,1}=1$ are assumed. The curves clearly show that the highest SRP is achieved when $B_{t,1}=1$. Now, based on Theorem \ref{relation of bt1}, the solution to \eqref{power allocation problem e1} is readily obtained as follows. In order to make the SRP of AoA no smaller than $\eta_1$ in \eqref{power allocation problem e1}, we invert \eqref{function va} with respect to $E_1$ and conclude that the resource allocation of Stage I should meet the following conditions: \begin{subnumcases} {\label{resource E1} } E_1 = \frac{ 4 \sigma^2 N_t N_r(F_2^{-1}(\eta_1) \sigma_{{N_r,\widetilde{B}_{t,1}}} + \mu_{{N_r,\widetilde{B}_{t,1}}})}{h_{\text{min}}^2 (1-(2L-1)\mu_1)^2 N},\label{bound E1} \\ B_{t,1}=\widetilde{B}_{t,1}, \label{bound Bt1}\\ p_1= \frac{E_1N}{ \widetilde{B}_{t,1} N_r},\label{bound p1} \end{subnumcases} where $F_2^{-1}(\cdot)$ is the inverse function of $F_2(\cdot)$. By using procedures similar to the proof of Theorem \ref{relation of bt1}, we observe the following more general result about the number of measurement vectors $d$ in the signal model stated in Lemma \ref{lemma SOMP}. \begin{Corollary} \label{effect of d} The bound in \eqref{SOMP prob} is a monotonically decreasing function of the number of measurement vectors $d$. \end{Corollary} \begin{Remark} \label{remarkD} Corollary \ref{effect of d} states the effect of $d$ on the recovery performance of SOMP. It can be interpreted in the following way. Increasing the number of measurement vectors $d$ increases the number of columns of $\mathbf{C}$ in Lemma \ref{lemma SOMP} while keeping $C_{\min}$ unchanged. This increases the noise power due to the larger dimension of $\mathbf{N}$, which in turn reduces the SRP. \end{Remark} When it comes to the number of channel uses $B_{t,2}$ at Stage II, we cannot reach the same conclusion as Theorem \ref{relation of bt1} because the constant $ \mu_2$ in \eqref{AoDs pro} changes with $B_{t,2}$. Therefore, given $B_{t,1} = \widetilde{B}_{t,1}$ and the total number of channel uses $K$ for channel sounding, $B_{t,2}$ is determined by \eqref{number of uses}, i.e., $K=\widetilde{B}_{t,1}{N_r}/{N} + B_{t,2}$.
Then, the solution to \eqref{power allocation problem e2} is given by \begin{subnumcases} {\label{resource E2} } E_2 = \frac{ 4 \sigma^2 N_t (F_2^{-1}(\eta_2) \sigma_{{B_{t,2},L}} + \mu_{{B_{t,2},L}})}{h_{\text{min}}^2 (1-(2L-1)\mu_2)^2}, \label{bound E2}\\ B_{t,2}=K-\widetilde{B}_{t,1}{N_r}/{N}\label{bound Bt2},\\ p_2 = \frac{E_2}{K-\widetilde{B}_{t,1}{N_r}/{N}}. \label{bound p2} \end{subnumcases} In summary, after solving the two sub-problems in \eqref{power allocation problem e1} and \eqref{power allocation problem e2}, we have solved the problem in \eqref{power allocation problem}. The specific resource allocations for the two stages are shown in \eqref{resource E1} and \eqref{resource E2}, respectively. In particular, when the total power budget satisfies $E \ge E_{1}+E_{2}$, the joint SRP of AoA and AoD is at least $\eta_1 \eta_2$.
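For completeness, the closed-form Stage I allocation \eqref{resource E1} can be sketched as follows (Python/NumPy; \texttt{tw\_cdf\_inv}, standing in for the inverse of the $F_2$ table lookup, is an assumed helper, and the Stage II allocation \eqref{resource E2} is computed analogously).
\begin{verbatim}
import numpy as np

def tw_cdf_inv(eta):
    # Placeholder: inverse Tracy-Widom F2 via table lookup (assumed helper).
    raise NotImplementedError

def stage1_allocation(eta1, N, N_r, N_t, L, mu1, h_min, sigma2, Bt1_min):
    """Closed-form Stage I resource allocation, eqs. (E_1, B_{t,1}, p_1)."""
    mu_md = (np.sqrt(N_r) + np.sqrt(Bt1_min)) ** 2
    sig_md = (np.sqrt(N_r) + np.sqrt(Bt1_min)) \
             * (1 / np.sqrt(N_r) + 1 / np.sqrt(Bt1_min)) ** (1 / 3)
    E1 = 4 * sigma2 * N_t * N_r * (tw_cdf_inv(eta1) * sig_md + mu_md) \
         / (h_min ** 2 * (1 - (2 * L - 1) * mu1) ** 2 * N)
    p1 = E1 * N / (Bt1_min * N_r)
    return E1, Bt1_min, p1
\end{verbatim}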
\begin{figure} \centering \includegraphics[width=.56\textwidth]{figures/verify_resource_allocation.eps} \caption{Power allocation required to achieve the target SRP vs. SNR (dB) ($N_r = 20,N_t=64, L=4, N=4, s=1, \widetilde{B}_{t,1}=1, \eta_1=\eta_2=0.95$).} \label{verify resource allocation} \end{figure} In Fig. \ref{verify resource allocation}, we compare the designed resource allocations in \eqref{resource E1} and \eqref{resource E2} with the simulation results. The parameters are set as $\eta_1 = \eta_2 = 0.95$. The theoretical curves compute the power allocations $p_1$ and $p_2$ through \eqref{bound p1} and \eqref{bound p2}, while the simulation curves show the power allocations required to achieve SRPs of $\eta_1$ and $\eta_2$. The simulation parameters $N_r =20$, $N_t=64$, $L=4$, $N=4$, $s=1$ are assumed. In Fig. \ref{verify resource allocation}, to achieve the same required SRP, i.e., $\eta_1=\eta_2=0.95$, Stage II requires a smaller power allocation than Stage I. This is because the design of the sounding beams for Stage II reduces the power consumption. Overall, the trend of the theoretical results is consistent with that of the simulation results, which validates the proposed resource allocation strategies in \eqref{resource E1} and \eqref{resource E2}. \begin{figure} \centering \includegraphics[width=.56\textwidth]{figures/verify_power_allocation.eps} \caption{Evaluation of the power allocation strategy against equal power allocation ($N_r = 20,N_t=64, L=4, N=4, s=1, \widetilde{B}_{t,1}=1, \eta_1=\eta_2=0.95$).} \label{verify power allocation} \end{figure} In Fig. \ref{verify power allocation}, we demonstrate the SRP of AoA and AoD achieved by the power allocations in \eqref{resource E1} and \eqref{resource E2} compared to the equal power allocation. The power allocations $p_1$ and $p_2$ are calculated by setting $\eta_1=\eta_2=0.95$ and $\sigma=0.1$ in \eqref{resource E1} and \eqref{resource E2}. The simulation parameters are $N_r =20$, $N_t=64$, $L=4$, $N=4$, $s=1$. As we can see from Fig. \ref{verify power allocation}, the proposed power allocation achieves a much higher SRP than the equal power allocation, which verifies the effectiveness of the proposed power allocation strategy. \section{Extension to Two-stage Method with Super Resolution} \label{section atomic} In this section, we extend the proposed two-stage method to one with super resolution, through which we aim to address the issue of quantization errors that cannot be resolved by grid-based methods. Among the existing works, there are two directions for addressing the quantization error caused by the off-grid effect. Firstly, the works in \cite{yang2012off,tang2018off,qi2018off} model the response vector as the summation of an on-grid part and an approximation error, where sparse Bayesian inference is utilized to estimate the approximation error. Secondly, atomic norm minimization has been proposed in \cite{OffGridCS,MmvAtomic,superMM}, which can be viewed as the limiting case in which an infinite dictionary matrix is employed. Based on atomic norm minimization, the sparse signal recovery is reformulated as a semidefinite program. Compared to sparse Bayesian inference, one advantage of atomic norm minimization is that the recovery guarantee is analyzable \cite{OffGridCS,MmvAtomic,superMM}. Following the methodology of atomic norm minimization, in this section, we aim to estimate the AoAs and AoDs, i.e., $\{f_{r,1},\ldots,f_{r,L}\}$ and $\{f_{t,1},\ldots,f_{t,L}\}$, under the proposed two-stage framework. \subsection{Super Resolution AoA Estimation} The sounding beams of Stage I, i.e., $\mathbf{F}_{b,1}$ and $\mathbf{W}_{b,1}$, are designed according to \eqref{F beams AoAs} and \eqref{RSB 1}. By using the exact expression of $\mathbf{H}$ in \eqref{compact channel} rather than the quantized version in \eqref{redundant channel estimation}, the observations for Stage I are given by \begin{align} \mathbf{Y}_1 &= \mathbf{W}_{b,1}^H \mathbf{H} \mathbf{F}_{b,1} + \mathbf{W}_{b,1}^H \mathbf{N}_1 \nonumber \\ & =\mathbf{W}_{b,1}^H {{\mathbf{A}}_{r}}\diag(\mathbf{h})\mathbf{A}_{t}^{H}{\mathbf{F}_{b,1}}+\mathbf{W}_{b,1}^H \mathbf{N}_1 \nonumber \\ & =\mathbf{W}_{b,1}^H {{\mathbf{A}}_{r}}{{\mathbf{C}}_{r}}+\mathbf{W}_{b,1}^H \mathbf{N}_1, \label{music s1} \end{align} where $ \mathbf{Y}_1\in\C^{N_r \times B_{t,1}}$ and $\mathbf{C}_r = \diag(\mathbf{h})\mathbf{A}_{t}^{H}{\mathbf{F}_{b,1}} \in \C^{L \times B_{t,1}}$. Since $\mathbf{W}_{b,1}=\mathbf{S}_{N_r}$ in \eqref{RSB 1}, projecting $\mathbf{Y}_1$ onto $\mathbf{W}_{b,1}$ yields \begin{align} \widetilde{\mathbf{Y}}_1 = \mathbf{W}_{b,1}\mathbf{Y}_1 = {{\mathbf{A}}_{r}}{{\mathbf{C}}_{r}}+ \mathbf{N}_1. \label{music s1 tilde} \end{align} The observation in \eqref{music s1 tilde} is rewritten by explicitly involving the array response vectors, \begin{align} \widetilde{\mathbf{Y}}_1 =[{\mathbf{a}_{r}}({{f}_{r,1}}),\ldots ,{\mathbf{a}_{r}}({{f}_{r,L}})]{{\mathbf{C}}_{r}}+ \mathbf{N}_1 =\mathbf{R}_1 + \mathbf{N}_1, \label{atomic observation} \end{align} where $\mathbf{R}_1 =[{\mathbf{a}_{r}}({{f}_{r,1}}),\ldots ,{\mathbf{a}_{r}}({{f}_{r,L}})]{{\mathbf{C}}_{r}} \in \C^{N_r \times B_{t,1}}$. The atom ${{\mathbf{A}}_{r}}(f,\mathbf{b})\in \C^{N_r\times B_{t,1}}$ is defined in \cite{MmvAtomic,OffGridCS} as ${{\mathbf{A}}_{r}}(f,\mathbf{b})={\mathbf{a}_{r}}(f){{\mathbf{b}}^{H}}$, where $f\in [0,1)$ and $\mathbf{b} \in \C^{B_{t,1} \times 1}$ with ${{\left\| \mathbf{b} \right\|}_{2}}=1$. We let the collection of all such atoms be the set $\mathcal{A}_{r}=\{{{\mathbf{A}}_{r}}(f,\mathbf{b}): f\in [0,1), {{\left\| \mathbf{b} \right\|}_{2}}=1\}$. Obviously, the cardinality of $\mathcal{A}_r$ is infinite.
The matrix ${{\mathbf{R}}_{1}}$ in \eqref{atomic observation} can be written as a linear combination of the atoms from the atomic set ${{\mathcal{A}}_{r}}$, \begin{align} {{\mathbf{R}}_{1}}=\sum\limits_{l=1}^{L}{{{[\mathbf{c}_r]}_{l}}{{\mathbf{A}}_{r}}({{f}_{r,l}},{{\mathbf{b}}_{l}})}=\sum\limits_{l=1}^{L}{{{[\mathbf{c}_r]}_{l}}{\mathbf{a}_{r}}({{f}_{r,l}})}\mathbf{b}_{l}^{H}, \label{atom repre} \end{align} where $\mathbf{c}_r \in \R^{L \times 1}$ is the coefficient vector with $[\mathbf{c}_r]_l \ge 0$, and we have the relationship $[\mathbf{C}_r]_{l,:} = [\mathbf{c}_r]_l \mathbf{b}_l^H,~ \forall l = 1, \ldots, L$. Observing \eqref{atom repre}, the dimension of the vector $\mathbf{c}_r$, i.e., $L$, can be interpreted as the number of atoms in the sparsest representation of $\mathbf{R}_1$ over the atomic set $\mathcal{A}_r$. Therefore, in order to seek the sparsest representation, after taking the noise in \eqref{atomic observation} into account, the reconstruction problem is formulated as \begin{align} \underset{\mathbf{R}_1}{\mathop{\min }}\,{\left\| \mathbf{R}_1 \right\|}_{\mathcal{A}_r,0}+ \frac{\lambda_1}{2} \| \widetilde{\mathbf{Y}}_1-\mathbf{R}_1\|_F^2, \label{super L0} \end{align} where $\lambda_1>0$ is the penalty parameter, and ${\left\| {{\mathbf{R}}_{1}} \right\|}_{{{\mathcal{A}}_{r}},0}$ is defined as \begin{subequations}\label{L0 atomic} \begin{align} {{\left\| {{\mathbf{R}}_{1}} \right\|}_{{{\mathcal{A}}_{r}},0}}&=\underset{\mathbf{c}_r}{\mathop{\inf }}\,{{\left\| \mathbf{c}_r \right\|}_{0}} \\ \text{subject to }&{{\mathbf{R}}_{1}}=\sum\limits_{l=1}^{L}{{{[\mathbf{c}_r]}_{l}}{{\mathbf{A}}_{r}}({{f}_{r,l}},{{\mathbf{b}}_{l}})}, \\ & \text{ }{{\mathbf{A}}_{r}}({{f}_{r,l}},{{\mathbf{b}}_{l}})\in {{\mathcal{A}}_{r}},{{[\mathbf{c}_r]}_{l}}\ge 0, \end{align} \end{subequations} with $\| \mathbf{R}_1\|_{\mathcal{A}_r,0}$ revealing the minimal number of atoms needed to represent $\mathbf{R}_1$. When the sparsest representation of $\mathbf{R}_1$, i.e., $\{{{{[\mathbf{c}_r]}_{l}}{\mathbf{a}_{r}}({{f}_{r,l}})}\mathbf{b}_{l}^{H}\}_{l=1}^L$, is found by solving \eqref{super L0}, the AoAs $\{f_{r,l}\}_{l=1}^L$ can be obtained from the atomic decomposition in \eqref{atom repre}. However, since the minimization problem in \eqref{L0 atomic} is combinatorial, it is not tractable to calculate the value of ${{\left\| {{\mathbf{R}}_{1}} \right\|}_{{{\mathcal{A}}_{r}},0}}$. To overcome this challenge, the problem in \eqref{super L0} is relaxed as \begin{align} \underset{\mathbf{R}_1}{\mathop{\min }}\,{\left\| \mathbf{R}_1 \right\|}_{\mathcal{A}_r,1}+ \frac{\lambda_1}{2} \| \widetilde{\mathbf{Y}}_1-\mathbf{R}_1 \|_F^2, \label{robust L1 s1} \end{align} where $ {{\left\| {{\mathbf{R}}_{1}} \right\|}_{{{\mathcal{A}}_{r}},1}}$ is the atomic norm of $\mathbf{R}_1$ defined by \begin{subequations} \label{L1 atomic} \begin{align} {{\left\| {{\mathbf{R}}_{1}} \right\|}_{{{\mathcal{A}}_{r}},1}}&=\underset{\mathbf{c}_r}{\mathop{\inf }}\,{{\left\| \mathbf{c}_r \right\|}_{1}} \\ \text{subject to }&{{\mathbf{R}}_{1}}=\sum\limits_{l=1}^{L}{{{[\mathbf{c}_r]}_{l}}{{\mathbf{A}}_{r}}({{f}_{r,l}},{{\mathbf{b}}_{l}})}, \\ & \text{ }{{\mathbf{A}}_{r}}({{f}_{r,l}},{{\mathbf{b}}_{l}})\in {{\mathcal{A}}_{r}},{{[\mathbf{c}_r]}_{l}}\ge 0.
\end{align} \end{subequations} It is noted that the atomic norm $\|\mathbf{R}_1 \|_{\mathcal{A}_r,1}$ in \eqref{L1 atomic} minimizes the sum of the entries of $\mathbf{c}_r$ instead of the number of its non-zero entries as in \eqref{L0 atomic}. Different from the intractable problem in \eqref{L0 atomic}, the problem in \eqref{L1 atomic} can be efficiently solved by semidefinite programming \cite{OffGridCS}: \begin{subequations} \label{atom eq} \begin{align} & {{\left\| {{\mathbf{R}}_{1}} \right\|}_{{{\mathcal{A}}_{r}},1}}= \underset{\mathbf{u},\mathbf{Z}}{\mathop{\inf }}\,\frac{1}{2}\text{tr}\left( \text{Toeplitz}(\mathbf{u}) \right)+\frac{1}{2}\text{tr}(\mathbf{Z}) \\ & \text{subject to } \left[ \begin{matrix} \text{Toeplitz}(\mathbf{u}) & {{\mathbf{R}}_{1}} \\ \mathbf{R}_{1}^H & \mathbf{Z} \\ \end{matrix} \right]\succeq \mathbf{0}, \end{align} \end{subequations} where $\mathbf{u}\in \C^{N_r\times 1}$, $\mathbf{Z}\in \C^{B_{t,1}\times B_{t,1}}$, and $\text{Toeplitz}(\mathbf{u})\in \C^{N_r\times N_r}$ denotes the Hermitian Toeplitz matrix generated by the vector $\mathbf{u}$. Plugging \eqref{atom eq} into \eqref{robust L1 s1} gives \begin{subequations} \label{robust L1 s1 T} \begin{align} & \underset{\mathbf{u},\mathbf{Z},\mathbf{R}_1}{\mathop{\inf }} ~ \text{tr}\left( \text{Toeplitz}(\mathbf{u}) \right)+\text{tr}(\mathbf{Z}) + {\lambda_1} \| \widetilde{\mathbf{Y}}_1-\mathbf{R}_1 \|_F^2\\ & \text{subject to } \mathbf{X} = \left[ \begin{matrix} \text{Toeplitz}(\mathbf{u}) & {{\mathbf{R}}_{1}} \\ \mathbf{R}_{1}^H & \mathbf{Z} \\ \end{matrix} \right], ~ \mathbf{X} \succeq \mathbf{0}. \end{align} \end{subequations} It is straightforward to find that \eqref{robust L1 s1 T} is convex, and the alternating direction method of multipliers (ADMM) can be employed to accelerate the computation. The augmented Lagrangian of \eqref{robust L1 s1 T} is expressed as \begin{align} \mathcal{L}(\mathbf{u},\mathbf{Z},\mathbf{R}_1,\mathbf{X}, \bm{\Lambda}) &=\text{tr}\left( \text{Toeplitz}(\mathbf{u}) \right)+\text{tr}(\mathbf{Z}) + {\lambda_1} \| \widetilde{\mathbf{Y}}_1- \mathbf{R}_1 \|_F^2 \nonumber \\ &~~+ \left< \bm{\Lambda}, \mathbf{X}- \left[\begin{matrix} \text{Toeplitz}(\mathbf{u}) & {{\mathbf{R}}_{1}} \\ \mathbf{R}_{1}^H & \mathbf{Z} \\ \end{matrix} \right]\right> + \frac{\rho}{2}\lA \mathbf{X}- \left[\begin{matrix} \text{Toeplitz}(\mathbf{u}) & {{\mathbf{R}}_{1}} \\ \mathbf{R}_{1}^H & \mathbf{Z} \\ \end{matrix} \right] \rA_F^2, \label{aug lag} \end{align} where $\mathbf{X} \in \C^{(N_r + B_{t,1}) \times (N_r + B_{t,1})}$ and $\bm{\Lambda} \in \C^{(N_r + B_{t,1}) \times (N_r + B_{t,1})}$ are Hermitian matrices, $\bm{\Lambda}$ is the Lagrange multiplier, and $\rho>0$ is the penalty parameter. Then, with $t$ being the iteration index, we iteratively update the variables in \eqref{aug lag} as follows: \begin{align} (\mathbf{u}^{t+1},\mathbf{Z}^{t+1},\mathbf{R}_1^{t+1}) &= \argmin_{\mathbf{u},\mathbf{Z}, \mathbf{R}_1} \mathcal{L}(\mathbf{u}, \mathbf{Z},\mathbf{R}_1,\mathbf{X}^t, \bm{\Lambda}^{t}) ,\label{admm s1}\\ \mathbf{X}^{t+1} &= \argmin_{\mathbf{X}\succeq \mathbf{0}} \mathcal{L}(\mathbf{u}^{t+1} ,\mathbf{Z}^{t+1} ,\mathbf{R}_1^{t+1} ,\mathbf{X} , \bm{\Lambda}^{t}),\label{admm s2}\\ \bm{\Lambda}^{t+1} &= \bm{\Lambda}^{t} + \rho\left( \mathbf{X}^{t+1}- \left[\begin{matrix} \text{Toeplitz}(\mathbf{u}^{t+1}) & {{\mathbf{R}}_{1}^{t+1}} \\ (\mathbf{R}^{t+1}_{1})^H & \mathbf{Z}^{t+1} \end{matrix} \right] \right).
\end{align} The solutions of \eqref{admm s1} and \eqref{admm s2} are, respectively, \begin{align} [\mathbf{u}^{t+1}]_i &= \begin{cases} \frac{V_i+\rho S_i}{(N_r+1-i)\rho+N_r}, &i=1\\ \frac {V_i+\rho S_i}{(N_r+1-i)\rho}, & i=2,\ldots,N_r \end{cases},\text{with } V_i=\sum_{k=1}^{N_r+1-i}[\bm{\Lambda}^t]_{k,k-1+i}, ~S_i=\sum_{k=1}^{N_r+1-i}[\mathbf{X}^t]_{k,k-1+i},\nonumber\\ \mathbf{R}_1^{t+1}& = \frac{1}{ \lambda_1+\rho} (\lambda_1 \widetilde{\mathbf{Y}}_1+ \rho[\mathbf{X}^{t}]_{1:N_r, N_r + 1:\text{end}}+[\bm{\Lambda}^{t}]_{1:N_r ,N_r + 1:\text{end}}),\nonumber\\ \mathbf{Z}^{t+1} &= \frac{1}{\rho}([\bm{\Lambda}^{t}]_{N_r+1:\text{end},N_r+1:\text{end}}+ \rho[\mathbf{X}^{t}]_{N_r+1:\text{end},N_r+1:\text{end}}-\mathbf{I}_{B_{t,1}}),\nonumber\\ \mathbf{X}^{t+1}&= \left[\begin{matrix} \text{Toeplitz}(\mathbf{u}^{t+1}) & {{\mathbf{R}}_{1}^{t+1}} \\ (\mathbf{R}^{t+1}_{1})^H & \mathbf{Z}^{t+1} \\ \end{matrix} \right] -\frac{1}{\rho} \bm{\Lambda}^t. \nonumber \end{align} It is worth noting that, in order to guarantee $\mathbf{X} \succeq \mathbf{0}$ as required in \eqref{admm s2}, we can set the negative eigenvalues of $\mathbf{X}^{t+1}$ to $0$. When the iterative process converges, the resulting $\text{Toeplitz}(\mathbf{u})$ can be utilized to obtain the estimates of the AoAs. Specifically, we can take the Vandermonde decomposition \cite{OffGridCS} of $\text{Toeplitz}(\mathbf{u})$, $\text{Toeplitz}(\mathbf{u}) = \mathbf{V} \mathbf{D} \mathbf{V}^H$, where $\mathbf{V}=[\mathbf{a}_r(\hat{f}_{r,1}),\ldots,\mathbf{a}_r(\hat{f}_{r,L})] \in \C^{N_r \times L}$ with $\{\hat{f}_{r,l} \}_{l=1}^L$ being the estimated AoAs and $\mathbf{D} = \diag([d_1,\ldots,d_L])\in \C^{L \times L}$. In practice, it is not necessary to calculate the Vandermonde decomposition of $\text{Toeplitz}(\mathbf{u})$ explicitly. Since the column subspace of $\text{Toeplitz}(\mathbf{u})$ is equal to $\cR(\mathbf{V})$, the set of AoAs can be estimated from $\text{Toeplitz}(\mathbf{u})$ efficiently by spectrum estimation algorithms such as MUSIC or ESPRIT \cite{MmvAtomic,OffGridCS}.
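A compact numerical sketch of this ADMM loop is given below (Python/NumPy; a direct transcription of the closed-form updates above, for illustration only, without convergence monitoring).
\begin{verbatim}
import numpy as np

def toeplitz_herm(u):
    """Hermitian Toeplitz matrix with first column u (u[0] real)."""
    N = len(u)
    T = np.zeros((N, N), dtype=complex)
    for i in range(N):
        T += np.diag(np.full(N - i, u[i]), -i)
        if i > 0:
            T += np.diag(np.full(N - i, np.conj(u[i])), i)
    return T

def admm_anm(Y1, lam, rho, iters=200):
    """ADMM for the Stage I atomic-norm denoising SDP."""
    Nr, Bt1 = Y1.shape
    n = Nr + Bt1
    X = np.zeros((n, n), dtype=complex)
    Lam = np.zeros((n, n), dtype=complex)
    for _ in range(iters):
        # (u, Z, R) update: closed forms from the stationarity conditions.
        u = np.zeros(Nr, dtype=complex)
        for i in range(Nr):
            V = np.trace(Lam[:Nr, :Nr], offset=i)   # sum over i-th diagonal
            S = np.trace(X[:Nr, :Nr], offset=i)
            denom = (Nr - i) * rho + (Nr if i == 0 else 0)
            u[i] = (V + rho * S) / denom
        R = (lam * Y1 + rho * X[:Nr, Nr:] + Lam[:Nr, Nr:]) / (lam + rho)
        Z = (Lam[Nr:, Nr:] + rho * X[Nr:, Nr:] - np.eye(Bt1)) / rho
        M = np.block([[toeplitz_herm(u), R], [R.conj().T, Z]])
        # X update: project M - Lam/rho onto the PSD cone.
        w, Q = np.linalg.eigh(M - Lam / rho)
        X = (Q * np.maximum(w, 0)) @ Q.conj().T
        # Dual ascent.
        Lam = Lam + rho * (X - M)
    return toeplitz_herm(u)   # feed to MUSIC/ESPRIT for AoA extraction
\end{verbatim}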
\subsection{Super Resolution AoD Estimation} Similarly, the observations for the second stage are given by \begin{align} \mathbf{Y}_2 &= \mathbf{W}_{b,2}^H \mathbf{H} \mathbf{F}_{b,2} + \mathbf{W}_{b,2}^H \mathbf{N}_2 \nonumber \\ & =\mathbf{W}_{b,2}^H{{\mathbf{A}}_{r}}\diag(\mathbf{h})\mathbf{A}_{t}^{H}\mathbf{F}_{b,2}+\mathbf{W}_{b,2}^H \mathbf{N}_2 \nonumber\\ & =\mathbf{C}_t \mathbf{A}_t^H \mathbf{F}_{b,2}+\mathbf{W}_{b,2}^H \mathbf{N}_2, \label{music s2} \end{align} where we let $\mathbf{C}_t = \mathbf{W}_{b,2}^H{{\mathbf{A}}_{r}}\diag(\mathbf{h}) \in \C^{L \times L }$. At Stage II, the observation $\mathbf{Y}_2$ in \eqref{music s2} is rewritten as \begin{align} \mathbf{Y}_2^H = \mathbf{F}_{b,2}^H \mathbf{A}_t \mathbf{C}_t^H + \mathbf{N}_2^H\mathbf{W}_{b,2} =\mathbf{R}_2 +\mathbf{N}_2^H \mathbf{W}_{b,2} , \label{atomic observation 2} \end{align} where we let $\mathbf{R}_2 =\mathbf{F}_{b,2}^H \mathbf{A}_t \mathbf{C}_t^H \in \C^{B_{t,2} \times L}$. Due to the design of $\mathbf{F}_{b,2}$ in \eqref{expression Fb2}, we have \begin{align} \mathbf{F}_{b,2}^H \mathbf{A}_t =\sqrt{p_2} [\mathbf{A}_t]_{1:B_{t,2},:} =\sqrt{p_2}[\mathbf{a}_t({{f}_{t,1}}),\ldots ,\mathbf{a}_t({{f}_{t,L}})]_{1:B_{t,2},:} . \nonumber \end{align} For convenience, we define $\widetilde{\mathbf{a}}_t(f) =[\mathbf{a}_t(f)]_{1:B_{t,2}} \in \C^{B_{t,2} \times 1}$ and $\widetilde{\mathbf{A}}_t = [\widetilde{\mathbf{a}}_t({{f}_{t,1}}),\ldots ,\widetilde{\mathbf{a}}_t({{f}_{t,L}})]\in \C^{B_{t,2} \times L}$. The AoD estimation boils down to extracting the $L$ parameters $\left\{ {{f}_{t, l}} \right\}_{l=1}^{L}$ in $\widetilde{\mathbf{A}}_t$. We let ${{\mathbf{A}}_{t}}(f,\mathbf{b})\in \C^{B_{t,2}\times L }$ be ${\mathbf{A}_{t}}(f,\mathbf{b})={\widetilde{\mathbf{a}}_{t}}(f){{\mathbf{b}}^{H}}$, where $f\in [0,1)$ and $\mathbf{b} \in \C^{L \times 1}$ with ${{\left\| \mathbf{b} \right\|}_{2}}=1$, and the atomic set $\mathcal{A}_t$ is defined by $\mathcal{A}_{t}= \{{{\mathbf{A}}_{t}}(f,\mathbf{b}): f\in [0,1), {{\left\| \mathbf{b} \right\|}_{2}}=1\}$. Similarly, ${\mathbf{R}}_{2}$ in \eqref{atomic observation 2} can be written as a linear combination of the atoms from the set ${\mathcal{A}}_{t}$, \begin{align} {\mathbf{R}}_{2} =\sum\limits_{l=1}^{L}{{[\mathbf{c}_t]}_l}{{\mathbf{A}}_{t}}({{f}_{t,l}},{{\mathbf{b}}_{l}})=\sum\limits_{l=1}^{L}{{{[\mathbf{c}_t]}_{l}}\widetilde{\mathbf{a}}_{t}({{f}_{t,l}})}\mathbf{b}_{l}^{H}, \nonumber \end{align} where $\mathbf{c}_t \in \R^{L \times 1}$ is the coefficient vector with $[\mathbf{c}_t]_l \ge 0$. Therefore, using a similar approach to the AoA estimation in \eqref{robust L1 s1}, the AoD estimation problem is given by \begin{align} \underset{\mathbf{R}_2}{\mathop{\min }}\,{\left\| \mathbf{R}_2 \right\|}_{\mathcal{A}_t,1}+ \frac{\lambda_2}{2} \left\| \mathbf{Y}_2^H-\mathbf{R}_2 \right\|_F^2, \label{robust L1 s2} \end{align} where $\lambda_2$ is a penalty parameter. The problem in \eqref{robust L1 s2} can also be solved in a similar manner as \eqref{robust L1 s1}, and the estimates of the AoDs, i.e., $\{\hat{f}_{t,l}\}_{l=1}^L$, can be obtained. Furthermore, after the AoAs $\{\hat{f}_{r,l}\}_{l=1}^L$ and AoDs $\{\hat{f}_{t,l}\}_{l=1}^L$ are estimated, we can easily calculate the AoA and AoD array response matrices $\widehat{\mathbf{A}}_r$ and $\widehat{\mathbf{A}}_t$. Then, by using the channel estimation technique provided in Section \ref{R estimation}, the final channel estimate is obtained. \section{Simulation Results} \label{section simulation} In this section, we evaluate the performance of the proposed two-stage AoA and AoD estimation method and the two-stage method with super resolution. For comparison, we take the OMP-based mmWave channel estimation method \cite{OMPchannel} as our benchmark. Also, we include the oracle estimator discussed in \eqref{oracal estimator}. The parameter settings for the evaluation are as follows. Throughout the simulations, we assume $N_r=20$, $N_t=64$, and the channel model is given by \eqref{channel model}. We let the dimensions of the angle grids for the proposed two-stage method and OMP \cite{OMPchannel} be $G_r=sN_r$ and $G_t=sN_t$. The number of paths is $L=4$. The variance of the path gain is $\sigma_l^2 = 1, \forall l$. The number of RF chains is $N=4$. The number of channel uses for the estimation task is $K=50$. The minimum number of allowed transmit beams at Stage I is $\widetilde{B}_{t,1}=1$. Without loss of generality, for the proposed two-stage framework, the power budget is $E = E_1 + E_2$, where $E_1$ and $E_2$ are, respectively, given by the resource allocations in \eqref{resource E1} and \eqref{resource E2} with $\eta_1=\eta_2=0.95$ and $\text{SNR}=20$ dB. To evaluate the estimation performance, we use three performance metrics: \begin{itemize} \item The first metric is the SRP. The error of the estimated angles is defined as \begin{align} \epsilon =\frac{1}{2L} \sum_{l=1}^{L}\left(| f_{r,l}- \hat{f}_{r,l}|^2+| f_{t,l}- \hat{f}_{t,l}|^2\right).
\nonumber \end{align} We declare the reconstruction successful if $\epsilon \le 10^{-3}$. Precisely, the SRP is defined as \begin{align} \text{SRP} = \frac{\text{number of trials with } \epsilon \le 10^{-3}}{\text{number of total trials}}. \nonumber \end{align} \item The second metric is the MSE of angle estimation, defined as \begin{align*} \text{MSE} = \E\left[\sum_{l=1}^{L}\left(| f_{r,l}- \hat{f}_{r,l}|^2+| f_{t,l}- \hat{f}_{t,l}|^2\right)\right]. \end{align*} \item The third metric is the NMSE of channel estimation, defined as $$\text{NMSE} = \E[\| \mathbf{H} - \widehat{\mathbf{H}}\|_F^2/\| \mathbf{H} \|_F^2],$$ where $\widehat{\mathbf{H}}$ is the channel estimate. \end{itemize} \begin{figure} \centering \includegraphics[width=.56\textwidth]{figures/bench_joint_pro.eps} \caption{SRP vs. SNR (dB) with discrete angles ($N_r = 20,N_t=64, L=4, N=4, K=50,\widetilde{B}_{t,1}=1, s=1$).} \label{bench_pro} \end{figure} \begin{figure} \centering \includegraphics[width=.56\textwidth]{figures/bench_NMSE.eps} \caption{NMSE vs. SNR (dB) with discrete angles ($N_r = 20,N_t=64, L=4, N=4, K=50,\widetilde{B}_{t,1}=1, s=1$).} \label{bench_NMSE} \end{figure} \subsection{Channel Estimation Performance of Two-stage Method with Discrete Angles} \label{section_discrete_simulation} For the simulations with discrete angles in Figs. \ref{bench_pro}-\ref{bench_NMSE}, the ${{f}_{t,l}}$ and $f_{r,l}$ in \eqref{channel model} are uniformly distributed on the grids of size $G_t=N_t$ and $G_r=N_r$, respectively. Four methods are compared: the proposed two-stage SOMP method, the one-stage OMP method \cite{OMPchannel}, the AMP method \cite{donoho2009message}, and the oracle method in \eqref{oracal estimator}. We show the SRP in Fig. \ref{bench_pro} and the NMSE in Fig. \ref{bench_NMSE}.\par In Fig. \ref{bench_pro}, considering that the oracle method assumes the AoAs and AoDs are known a priori, we do not illustrate its performance when comparing the SRP. As can be seen in Fig. \ref{bench_pro}, the proposed two-stage SOMP method achieves a higher SRP than the benchmarks. It is worth noting that the AMP-based method requires a minimal number of measurements to guarantee convergence \cite{donoho2009message}. When the number of channel uses is limited, the AMP-based method cannot achieve an SRP near one even if the SNR is high. Also, the SRPs of AoA and AoD of the proposed two-stage SOMP method are both higher than those of the one-stage OMP method. The improvement in the SRP of AoD arises because we optimize the sounding beams of the second stage based on the estimated AoA result, while the improvement in the SRP of AoA is due to the larger power budget allocated to Stage I according to the proposed resource allocation strategy.\par Similarly, in Fig. \ref{bench_NMSE}, the proposed two-stage SOMP method has a lower NMSE than the one-stage OMP and AMP methods. In addition, we can find from Fig. \ref{bench_NMSE} that the proposed two-stage SOMP method converges to the performance of the oracle method as the SNR grows. Overall, Figs. \ref{bench_pro}-\ref{bench_NMSE} verify that the proposed two-stage method outperforms the one-stage OMP in the scenario of discrete angles. \subsection{Channel Estimation Performance of Two-stage Method with Continuous Angles} \label{sec_cont_simulation} For this set of simulations in Figs. \ref{bench_CON_Angle_Error}-\ref{bench_CON_H_Error}, we assume the ${{f}_{t,l}}$ and $f_{r,l}$ in \eqref{channel model} are uniformly distributed in $[0,1)$.
Four methods are compared: the proposed two-stage SOMP method, the two-stage method with super resolution, the one-stage OMP method \cite{OMPchannel}, and the one-stage atomic method \cite{superMM}. When the two-stage SOMP method and the one-stage OMP method are implemented with the defined angle grids, the estimated angles are restricted to those grids. Fig.~\ref{bench_CON_Angle_Error} illustrates the MSE and Fig.~\ref{bench_CON_H_Error} illustrates the NMSE of the channel estimation.\par
In Fig.~\ref{bench_CON_Angle_Error}, the proposed two-stage SOMP method and the two-stage method with super resolution outperform the one-stage OMP and the one-stage atomic method, respectively. Interestingly, the two-stage SOMP method achieves the lowest MSE when the SNR is low. This is because at low SNR, i.e., $\text{SNR}\le 5\text{dB}$, the noise power is higher than that of the quantization error; using the quantized model therefore reduces the complexity of the problem while achieving near-optimal performance. When the SNR is high, i.e., $\text{SNR}\ge 5\text{dB}$, the two-stage method with super resolution achieves the lowest MSE. This is because at high SNR the quantization error becomes dominant, and it cannot be handled by the grid-based methods. Nevertheless, Fig.~\ref{bench_CON_Angle_Error} verifies that by dividing the estimation into two stages, the estimation of the AoAs and AoDs is improved compared to the one-stage estimation.\par
Likewise, in Fig.~\ref{bench_CON_H_Error}, the proposed two-stage SOMP method and the two-stage method with super resolution also achieve a lower NMSE than the one-stage OMP and one-stage atomic methods. Similarly, when the SNR is high, the two-stage method with super resolution shows the lowest NMSE.
\begin{figure} \centering \includegraphics[width=.56\textwidth]{figures/bench_con_angle_error.eps} \caption{MSE vs. SNR (dB) with continuous angles ($N_r = 20,N_t=64, L=4, N=4, K=50,\widetilde{B}_{t,1}=1,s=2$).} \label{bench_CON_Angle_Error} \end{figure}
\begin{figure} \centering \includegraphics[width=.56\textwidth]{figures/bench_con_H_error.eps} \caption{NMSE vs. SNR (dB) with continuous angles ($N_r = 20,N_t=64, L=4, N=4, K=50,\widetilde{B}_{t,1}=1,s=2$).} \label{bench_CON_H_Error} \end{figure}
\subsection{Analysis of Computational Complexity} \label{Section Complexity}
For the two-stage method, the computational complexity of the first stage is $\mathcal{O}(L N_r G_r) = \mathcal{O}(sL N_r^2)$, and the complexity of the second stage is $\mathcal{O}(LB_{t,2} G_t)=\mathcal{O}(sL(K-N_r/N)N_t)=\mathcal{O}(sLKN_t)$, with $K$ being the number of channel uses. Therefore, the total computational complexity of the two-stage method is $\mathcal{O}(sL N_r^2)+\mathcal{O}(sLKN_t)= \mathcal{O}(sLKN_t)$. In contrast, for the one-stage OMP method, the computational complexity is $\mathcal{O}(LKNG_tG_r) = \mathcal{O}(s^2LKNN_tN_r)$. Hence, the two-stage method has a much lower computational complexity than the one-stage OMP, by a factor of $\mathcal{O}(sNN_r)$.
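For instance (a numerical illustration of ours, ignoring the constants hidden by the $\mathcal{O}(\cdot)$ notation), under the simulation settings above with $s=2$, $N=4$ and $N_r=20$, this factor amounts to
\begin{equation*}
sNN_r = 2\times 4\times 20 = 160,
\end{equation*}
i.e., the two-stage method requires roughly two orders of magnitude fewer operations than the one-stage OMP.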
For the two-stage method with super resolution, in Stage I the computational complexity of ADMM per iteration is dominated by the eigenvalue decomposition of $\mathbf{X}^{t+1}$, i.e., $\mathcal{O}(N_r^3)$. Similarly, in Stage II each iteration has a computational complexity of $\mathcal{O}(B_{t,2}^3)=\mathcal{O}((K-N_r/N)^3)=\mathcal{O}(K^3)$. Given the number of iterations $T$ and $K\ge N_r$, the total computational complexity of the super resolution method is $\mathcal{O}(TN_r^3)+\mathcal{O}(TK^3)=\mathcal{O}(TK^3)$. In order to compare the complexities of the two-stage method with super resolution and the one-stage OMP, we consider a simple example. In particular, if $N_r=N_t$ and $K=\mathcal{O}(N_r)$, the complexity of the proposed two-stage method with super resolution is $\mathcal{O}(s^2LN/T)$ times lower than that of the one-stage OMP.
\section{Conclusion} \label{section conclusion}
In this paper, a two-stage method for mmWave channel estimation was proposed. By sequentially estimating the AoAs and AoDs of large-dimensional antenna arrays, the proposed two-stage method reduced both the computational complexity and the channel-use overhead compared to the existing methods. Theoretically, we analyzed the SRPs of the AoA and AoD estimation of the proposed two-stage method. Based on the analyzed SRP, we designed a resource allocation strategy between the two stages to guarantee accurate AoA and AoD estimation. In addition, to resolve the issue of quantization error, we extended the proposed two-stage method to a version with super resolution. The numerical simulations showed that the proposed two-stage method achieves more accurate channel estimation than the one-stage methods.
\appendices
\section{Proof of Lemma \ref{lemma SOMP}} \label{appendix5-1}
For an arbitrary random noise matrix $\mathbf{N}$, the SRP of SOMP has been characterized in \cite{zhang2021successful}. This result is general enough to be extended to the case in Lemma \ref{lemma SOMP}, where the entries of $\mathbf{N}$ are \gls{iid} complex Gaussian.
\begin{Theorem}(SRP of SOMP with arbitrary random noise \cite{zhang2021successful}) \label{theorem SOMP random L} Consider the signal model provided in Lemma \ref{lemma SOMP}. Given the measurement matrix $\bm{\Phi}$ with its MIP constant satisfying $\mu< 1/(2L+1)$ and the cumulative distribution function (CDF) of $\| \mathbf{N}\|_2$ satisfying
\begin{align} \text{Pr}(\| \mathbf{N} \|_2 \le x) = F_{N}(x), \label{def prN} \end{align}
the SRP of SOMP in Algorithm \ref{alg_SOMP} satisfies
\begin{align} \text{Pr}(\cV_S) \ge F_{N}\left(\frac{C_{\text{min}}{{(1-(2L-1)\mu )}} } {2 }\right), \label{SOMPS SRP} \end{align}
where $\cV_{S}$ is the event of successful reconstruction of Algorithm \ref{alg_SOMP} and $C_{\min} = \min\limits _{i\in \Omega} \lA [\mathbf{C}]_{i,:} \rA_2$.
\end{Theorem}
According to Theorem \ref{theorem SOMP random L}, the SRP of SOMP is characterized by the CDF of $\| \mathbf{N}\|_2$. Thus, in order to extend the result of Theorem \ref{theorem SOMP random L} to the case in Lemma \ref{lemma SOMP}, the CDF of $\|\mathbf{N}\|_2$ is of interest when the entries of $\mathbf{N} \in \C^{M\times d}$ are \gls{iid} $\cC\cN(0,\sigma^2)$. Fortunately, according to \cite{TW2, TW1}, the CDF of the largest singular value of $\mathbf{N}$ converges in distribution to the Tracy-Widom law as $M,d$ tend to $\infty$,
\begin{align} \text{Pr}(\| \mathbf{N} \|_2 \le x) \approx F_2\left(\frac{ {x^2}/{\sigma^2}-\mu_{M,d}}{\sigma_{M,d}} \right), \label{guanssian dis} \end{align}
where the function $F_2(\cdot)$ is the CDF of the Tracy-Widom law \cite{TW2, TW1}, $\mu_{M,d} =(M^{1/2} + d^{1/2}) ^2$, and $ \sigma_{M,d} = (M^{1/2} + d^{1/2}) (M^{-1/2} + d^{-1/2}) ^{1/3} $. Finally, plugging the expression in \eqref{guanssian dis} into \eqref{SOMPS SRP} of Theorem \ref{theorem SOMP random L} yields Lemma \ref{lemma SOMP}, which completes the proof. \qed
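As a numerical sanity check of the centering and scaling in \eqref{guanssian dis}, the following self-contained Python snippet (ours, for illustration only; it is not part of the proof) verifies that, after the transformation above, the largest singular value of an \gls{iid} complex Gaussian matrix concentrates on an $\mathcal{O}(1)$, Tracy-Widom distributed random variable.
\begin{verbatim}
# Monte Carlo check of the Tracy-Widom centering/scaling for the
# largest singular value of an M x d matrix with i.i.d. CN(0,sigma^2)
# entries.
import numpy as np

rng = np.random.default_rng(0)
M, d, sigma, trials = 200, 50, 1.0, 1000

mu_Md = (np.sqrt(M) + np.sqrt(d))**2
sig_Md = (np.sqrt(M) + np.sqrt(d)) * (1/np.sqrt(M) + 1/np.sqrt(d))**(1/3)

samples = []
for _ in range(trials):
    N = (rng.standard_normal((M, d))
         + 1j * rng.standard_normal((M, d))) * (sigma / np.sqrt(2))
    s = np.linalg.norm(N, 2)              # spectral norm = largest s.v.
    samples.append((s**2 / sigma**2 - mu_Md) / sig_Md)

print(np.mean(samples), np.std(samples))  # O(1) values, as expected
\end{verbatim}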
\section{Proof of Proposition \ref{with noise p}} \label{proof prob noise case}
One can write the effective noise as $\tilde{\mathbf{N}} = \mathbf{E} + \mathbf{N}$, where the entries of $\mathbf{N}$ are \gls{iid} $\cC\cN(0, \sigma^2)$. Since $\|\tilde{\mathbf{N}}\|_2 \le \lA \mathbf{E}\rA_2+\lA \mathbf{N}\rA_2$ by the triangle inequality, the event $\{\lA \mathbf{E}\rA_2+\lA \mathbf{N}\rA_2 \le x\}$ implies $\{\|\tilde{\mathbf{N}}\|_2 \le x\}$, and we have the following probability bound,
\begin{align} \Pr\left( \| {\tilde{\mathbf{N}}}\|_2 \le x\right) &\overset{(a)}{\ge} \Pr\left(\lA {\mathbf{E}}\rA_2+\lA {\mathbf{N}}\rA_2 \le x\right) \nonumber\\ &\overset{(b)}{\approx} F_2\left(\frac{{(x-\| \mathbf{E}\|_2)^2}/{{\sigma}^2}-\mu_{M,d}}{\sigma_{M,d}}\right),\label{dis with quantize} \end{align}
where the inequality $(a)$ follows from the triangle inequality and the approximation $(b)$ follows from \eqref{guanssian dis}. Then, according to Theorem \ref{theorem SOMP random L}, plugging the expression \eqref{dis with quantize} into \eqref{SOMPS SRP} leads to
\begin{align} \text{Pr}(\cV_S) \ge F_2\left(\frac{{\left((1-(2L-1)\mu) C_{\text{min}}-2\| \mathbf{E}\|_2\right)^2}-4\sigma^2 \mu_{M,d}}{4\sigma^2 \sigma_{M,d}}\right), \nonumber \end{align}
where $C_{\min} = \min\limits _{i\in \Omega} \lA [\mathbf{C}]_{i,:} \rA_2$. This concludes the proof. \qed
\section{Proof of Theorem \ref{AoDs Prob}} \label{appendix5-3}
Plugging the RSB in \eqref{expression Wb2} and the TSB in \eqref{expression Fb2} into \eqref{ob 2nd} gives $\| [\bm{\Phi}_2]_{:,j}\|_2 =\sqrt{ {p_2 B_{t,2}}/{N_t}}, ~ j=1,\ldots,G_t$, and $ C_{\min} = \min _{t_l \in \Omega_t}\| [\mathbf{C}_2]_{t_l,:} \|_2 =| {{h}_{\min}} | $, with ${{t}_{l}}$ being the index of the $l$th path of $\mathbf{A}_t$ in $\bar{\mathbf{A}}_t$ such that $[\bar{\mathbf{A}}_t]_{:,t_l}=[\mathbf{A}_t]_{:,l}$, $l=1, \ldots, L$. Hence, incorporating the latter $C_{\min}$ and $\| [\bm{\Phi}_2]_{:, j} \|_2$ into Lemma \ref{lemma SOMP} concludes the proof. \qed
\bibliographystyle{IEEEtran}
\newcommand{\todo}[1] {\subsection*{\textbf{\textup{\textcolor{red} {[#1]}}}} } \newcommand{\term}[1] {\begin{itemize} \item \textcolor{magenta}{#1} \end{itemize} } \newcommand{\fact}[1] {\begin{itemize} \item \textcolor{blue}{#1} \end{itemize} } \newcommand{\edited}[1]{\textcolor{blue}{#1}} \newcommand{\quest}[1] {\begin{itemize} \item \textcolor{cyan}{#1?} \end{itemize} } \newcommand{\infor}[1] {\begin{itemize} \item \textcolor{blue}{$<$#1$>$} \end{itemize} } \newcommand{\todop}[1] {\begin{itemize} \item \textcolor{red}{#1} \end{itemize} } \newcommand{\repltext}[2] { \todo{[}\st{#1} \todo{$\rightarrow$ #2]} } \newcommand{\tcomment}[2] { \todo{[}\textcolor{blue}{#1} \todo{-- #2]} } \usepackage{amsmath} \newtheorem{statement}{Statement} \begin{document} \title{Providing Self-Aware Systems with Reflexivity} \author{Alessandro Valitutti\inst{1} \and Giuseppe Trautteur\inst{2}} \institute{University of Bari \and University of Naples Federico II} \maketitle \begin{abstract} We propose a new type of self-aware system inspired by ideas from higher-order theories of consciousness. First, we discuss the crucial distinction between introspection and reflexion. Then, we focus on computational reflexion as a mechanism by which a computer program can inspect its own code at every stage of the computation. Finally, we provide a formal definition and a proof-of-concept implementation of computational reflexion, viewed as an enriched form of program interpretation and a way to dynamically ``augment'' a computational process. \keywords{computational reflexivity, computational augmentation, self-aware systems, self-representation, self-modification, self-monitoring} \end{abstract} \section{Introduction} \label{introduction} Self-aware computing is a recent area of computer science concerning autonomic computing systems capable of capturing knowledge about themselves, maintaining it, and using it to perform self-adaptive behaviors at runtime~\cite{Lewis_et_al2015,Torresen_et_al2015,Amir_et_al2004}. Almost all self-aware systems share one or more of three properties dealt with extensively in the AI literature: \emph{self-representation}, \emph{self-modification}, and \emph{persistence}. Examples of self-aware behaviors are the introspection and reflection features implemented in some programming languages such as Java.
Type introspection is the ability of a program to examine the type or properties of an object at runtime, while reflection\footnote{The term \emph{reflection} should not be confused with the term \emph{reflexion}, which will be discussed in Sections \ref{conscious-reflexivity} and \ref{computational-reflexivity}.} additionally allows a program to manipulate objects and functions at runtime.
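As a toy illustration of these two facilities (a sketch of ours, in Python rather than Java, using only standard introspection utilities, and not taken from any of the cited systems):
\begin{verbatim}
# Introspection: examining an object at runtime.
# Reflection: modifying behaviour at runtime.
import inspect

class Sensor:
    def read(self):
        return 42

s = Sensor()

# Introspection: examine type and members without changing them.
print(type(s).__name__)                                   # 'Sensor'
print([m for m, _ in inspect.getmembers(s, inspect.ismethod)])  # ['read']

# Reflection: rebind a method at runtime.
def noisy_read(self):
    return self.__class__.read_original(self) + 1

Sensor.read_original = Sensor.read   # keep a handle to the old code
Sensor.read = noisy_read             # behaviour is now changed
print(s.read())                      # 43
\end{verbatim}
Here the introspective calls only \emph{read} the object, whereas the final rebinding \emph{changes} its behaviour at runtime; this is precisely the distinction exploited below.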
However, neither of them has all of the above three properties. In fact, introspection implies self-representation but not self-modification. Moreover, reflection is temporally bound, since it occurs in a small portion of the program execution. Even self-monitoring, considered as a periodic sequence of introspective events, implies persistence but not self-modification. We may wonder whether we could have a type of computational self-awareness in which persistent self-representation and self-modification occur simultaneously while remaining functionally distinct. In this paper, we address this issue and present a computational architecture provided with this property, which we call \emph{computational reflexivity}. Specifically, we propose to introduce introspection and reflection at every step of the execution, enriching the interpretation loop with additional instructions aimed at representing the program at a meta level, combining local and global information, and performing a second-order execution. The enriched interpreter is thus capable of running a program and, concurrently, generating and executing a corresponding modified (or ``augmented'') version. This separation between the ``observed'' (or \emph{target}) and ``observing'' (or \emph{augmented}) process allows the system to perform self-modification at a virtual level (i.e., on the augmented process). As a consequence, the system can choose whether and when the modification should be applied to the target process. In addition to the formal definition of computational reflexivity, we provide a proof-of-concept prototype, implemented through the modification of a meta-circular interpreter. It allows us to demonstrate that the proposed mechanism is computationally feasible and even achievable with a small set of instructions. In our definition of computational reflexivity, we have been inspired by several concepts discussed in the literature on consciousness studies. Some of them will be reported in the following sections. Our main source of inspiration is, however, the notion of self-conscious reflexivity, as discussed in higher-order theories of consciousness, and the attempts to describe it in neuroscientific \cite{Damasio1999} and computational \cite{Trautteur2004} terms. The rest of the paper is organized as follows. In the next section, we present an overview of self-awareness, introspection, and reflexion in the context of both computer science and consciousness studies. Section \ref{computational-reflexivity} introduces the formal definition of computational reflexion, and Section \ref{implementation} introduces the prototype. Finally, we present a short discussion in Section \ref{discussion} and draft possible applications and next research steps in Section \ref{conclusion}. \section{Background} \label{background} \subsection{Procedural Introspection} \label{procedural-introspection} In the context of the present work, we use the term \emph{computational introspection} to indicate a program capable of accessing itself, creating a self-representation, and manipulating it.
A crucial distinction should be made regarding the meaning of ``knowledge'' underlying the notions of ``representation'' and ``manipulation''. For this reason, we distinguish between \emph{procedural knowledge} and \emph{declarative knowledge}, the former based on computable functions, and the latter on logical statements. Depending on which meaning of ``knowledge'' is adopted, there are two different ways to define computational introspection, called here \emph{procedural introspection} and \emph{declarative introspection}, respectively. Batali \cite{Batali1983} claims that ``introspection is the process of thinking about one's own thoughts and feelings. [...] To the degree that \emph{thoughts} and \emph{feelings} are computational entities, computational introspection would require the ability of a process to access and manipulate its own program and its current context'' (see Valdemir and Neto \cite{Valdemir_Neto2007} on \emph{self-modifying code}). In other words, computational introspection corresponds to the ability of a program to process its own code as data and modify it\footnote{In this definition, we put together self-representation and self-modification and, thus, the \emph{introspection} and \emph{reflection} features mentioned in Section \ref{introduction}.}. By contrast, in declarative introspection, the \emph{access} corresponds to the generation of a set of logical statements, while their \emph{manipulation} is performed by logical inference \cite{McCarthy1959}\cite{Weyhrauch1980}. Batali \cite{Batali1983} says that ``The general idea is that a computational system (an agent preferably) embodies a theory of reasoning (or acting, or whatever). This is what traditional Al systems are -- each system embodies a theory of reasoning in virtue of being the implementation of a program written to encode the theory.'' As discussed by Cox \cite{Cox2005}, ``From the very early days of AI, researchers have been concerned with the issues of machine self-knowledge and introspective capabilities. Two pioneering researchers, Marvin Minsky and John McCarthy, considered these issues and put them to paper in the mid-to-late 1950's. [...] Minsky's \cite{Minsky1968} contention was that for a machine to adequately answer questions about the world, including questions about itself in the world, it would have to have an executable model of itself. McCarthy \cite{McCarthy1959} asserted that for a machine to adequately behave intelligently it must declaratively represent its knowledge. [...] Roughly Minsky's proposal was procedural in nature while McCarthy's was declarative.'' On the basis of these ideas, Stein and Barnden performed more recent work to enable a machine to procedurally simulate itself \cite{Stein_Barnden1995}. Interestingly, Johnson-Laird \cite{JohnsonLaird1983}, inspired by Minsky, proposes a definition of procedural introspection closer to the concept of a computable function. He claims that ``Minsky's formulation is equivalent to a Turing machine with an interpreter that consults a complete description of itself (presumably without being able to understand itself), whereas humans consult an imperfect and incomplete mental model that is somehow qualitatively different.'' According to Smith \cite{Smith1982}, ``the program must have access, not only to its program, but to fully articulated descriptions of its state available for inspection and modification.'' [...]
Moreover, ``the program must be able to resume its operation with the modified state information, and the continued computation must be appropriately affected by the changes.'' Unlike the use of `procedural' discussed above, which actually amounts to a ``declarative'' representation of ``procedural knowledge'', we employ the term in a more restrictive way. \emph{Procedural introspection} is here limited to program code access and modification, without any logical modeling or inference. In this way, we want to avoid the possible dependence of a particular declarative model on the choices of the human designer, focusing instead on aspects connected to program access and modification. \subsection{Introspection in Consciousness Studies} \label{introspection-consciousness} Historically, all the uses of the term `introspection' in computer science have been influenced by the meaning of the same term in philosophy of mind and, later on, neuroscience and cognitive science. In consciousness studies, introspection is often discussed in the context of the so-called \emph{higher-order} (\emph{HO}) theories, based on the assumption that there are different ``levels'' or ``orders'' of mental states. Perceptions, emotions, and thoughts are instances of first-order mental states. Higher-order mental states are mental states about other mental states. For example, a thought about thinking something. Introspection is considered ``an examination of the content of the first-order states'' \cite{Overgaard_Mogensen2016}. It is not clear, however, whether introspection itself is a higher-order state or whether it is involved in the occurrence of first-order states. \subsection{Self-Conscious Reflexivity} \label{conscious-reflexivity} Introspection is not generally considered the main characteristic of conscious states. In contrast, as claimed by Peters \cite{Peters2013}, ``consciousness is reflexivity'', where \emph{reflexion} is the ``awareness that one is perceiving''. Unlike other defining characteristics, such as intentionality, reflexivity is the only one that is considered unique to consciousness. Trautteur remarked that Damasio was the first scientist to describe reflexion in the context of neuroscience \cite{Trautteur2004}. Damasio's definition of reflexion (referred to by the term \emph{core self}) is reported in the following statement: \begin{itemize} \item~It is the process of an organism caught in the act of ``representing its own changing state as it goes about representing something else'' (\cite[p. 170]{Damasio1999}). \end{itemize} This definition is meant to be based on biological (and, thus, physicalist, objective) terms, since the term `representation' here denotes specific neural patterns. The next statement expresses the attempt by Trautteur to translate the above ``metaphorical'' definition into computational terms: \begin{itemize} \item~[It] is the process of an agent ``processing its own processing while processing an input\footnote{This statement is extracted from unpublished notes by Trautteur.}.'' \end{itemize} In this version, the \emph{organism} is reformulated as a computational \emph{agent} and \emph{representation} as a computational \emph{process}. Both the above statements present a logical issue. We refer to it as the \emph{identity paradox}. It consists of the fact that the object and the subject of the experience are perceived as the same entity.
It is a \emph{violation of the identity principle}, also detectable in other expressions used by the same and other authors, such as ``presence of the self to itself'' or ``the identity of the owner (of experience) and the owned'' \cite{Trautteur2004}. \subsection{Elements of Inspiration and Informal Definition of Computational Reflexivity} \label{inspiration} To overcome this logical contradiction, in the present research we moved the focus from \emph{identity} to \emph{simultaneity}. This frame shifting was inspired by Van Gulick \cite{VanGulick2014}, who emphasizes the simultaneity of observed and observer: ``what makes a mental state M a conscious mental state is the fact that it is accompanied by a simultaneous and non-inferential higher-order (i.e., meta-mental) state whose content is that one is now in M''. The above statement triggered the insight that reflexion can be seen as the simultaneous occurrence of two \emph{distinct} and \emph{synchronized} \emph{processes}. It carries three underlying assumptions: \emph{temporal extension} (i.e., `state' means that we are dealing with \emph{processes}), \emph{distinction} (i.e., we have \emph{two} separate processes), and \emph{synchronicity} (i.e., the two processes are \emph{simultaneous}). Because of the temporal extension, the term `simultaneity' is employed here in the sense of \emph{interval simultaneity}, which refers to sequences of events \cite{Jammer2006}. Interval simultaneity does not necessarily imply, here, the simultaneity of the single events. Our assumption of synchronicity requires that each step in one of the two processes must occur only after a corresponding step in the other one. As shown in the next section, each pair of steps is part of the same interpretation loop. Using the second statement as a reference, we informally define \emph{computational reflexion} as the concurrent (i.e., at every step of the interpretation loop) and synchronized execution of a computer program and manipulation of its code. Correspondingly, an interpreter capable of performing computational reflexion is said to be provided with \emph{computational reflexivity}. This definition implies that computational reflexivity is a characteristic of a particular class of universal machines. \section{Formal Definition of Computational Reflexion} \label{computational-reflexivity} In this section, we provide, step by step, all the building blocks for the formal definition of computational reflexion. We regard reflexivity as a property applicable to the execution of any computer program, rather than a property of a single program. For this reason, it must rely on a particular type of program interpretation. From the point of view of an interpreter, the execution of a program can be reduced to a number of iterations of the same \emph{interpretation loop}. We use the term \emph{step} to denote a single occurrence of the interpretation loop, despite its internal complexity. We unravel below the definition of computational reflexivity as a sequence of incremental enrichments of the interpretation loop. Each enrichment, referred to by both a textual symbol and a graphic mark, is meant to induce a corresponding modification at the process level. \vspace{3mm} \noindent \textbf{1. Lower Step and Standard Execution} \hspace{1mm} The original computational step (i.e., the unmodified interpretation loop) is called here \emph{lower step}, indicated by the symbol $(S_L)$ and the graphic mark \includegraphics[width=0.03\textwidth]{images/execution.png}.
At the process level, we call \emph{target process} the overall program execution. \vspace{3mm} \noindent \textbf{2. Single Introspection and Tracing} \hspace{1mm} In this modified step, the interpreter executes a \emph{local procedural introspection} on the current step, returning the code of the current instruction. It is called \emph{single introspection}, indicated by the symbol $(S_L, I_S)$, and the graphic mark of the interpretation loop is \includegraphics[width=0.03\textwidth]{images/proto-self.png}. At the process level, the system generates a trace of execution, similar to the one produced by a debugger. \vspace{3mm} \noindent \textbf{3. Single Upper Step and Mirroring} \hspace{1mm} The interpreter executes the instruction just extracted by introspection. We call it \emph{upper step}, denoted by $(S_L, I_S, S_{SU})$. The overall loop is graphically represented as \includegraphics[width=0.04\textwidth]{images/mirroring.png}. At the process level, we have two identical programs simultaneously executed. We use the term \emph{mirroring} to indicate this real-time duplication of the target process. \vspace{3mm} \noindent \textbf{4. Double Upper Step and Augmentation} \hspace{1mm} Here the interpretation loop is enriched with two further operations: the modification of the current step of the ``mirrored program'' through the introduction of an additional instruction, and the execution of the resulting step\footnote{Although a more general class of code modification is conceivable, we limit the focus to modification by instruction insertion. As explained in the next point, the aim is to enrich the second process with information about the target process.}. The term \emph{double upper step}, with the symbol $(S_L, I_S, S_{DU})$, indicates the execution of the ``mirrored'' instruction and the additional one. The overall loop is graphically represented as \includegraphics[width=0.04\textwidth]{images/augmentation.png}. We call \emph{computational augmentation} the modification of the interpretation loop performed so far. Correspondingly, we have two simultaneous processes: the target process and the augmented process. The latter is based on the former but modified \emph{at the step level}. \vspace{3mm} \noindent \textbf{5. Double Introspection and Reflexion} \hspace{1mm} Now, we consider a particular type of computational augmentation, in which the additional instruction of the \emph{double upper step} is a further operation of \emph{global procedural introspection}. While the local introspection returns the code of the current instruction of the target program (i.e., the lower step defined above), the global introspection returns the code of the entire target program or a subset of it. In this case, the upper step consists of an execution of the mirrored instructions of the target program \emph{plus} additional \emph{global} instructions about it. We call this type of double upper step \emph{double introspection}, and denote it by the symbol $(S_L, I_D, S_{DU})$. The overall loop is represented by the graphical mark \includegraphics[width=0.04\textwidth]{images/reflexion.png}. Finally, we define \emph{computational reflexion} as the process generated by the loop composed of the \emph{lower step}, \emph{double introspection} and \emph{double upper step}. \vspace{3mm} Table \ref{tab:symbols} summarizes the schema of all components. Each row reports the symbolic representation, the graphical mark, and the corresponding terminology at both the step and process level.
In summary, the addition of specific groups of instructions to the interpretation loop underlies the generation of different processes, each built on the previous one: \emph{standard execution}, \emph{tracing}, \emph{mirroring}, \emph{augmentation}, and \emph{reflexion}. Given a target program, the enriched interpreter executes it and, concurrently, a second version of it that is fed, at every step, with its own code. Our definition of computational reflexion is thus a formal specification of the informal one reported in Section \ref{inspiration}. \begin{table}[ht] \begin{center} \scalebox{0.98}{ \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|c|} \hline \textbf{Symbol} & \textbf{Step Components} & \textbf{Process Creation} & \textbf{Process} \\ \hline\hline $(S_L)$ \includegraphics[width=0.04\textwidth]{images/execution.png} & \emph{Lower Step} & \emph{Standard Execution} & \emph{Target Process} \\ \hline $(S_L, I_S)$ \includegraphics[width=0.04\textwidth]{images/proto-self.png} & + \emph{Single Introspection} & \emph{Tracing} & \emph{Execution Trace} \\ \hline $(S_L, I_S, S_{SU})$ \includegraphics[width=0.06\textwidth]{images/mirroring.png} & + \emph{Single Upper Step} & \emph{Mirroring} & \emph{Mirror Process} \\ \hline $(S_L, I_S, S_{DU})$ \includegraphics[width=0.06\textwidth]{images/augmentation.png} & $\rightarrow$ \emph{Double Upper Step} & \emph{Augmentation} & \emph{Augmented Process} \\ \hline $(S_L, I_D, S_{DU})$ \includegraphics[width=0.06\textwidth]{images/reflexion.png} & $\rightarrow$ \emph{Double Introspection} & \emph{Reflexion} & \emph{Reflexive Process} \\ \hline \end{tabular} } \end{center} \caption{Different versions of the interpretation loop, with the addition of step components, and related computational processes.} \label{tab:symbols} \end{table} \section{Prototypical Implementation} \label{implementation} As a proof of concept of the feasibility of implementing computational reflexion, as defined in the previous section, we developed a prototypical version. Specifically, we employed and modified the code of a Lisp meta-circular interpreter \cite{Landauer_Bellman2001}\cite{Graham2002} (i.e., an interpreter of the Lisp programming language, implemented in the same language), called here \emph{Lisp in Lisp}. The main reason for using \emph{Lisp in Lisp} is that it is one of the simplest ways to implement a general-purpose interpreter. Indeed, it is a specific model of computation based on Church's Lambda Calculus \cite{Church1941, McCarthy1960}. As reported by McCarthy \cite{McCarthy1978}, ``Another way to show that Lisp was neater than Turing machines was to write a universal Lisp function and show that it is briefer and more comprehensible than the description of a universal Turing machine. This was the Lisp function eval [...]'' The program is just a few lines of code and the definition of its main function, \emph{\texttt{eval}}, is based on the composition of a few primitive operators. The \emph{\texttt{eval}} function is what performs the interpretation (or \emph{evaluation}) process. In this case, we call \emph{computational step} (and, equivalently, interpretation loop) the \emph{Lisp in Lisp} execution between two successive calls of the \emph{\texttt{eval}} function. Therefore, following the sequence of steps described in the previous section, we modified the definition of \emph{\texttt{eval}} by adding additional function calls. For example, the single introspection event corresponds to a call of the function \emph{\texttt{quote}}, which returns the code of the argument (i.e., the instruction under execution). The complete code of the program and applied examples of executions are freely available for research purposes.
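The following Python sketch (our illustrative reconstruction of the loop of Section~\ref{computational-reflexivity}; the actual prototype is the modified Lisp \texttt{eval} described above) shows the enriched interpretation loop on a toy instruction list: the lower step, the single introspection producing the trace, the mirrored upper step, and the global introspection that feeds the mirror process a model of the whole target program.
\begin{verbatim}
def run_reflexive(program, env):
    # program: list of instruction strings of a toy language
    trace = []                         # tracing (single introspection)
    mirror = dict(env)                 # state of the mirror process
    for step, instr in enumerate(program):
        exec(instr, {}, env)           # 1. lower step (target process)
        trace.append(instr)            # 2. single introspection
        exec(instr, {}, mirror)        # 3. upper step (mirroring)
        mirror["self_model"] = {       # 4. global introspection: the
            "step": step,              #    mirror also receives a model
            "program": list(program)}  #    of the entire target program
    return env, mirror, trace

env, mirror, trace = run_reflexive(["x = x + 1", "x = x * 2"], {"x": 0})
print(env["x"], mirror["x"], trace)    # 2 2 ['x = x + 1', 'x = x * 2']
\end{verbatim}
The target process and the reflexive process remain distinct (two environments), yet each pair of steps occurs within the same loop iteration, matching the synchronicity assumption of Section~\ref{inspiration}.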
\section{Discussion} \label{discussion} The intuitions formalized in this paper aim to envision a new type of self-aware system. While almost all state-of-the-art systems are based on introspection, we propose to consider reflexion as the main aspect of self-awareness. We could intuitively define computational reflexion as \emph{``a mechanism for making a computational process continuously self-informed''}. The expression \emph{``mechanism for making''} expresses the fact that reflexion is defined as a particular type of interpreter. Indeed, we focused on the interpretation loop and modified it. Reflexion is not the property of a specific class of computer programs but, instead, something that can be provided to any executable program through this form of interpretation. Through reflexion, the standard program execution (i.e., the \emph{target process}) is dynamically ``reflected'' into the execution of its augmented counterpart (i.e., the \emph{reflexive process}). As explained in Section \ref{computational-reflexivity}, each instruction of the target program is executed twice: the first time (as a sequence of \emph{lower steps}) to achieve the standard execution (and generate the target process), and the second time (as a sequence of \emph{upper steps}) as part of the reflexive process. In the above definition, the term \emph{``self''} does not refer to a single entity but to a pair of mutually interacting entities. This \emph{duality} between the two processes is the way we theoretically address the \emph{identity paradox} mentioned in Section \ref{conscious-reflexivity}. \section{Possible Applications and Future Work} \label{conclusion} The properties identified in the previous section allow us to conceive some interesting uses of the reflexive augmentation of program execution. For example, we could see the execution of the target program and the corresponding reflexive augmentation as performed by two separate but synchronized devices. Specifically, we could have an autonomous agent (e.g., a robot in a physical environment) and an interfaced web service implementing reflexion. Therefore, computational reflexion could be used as a way to provide a system with a temporary ``streaming of self-awareness''. The next steps of our research will focus on the following aspects. Firstly, we intend to further develop the proposed formalization and derive possible interesting implications as formal theorems. Secondly, we aim to study the degree to which the reflexive process should give feedback to the target process and modify the related program. In other words, we would like to investigate aspects of run-time ``virtual'' self-modification, not yet taken into account, at this stage of the research, in our prototype. A crucial issue is efficiency. We need to investigate to what degree the combination of step-level local and global introspection and the corresponding execution can be feasibly performed. If the target program is sufficiently complex, there is a limitation on the number of instructions that can be executed within the duration of the interpretation loop. In this case, the procedural modeling of the target process should be optimized.
Finally, we intend to investigate the extent to which computational reflexivity could be employed to achieve a form of self-organization, using the information gathered by the step-level introspective acts to train a self-reinforcement system.
\section{Introduction} The simplest and most popular mechanism to accommodate the evidence for neutrino masses and mixings~\cite{Tortola:2013voa,Forero:2014bxa,Gonzalez-Garcia:2014bfa,Gonzalez-Garcia:2015qrr,Capozzi:2016rtj,Esteban:2016qun} and to naturally explain their extreme smallness, calls upon the introduction of right-handed neutrinos through the celebrated Seesaw mechanism~\cite{Minkowski:1977sc,Ramond:1979py,GellMann:1980vs,Yanagida:1979as,Mohapatra:1979ia,Schechter:1980gr}. Its appeal stems from the simplicity of its particle content, consisting only of the right-handed neutrinos otherwise conspicuously missing from the Standard Model (SM) ingredients. In the Seesaw mechanism, the smallness of neutrino masses is explained through the ratio of their Dirac masses and the Majorana mass term of the extra fermion singlets. Unfortunately, this very same ratio suppresses any phenomenological probe of the existence of this mechanism. Indeed, either the right-handed neutrino masses would be too large to be reached by our highest energy colliders, or the Dirac masses, and hence the Yukawa interactions that mediate the right-handed neutrino phenomenology, would be too small for even our most accurate precision probes through flavour and precision electroweak observables. However, a large hierarchy of scales is not the only possibility to naturally explain the smallness of neutrino masses. Indeed, neutrino masses are protected by the $B-L$ (Baryon minus Lepton number) global symmetry, otherwise exact in the SM. Thus, if this symmetry is only mildly broken, neutrino masses will be necessarily suppressed by the small $B-L$-breaking parameters. Conversely, the production and detection of the extra right-handed neutrinos at colliders as well as their indirect effects in flavour and precision electroweak observables are not protected by the $B-L$ symmetry and therefore not necessarily suppressed, leading to a much richer and more interesting phenomenology. This is the rationale behind the popular Inverse Seesaw Mechanism~\cite{Mohapatra:1986bd} (ISS) as well as the Linear~\cite{Akhmedov:1995vm,Malinsky:2005bi} and Double Seesaw~\cite{Mohapatra:1986bd,Mohapatra:1986aw,Roncadelli:1983ty,Roy:1983be} variants. In the presence of right-handed neutrinos, $B-L$ is the only flavour-universal SM quantum number that is not anomalous, besides hypercharge. Therefore, just like the addition of right-handed neutrinos, a very natural and plausible SM extension is the gauging of this symmetry. In this work these two elements are combined to explore a possible dynamical origin of the ISS pattern from the spontaneous breaking of the gauged $B-L$ symmetry. Previous models in the literature have been constructed using the ISS idea or gauging $B-L$ to explain the smallness of the neutrino masses, see e.g.~\cite{Klasen:2016qux,Wang:2015saa,Okada:2016gsh,Okada:2016tci,Bandyopadhyay:2017bgh,Cai:2014hka}. A minimal model in which the ISS is realised dynamically and where the smallness of the Lepton Number Violating (LNV) term is generated at the two-loop level was studied in~\cite{Bazzocchi:2010dt}. Concerning $U(1)_{B-L}$ extensions of the SM with an ISS generation of neutrino masses, several models have been investigated~\cite{Khalil:2010iu,Basso:2012ti,Ma:2014qra,Ma:2015raa}. A common origin of both sterile neutrinos and Dark Matter (DM) has been proposed in~\cite{Escudero:2016tzx,Escudero:2016ksa}. An ISS model which incorporates a keV sterile neutrino as a DM candidate was constructed in e.g.~\cite{Abada:2014zra}.
Neutrino masses break $B-L$; if this symmetry is not gauged but is dynamically broken, a massless Goldstone boson, the Majoron, appears in the spectrum. Such models have been investigated for example in~\cite{Escudero:2016tzx,Rojas:2017sih}. Interestingly, since the ISS mechanism requires a chiral pattern in the neutrino sector, the gauging of $B-L$ predicts the existence of extra fermion singlets with non-trivial charges so as to cancel the anomalies. We find that these extra states may play the role of DM candidates as thermally produced Weakly Interacting Massive Particles (WIMPs) (see for instance~\cite{Bertone:2004pz,Bertone:2010zza} for a review). Indeed, the extra states would form a \textit{dark sector}, only connected to the SM via the $Z'$ gauge boson associated with the $B-L$ symmetry and, more indirectly, through the mixing of the scalar responsible for the spontaneous symmetry breaking of $B-L$ with the Higgs boson. For the simplest charge assignment, this dark sector would be constituted by one heavy Dirac and one massless Weyl fermion with large $B-L$ charges. These large charges make the $Z'$ couple preferentially to the dark sector rather than to the SM, making it particularly \textit{elusive}. In this work the phenomenology associated with this dark sector and the elusive $Z'$ is investigated. We find that the heavy Dirac fermion of the dark sector can be a viable DM candidate with its relic abundance mediated by the elusive $Z'$. Conversely, the massless Weyl fermion can be probed through measurements of the relativistic degrees of freedom in the early Universe. The collider phenomenology of the elusive $Z'$ is also investigated and the LHC bounds are derived. The paper is structured as follows. In Sec.~\ref{sec:model} we describe the features of the model, namely its Lagrangian and particle content. In Sec.~\ref{sec:DM} we analyse the phenomenology of the DM candidate and its viability. The collider phenomenology of the $Z'$ boson is discussed in Sec.~\ref{sec:colliders}. Finally, in Secs.~\ref{sec:results} and \ref{sec:conclusions} we summarise our results and conclude. \section{The model} \label{sec:model} The usual ISS model consists of the addition of a pair of right-handed SM singlet fermions (right-handed neutrinos) for each massive active neutrino~\cite{Mohapatra:1986bd,Wyler:1982dd, Valle:1982yw, Valle:1983dk}. These extra fermion copies, say $N_R$ and $N_R'$, carry a global Lepton Number (LN) of $+1$ and $-1$, respectively, and this leads to the following mass Lagrangian \begin{equation} - \mathcal{L}_{\rm ISS} = \bar L Y_\nu \widetilde{H} N_R + \overline{N_R^c} M_N N_R' + \overline{N_R'^c} \mu\, N_R' + {\rm h.c.}, \end{equation} where $Y_\nu$ is the neutrino Yukawa coupling matrix, $\widetilde{H}=i\sigma_2 H^*$ ($H$ being the SM Higgs doublet) and $L$ is the SM lepton doublet. Moreover, $M_N$ is an LN-conserving matrix, while the mass matrix $\mu$ breaks LN explicitly by 2 units. The right-handed neutrinos can be integrated out, leading to the Weinberg operator~\cite{Weinberg:1979sa} which generates masses for the light, active neutrinos of the form: \begin{equation} m_\nu\sim v^2 Y_\nu M_N^{-1}\mu (M_N^T)^{-1} Y^T_\nu. \end{equation} Having TeV-scale right-handed neutrinos (e.g. motivated by naturalness~\cite{Casas:2004gh,Vissani:1997ys}) and $\mathcal{O}(1)$ Yukawa couplings would require $\mu\sim\mathcal{O}({\rm keV})$. In the original ISS formulation~\cite{Mohapatra:1986bd}, the smallness of this LNV parameter arises from a superstring-inspired $E_6$ scenario.
Alternative explanations call upon other extensions of the SM such as Supersymmetry and Grand Unified Theories (see for instance~\cite{Malinsky:2005bi,Bazzocchi:2009kc}). Here, a dynamical origin for $\mu$ will instead be explored. The $\mu$ parameter is technically natural: since it is the only parameter that breaks LN, its running is multiplicative, and thus once chosen to be small, it will remain small at all energy scales. To promote the LN-breaking parameter $\mu$ in the ISS scenario to a dynamical quantity, we choose to gauge the $B-L$ number~\cite{Mohapatra:1980qe}. The spontaneous breaking of this symmetry will convey LN breaking, generate neutrino masses via a scalar vev, and give rise to a massive vector boson, dubbed here $Z'$. $B-L$ is an accidental symmetry of the SM, and it is well motivated in theories in which quarks and leptons are unified~\cite{Marshak:1979fm,Pati:1974yy,Georgi:1974my,Fritzsch:1974nn}. In unified theories, the chiral anomalies cancel within each family, provided that SM fermion singlets with charge $+1$ are included. In the usual ISS framework, this is not the case due to the presence of right-handed neutrinos with charges $+1$ and $-1$. The triangle anomalies that do not cancel are those involving three $U(1)_{B-L}$ vertices, as well as one $U(1)_{B-L}$ vertex and gravity. Therefore, to achieve anomaly cancellation for gauged $B-L$ we have to include additional chiral content in the model, with charges that satisfy \begin{align} &\sum Q_i=0\Rightarrow\sum Q_{iL}-\sum Q_{iR}=0,\\ &\sum Q_i^3=0\Rightarrow\sum Q_{iL}^3-\sum Q_{iR}^3=0, \end{align} where the first and second equations refer to the mixed gravity-$U(1)_{B-L}$ and $U(1)_{B-L}^3$ anomalies, respectively. The index $i$ runs through all fermions of the model. In the following subsections we will discuss the fermion and the scalar sectors of the model in more detail. \subsection{The fermion sector} Besides the anomaly constraint, the ISS mechanism can only work with a certain number of $N_R$ and $N_R'$ fields (see, e.g., Ref.~\cite{Abada:2014vea}). We find a phenomenologically interesting and viable scenario which consists of the following copies of SM fermion singlets and their respective $B-L$ charges: 3 $N_R$ with charge $-1$; 3 $N_R'$ with charge $+1$; 1 $\chi_R$ with charge $+5$; 1 $\chi_L$ with charge $+4$; and 1 $\omega$ with charge $+4$\footnote{Introducing 2 $N_R$ and 3 $N_R'$ as for example in \cite{Abada:2014zra} leads to a keV sterile neutrino as a potentially interesting warm DM candidate~\cite{Adhikari:2016bei} in the spectrum, due to the mismatch between the number of $N_R$ and $N_R'$. However, the relic abundance of this sterile neutrino, if thermally produced via freeze out, is an order of magnitude too large. Thus, in order to avoid its thermalisation, very small Yukawa couplings and mixings must be adopted instead.}. Some of these right-handed neutrinos allow for a mass term, namely, $M_N \overline{N_R^c} N_R'$, but to lift the mass of the other sterile fermions and to generate SM neutrino masses, two extra scalars are introduced. Thus, besides the Higgs doublet $H$, the scalar fields $\phi_1$ with $B-L$ charge $+1$ and $\phi_2$ with charge $+2$ are considered. The SM leptons have $B-L$ charge $-1$, while the quarks have charge $1/3$. The scalar and fermion content of the model, related to neutrino mass generation, is summarised in Table~\ref{tab:particles}.
The most general Lagrangian in the neutrino sector is then given by\footnote{Notice that a coupling $\phi_1^* {\overline{\omega}} Y_\omega \chi_R$, while allowed, can always be reabsorbed into $\phi_1^* {\overline{\chi_L}} Y_\chi \chi_R$ through a rotation between $\omega$ and $\chi_L$.} \begin{align} - \mathcal{L}_\nu &= \bar L Y_\nu \widetilde{H} N_R + {\overline{N_R^c}} M_N N_R' + \phi_2 \overline{N_R^c} Y_N N_R + \phi_2^* \overline{(N_R')^c}\, Y'_N N_R' +\phi_1^* {\overline{\chi_L}} \, Y_\chi \chi_R + {\rm h.c.}, \end{align} where the capitalised variables are to be understood as matrices (flavour indices are omitted). \begin{table} \centering \begin{tabular}{| c| c| c| c| c| c| c| c| c|} \hline Particle & $ \phi_1$ & $ \phi_2$ & $ \nu_L $& $ N_R$ & $ N'_R$ & $ \chi_R$ & $\chi_L$& $\omega$\\ \hline $U(1)_{B-L}$ charge & $ +1$ & $+2$ &$-1 $& $-1$ & $+1$ & $+5$ & $+4$& $+4$ \\ \hline Multiplicity & $1$ & $1$ &$ 3 $& $3$ & $3$ & $1$ & $1$ & $1$\\ \hline \end{tabular} \caption{Neutral fermions and singlet scalars with their $U(1)_{B-L}$ charge and their multiplicity. $\phi_{1,2}$ are SM singlet scalars, while $N_R$, $N'_R$ and $\chi_R$ are right-handed and $\chi_L$ and $\omega$ are left-handed SM singlet fermions, respectively.} \label{tab:particles} \end{table} The singlet fermion spectrum splits into two parts, an ISS sector composed of $\nu_L$, $N_R$, and $N'_R$, and a dark sector with $\chi_L$ and $\chi_R$, as can be seen in the following mass matrix written in the basis $(\nu_L^c, N_R,N_R',\chi^c_L,\chi_R)$: \begin{equation} M=\left( \begin{array}{c c c| c c} 0&Y_\nu\widetilde{H}&0&0&0\\ Y_\nu^T\widetilde{H}^\dagger&Y_N\phi_2&M_N&0&0\\ 0&M_N^T&Y_N'\phi_2^*&0&0\\ \hline 0&0&0&0& Y_\chi \phi_1^*\\ 0&0&0&Y_\chi^T\phi_1&0 \end{array}\right). \end{equation} The dynamical equivalent of the $\mu$ parameter can be identified with $Y_N' \phi_2^*$\footnote{The analogous term $Y_N\phi_2$ - also dynamically generated - contributes to neutrino masses only at the one-loop level and is therefore typically sub-leading.}. After $\phi_1$ develops a vacuum expectation value (vev), a Dirac fermion $\chi=(\chi_L,\chi_R)$ and a massless fermion $\omega$ are formed in the dark sector. Although the cosmological impact of this extra relativistic degree of freedom may seem worrisome at first, we will show later that the contribution to $N_{\rm eff}$ is suppressed, as this sector is well secluded from the SM. To recover a TeV-scale ISS scenario with the correct neutrino masses and $\mathcal{O}(1)$ Yukawa couplings, $v_2\equiv\vev{\phi_2}\sim {\rm keV}\ll v$ (where $v=\vev{H}=246~{\rm GeV}$ is the electroweak vev) and $M_N\sim{\rm TeV}$ are needed. Moreover, the mass of the $B-L$ gauge boson will be linked to the vevs of $\phi_{1}$ and $\phi_{2}$, and hence lifting its mass above the electroweak scale will require $v_1\equiv\vev{\phi_1}\gtrsim~{\rm TeV}$. In particular, we will show that a triple scalar coupling $\eta\phi_1^2\phi_2^*$ can induce a small $v_2$ even when $v_1$ is large, similar to what occurs in the type-II seesaw~\cite{Magg:1980ut, Lazarides:1980nt, Mohapatra:1980yp, Schechter:1980gr, Cheng:1980qt}. After the spontaneous symmetry breaking, the particle spectrum would then consist of a $B-L$ gauge boson, 3 pseudo-Dirac neutrino pairs and a Dirac dark fermion at the TeV scale, as well as a massless dark fermion. The SM neutrinos would in turn develop small masses via the ISS in the usual way.
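Making the identification explicit (our rewriting, obtained by combining the formulas above; the precise $\mathcal{O}(1)$ factors depend on the vev normalisation), after symmetry breaking the LNV entry becomes $\mu = Y'_N v_2/\sqrt{2}$, and the light-neutrino mass formula of the Introduction reads
\begin{equation*}
m_\nu \sim v^2\, Y_\nu\, M_N^{-1}\, \frac{Y'_N v_2}{\sqrt{2}}\, (M_N^T)^{-1}\, Y_\nu^T,
\end{equation*}
so that $v_2\sim{\rm keV}$ with $M_N\sim{\rm TeV}$ indeed reproduces the required size of the $\mu$ parameter.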
Interestingly, both dark fermions interact with the SM only via the new gauge boson $Z'$ and via the suppressed mixing of $\phi_1$ with the Higgs. They are also stable, and thus the heavy dark fermion is a natural WIMP DM candidate. Since all new fermions carry $B-L$ charge, they all couple to the $Z'$, but especially the ones in the dark sector, which have larger $B-L$ charges. \subsection{The scalar sector} The scalar potential of the model can be written as \begin{align} V&=\frac{m_H^2}{2} H^\dagger H+\frac{\lambda_H}{2} (H^\dagger H)^2 + \frac{m_1^2}{2} \phi_1^*\phi_1 + \frac{m_2^2}{2} \phi_2^*\phi_2 + \frac{\lambda_1}{2}(\phi_1^*\phi_1)^2 + \frac{\lambda_2}{2}(\phi_2^*\phi_2)^2 \\ &\quad+ \frac{\lambda_{12}}{2}(\phi_1^*\phi_1)(\phi_2^*\phi_2)+\frac{\lambda_{1H}}{2}(\phi_1^*\phi_1)(H^\dagger H)+\frac{\lambda_{2H}}{2}(\phi_2^*\phi_2)(H^\dagger H)-\eta(\phi_1^2\phi_2^*+\phi_1^{*2}\phi_2).\nonumber \end{align} Both $m_H^2$ and $m_1^2$ are negative, but $m_2^2$ is positive and large. Then, for suitable values of the quartic couplings, the vev of $\phi_2$, $v_2$, is only induced by the vev of $\phi_1$, $v_1$, through $\eta$, and thus it can be made small. With the convention $\phi_j=(v_j+\varphi_j+i\, a_j)/\sqrt{2}$ and the neutral component of the complex Higgs field given by $H^0=(v+h+i G_Z)/\sqrt{2}$ (where $G_Z$ is the Goldstone associated with the $Z$ boson mass), the minimisation of the potential yields \begin{align} m_H^2 &= -\frac{1}{2}\left(\lambda_{1H}v_1^2+\lambda_{2H}v_2^2+2\lambda_H v^2\right)\simeq-\frac{1}{2}\left(\lambda_{1H}v_1^2+2\lambda_H v^2\right),\\ m_1^2 &= -\frac{1}{2}\left(2\lambda_1 v_1^2 + \lambda_{1H}v^2-4\sqrt{2}\eta v_2+\lambda_{12}v_2^2\right)\simeq-\frac{1}{2}\left(2\lambda_1 v_1^2 + \lambda_{1H}v^2\right),\\ m_2^2 &= \left(\frac{\sqrt{2}\eta}{v_2}-\frac{\lambda_{12}}{2}\right)v_1^2 -\lambda_2 v_2^2 - \frac{\lambda_{2H}}{2}v^2\simeq \frac{\sqrt{2}\eta v_1^2}{v_2}, \end{align} or, equivalently, \begin{equation} v_2\simeq \frac{\sqrt{2}\eta v_1^2}{m_2^2}~. \end{equation} Clearly, when $\eta\to0$ or $m_2^2\to\infty$, the vev of $\phi_2$ goes to zero. For example, to obtain $v_2\sim\mathcal{O}({\rm keV})$, one could have $m_2\sim 10~ {\rm TeV}$, $v_1\sim 10~ {\rm TeV}$, and $\eta\sim 10^{-5}~{\rm GeV}$. The neutral scalar mass matrix is then given by \begin{equation} M_0^2\simeq\left( \begin{array}{c c c} \lambda_H v^2 & \lambda_{1H}v_1 v/2&0\\ \lambda_{1H}v_1 v/2&\lambda_1 v_1^2&-\sqrt{2}\eta v_1\\ 0&-\sqrt{2}\eta v_1& \eta v_1^2/\sqrt{2} v_2 \end{array}\right). \end{equation} Higgs data constrain the mixing angle between ${\rm Re}(H^0)$ and ${\rm Re}(\phi_1)$ to be below $\sim30\%$~\cite{Robens:2016xkb}. Moreover, since $\eta\ll m_2,v_1$, the mixing between the new scalars is also small. Thus, the masses of the physical scalars $h$, $\varphi_1$ and $\varphi_2$ are approximately \begin{equation} m_h^2=\lambda_H v^2,\quad m_{\varphi_1}^2=\lambda_1 v_1^2,\quad{\rm and}\quad m_{\varphi_2}^2=m_2^2/2, \end{equation} while the mixing angles $\alpha_1$ and $\alpha_2$ between $h-\varphi_1$ and $\varphi_1-\varphi_2$, respectively, are \begin{equation} \tan\alpha_1\simeq \frac{\lambda_{1H}}{\lambda_1}\frac{v}{2v_1},\quad{\rm and}\quad \tan\alpha_2\simeq2\frac{v_2}{v_1}. \label{eq:tanalpha} \end{equation} If $v_1\sim{\rm TeV}$ and the quartics $\lambda_1$ and $\lambda_{1H}$ are $\mathcal{O}(1)$, the mixing $\alpha_1$ is expected to be small but non-negligible. A mixing between the Higgs doublet and a scalar singlet can only diminish the Higgs couplings to SM particles.
Concretely, the couplings of the Higgs to gauge bosons and fermions, relative to the SM couplings, are \begin{equation} \kappa_F=\kappa_V=\cos\alpha_1, \end{equation} which is constrained to be $\cos{\alpha_1}>0.92$ (or equivalently $\sin{\alpha_1}<0.39$)~\cite{Khachatryan:2016vau}. Since the massless fermion does not couple to any scalar, and all other extra particles in the model are heavy, the modifications to the SM Higgs couplings are the only phenomenological impact of the model on Higgs physics. The other mixing angle, $\alpha_2$, is very small since it is proportional to the LN-breaking vev and thus is related to neutrino masses. Its presence will induce a mixing between the Higgs and $\varphi_2$, but for the parameters of interest here it is unobservable.\\ Besides Higgs physics, the direct production of $\varphi_1$ at the LHC via its mixing with the Higgs would be possible if it is light enough. Otherwise, loop corrections to the $W$ mass can also test this scenario, imposing $\sin\alpha_1\lesssim 0.2$ for $m_{\varphi_1}=800~$GeV~\cite{Robens:2016xkb}. Apart from that, the only physical pseudoscalar degree of freedom is \begin{equation} A=\frac{1}{\sqrt{v_1^2 + 4 v_2^2}}\left[2 v_2 a_1-v_1 a_2\right] \end{equation} and its mass is degenerate with the heavy scalar mass, $m_A\simeq m_{\varphi_2}$.\\ We have built this model in {\tt SARAH} 4.9~\cite{Staub:2012pb,Staub:2013tta,Staub:2015kfa,Vicente:2015zba}. This Mathematica package produces the model files for {\tt SPheno} 3.3.8~\cite{Porod:2003um,Porod:2011nf} and {\tt CalcHep}~\cite{Belyaev:2012qa}, which are then used to study the DM phenomenology with {\tt Micromegas} 4.3~\cite{Belanger:2014vza}. We have used these packages to compute the results presented in the following sections. Moreover, we will present analytical estimates to further interpret the numerical results. \section{Dark matter phenomenology} \label{sec:DM} As discussed in the previous section, in this dynamical realisation of the ISS mechanism we have two stable fermions. One of them is a Dirac fermion, $\chi=(\chi_L,\chi_R)$, which acquires a mass from $\phi_1$ and therefore sits at the TeV scale. The other, $\omega$, is massless and will contribute to the number of relativistic species in the early Universe. First, we analyse whether $\chi$ can yield the observed DM abundance of the Universe.\\ \subsection{Relic density} In the early Universe, $\chi$ is in thermal equilibrium with the plasma due to its gauge interaction with $Z'$. The relevant part of the Lagrangian is \begin{equation} \mathcal{L}_{DM} = - g_{\rm BL} \bar\chi \gamma^\mu ( 5P_R+4 P_L) \chi Z'_\mu + \frac{1}{2}M_{Z'}^2Z'_\mu Z^{\prime \mu} - m_\chi \bar\chi\chi, \end{equation} where \begin{equation} M_{Z'}= g_{\rm BL} \sqrt{v_1^2+4v_2^2}\simeq g_{\rm BL} v_1,~~{\rm and}~~m_\chi = Y_\chi v_1/\sqrt{2}, \end{equation} and $P_{R,L}$ are the chirality projectors. \begin{figure} \centering \includegraphics[scale=0.4]{DManni4.pdf}\vspace{1mm} \includegraphics[scale=0.4]{DManni3.pdf}\vspace{2mm} \includegraphics[scale=0.4]{DManni1.pdf} \includegraphics[scale=0.4]{DManni2.pdf} \caption{\label{fig:DManni} DM annihilation channels $\chi\bar{\chi}\to f \bar f$ via the $Z'$ boson and $\chi\bar{\chi}\to Z' Z'$. The $\chi\bar{\chi}\to Z' Z'$ channel opens up when $M_{Z'}^2 < m_\chi^2$.
Since the process $\chi\bar{\chi}\to \varphi_1 \to Z' Z'$ is velocity suppressed, this diagram is typically subleading.} \end{figure} The main annihilation channels of $\chi$ are $\chi \bar{\chi}\to f \bar f$ via $Z'$ boson exchange and, if kinematically allowed, $\chi\bar{\chi}\to Z' Z'$ (see fig.~\ref{fig:DManni}). The annihilation cross section to a fermion species $f$, at leading order in $v$, reads: \begin{equation} \vev{\sigma \mathrm{v}}_{ff} \simeq n_c (q_{\chi_{L}}+q_{\chi_{R}})^2~\frac{q^2_{f_L}+q^2_{f_R}}{8\pi}\frac{g_{\rm BL}^4 m_\chi^2}{(4m_\chi^2-M_{Z'}^2)^2+\Gamma^2_{Z'}M_{Z'}^2} + \mathcal{O}\left(v^2 \right), \label{eq:fermions} \end{equation} see e.g.~\cite{Alves:2015mua,Lindner:2010rr}, where $n_c$ is the color factor of the final state fermion (=1 for leptons), $q_{\chi_{L}}=4$ and $q_{\chi_{R}}=5$ are the $B-L$ charges of the left- and right-handed components of the DM candidate $\chi$, and $q_{f_{L,R}}$ are those of the fermion $f$. Moreover, the partial decay width of the $Z'$ into a pair of fermions (including the DM, for which $f=\chi$) is given by \begin{equation} \Gamma_{Z'}^{ff} = n_c~ g_{\rm BL}^2 \frac{\left(6 q_{f_L} q_{f_R} m^2_f + \left( q^2_{f_L}+q^2_{f_R} \right) \left(M_{Z'}^2 - m_f^2 \right) \right) \sqrt{M^2_{Z'} - 4 m_f^2}}{24 \pi M^2_{Z'}}\,. \label{eq:width} \end{equation} When $M_{Z'}^2 < m_\chi^2$, the annihilation channel $\chi\bar{\chi}\to Z'Z'$ is also available. The cross section for this process (lower diagrams in fig.~\ref{fig:DManni}) is given by (to leading order in the relative velocity) \cite{Alves:2015mua} \begin{align} \vev{\sigma \mathrm{v}}_{Z'Z'}&\simeq\frac{1}{256\pi m_\chi^2 M_{Z'}^2} \left(1-\frac{M_{Z'}^2}{m_\chi^2}\right)^{3/2}\left(1-\frac{M_{Z'}^2}{2 m_\chi^2}\right)^{-2} \nonumber\\ &\left(8 g_{\rm{BL}}^4 (q_{\chi_{R}}+q_{\chi_{L}})^2 (q_{\chi_{R}}-q_{\chi_{L}})^2 m_\chi^2+\left( (q_{\chi_{R}}-q_{\chi_{L}})^4+ (q_{\chi_{R}}+q_{\chi_{L}})^4 \right. \right. \nonumber \\ &\left.\left.-6 (q_{\chi_{R}}-q_{\chi_{L}})^2 (q_{\chi_{R}}+q_{\chi_{L}})^2\right) g_{\rm{BL}}^4 M_{Z'}^2 \right)~. \label{eq:Zs} \end{align} The $\chi\bar{\chi}\to\varphi_1\to Z{'} Z{'} $ (upper right diagram in fig.~\ref{fig:DManni}) channel is velocity suppressed and hence typically subleading. Further annihilation channels like $\chi\bar{\chi}\to \varphi_1 \varphi_1$ and $\chi\bar{\chi}\to Z' \varphi_1$ open when $2 m_\chi>2m_{\varphi_1}$ and $2 m_\chi>m_{\varphi_1}+m_{Z'}$, respectively. With $m_\chi= Y_\chi v_1/\sqrt{2}$, $m_{\varphi_1}=\sqrt{\lambda_1}v_1$, $m_{Z^{'}}=g_{\rm BL}v_1$ and the additional perturbativity constraint $Y_\chi\leq 1$, only small kinematically allowed regions remain, and they play a subleading role for the relic abundance. The cross section for the annihilation channel $\chi\bar{\chi}\to Z' h^0$ is also subleading due to the mixing angle $\alpha_1$ between $\varphi_1$ and $h^0$, which is small although non-negligible (cf. Eq.~\eqref{eq:tanalpha}). The relic density of $\chi$ has been computed numerically with {\tt Micromegas} obtaining also, for several points of the parameter space, the DM freeze-out temperature at which the annihilation rate becomes smaller than the Hubble rate $\vev{\sigma \mathrm{v}} n_{\chi} \lesssim H$.
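These ingredients are straightforward to evaluate directly; the sketch below sums Eq.~\eqref{eq:fermions} over the kinematically open SM fermions, with the $Z'$ width of Eq.~\eqref{eq:width}. The benchmark values are ours, and for brevity the width includes only the SM channels (the dark-sector states would enlarge it, which matters only near the resonance):
\begin{verbatim}
# Sketch: <sigma v>(chi chibar -> f fbar) summed over SM fermions
# (Eq. fermions), with the SM-only Z' width (Eq. width).
import math

GEV2_TO_CM3S = 1.17e-17            # 1 GeV^-2 in cm^3/s (times c)
g, m_chi, M = 0.25, 3.8e3, 1.0e4   # g_BL, m_chi, M_Z' [GeV]
qLx, qRx = 4.0, 5.0                # B-L charges of chi_L, chi_R

# (n_c, q_L, q_R, m_f): quarks carry B-L charge 1/3, leptons 1
fs = ([(3, 1/3, 1/3, m) for m in
       (2e-3, 5e-3, 0.095, 1.27, 4.18, 173.0)]        # quarks
      + [(1, 1, 1, m) for m in (5.11e-4, 0.106, 1.78)]  # e, mu, tau
      + [(1, 1, 0, 0.0)] * 3)                           # nu_L

def gamma_ff(nc, qL, qR, mf):      # partial width, Eq. (width)
    if M <= 2 * mf:
        return 0.0
    num = 6*qL*qR*mf**2 + (qL**2 + qR**2)*(M**2 - mf**2)
    return nc*g**2*num*math.sqrt(M**2 - 4*mf**2)/(24*math.pi*M**2)

width = sum(gamma_ff(*f) for f in fs)   # SM part only, ~1% of M

def sv_ff(nc, qL, qR, mf):         # mf neglected in annihilation
    den = (4*m_chi**2 - M**2)**2 + width**2 * M**2
    return nc*(qLx+qRx)**2*(qL**2+qR**2)*g**4*m_chi**2/(8*math.pi*den)

sv = sum(sv_ff(*f) for f in fs)
print(f"Gamma(Z' -> SM) ~ {width:.0f} GeV")
print(f"<sigma v> ~ {sv*GEV2_TO_CM3S:.1e} cm^3/s")
\end{verbatim}
For this benchmark the result is within a factor of a few of the canonical thermal value $\sim 3\times10^{-26}~{\rm cm^3\,s^{-1}}$, anticipating the viable regions found numerically below.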
Given the freeze-out temperature and the annihilation cross sections of Eqs.~\eqref{eq:fermions} and~\eqref{eq:Zs}, the DM relic density can thus be estimated by~\cite{Kolb:1990vq}: \begin{equation} \Omega_\chi h^2 = \frac{2.5\cdot 10^{28} m_\chi}{T^{\rm f.o.}_\chi M^2_{Pl}\sqrt{g_\star}\vev{\sigma \mathrm{v}}}, \end{equation} where $g_\star$ is the number of degrees of freedom in radiation at the temperature of freeze-out of the DM ($T^{\rm f.o.}_\chi$), $\vev{\sigma \mathrm{v}}$ is its thermally averaged annihilation cross section and $M_{Pl} = 1.2 \cdot 10^{19}$ GeV is the Planck mass. In Sec.~\ref{sec:results} we will use this estimate of $\Omega_\chi h^2$ together with the measured value $\Omega_\chi h^2 \simeq 0.1186 \pm 0.0020$~\cite{Ade:2015xua,Olive:2016xmw} to explore the regions of the parameter space for which the correct DM relic abundance is obtained. \subsection{Direct Detection} The same $Z'$ couplings that contribute to the relic abundance can give rise to signals in DM direct detection experiments. The DM-SM interactions in the model via the $Z'$ are either vector-vector or axial-vector interactions. Indeed, the $Z'$--SM interactions are vectorial (with the exception of the couplings to neutrinos) while $\chi$ has different left- and right-handed charges. The axial-vector interaction does not lead to a signal in direct detection and the vector-vector interaction leads to a spin-independent cross section \cite{Cheung:2012gi}. The cross section for coherent elastic scattering on a nucleon is \begin{align} \sigma^{\rm DD}_{\chi}=\frac{\mu_{\chi \rm N}^2}{\pi}\left(\frac{9}{2}\frac{g_{\rm BL}^2}{M_{Z'}^2}\right)^2 \label{sigdd} \end{align} where $\mu_{\chi \rm N}$ is the reduced mass of the DM-nucleon system. The strongest bounds on the spin-independent scattering cross section come from LUX~\cite{Akerib:2016vxi} and XENON1T~\cite{Aprile:2017iyp}. The constraint on the DM-nucleon scattering cross section is $\sigma^{\rm DD}_{\chi}<10^{-9}$~pb for $m_{\chi}=1$~TeV and $\sigma^{\rm DD}_{\chi}<10^{-8}$~pb for $m_\chi=10$~TeV. The experimental bound on the spin-independent cross section (Eq.~\eqref{sigdd}) allows one to derive a lower bound on the vev of $\phi_1$: \begin{equation} v_1 ~\text{[GeV]}> \left(\frac{2.2\cdot 10^9}{\sigma^{\rm DD}_{\chi}~\text{[pb]}} \right)^{1/4}~. \end{equation} This bound pushes the DM mass to be $m_\chi \gtrsim$ TeV. For instance, for $g_{\rm BL} = 0.25$ and $M_{Z'} = 10$ TeV, a DM mass $m_\chi = 3.8$ TeV is required to have $\sigma^{\rm DD}_{\chi} ~\sim 9 \times 10^{-10}$ pb. In turn, this bound translates into a lower limit on the vev of $\phi_1$: $v_1 \gtrsim 40$ TeV (with $Y_\chi \gtrsim 0.1$). Next generation experiments such as XENON1T~\cite{Aprile:2015uzo} and LZ~\cite{Akerib:2015cja} are expected to improve the current bounds by an order of magnitude and could test the parameter space of this model, as will be discussed in Sec.~\ref{sec:results}. \subsection{Indirect Detection} In full generality, the annihilation of $\chi$ today could also lead to indirect detection signatures, in the form of charged cosmic rays, neutrinos and gamma rays. However, since the main annihilation channel of $\chi$ is via the $Z'$ which couples dominantly to the dark sector, the bounds from indirect detection searches turn out to be subdominant. The strongest experimental bounds come from gamma rays produced through direct emission from the annihilation of $\chi$ into $\tau^+ \tau^-$.
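The numbers quoted above can be reproduced with a few lines; a minimal sketch for the benchmark $g_{\rm BL}=0.25$, $M_{Z'}=10$~TeV and $m_\chi=3.8$~TeV, using the standard conversions $1~{\rm GeV^{-2}}=3.89\cdot10^{8}$~pb $=1.17\cdot10^{-17}~{\rm cm^3\,s^{-1}}$:
\begin{verbatim}
# Sketch: direct-detection cross section (Eq. sigdd), the implied
# lower bound on v1, and the present-day <sigma v> into tau+ tau-
# (Eq. fermions with v -> 0; width term negligible off resonance).
import math

TO_PB, TO_CM3S = 3.894e8, 1.167e-17
g, M, m_chi, m_N = 0.25, 1.0e4, 3.8e3, 0.939     # GeV

mu = m_chi * m_N / (m_chi + m_N)                 # reduced mass
sigma_dd = mu**2 / math.pi * (4.5 * g**2 / M**2)**2
print(f"sigma_DD ~ {sigma_dd*TO_PB:.1e} pb")     # ~9e-10 pb

v1_min = (2.2e9 / (sigma_dd * TO_PB))**0.25      # bound quoted above
print(f"v1 > {v1_min/1e3:.0f} TeV")              # ~40 TeV

den = (4*m_chi**2 - M**2)**2
sv_tau = (4+5)**2 * 2 * g**4 * m_chi**2 / (8*math.pi*den)
print(f"<sigma v>_tautau ~ {sv_tau*TO_CM3S:.1e} cm^3/s")  # ~2e-27
\end{verbatim}
The last number is the quantity probed by gamma-ray searches.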
Neither the constraints from the Fermi-LAT Space Telescope (6-year observation of gamma rays from dwarf spheroidal galaxies)~\cite{Ackermann:2015zua} nor those from H.E.S.S. (10-year observation of gamma rays from the Galactic Center)~\cite{Abramowski:2011hc} are very stringent for the range of DM masses considered here. Indeed, the current experimental bounds on the velocity-weighted annihilation cross section $\vev{\sigma \mathrm{v}} (\chi \bar{\chi}\to \tau^+\tau^-)$ range from $10^{-25}~\text{cm}^3 \text{s}^{-1}$ to $10^{-22}~\text{cm}^3 \text{s}^{-1}$ for DM masses between 1 and 10~TeV. These values are more than two orders of magnitude above the values obtained for the regions of the parameter space in which we obtain the correct relic abundance (notice that the branching ratio of the annihilation of $\chi$ into $\tau^+ \tau^-$ is only about $5\%$). Future experiments like CTA~\cite{Wood:2013taa} should be sensitive to DM masses in the range of interest of this model ($m_\chi \gtrsim 1$ TeV).\\ \subsection{Effective number of neutrino species, $N_{\rm eff}$} The presence of the massless fermion $\omega$ implies a contribution to the number of relativistic degrees of freedom in the early Universe. In the following, we discuss its contribution to the effective number of neutrino species, $N_{\rm eff}$, which has been measured to be $N_{\rm eff}^{exp}=3.04\pm 0.33$ \cite{Ade:2015xua}. Since the massless $\omega$ only interacts with the SM via the $Z'$, its contribution to $N_{\rm eff}$ is diluted by the entropy injected into the thermal bath after its decoupling, and is therefore controlled by the number of relativistic degrees of freedom $g_\star(T)$ at the time of decoupling: \begin{align} \Delta N_{\rm eff}=\left(\frac{T^{\rm f.o.}_\omega}{T_\nu}\right)^4 ~= \left(\frac{11}{2 g_\star(T^{\rm f.o.}_\omega)}\right)^{4/3}~, \end{align} where $T^{\rm f.o.}_\omega$ is the freeze-out temperature of $\omega$ and $T_\nu$ is the temperature of the neutrino background. The freeze-out temperature can be estimated from the condition that the Hubble expansion rate of the Universe $H = 1.66 \sqrt{g_\star} T^2/M_{Pl}$ overcomes the $\omega$ interaction rate $\Gamma = \vev{\sigma \mathrm{v}} n_\omega$, leading to: \begin{align} (T^{\rm f.o.}_\omega)^3 \sim \frac{2.16 \sqrt{g_\star}M^4_{Z'}}{M_{Pl} g_{\rm BL}^4 \sum_f (q^2_{f_L} + q^2_{f_R})}~. \end{align} With typical values that satisfy the correct DM relic abundance, $M_{Z'}\sim\mathcal{O}(10~{\rm TeV})$ and $g_{\rm BL}\sim\mathcal{O}(0.1)$, $\omega$ would therefore freeze out at $T^{\rm f.o.}_\omega \sim 4$~GeV, before the QCD phase transition. Thus, the SM bath is heated significantly after $\omega$ decouples, and the contribution of the latter to the number of degrees of freedom in radiation is suppressed: \begin{align} \Delta N_{\rm eff} \approx 0.026 \end{align} which is one order of magnitude smaller than the current uncertainty on $N_{\rm eff}$. For gauge boson masses between 1 and 50 TeV and gauge couplings between 0.01 and 0.5, $\Delta N_{\rm eff}\in[0.02,0.04]$. Nevertheless, this deviation in $N_{\rm eff}$ matches the sensitivity expected from a EUCLID-like survey \cite{Basse:2013zua, Amendola:2016saw} and would be an interesting probe of the model in the future. \section{Collider phenomenology} \label{sec:colliders} The new gauge boson can lead to resonant signals at the LHC. Unlike the widely studied case of a sequential $Z'$ boson, where the new boson decays dominantly to dijets, the elusive $Z'$ couples more strongly to leptons than to quarks (due to the $B-L$ number).
Furthermore, it has large couplings to the SM singlets, especially $\chi$ and $\omega$, which carry large $B-L$ charges. Thus, typical branching ratios are $\sim$70\% invisible (i.e. into SM neutrinos and $\omega$), $\sim$12\% to quarks and $\sim$18\% to charged leptons.\footnote{If the decay channels to the other SM singlets are kinematically accessible, especially into $\chi$ and into the $N_R, N'_R$ pseudo-Dirac pairs, the invisible branching ratio can go up to $\sim 87\%$, making the $Z'$ even more elusive and rendering these collider constraints irrelevant with respect to direct DM searches.} LHC $Z'\to e^+e^-,\mu^+\mu^-$ resonant searches~\cite{Khachatryan:2016zqb, ATLAS:2016cyf} can be easily recast into constraints on the elusive $Z'$. The production cross section times branching ratio to dileptons is given by \begin{equation} \sigma(pp\to Z'\to\ell \bar \ell)=\sum_q\frac{C_{qq}}{s M_{Z'}}\Gamma(Z'\to q \bar q){\rm BR}(Z'\to\ell \bar \ell), \end{equation} where $s$ is the square of the centre-of-mass energy, $\Gamma(Z'\to q \bar q)$ is the partial width to a $q \bar q$ pair given by Eq.~\eqref{eq:width}, and $C_{qq}$ is the $q \bar q$ luminosity function obtained here using the parton distribution function MSTW2008NLO~\cite{Martin:2009iq}. To gain some insight into what to expect, we compare our $Z'$ with the usual sequential standard model (SSM) $Z'$, in which all couplings to fermions are equal to the $Z$ couplings. The dominant production mode is again $q \bar q\to Z'$, though the coupling in our case is mostly vectorial. The main difference arises from the branching ratio to dileptons, as there are many additional fermions charged under the new gauge group. In summary, only $\mathcal{O}(1)$ differences in the gauge coupling bounds are expected between the SSM $Z'$ and our elusive $Z'$. \begin{figure} \centering \includegraphics[scale=0.5]{Plotfinal1.pdf} \includegraphics[scale=0.5]{Plotfinal5.pdf}\vspace{2mm} \includegraphics[scale=0.5]{Plotfinal10.pdf} \includegraphics[scale=0.5]{Plotfinal20.pdf} \caption{\label{fig:g_vs_mz} Summary plots of our results. The red region to the left is excluded by LHC constraints on the $Z'$ (see text for details), the region $g_{\rm BL}>0.5$ is non-perturbative, following from the requirement $g_{\rm BL}\cdot q_{\rm max}\leq\sqrt{2\pi}$. In the blue shaded region DM is overabundant. The orange coloured region is already excluded by direct detection constraints from LUX~\cite{Akerib:2016vxi}, the short-dashed line indicates the future constraints from XENON1T~\cite{Aprile:2015uzo} (projected sensitivity assuming $2t \cdot y$), the long-dashed line the future constraints from LZ~\cite{Akerib:2015cja} (projected sensitivity for 1000d of data taking).} \end{figure} \vspace{0.5cm} \section{Results} \label{sec:results} We now combine in fig.~\ref{fig:g_vs_mz} the constraints coming from DM relic abundance, DM direct detection experiments and collider searches. We can clearly see the synergy between these different observables. Since the DM candidate in our model is a thermal WIMP, the relic abundance constraint puts a lower bound on the gauge coupling, excluding the blue shaded region in the panels of fig.~\ref{fig:g_vs_mz}. On the other hand, LHC resonant searches essentially put a lower bound on the mass of the $Z'$ (red shaded region), while the LUX direct detection experiment constrains the product $g_{\rm BL} \cdot M_{Z'}$ from above (orange shaded region).
For reference, we also show the prospects for future direct detection experiments, namely, XENON1T (orange short-dashed line, projected sensitivity assuming $2t \cdot y$) and LZ (orange long-dashed line, projected sensitivity for 1000d of data taking). Finally, if the gauge coupling is too large, perturbativity will be lost. To estimate this region we adopt the constraint $g_{\rm BL}\cdot q_{\rm max}\leq\sqrt{2\pi}$ and, since the largest $B-L$ charge is $q_{\rm max}=5$, we obtain $g_{\rm BL}>0.5$ for the non-perturbative region. The white region in these panels represents the allowed region. We present four different DM masses so as to exemplify the dependence on $m_\chi$. First, we see that for DM masses at 1~TeV (upper left panel), there is only a tiny allowed region in which the relic abundance is set via resonant $\chi \bar{\chi}\to Z'\to f \bar f$ annihilation. For larger masses, the allowed region grows, but some resonant enhancement is still needed, so that the $Z'$ mass must be around twice the DM mass in order to obtain the correct relic abundance. For $m_\chi$ above 20~TeV (lower right panel), the allowed parameter space cannot be fully probed even with generation-2 DM direct detection experiments. On top of the DM and collider phenomenology discussed here, this model allows for a rich phenomenology in other sectors. In full analogy to the standard ISS model, the dynamical ISS mechanism considered here is also capable of generating a large CP asymmetry in the lepton sector at the TeV scale, thus allowing for a possible explanation of the baryon asymmetry of the Universe via {\textit{leptogenesis}}~\cite{Dev:2009aw,Blanchet:2010kw,Abada:2015rta,Hernandez:2015wna}. \\ Moreover, the heavy sterile states typically introduced in ISS scenarios, namely the three pseudo-Dirac pairs from the states $N_R$ and $N_R^{'}$, can lead to new contributions to a wide array of observables~\cite{Shrock:1980vy,Schechter:1980gr,Shrock:1980ct,Shrock:1981wq,Langacker:1988ur,Bilenky:1992wv,Nardi:1994iv,Tommasini:1995ii,Antusch:2006vwa,Antusch:2008tz,Biggio:2008in,Forero:2011pc,Abdallah:2011ew,Alonso:2012ji,Boucenna:2014zba,Abada:2014nwa,Abada:2014cca,Arganda:2014dta,Abada:2015trh,Abada:2016awd,Abada:2015oba,Abada:2015zea,Fernandez-Martinez:2015hxa,DeRomeri:2016gum,Abada:2016vzu} such as weak universality, lepton flavour violating or precision electroweak observables, which allow one to constrain the mixing of the SM neutrinos with the extra heavy pseudo-Dirac pairs to the level of $10^{-2}$, or even better for some elements~\cite{Antusch:2014woa,Fernandez-Martinez:2016lgt}. \\ \section{Conclusions} \label{sec:conclusions} The simplest extension to the SM particle content so as to accommodate the experimental evidence for neutrino masses and mixings is the addition of right-handed neutrinos, making the neutrino sector more symmetric with its charged lepton and quark counterparts. In this context, the popular Seesaw mechanism also gives a rationale for the extreme smallness of these neutrino masses as compared to the rest of the SM fermions through a hierarchy between two different energy scales: the electroweak scale -- at which Dirac neutrino masses are induced -- and a much larger energy scale tantalizingly close to the Grand Unification scale at which Lepton Number is explicitly broken by the Majorana mass of the right-handed neutrinos.
On the other hand, this very natural option to explain the smallness of neutrino masses automatically makes the mass of the Higgs extremely unnatural, given the hierarchy problem that is thereby introduced between the electroweak scale and the heavy Seesaw scale. The ISS mechanism provides an elegant solution to this tension by lowering the Seesaw scale close to the electroweak scale, thus avoiding the Higgs hierarchy problem altogether. In the ISS the smallness of neutrino masses is thus not explained by a strong hierarchy between these scales but rather by a symmetry argument. Since neutrino masses are protected by the Lepton Number symmetry, or rather $B-L$ in its non-anomalous version, if this symmetry is only mildly broken, neutrino masses will be naturally suppressed by the small parameters breaking this symmetry. In this work, the possibility of breaking this gauged symmetry dynamically has been explored. Since the ISS mechanism requires a chiral structure of the extra right-handed neutrinos under the $B-L$ symmetry, extra states are required by anomaly cancellation if this symmetry is to be gauged. The minimal such extension requires the addition of three new fields with large non-trivial $B-L$ charges. Upon the spontaneous breaking of the $B-L$ symmetry, two of these extra fields pair into a heavy Dirac fermion around the TeV scale while the third remains massless. Given their large charges, the $Z'$ gauge boson mediating the $B-L$ symmetry couples preferentially to this new \textit{dark sector} and much more weakly to the SM leptons and particularly to quarks, making it rather \textit{elusive}. The phenomenology of this new dark sector and the elusive $Z'$ has been investigated. We find that the heavy Dirac fermion is a viable DM candidate in some regions of the parameter space. While the elusive nature of the heavy $Z'$ makes its search rather challenging at the LHC, it would also mediate spin-independent direct detection cross sections for the DM candidate, which place very stringent constraints on the scenario. Given its preference to couple to the dark sector and its suppressed couplings to quarks, the strong tension between direct detection searches and the correct relic abundance for $Z'$-mediated DM is mildly alleviated and some parts of the parameter space, not far from the resonance, survive present constraints. Future DM searches by XENON1T and LZ will be able to constrain this possibility even further. Finally, the massless dark fermion will contribute to the amount of relativistic degrees of freedom in the early Universe. While its contribution to the effective number of neutrinos is too small to be constrained with present data, future EUCLID-like surveys could reach a sensitivity close to its expected contribution, making this alternative probe a promising complementary way to test this scenario. \section*{Acknowledgements} VDR would like to thank A. Vicente for valuable assistance on SARAH and SPheno. JG would like to thank Fermilab for kind hospitality during the final stages of this project. This work is supported in part by the EU grants H2020-MSCA-ITN-2015/674896-Elusives and H2020-MSCA-2015-690575-InvisiblesPlus. VDR acknowledges support by the Spanish grant SEV-2014-0398 (MINECO) and partial support by the Spanish grants FPA2014-58183-P, Multidark CSD2009-00064 and PROMETEOII/2014/084 (Generalitat Valenciana).
EFM acknowledges support from the EU FP7 Marie Curie Actions CIG NeuProbes (PCIG11-GA-2012-321582), the ``Spanish Agencia Estatal de Investigaci\'on'' (AEI) and the EU ``Fondo Europeo de Desarrollo Regional'' (FEDER) through the project FPA2016-78645-P and the Spanish MINECO through the ``Ram\'on y Cajal'' programme (RYC2011-07710) and through the Centro de Excelencia Severo Ochoa Program under grant SEV-2012-0249 and the HPC-Hydra cluster at IFT. The work of VN was supported by the SFB-Transregio TR33 ``The Dark Universe''. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. \bibliographystyle{JHEP}
\section{Introduction} Many computer vision algorithms (\emph{e.g}\bmvaOneDot photometric stereo~\cite{Petrov.shape.CRA}, photometric invariants~\cite{GeversFS04}, shadow removal~\cite{shadow16}, and colour constancy~\cite{Koberooni2}) assume that the captured RGBs in images are linearly related to the actual scene radiance. However, the imaging pipeline in a digital camera is necessarily non-linear in order to produce perceptually-pleasing photos rather than their physically-meaningful counterparts. In this paper, we present a new rank-based radiometric calibration method which solves for the bi-directional mappings between the camera's RAW responses and the rendered RGBs produced by the digital camera. There is prior art in this field which models the pipeline with a large number of parameters (up to several thousand~\cite{chak}), which means both that a large corpus of data is required to uncover the pipeline and that, at least tacitly, the underlying pipeline is presumed to be quite complex. The key insight in our approach is that, post colour correction (a $3 \times 3$ matrix correction), the linearly corrected raw RGBs are to the greatest extent in the same rank order as the final rendered RGBs. Building on this insight, we develop a simple rank-based radiometric calibration model that ``solves for'' the camera pipeline with many fewer parameters and concomitantly needs much less training data. In Fig.~\ref{fig:RC-PP}, we illustrate a conventional image reproduction pipeline that holds for many cameras~\cite{BrownPami12}. An exemplar raw image, Fig.~\ref{fig:RC-PP}a, is mapped by a $3 \times 3$ colour correction matrix to give the image shown in Fig.~\ref{fig:RC-PP}b. The colour correction matrix implements several processing steps (\emph{e.g}\bmvaOneDot illumination correction~\cite{PIPELINE,ChakBMVC}, display RGB mapping \cite{srgb}, and colour preference adjustments~\cite{PIPELINE}). It is well known that a display device cannot display all captured image colours: some RGBs will fall outside the RGB cube after mapping (\emph{e.g}\bmvaOneDot the pixels marked in purple in Fig.~\ref{fig:RC-PP}b). We therefore need gamut mapping, \emph{e.g}\bmvaOneDot \cite{BrownPami12,chak,ZollikerGM}, to bring the colours back inside the cube as shown in Fig.~\ref{fig:RC-PP}c. Finally, the gamut mapped image is tone mapped to arrive at the final rendered output \cite{PIPELINE,ChakBMVC,BrownPami12} shown in Fig.~\ref{fig:RC-PP}d. Tone mapping accounts for the display non-linearity~\cite{srgb}, dynamic range compression and some aspects of preference~\cite{DRC}. \begin{figure}[htb] \begin{center} \includegraphics[width=\linewidth]{RC-PL} \end{center} \caption{a) a RAW input image is colour corrected to give image b). Non-displayable colours are highlighted in purple pseudo colour. Gamut mapping, in step c), brings colours within gamut. Finally, in d), a tone mapping step results in the final rendered image.} \label{fig:RC-PP} \end{figure} The colour processing pipeline for cameras, in general, can be written as Eqn.~\ref{eq:base_model}. \begin{equation} \begin{array}{cccccc} \mb{P} =\ & \underbrace{ {f}(\Gamma (M\mb{\rho})) } & =& \underbrace{ \Gamma({f}(M\mb{\rho}))} & \approx & \underbrace{\text{LUT}(\mb{\rho})} \\ & (\text{1a}) & & (\text{1b}) & & (\text{1c}) \end{array} \label{eq:base_model} \end{equation} Here and throughout this paper, $\mb{\rho}$ denotes a camera RAW and $\mb{P}$ refers to its rendered RGB counterpart.
Respectively, the $3 \times 3$ correction matrix, gamut mapping and tone mapping are denoted by the matrix $M$ and the functions $\Gamma()$ and ${f}()$. The function $f()$ can implement a single tone curve or three per-channel tone curves. Because gamut mapping only implements a small change in comparison with the colour and tone mapping steps, the order of gamut mapping and tone mapping may be switched (Eqns.~\ref{eq:base_model}a \& b), a property that we exploit in this paper. Equally, we can also roll all three processing steps into one and directly solve for a 3D LUT (Look-Up-Table) that maps RAW to rendered counterparts. This LUT function is denoted $\text{LUT}()$~\cite{BrownECCVLattice12} in Eqn.~\ref{eq:base_model}c. Readers may refer to the top row of Fig.~\ref{fig:RC-PP} to link each mathematical function to our example processed image. In radiometric calibration, given a set of $\mb{\rho}$ and $\mb{P}$, one seeks to solve for the parametrised pipeline parts (\emph{e.g}\bmvaOneDot $M$, $\Gamma()$, $f()$ and $\text{LUT}()$). A disadvantage of the current best performing methods is that a great deal of data may be required to fit their assumed models. In Eqns.~\ref{eq:base_model}a and \ref{eq:base_model}b, the gamut mapping step could be modelled by 1000s of Radial Basis functions~\cite{BrownPami12,BrownECCVLattice12,chak} and in Eqn.~\ref{eq:base_model}c, the deployed LUT could also have several thousand control points. Our proposed method begins with the simple observation~\cite{finlayson2016rank} that, assuming the gamut mapping step makes small changes to image colours, we expect the rank ordering of the rendered $\mb{P}$ to be the same as $\mb{\rho}$ multiplied by the correction matrix $M$ (because tone curves are always monotonically increasing). Suppose that two rendered (JPEG) responses -- in the first (red) colour channel -- are denoted $P^a_1$ and $P_1^b$, and that $P^a_1>P^b_1$. The rank order of two corresponding raw red channel measurements post colour correction is written as $M_1\mb{\rho}^a>M_1\mb{\rho}^b$ (where $M_1$ denotes the first row of $M$ and $\mb{\rho}^a$ and $\mb{\rho}^b$ are a pair of raw RGBs). Rewriting, this implies that $M_1(\mb{\rho}^a-\mb{\rho}^b)>0$, which mathematically defines a half-space constraint. If we visualise the row vector $M_1$ as a point in 3-space then this inequality -- which we call a ranking constraint -- forces the point to be located in one half of 3-space but not the other. Because we have multiple pixels, each pair of pixels (2 raw and 2 JPEG RGBs) generates a half-space constraint, and intersecting all these constraints delimits the region in which $M_1$ must lie. Our experiments demonstrate that a small number of patches suffices to estimate $M$ accurately. Once we have $M$ we then find the best rank-preserving tone curves $f()$. At this stage, using only $M$ and $f()$, we have a tolerable approximation of the pipeline. Indeed, we argue that our construction of $M$ and $f()$ also incorporates, to a first order, gamut mapping. Now we adopt Eqn.~\ref{eq:base_model}c and find a 125-parameter LUT to ``mop up'' any remaining errors due to gamut mapping (higher order terms). \section{Related work} Using the pipeline form of Eqn.~\ref{eq:base_model}b, Chakrabarti \emph{et al}\bmvaOneDot~\cite{chak} first solve for $M$ and $f()$ (in a least-squares sense) in iteration and then solve directly for $\Gamma()$. In their approach, $f()$ is constrained to be a 7$^\text{th}$ order monotonic polynomial.
They model $\Gamma()$ with the radial basis function (RBF) method of \cite{BrownPami12} where several thousands of RBFs are potentially used. A restriction of the above calibration is presented in \cite{ChakBMVC} where the gamut mapping $\Gamma()$ is ignored. This less general model works tolerably well on many real pairs of raw and rendered images and this is a point we will return to later in this paper. In either version (\cite{chak} or \cite{ChakBMVC}), the coupled nature of the minimisation means that a global minimum is not guaranteed to be found. Thus, a random start search is incorporated -- multiple minimisations are carried out -- in order to find their best parameters. Kim \emph{et al}\bmvaOneDot \cite{BrownPami12} solve for the pipeline in the form of Eqn.~\ref{eq:base_model}a and make additional assumptions to decouple the optimisation. They assume that images of the same scene are captured at two or more exposures, and their $\Gamma()$ is a set of several thousand RBFs. Regarding solving for $f()$, Debevec \emph{et al}\bmvaOneDot~\cite{Debevec97} showed how relating corresponding pixels under known exposure differences suffices to solve for $f()$ (assuming there is no gamut mapping step). Importantly, in \cite{BrownPami12}, it was argued that for the set of desaturated pixels (\emph{i.e}\bmvaOneDot RAWs far from the RGB cube boundary) the gamut mapping step has little or no effect and can be ignored. Under this assumption, $f()$ can be solved using the Debevec method. Given $f()$, the colour correction matrix $M$ can then be found (again using desaturated pixels). However, for typical capture conditions, e.g. for most mobile phones, multiple exposures are not available, and so the requirement for multiple exposures is a weakness of this method. Finally, in \cite{BrownPami12} a gamut mapping RBF network is ``trained''. Of course, if a large number of radial basis functions are used to model gamut mapping (as proposed in \cite{BrownPami12} or \cite{chak}) then solving for $\Gamma()$ requires a large corpus of data. Further, the application of gamut mapping is expensive and its parametrisation is large. In~\cite{BrownECCVLattice12} it was shown that it is possible to ignore the underlying structure of the colour processing pipeline and directly solve for the best 3D surjective function -- implemented as a LUT that maps the RAWs to rendered RGBs (Eqn.~\ref{eq:base_model}c). Finally, in \cite{RadiometricEdge}, a method is presented for solving for $f()$ by examining the edge distribution in an image. This method has the advantage that it works for a single image (no need for multiple exposures) but the disadvantage that it is sensitive to processing steps such as image sharpening, which is used extensively in mobile phone image processing. \section{The rank-based method} As the reader shall see, to make the rank-based method work we need to assume that the gamut mapping step $\Gamma()$ only makes small adjustments to colour. In fact our assumption is more nuanced. We assume that -- to a first order -- gamut mapping can mostly be implemented as an affine transform and that this affine transform can be folded into the colour correction matrix $M$ and the monotonically increasing tone mapping functions $f()$. \subsection{Gamut Mapping as an Affine Transform} In Eqn.~\ref{eq:base_model}b, gamut mapping is applied when, after colour correction, colours are mapped outside the colour cube and become non-displayable.
Let us use a Taylor expansion to model $\Gamma()$ around a point $\mb{a}$ inside the gamut: \begin{equation} \label{eq:gamma} \Gamma(M\mb{\rho})\approx \Gamma(\mb{a})+J(\mb{a})(\mb{\rho}-\mb{a}) \end{equation} where $J$ is the $3 \times 3$ Jacobian (matrix of derivatives of $\Gamma$). Not only does Eqn.~\ref{eq:gamma} show that, to a first approximation, gamut mapping is an affine transform, but such an affine transform is also one of the gamut mapping algorithms proposed in \cite{ZollikerGM}. In particular, \cite{ZollikerGM} solves, with good results, for the best affine transform that maps image colours inside the gamut while keeping them close to the non-gamut-mapped colours: \begin{equation} \min_{T,\mb{o}} \Sigma_i ||TM\mb{\rho}_i+\mb{o}-M\mb{\rho}_i||^2\;\;s.t.\;\mb{0}\leq TM\mb{\rho}_i+\mb{o}\leq\mb{1} \label{eq:gamut_min} \end{equation} In Eqn.~\ref{eq:gamut_min}, $T$ and $\mb{o}$ are respectively a $3 \times 3$ matrix and $3 \times 1$ offset vector defining the affine gamut mapping algorithm. The 3-vectors of 0s and 1s are denoted $\mb{0}$ and $\mb{1}.$ Eqn.~\ref{eq:gamut_min} is solved directly by Quadratic Programming~\cite{Gill81}. The gamut mapping shown in Fig.~\ref{fig:RC-PP}c is the result of solving Eqn.~\ref{eq:gamut_min}. Here, we make two important remarks about affine gamut mapping: 1) Gamut mapping and colour correction combined can be represented by the single affine transform: $3 \times 3$ matrix $TM$ and offset $\mb{o}$; 2) It follows that the rank-based method presented in the next section will actually solve for $TM$. The offset term can be incorporated directly in $f()$ (since an offset does not change ranking). \subsection{Rank-based estimation for colour correction} Let us denote the $k^{\text{th}}$ row of $M$ as $M_k$, and let us assume that, given two colour corrected RAWs $M_k\mb{\rho}^a$ and $M_k\mb{\rho}^b$, the rank order is the same as for the corresponding rendered RGBs: \begin{equation} {P}^a_k>{P}^b_k\;\Rightarrow\;M_k\mb{\rho}^a>M_k\mb{\rho}^b \Rightarrow\; M_k(\mb{\rho}^a-\mb{\rho}^b)>0 \end{equation} Defining the difference vector $\mb{d}^{j}=\mb{\rho}^a-\mb{\rho}^b$: \begin{equation} M_k\mb{d}^{j}>0 \label{eq:cc_3} \end{equation} where it is understood that the superscript $^j$ denotes the difference vector from the $j^\text{th}$ of $\binom{n}{2}$ pairs of $n$ image pixel values. Suppose that we have a vector $M_k$ for which Eqn.~\ref{eq:cc_3} holds; then the inequality cannot hold for $-M_k$. That is, Eqn.~\ref{eq:cc_3} defines a half-space constraint~\cite{finlayson2016rank,ComputationalGeometry}. The vector $\mb{d}^{j}$ is perpendicular to the bounding plane: any $M_k$ within 90 degrees of $\mb{d}^{j}$ is a possible solution. Given multiple difference vectors, we have multiple half-space constraints which, taken together, delimit a region in 3-space where $M_k$ must lie. Denoting the half-space as ${\cal H}(\mb{d}^j)$, $M_k$ must satisfy: \begin{equation} M_k\in \bigcap_j {\cal H}(\mb{d}^j) \label{eq:cc_4} \end{equation} Let us visualise the computation of $M_k$ using ranking. Without loss of generality let us assume that $M_{k,3}=1$. We rewrite Eqn.~\ref{eq:cc_3} as \begin{equation} M_{k,1}d_1^{j}+M_{k,2}d_2^{j}+d_3^{j}>0 \label{eq:cc_5} \end{equation} If $[a\;b\;c]$ is a solution to Eqn.~\ref{eq:cc_4}, then $[a/c\;b/c\;1]$ satisfies Eqn.~\ref{eq:cc_5}, with $M_{k,1}=a/c$ and $M_{k,2}=b/c$. Solutions for $[M_{k,1},M_{k,2}]$ lie on one side of the line, \emph{i.e}\bmvaOneDot each 3D half-space constraint maps directly to a 2D half-plane constraint.
Or, if we consider the whole set of constraints, the cone in 3D, defined by Eqn.~\ref{eq:cc_4}, maps to a 2D convex region~\cite{FINLAYSON.PAMI.96}. Denoting half-planes as ${\cal P}(\mb{d}^j)$ we, equivalently, solve for \begin{equation} [M_{k,1},M_{k,2}]\in \bigcap_j {\cal P}(\mb{d}^j) \label{eq:cc_6} \end{equation} The intersection problem of Eqn.~\ref{eq:cc_6} is easily visualised. In Fig.~\ref{fig:half_plane}a we show the intersection of 4 half-plane constraints and indicate the solution set where $M_k$ must lie. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\linewidth]{RB.pdf} \end{center} \caption{a) The intersection of 4 half-plane constraints delimits the region where $[M_{k,1},M_{k,2}]$ must lie; the black point is a feasible solution. b) On a unit sphere, each vector from the origin to a blue surface point is a probe for a possible solution (\emph{e.g}\bmvaOneDot the black arrow). All 3D points and constraints are projected onto the 2D plane $M_{k,3} = 1$.} \label{fig:half_plane} \end{figure} We solve for $M$ one sensor channel at a time. Empirically, we have to be careful not to generate too many half-planes. In our experiment, we generate half-planes from all pairs of up to 50 randomly selected unique RAWs, giving $2450$ half-planes. Due to noise or small deviations in real camera data, it is likely that no common intersection can be found that satisfies every half-plane constraint. To solve this problem, we generate 100,000 unit length vectors that are uniformly distributed on the surface of the unit sphere~\cite{SphericalSampling}, which is visualised in Fig.~\ref{fig:half_plane}b. With respect to this sampling, the maximum angular distance between any point and its nearest neighbour is less than 1.15 degrees. So, the orientation of the rows of $M$ is found to this accuracy. For each point on the sphere (\emph{i.e}\bmvaOneDot a possible row $M_k$ of $M$), we count how many half-space constraints are satisfied. The point on the unit sphere that satisfies most half-plane constraints -- or the median of multiple points if there is a tie -- defines $M_k$. To make our approach robust, we randomly select 50 colours 25 times and for each trial find the best $M_k$. Overall, we find the $M$ that places all the corresponding raw and rendered image RGBs in the most similar rank order. That is, if we plot the mapped raw red responses, for example, against the corresponding rendered JPEG red values, then the graph should be a monotonically increasing function. How well a monotonically increasing function fits our data can be used to judge the efficacy of each $M$. Ranking can only estimate $M$ up to an unknown scaling of its rows. Suppose that a rendered achromatic RGB $\mb{P}^A=[0.5\;0.5\;0.5]^\intercal$ corresponds to the raw $\mb{\rho}^A=[a\;b\;c]^\intercal$; then we apply $0.5\textbf{diag}(M\mb{\rho}^A)^{-1}$, so that $0.5\textbf{diag}(M\mb{\rho}^A)^{-1}M\mb{\rho}^A=[0.5\;0.5\;0.5]^\intercal$, where $\textbf{diag}()$ places a vector along the diagonal of a diagonal matrix. After this step, $M\leftarrow 0.5\textbf{diag}(M\mb{\rho}^A)^{-1}M$ maps achromatic colours correctly. Because $M\leftarrow DM$ (where in the example, $D=0.5\textbf{diag}(M\mb{\rho}^A)^{-1}$) we might also solve for $D$ in a least-squares sense by including all colours indexed $^i$ close to the achromatic axis: $\min_D \sum_i ||DM\underline{\rho}^i-\underline{P}^i||$ (our experiment does not include this additional step).
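For concreteness, the estimation procedure just described can be summarised in a short sketch (Python/NumPy). This is a simplified illustration of our method: the candidate directions are drawn randomly rather than from the deterministic spherical sampling of \cite{SphericalSampling}, fewer candidates are used, and the repeated-trial robustness step is omitted:
\begin{verbatim}
# Sketch of the rank-based estimation of one row M_k.
import numpy as np

def estimate_row(raw, rendered_k, n_dirs=10_000, seed=0):
    """raw: (n,3) RAW triplets; rendered_k: (n,) channel-k values."""
    rng = np.random.default_rng(seed)
    # difference vectors d^j = rho^a - rho^b, oriented so P_k^a > P_k^b
    i, j = np.triu_indices(len(raw), k=1)
    d = np.sign(rendered_k[i] - rendered_k[j])[:, None] * (raw[i] - raw[j])
    # candidate unit vectors; count satisfied half-space constraints
    u = rng.normal(size=(n_dirs, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    votes = (u @ d.T > 0).sum(axis=1)
    return u[np.argmax(votes)]   # direction satisfying most constraints

# toy usage: data synthesised from a known M and a gamma tone curve
rng = np.random.default_rng(1)
M_true = np.array([[0.9, 0.2, -0.1], [0.1, 1.1, -0.2], [0.0, 0.3, 0.7]])
raw = rng.random((50, 3))
rendered = np.clip(raw @ M_true.T, 0, 1) ** (1 / 2.2)
row0 = estimate_row(raw, rendered[:, 0])
print(row0 / row0[2])            # ~ M_true[0] / M_true[0, 2]
\end{verbatim}
On such synthetic data the voted direction recovers the corresponding row of $M$ up to the expected positive scale factor.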
\subsection{Rank-preserving optimization of tone curves} We now solve for the optimal per-channel tone curves which map colour corrected RAWs to corresponding rendered RGBs. Let us denote the $i^\text{th}$ colour corrected RAW and rendered RGB pixel pairs for the $k^\text{th}$ channel as $(\rho_{k,i},P_{k,i})$. Then, the $k^\text{th}$-channel rank-preserving tone curve $f_k()$ is optimised as a $7^\text{th}$ order monotonic and smooth polynomial function as follows: \begin{equation} \min_{f_k()}\Sigma_i ||f_k (\rho_{k,i})-P_{k,i}||^2 + \lambda \int_{t}||f_k''(t)||^2 dt \;\;\text{s.t.} \; f_k'()\ge0 \label{eq:im_3}. \end{equation} where the first term measures fidelity to the data, the second term curve smoothness, and $\lambda$ is a small weight (\emph{e.g}\bmvaOneDot$10^{-5}$). This polynomial fitting is solved by Quadratic Programming~\cite{Gill81}. In this paper, we further denote the combination of all 3-channel mappings $f_{1-3}()$ as $f()$. \subsection{Gamut correction step} As argued previously, we propose that $f(M\underline{\rho})$ has the expressive power to implement colour correction, tone correction and gamut mapping (to the first order in a Taylor expansion). However, we wish to add a further gamut mapping step for the higher order terms. But, since our hypothesis is that much of the gamut mapping will have been accounted for, we adopt a simple solution with few parameters. Further, since this additional correction is carried out at the end of the process, we adopt the form of Eqn.~\ref{eq:base_model}b. Specifically, we find a $5\times5\times5$ LUT by using lattice regression~\cite{Lattice} that minimises $\min_{LUT()}\Sigma_i ||\text{LUT}(f(M\mb{\rho}_i))-\mb{P}_i||^2$. \subsection{Rank-based recovery of raw} Suppose we wish to map rendered RGBs to RAWs. Using the method presented in Section 3.2, $M$ has already been solved in the RAW-to-JPEG forward estimation phase. Now, in a least-squares optimal way, we use the same polynomial fitting method (Eqn.~\ref{eq:im_3}) to find $f^{-1}$ by optimising $\min_{f^{-1}()} \Sigma_i ||f^{-1}(\mb{P}_i)-M\mb{\rho}_i||$. Finally, we solve for the backward $\text{LUT}()$ by optimising ${\min_{LUT()} \Sigma_i ||\text{LUT}(M^{-1}f^{-1}(\mb{P}_i))-\mb{\rho}_i ||}$ where the LUT is fitted by a $5\times5\times5$ lattice regression~\cite{Lattice}. \subsection{Parameter counting} Assuming we solve for 3 independent tone curves, our method requires 9 (for $M$) + $8 \times 3$ (for $f()$) + $125\times 3$ (for $\Gamma()$) = 408 parameters, which is significantly fewer (even an order of magnitude fewer) than \cite{chak,BrownPami12,BrownECCVLattice12}. \section{Evaluation} Our evaluation is based on the most challenging dataset that we have encountered: that of~\cite{chak}, which contains the RAW/JPEG intensity pairs of 140 colour checker patches viewed under multiple viewing conditions. Specifically, the colour chart is captured by 8 cameras (3 of which are JPEG-only) and under 16 illuminants across many different exposures. Below, we carry out the same experiment as described in \cite{chak}. We are interested in validating whether our method, with a much reduced number of parameters, can produce similar or even better results compared with \cite{chak}. We evaluate both RAW-to-JPEG and JPEG-to-RAW. The dataset~\cite{chak} captures a sort of ``worst-case'' set of viewing conditions. Normally, when we capture a picture there is a single prevailing illuminant colour.
In the dataset of Chakrabarti \emph{et al}\bmvaOneDot, all camera processing parameters are turned off and then the same reflectances are viewed under multiple coloured lights. As Forsyth observed~\cite{forsyth1990novel}, the reddest red camera response cannot be observed under a blue light. He then exploited this observation to solve for the colour of the light. Yet, in this dataset the reddest red, the greenest green and the bluest blue can all appear simultaneously in the same image. Practically, we believe the need to divine a pipeline for the all-lights, all-surfaces case means the prior art pipelines are probably more complex than they need to be. As described in \cite{chak}, for each camera, we estimate the parameters of a calibration model using different subsets of available RAW-JPEG pairs. For each subset and a selected camera, the root mean-squared error (RMSE) between the prediction and ground truth is evaluated using all available RAW-JPEG pairs. Table~\ref{table:result}a shows the RAW-to-JPEG mapping error (where pixel intensities are coded as integers in the interval $[0,255]$). In the table, {\it Prob} denotes the Chakrabarti method (with several thousand parameters) and {\it RB} the rank-based method with 408 parameters. We found that our forward errors are close to the results of \cite{chak}, especially for conditions with fewer than 3 illuminants, which are more likely to occur in the real world. Evidently, for the many-illuminant case the prior art has a small advantage. Remembering that JPEGs are coded as integers in $[0,255]$, the RMSE penalty of {\it RB} relative to {\it Prob} is typically 1 or less. Practically, when the ``fits'' are viewed visually (by looking at images) it is hard to see the difference between the two methods. For computer vision, we are more interested in the performance of JPEG-to-RAW mapping which is shown in Table~\ref{table:result}b and Table~\ref{table:result}c. In \cite{chak}, a probabilistic framework for mapping rendered RGB to raw was presented. Here we take their mean estimates as the most likely RAW predictions. We found that our method generally reduces the errors of \cite{chak} by $\sim 34\%$. Our supplementary material also includes additional experimental results compared with ``\cite{ChakBMVC} + our LUT'' for interested readers. The reader might wonder why our simple method seems to work so well going from rendered to raw (better than \cite{chak}) but not quite as well as the prior art in the forward direction (albeit visually almost indistinguishable). Our hypothesis here is that the LUT in the forward direction is applied after the tone curve. This curve (at least for dark values) has a very high slope and, consequently, the coarsely quantised $5\times5\times5$ LUT cannot capture gamut mapping well. Yet, in the reverse direction (JPEG to RAW) the LUT is applied in linear raw where a coarse uniform quantisation is more justified. \begin{table} \small \centering \begin{tabular}{lcccccccc} \toprule \textbf{a) RAW-to-JPEG}& \multicolumn{2}{c}{Uniform 8K} & \multicolumn{2}{c}{10 Exp. 1 Illu.} & \multicolumn{2}{c}{10 Exp. 2 Illu.} & \multicolumn{2}{c}{4 Exp.
4 Illu.} \\ \cmidrule{2-9} Camera & Prob & RB & Prob & RB & Prob & RB & Prob & RB\\ \cmidrule(r){1-1} \cmidrule(r){2-3} \cmidrule(r){4-5}\cmidrule(r){6-7}\cmidrule(r){8-9} Canon\_EOS\_40D & 1.84 & 2.56 & 9.79 & 10.10 & 7.53 & 4.13 & 4.06 & 5.87 \\ Canon\_G9 & 2.17 & 3.70 & 6.51 & 6.20 & 3.41 & 5.48 & 3.09 & 4.79 \\ Canon\_PowerShot\_S90 & 2.44 & 3.24 & 4.88 & 4.52 & 3.58 & 4.34 & 3.40 & 4.04 \\ Nikon\_D7000 & 1.72 & 4.03 & 8.05 & 10.03 & 3.32 & 5.39 & 26.06 & 6.48 \\ Panasonic\_DMC-LX3 & 1.65 & 3.65 & 7.33 & 8.70 & 5.25 & 4.56 & 3.05 & 7.98 \\ \midrule & \multicolumn{2}{c}{} & \multicolumn{2}{c}{8 Exp. 4 Illu.} & \multicolumn{2}{c}{4 Exp. 6 Illu.} & \multicolumn{2}{c}{8 Exp. 6 Illu.}\\ \cmidrule{4-9} Camera & & & Prob & RB & Prob & RB & Prob & RB\\ \cmidrule(r){1-1} \cmidrule(r){4-5}\cmidrule(r){6-7}\cmidrule(r){8-9} Canon\_EOS\_40D & & & 2.91 & 4.13 & 3.60 & 4.11 & 2.25 & 3.61 \\ Canon\_G9 & & & 2.79 & 5.48 & 3.12 & 4.74 & 2.77 & 4.67 \\ Canon\_PowerShot\_S90 & & & 2.95 & 4.34 & 3.27 & 3.70 & 2.75 & 3.93 \\ Nikon\_D7000 & & & 2.41 & 5.39 & 2.77 & 5.04 & 1.92 & 4.95 \\ Panasonic\_DMC-LX3 & & & 2.77 & 4.56 & 2.94 & 4.26 & 2.33 & 4.14 \\ \toprule \textbf{b) JPEG-to-RAW} & \multicolumn{2}{c}{Uniform 8K} & \multicolumn{2}{c}{10 Exp. 1 Illu.} & \multicolumn{2}{c}{10 Exp. 2 Illu.} & \multicolumn{2}{c}{4 Exp. 4 Illu.} \\ \cmidrule{2-9} Camera & Prob & RB & Prob & RB & Prob & RB & Prob & RB\\ \cmidrule(r){1-1} \cmidrule(r){2-3} \cmidrule(r){4-5}\cmidrule(r){6-7}\cmidrule(r){8-9} Canon\_EOS\_40D & 0.079 & 0.060 & 0.085 & 0.072 & 0.080 & 0.064 & 0.075 & 0.072 \\ Canon\_PowerShot\_G9 & 0.126 & 0.075 & 0.143 & 0.104 & 0.120 & 0.079 & 0.120 & 0.082 \\ Canon\_PowerShot\_S90 & 0.065 & 0.052 & 0.073 & 0.058 & 0.069 & 0.074 & 0.066 & 0.057 \\ Nikon\_D7000 & 0.143 & 0.090 & 0.543 & 0.123 & 0.140 & 0.098 & 0.229 & 0.108 \\ Panasonic\_DMC-LX3 & 0.082 & 0.058 & 0.090 & 0.072 & 0.082 & 0.063 & 0.073 & 0.071 \\ \midrule & \multicolumn{2}{c}{} & \multicolumn{2}{c}{8 Exp. 4 Illu.} & \multicolumn{2}{c}{4 Exp. 6 Illu.} & \multicolumn{2}{c}{8 Exp. 6 Illu.}\\ \cmidrule{4-9} Camera & & & Prob & RB & Prob & RB & Prob & RB\\ \cmidrule(r){1-1} \cmidrule(r){4-5}\cmidrule(r){6-7}\cmidrule(r){8-9} Canon\_EOS\_40D & & & 0.071 & 0.064 & 0.077 & 0.065 & 0.069 & 0.063 \\ Canon\_PowerShot\_G9 & & & 0.121 & 0.079 & 0.126 & 0.076 & 0.126 & 0.080 \\ Canon\_PowerShot\_S90 & & & 0.069 & 0.074 & 0.063 & 0.059 & 0.066 & 0.058 \\ Nikon\_D7000 & & & 0.144 & 0.098 & 0.147 & 0.094 & 0.143 & 0.101 \\ Panasonic\_DMC-LX3 & & & 0.077 & 0.063 & 0.074 & 0.060 & 0.077 & 0.064 \\ \bottomrule \multicolumn{5}{l}{\textbf{c) Uniform 8K (JPEG-Only Camera Test)} } & \multicolumn{2}{c}{RAW-to-JPEG} & \multicolumn{2}{c}{JPEG-to-RAW} \\ \cmidrule{6-9} Camera &\multicolumn{4}{l}{Raw Proxy} & Prob & RB & Prob & RB\\ \cmidrule(r){1-1} \cmidrule(r){2-5} \cmidrule(r){6-7}\cmidrule(r){8-9} FUJIFILM\_J10 & \multicolumn{4}{l}{Panasonic\_DMC-LX3} & 10.43 & 11.51 & 0.279 & 0.077\\ Galaxy\_S\_III &\multicolumn{4}{l}{Nikon\_D7000} & 11.34 & 13.13 & 0.114 & 0.074 \\ Panasonic\_DMC\_LZ8 & \multicolumn{4}{l}{Canon\_PowerShot\_G9} & 8.85 & 12.23 & 0.146 & 0.085 \\ \bottomrule \end{tabular} \caption{RMSE between ground truth and prediction for bidirectional RAW and JPEG conversions: Prob denotes \cite{chak} and RB is our rank-based method. ``Exp.'' and ``Illu.'' are respectively short for ``Exposure'' and ``Illuminant''. 
``Raw Proxy'' is the camera used to capture raw data for those cameras which do not support raw image capture.} \label{table:result} \end{table} \section{Calibration with small numbers of parameters} We wished to visually validate our claim that we can calibrate with few parameters. We took 4 RAW+JPEG pairs (for different cameras) from~\cite{ChakBMVC} and uniformly selected 140 corresponding pixels from each RAW and JPEG. We then solved for all parameters in our rank-based method and applied the resulting model to the rest of the image. The result of this experiment for 4 images (JPEG-to-RAW) is shown in Fig.~\ref{fig:one-shot}. \begin{figure}[htb] \begin{center} \includegraphics[width=\linewidth]{vis.pdf} \end{center} \caption{Visualisation of one-shot radiometric calibration through a simulated 140-patch colour checker, shown at the top-right corner of each Rendered JPEG image. The error maps in the 4$^\text{th}$ and 5$^\text{th}$ columns respectively visualise the per-pixel RMSE for our rank-based method with and without the gamut mapping LUT. The RMSE of each whole image is shown at the top-right corner of each error map. All raw images are shown with a 0.5 gamma.} \label{fig:one-shot} \vspace{-20pt} \end{figure} \section{Conclusion} In this paper we have shown how the rank order of image responses is a powerful tool for solving for the individual steps in a camera processing pipeline (colour correction, gamut and tone mapping). A simple ranking argument, relating colour corrected RAWs to corresponding rendered RGBs, suffices to solve for the colour correction matrix. Then, the rank-preserving tone map is found and, finally, a simple gamut correction step is derived. Compared with the prior art, our rank-based method requires the fewest assumptions and delivers state-of-the-art radiometric calibration results.
\section{Introduction} In thin disc accretion theory, the constraints of angular momentum and mass conservation may be combined into a single evolutionary equation for the disc surface density, a classic result first emphasised and discussed at length by Lynden-Bell \& Pringle (1974) (see also the review of Pringle 1981). In its original implementation, the equation takes the form of a diffusion equation, with a diffusion coefficient proportional to an {\em ad hoc} turbulent viscosity. Balbus \& Papaloizou (1999) later showed how the same evolution equation emerges without the need to introduce an explicit viscosity. By writing the velocity field as a sum of a mean plus a fluctuation (with vanishing mean), an effective diffusion coefficient emerges which is proportional to the correlation in the radial and azimuthal velocity fluctuations\footnote{The presence of a magnetic field can be incorporated into this formalism, with the product of the radial and azimuthal Alfven velocities subtracted from the kinetic velocity fluctuations in the diffusion coefficient (resulting in an {additional} {\em positive} stress).}. The evolutionary equation has heretofore been used in the regime of Newtonian gravity (e.g. Pringle 1981). Solutions of the equation show that matter in accretion discs drifts inward, while angular momentum is transported outward, sustained by a vanishingly small mass fraction of the disc. The extension of the evolutionary equation to include general relativistic gravity has not yet been done, and it is not without interest. It is the purpose of this paper to derive and analyse the general relativisitic version of the thin disc evolutionary equation. We present a very general global asymptotic analysis (assuming small stress and/or rapid modal time scales), which can be equally well applied in the Newtonian limit. The inner regions of neutron star and black hole discs are dynamically complex. It would be naive to apply simple thin disc dynamics uncritically. The formal thin disc problem is nevertheless quite interesting, first as an illustration of how the diffusion dynamics breaks down at the innermost stable circular orbit (ISCO) of the disc, second of how the diffusion equation extends to ISCO-free Kerr orbits in general relativity, and third as a useful analytical tool for understanding numerical simulations. It is especially noteworthy that while the effective diffusion coefficient of the disc equation becomes singular at the ISCO, the solution is nevertheless mathematically well-behaved. The global normal modes include exponentially growing modes confined to the zone within the ISCO, which completely disrupt the interior disc structure, leaving the outer disc intact. This is in accord with numerical simulations. The plan of this paper is as follows. In \S 2 we first derive the form of the disc evolution equation that follows from the conservation of particle number and the azimuthal component of the stress energy tensor. This reduces to the Lynden-Bell---Pringle (1974) equation in the Newtonian limit. A solution of the general equation is presented in \S3 for modes with exponential time dependence, using WKB, local analysis, and matched asymptotic expansions. The cases of both finite and vanishing stress at the location of the ISCO are presented, and we argue that thin discs will evolve to a state at which the vanishing stress boundary condition is achieved. We use the modal solutions to construct a general Green's function solution. 
Finally, in \S4 we summarise the presentation. The scope of this paper is to present a mathematical treatment of the equation. Astrophysical applications will be explored in a separate study. We observe the following conventions. {\em The speed of light is set to unity throughout this work.} Greek indices $\alpha, \beta, \gamma...$ generally denote spacetime coordinates. The exception is $\phi$, which is always the azimuthal angular coordinate. The time coordinate is labelled $0$. The metric in local inertial coordinates is $ g_{\alpha\beta} \rightarrow \eta_{\alpha\beta} = {\rm diag\ }(-1,1,1,1). $ Other notation is standard: $G$ is the gravitational constant, $M$ the central black hole mass, and $r_g=GM$ the gravitational radius. \section{Fundamental equations} \subsection{Conserved fluxes} The two conserved quantities of interest are the particle number current $nU^\mu$, where $n$ is the rest frame number density and $U^\mu$ the contravariant 4-velocity, and the azimuthal component of the stress energy tensor $T^\mu_{\ \phi}$. Following Page \& Thorne (1974), we will work in ``cylindrical Boyer-Lindquist'' $r, \phi, z$ coordinates in the Kerr metric, ultimately using the equations in their height-integrated form. This involves ignoring the higher order curvature terms of order $z^2/r^2$ near the equatorial plane. We assume that the disc is axisymmetric and thin. The conservation equation of particle number before height integration is simply \begin{equation} (nU^\mu)_{;\mu} = 0, \end{equation} where the semi-colon denotes a covariant derivative. If we now integrate over $z$, and assume that the velocities are independent of height, the remaining 4-velocity components are $U^0, U^r$, and $U^\phi$. With rest mass per particle $m$, and $\Sigma$ the integrated column density $$ \Sigma =m\int n\, dz, $$ the particle conservation equation for an axisymmetric disc is \begin{equation}\label{partcon} {1\over \sqrt{g}}\partial_\mu \left( \sqrt{g}\,\Sigma U^\mu\right) = U^0 \partial_t\Sigma +{1\over \sqrt{g}}\partial_r \left( \sqrt{g}\,\Sigma U^r\right) = 0 \end{equation} where $g=|\rm{det}\,g_{\mu\nu}|$, the absolute value of the metric tensor determinant. The $\phi$ equation for conservation of the stress-energy tensor is $T^\mu_{\ \phi;\mu}=0$, with $T_{\mu\nu}$ taking the form of an ideal fluid plus a contribution from the radiation field, denoted $\tau_{\mu\nu}$: \begin{equation} T_{\mu\nu} = g_{\mu\nu} P +(\rho+P) U_\mu U_\nu +\tau_{\mu\nu}. \end{equation} The radiation stress $\tau_{\mu\nu}$ is given by \begin{equation} \tau_{\mu\nu} = q_\mu U_\nu + q_\nu U_\mu, \end{equation} where $q_\mu$ is the radiative energy flow vector, which satisfies $q_\mu U^\mu = 0$ (Page \& Thorne 1974). We ignore (for the moment) the contribution of any additional stress tensor that may be present. The angular momentum carried by radiated photons is not negligible when rotational velocities are of order the speed of light (Novikov \& Thorne 1973). Then, assuming axisymmetry $\partial_\phi = 0$, and defining \begin{equation} \sigma_{\mu\nu} = (\rho + P)U_\mu U_\nu + \tau_{\mu\nu}, \end{equation} the conservation equation becomes \begin{equation}\label{three} 0={1\over \sqrt{g}}\partial_\mu(\sqrt{g} \sigma^\mu_{\ \phi})-\Gamma^\lambda_{\mu\phi}\sigma^\mu_{\ \lambda}.
\end{equation} Here $\rho$ is the rest energy density (including in principle a thermal contribution, which is however ignored in the thin disc limit), $P$ is the thermal pressure (which shall likewise be ignored), and $\Gamma^\lambda_{\mu\phi}$ is the affine connection. For axisymmetric $\partial_\phi=0$ metrics this is: \begin{equation}\label{gam} \Gamma^\lambda_{\mu\phi}= {1\over 2}g^{\lambda\alpha}( \partial_\mu g_{\alpha\phi}- \partial_\alpha g_{\mu\phi}). \end{equation} Therefore, for {\em any} symmetric tensor $\sigma^{\mu\nu}$, the combination $\Gamma^\lambda_{\mu\phi}\sigma_{\ \lambda}^\mu$ is \begin{equation}\label{gammi} {1\over 2}g^{\lambda\alpha}( \partial_\mu g_{\alpha\phi}- \partial_\alpha g_{\mu\phi})\sigma_{\ \lambda}^\mu= {1\over 2}( \partial_\mu g_{\alpha\phi}- \partial_\alpha g_{\mu\phi})\sigma^{\mu\alpha} =0, \end{equation} since the metric derivatives are antisymmetric in $\alpha$ and $\mu$ while $\sigma^{\mu\alpha}$ is symmetric\footnote{Knowledgeable readers will recognise in equation (\ref{gammi}) a Killing vector calculation. I thank C.\ Gammie for drawing my attention to this point.}. By contrast, $g_{\mu\nu}$ is {\em not} independent of $r$, so that \begin{equation}\label{req} \Gamma^{\lambda}_{\mu r}\sigma^{\mu}_{\ \lambda} = {1\over 2}\sigma^{\alpha\mu}\partial_rg_{\alpha\mu}, \end{equation} a result we use below. Equation (\ref{three}) now reduces to: \begin{equation}\label{am0} {1\over \sqrt{g}}\partial_\mu(\sqrt{g} \sigma^\mu_{\ \phi})=0. \end{equation} The disc turbulence is represented by writing the 4-velocity $U^\mu$ as a mean flow $\bar U^\mu$ plus a fluctuation $\delta U^\mu$ with vanishing mean, $\overline{\delta U^\mu}=0$. In particular, \begin{equation} \overline{U^r U_\phi} = \bar U^r \bar U_\phi + \overline{ \delta U^r\, \delta U_\phi} \equiv \bar U^r \bar U_\phi +\W. \end{equation} The asymptotic scalings of the fluctuations satisfy: \begin{equation} \delta U_\phi \ll \bar U_\phi, \quad \bar U^r \ll \delta U^r \sim \delta U_\phi/r \ll r \bar U^\phi, \end{equation} i.e., the orbital velocity and angular momentum are much larger than their associated fluctuations, and the inward mean radial drift velocity is yet an asymptotic order smaller than the fluctuations in either the radial velocity or orbital velocity. The two $\delta U$ fluctuations are assumed to be of comparable order, suitably dimensionalised. In common with Newtonian theory (Balbus \& Papaloizou 1999), we expect $\bar U^r \bar U_\phi$ (the product of a zeroth order rotational velocity and a second order radial drift) to be of the same asymptotic order as $\W$ (the product of two first order fluctuations). As always, it is important to distinguish contravariant $U^\phi$ (angular 4-velocity) from covariant $U_\phi$ (angular 4-momentum): $$ U_\phi = g_{\phi 0} U^0 + g_{\phi\phi}U^\phi = g_{\phi 0} {dt\over d\tau}+ g_{\phi\phi}{d\phi\over d\tau}, $$ where we have ignored $U^r$ as negligibly small. \subsection{``Stress by strain'' and radiation} \subsubsection{Equilibrium models} For the equilibrium models under consideration, Page \& Thorne (1974) present a relationship between the disc shear, a viscosity-like tensor coupling, and the energy radiated from its surface. 
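The vanishing of $\Gamma^\lambda_{\mu\phi}\sigma^\mu_{\ \lambda}$ for any symmetric $\sigma^{\mu\nu}$ is easy to confirm symbolically for an explicit metric. The sketch below (Python with SymPy, an assumption of this aside; the Schwarzschild metric is chosen purely as the simplest explicit axisymmetric example) builds the full affine connection and contracts it with a generic symmetric tensor:
\begin{verbatim}
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]

# Schwarzschild metric, g = diag(-f, 1/f, r^2, r^2 sin^2 theta), c = G = 1
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

def Gamma(lam, mu, nu):
    # Full affine connection; the d/dphi term vanishes by axisymmetry,
    # so for nu = phi this reproduces the reduced form used in the text
    return sum(ginv[lam, a]*(sp.diff(g[a, mu], x[nu])
               + sp.diff(g[a, nu], x[mu])
               - sp.diff(g[mu, nu], x[a])) for a in range(4))/2

# A generic symmetric tensor sigma^{mu nu}
s = sp.Matrix(4, 4, lambda i, j: sp.Symbol('s%d%d' % (min(i, j), max(i, j))))

# Gamma^lam_{mu phi} sigma^mu_lam, with sigma^mu_lam = sigma^{mu a} g_{a lam}
expr = sum(Gamma(lam, mu, 3)*s[mu, a]*g[a, lam]
           for lam in range(4) for mu in range(4) for a in range(4))
print(sp.simplify(sp.expand(expr)))   # prints 0, the cancellation in the text
\end{verbatim}
Only axisymmetry is used in the cancellation, so the same result holds (more laboriously) for the Kerr metric. We now return to the shear--radiation relation of Page \& Thorne (1974) for equilibrium models.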
In our notation, this relation reads: \begin{equation}\label{10b} -\Sigma \W\bar U^0{d\Omega\over dr} = 2{\cal F}, \end{equation} where \begin{equation} \Omega = {d\phi\over dt}={d\phi\over d\tau}{d\tau\over dt}={\bar U^\phi\over \bar U^0} \end{equation} is the angular velocity measured by an observer at infinity, and ${\cal F}$ is the radiated energy flux in the local rest frame. In essence, this states that the energy extracted from differential rotation and put into turbulent fluctuations is locally radiated away at the same rate. We will make use of this relation in \S 2.3 below; it holds in our case as well because the thermal timescale is assumed to be short compared with the evolutionary timescale. It is of some technical interest to revisit this important relationship in more detail in an out-of-equilibrium context, which we do in the following section (see also Balbus \& Hawley 1998, Balbus \& Papaloizou 1999). The reader willing to adopt equation (\ref{10b}) directly may wish to proceed to \S 2.3 below, without loss of continuity. \subsubsection{Free energy from shear} The radial $T^\mu_{\ r}$ conservation equation is given, with the help of equation (\ref{req}) and particle number conservation, by \begin{equation}\label{10} {\delta U^r\over \sqrt{g}}\partial_\mu(\sqrt{g}\rho U^\mu U_r+\sqrt{g}\tau^\mu_{\ r}) -{\rho \delta U^r\over 2} U^\alpha U^\mu\partial_r g_{\alpha \mu} = 0. \end{equation} We have multiplied by $\delta U^r$ with the aim of assembling a fluctuation energy equation; the radial component $\tau^\mu_{\ r}$ of the radiation stress is small, but retained here to maintain a covariant formulation. Next, we write the $U$ velocities as a mean $\bar U$ plus fluctuating $\delta U$. The largest contributions from the final term of equation (\ref{10}) comprise the equilibrium solution and are not of interest; they cancel out. Retaining the next largest group of terms, our equation becomes \begin{equation}\label{12} {\delta U^r\over \sqrt{g}}\partial_\mu[\sqrt{g}\rho U^\mu (\bar U_r+\delta U_r)+\sqrt{g}\tau^\mu_{\ r}] -{\rho \delta U^r} \delta U^\alpha \bar U^\mu\partial_r g_{\alpha \mu} = 0, \end{equation} where, in the final term, we have used $g_{\alpha\mu}=g_{\mu\alpha}$ symmetry. It is convenient for now to retain $U^\mu$ in the $\partial_\mu$ divergence term without separating its mean and fluctuation. (The radiation stress $\tau^\mu_{\ \nu}$ will likewise contain a fluctuating $\delta U$ component, which is not shown explicitly.) The term involving $\bar U_r$ is an asymptotic order smaller than the others, and may be dropped. We arrive at: \begin{equation} {\delta U^r\over \sqrt{g}}\partial_\mu(\sqrt{g}\rho U^\mu \delta U_r+\sqrt{g}\tau^\mu_{\ r}) -{\rho \delta U^r} \delta U^\alpha \bar U^\mu\partial_r g_{\alpha \mu} = 0. \end{equation} Next, the equation for $T^\mu_{\ \phi}$ (angular momentum conservation) is \begin{equation}\label{13} {\delta U^\phi\over \sqrt{g}}\partial_\mu[\sqrt{g}\rho U^\mu (\bar U_\phi +\delta U_\phi)+\sqrt{g}\tau^\mu_{\ \phi}] = 0. \end{equation} Using particle number conservation and remembering that $\bar U_\phi$ depends only upon $r$, this becomes \begin{equation}\label{133} {\delta U^\phi\over \sqrt{g}}\partial_\mu[\sqrt{g}(\rho U^\mu\delta U_\phi+\tau^\mu_{\ \phi})] +\rho \delta U^\phi \delta U^r\partial_r\bar U_\phi =0. 
\end{equation} Following the thin disc models of Novikov \& Thorne (1973), the radiation flux $\tau^\mu_{\ \phi}$ is assumed to be dominated by its vertical $\tau^z_{\ \phi}$ component, and in particular by the $q^z U_\phi$ term. We rewrite the last term to obtain: \begin{equation}\label{14} {\delta U^\phi\over \sqrt{g}}\partial_\mu[\sqrt{g}(\rho U^\mu\delta U_\phi+\tau^\mu_{\ \phi})] +\rho \delta U^\phi \delta U^r\partial_r (g_{\phi\alpha}\bar U^\alpha) =0. \end{equation} The $T^\mu_{\ 0}$ equation is handled similarly: \begin{equation} \delta U^0{1\over \sqrt{g}}\partial_\mu\sqrt{g}[\rho U^\mu(\bar U_0+\delta U_0)+\tau^\mu_{\ 0}]=0, \end{equation} which expands to $$ \delta U^0{1\over \sqrt{g}}\partial_\mu\sqrt{g}(\rho U^\mu\delta U_0+\tau^\mu_{\ 0} ) +\rho\delta U^0\delta U^r\partial_r\bar U_0 =0, $$ or \begin{equation}\label{16} \delta U^0{1\over \sqrt{g}}\partial_\mu\sqrt{g}(\rho U^\mu\delta U_0+\tau^\mu_{\ 0}) +\rho\delta U^0\delta U^r\partial_r(g_{\alpha 0}\bar U^\alpha) =0. \end{equation} The final $T^\mu_{\ z}$ equation is straightforward, as there is by assumption no mean $z$ flow: \begin{equation}\label{20} \delta U^z {1\over \sqrt{g}}\partial_\mu\sqrt{g}(\rho U^\mu\delta U_z+\tau^\mu_{\ z}) =0. \end{equation} We now sum over equations (\ref{12}), (\ref{14}), (\ref{16}) and (\ref{20}) to obtain, after some algebra and index shifting: \begin{equation}\label{sum} {1\over \sqrt{g}} \delta U^\nu\partial_\mu[\sqrt{g}( \rho U^\mu \delta U_\nu +\tau^\mu_{\ \nu})]= -\rho \delta U^r(\delta U_0\partial_r\bar U^0 +\delta U_\phi\partial_r\bar U^\phi). \end{equation} The final step is to use $\delta ( U^\mu U_\mu) = 0$, which gives $$ \delta U_0 = -{\bar U^\phi \over \bar U^0} \delta U_\phi. $$ Using this in (\ref{sum}), averaging $\delta U^r \delta U_\phi$ to form $\W$ and collecting terms, we obtain \begin{equation}\label{fluc} {1\over \sqrt{g}}\delta U^\nu\partial_\mu[\sqrt{g}( \rho U^\mu \delta U_\nu + \tau^\mu_{\ \nu})]= -\rho \W \bar U^0\Omega' , \end{equation} where \begin{equation} \Omega' = \partial_r(\bar U^\phi/\bar U^0) \end{equation} is just the relativistic analogue of the shear gradient (e.g. Page \& Thorne 1974). Equation (\ref{fluc}) is a relationship for the rate at which stress extracts energy from the shear, involving via $\W$ the first-order correlated velocities that are residual fluctuations from circular motion. As written, however, there appears at first to be a gross mismatch: the left side of the equation is smaller than the right by a factor of order the ratio of the drift velocity to the rotation velocity. But this assumes that the length scales associated with the gradients on either side of the equation are comparable. Because the extracted free energy is in fact locally {\em dissipated}, and dissipation is dominated by the smallest scales, the gradient length scale on the left side is, in effect, tiny, and this is what allows the left side to balance the input from the right. The analysis of nonrelativistic discs presented in Balbus \& Hawley (1998) shows that when explicit dissipation terms are included in the energy fluctuation equation from the start, the balance struck is between the right side of equation (\ref{fluc}) and explicit viscous (or resistive) dissipation. These energy loss terms are unimportant for large scale transport (and can be ignored for this purpose), but they represent the ``thermal processor'' between the extracted large-scale mechanical free energy and the disc's radiative energy losses. 
As we have already noted, the thermal timescale over which this occurs is assumed to be short compared with the evolutionary timescale of the disc. This implies that the height-integrated, volume specific source term on the right side of (\ref{fluc}) satisfies \begin{equation}\label{strad} -\Sigma \W \bar U^0\Omega' = -\Sigma \W \bar U^\phi (\ln \Omega)'=2{\cal F}, \end{equation} i.e., the total energy extracted over the local disc thickness is equal to the energy radiated through the upper and lower surfaces. \subsection{Large scale evolution} Henceforth, we drop the bars on the $\bar U$ 4-velocities, and take these non-$\delta$ quantities to be understood as time-averaged means. If we integrate over height, assume axisymmetry, and ignore the pressure contributions, the equation of angular momentum conservation (\ref{am0}) now expands to: \begin{equation}\label{25b} 0= U^0U_\phi\partial_t\Sigma +{1\over\sqrt{g}}\partial_r \left[{\sqrt{g}}\Sigma\left(U^rU_\phi +\W\right)\right] +2U_\phi {\cal F}. \end{equation} In the first term of (\ref{25b}), we have assumed that $U^0$ and $U_\phi$ are prescribed functions of $r$ only. The final term is obtained by integrating $\partial_z \tau^z_{\ \phi}$ over height, which is now the {\em angular momentum} radiated from each side of the disc (Page \& Thorne 1974). Using equation (\ref{partcon}) for $U^0\partial_t\Sigma$ and simplifying, we obtain: \begin{equation}\label{mdot} U'_\phi \sqrt{g}\Sigma U^r +\partial_r\left( \sqrt{g} \Sigma \W\right) +2\sqrt{g}{\cal F}{U_\phi}=0, \end{equation} where $U'_{\phi}=dU_\phi/dr$. Substituting (\ref{mdot}) back into equation (\ref{partcon}), we find: \begin{equation}\label{thex} {\partial\Sigma\over \partial t} = {1\over \sqrt{g}U^0}{\partial\ \over \partial r}{1\over U'_\phi} \left[ {\partial\ \over \partial r}\left(\sqrt{g}\Sigma \W\right)+ 2\sqrt{g}{\cal F}{U_\phi} \right]. \end{equation} The final step is to use equation (\ref{10b}) for ${\cal F}$. With \begin{equation}\label{yy} Y\equiv \sqrt{g}\Sigma \W, \end{equation} this brings us to our governing equation: \begin{equation}\label{fund} {\partial Y\over \partial t} = {\W\over U^0}{\partial\ \over \partial r}{1\over U'_\phi} \left[ {\partial Y \over \partial r}- U_\phi U^\phi (\ln\Omega)' Y \right]. \end{equation} This is the equation we have been seeking. The first term on the right in square brackets is a straight translation from the Newtonian equation, while the second term is a relativistic correction stemming from the photon angular momentum. A final point. We have been assuming that $\W$ is a specified function of $r$. If, however, $\W$ has functional dependence upon $\Sigma$, then $\W$ would be implicitly time-dependent. In that case, equation (\ref{fund}) should be modified to: \begin{equation}\label{fund2} {\partial (Y/\W)\over \partial t} = {1\over U^0}{\partial\ \over \partial r}{1\over U'_\phi} \left[ {\partial Y \over \partial r}- U_\phi U^\phi (\ln\Omega)' Y \right], \end{equation} a form that holds more generally. \section{Solution of the evolutionary equation} \subsection{Preliminaries} Let us introduce a more compact formulation. Define $Q$ by \begin{equation} {dQ\over dr} = - U_\phi U^\phi (\ln\Omega)'. \end{equation} Equation (\ref{fund}) becomes \begin{equation}\label{27q} {\partial (Ye^Q)\over \partial t} = {e^Q \W\over U^0}{\partial\ \over \partial r}{e^{-Q}\over U'_\phi} \left[ {\partial (Ye^Q) \over \partial r}\right]. 
\end{equation} Next, with \begin{equation}\label{33b} dH\equiv e^QU'_\phi dr, \quad \zeta =Ye^Q, \end{equation} our governing equation takes the form of a pure diffusion equation \begin{equation}\label{pure} {\partial \zeta \over \partial t} = {e^{2Q} \W U'_\phi \over U^0}\ {\partial^2\zeta\over \partial H^2}. \end{equation} This has a steady-state solution of $\zeta\propto H$. Equations (\ref{mdot}) and (\ref{strad}) together imply \begin{equation} {d\zeta\over dH} =-{\sqrt g\Sigma U^r}\equiv {\dot m\over 2\pi}, \quad {\rm or}\quad \zeta = {\dot m H\over 2\pi}, \end{equation} where $\dot m$ is the time-steady accretion rate and $H$ contains an additive constant boundary condition embodying the vanishing stress location (conventionally the ISCO radius). This is, in fact, the Novikov \& Thorne (1973) solution in its entirety! The reader may wish to verify this for the relatively simple Schwarzschild limit with $\Omega^2=r_g/r^3$ and $$ U^0=e^{-Q}=(1-3r_g/r)^{-1/2}, \quad U'_\phi = {\Omega\over 2} {r-6r_g\over (1-3r_g/r)^{3/2}}. $$ \subsection{Modal solution} \subsubsection{Global WKB} We seek time-dependent solutions of the form $e^{st}$. Then equation (\ref{pure}) becomes \begin{equation}\label{U2} {d^2\zeta\over dH^2} = {s e^{-2Q} U^0\over \W U'_\phi }\zeta. \end{equation} When $\W$ is small (a not unphysical choice) or $s$ sufficiently large, equation (\ref{U2}) has the formal (unnormalised) WKB solution (Bender \& Orszag 1978): \begin{equation}\label{wkb1} \zeta = \left(e^{2Q} \W U'_\phi \over U^0\right)^{1/4}\exp\left[ \pm \int \left( s e^{-2Q} U^0\over \W U'_\phi\right)^{1/2}dH\right]. \end{equation} When $s/U'_\phi<0$ we should of course interpret this in terms of trigonometric functions. Returning to $r$ in preference to $H$, and $Y$ in preference to $\zeta$, we obtain \begin{equation}\label{wkb2} Y = e^{-Q/2} \left(\W U'_\phi \over U^0\right)^{1/4}\exp\left[ \pm \int^r \left( s U^0U'_\phi \over \W \right)^{1/2}dr \right]. \end{equation} Consider first an unstable mode, $s>0$. At the ISCO location $r=r_I$, our WKB solution formally breaks down, but as we shall see, it is still valid rather close to it. Let $x=r-r_I$, so that positive and negative $x$ define regions of stable and unstable circular orbits, with $U'_\phi >0$ and $U'_\phi<0$ respectively. On physical grounds we certainly should not expect much of a disc-like structure to prevail for $x<0$, but it is of interest to see how the equation discovers this on its own. For $x>0$, we have $U'_\phi >0$ and the solution that is well-behaved as $x\rightarrow \infty$ takes the form \begin{equation}\label{wkb3} Y = \left(\W U'_\phi \over e^{2Q}U^0\right)^{1/4}\exp\left[ - \int^r_{r_I} \left( s U^0U'_\phi \over \W\right)^{1/2}dr \right]. \end{equation} (We have chosen for later convenience the lower limit of integration to be $r_I$. For a convergent integral, this simply amounts to setting the normalisation factor.) In the WKB limit, this is a sharply cut-off function in the bulk of the disc $x>0$. For $x<0$, $U'_\phi<0$ and we may write down a formal solution: \begin{equation}\label{wkb4} Y = A \left(\W U'_\phi \over e^{2Q}U^0\right)^{1/4}\,\,\, \sin \left[ \int^r_{r_I} \left( -s U^0U'_\phi \over \W\right)^{1/2}dr \,+\, \Phi\right]. \end{equation} Here, the amplitude $A$ and phase $\Phi$ are determined by the requirement that the $x<0$ solution join smoothly onto the exponentially cut-off solution for $x>0$. 
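The Schwarzschild verification suggested above is readily carried out with a computer algebra system. A minimal SymPy sketch follows (it assumes only the standard circular-geodesic results for Schwarzschild, writes $M$ for $r_g=GM$, and sets the additive constant in $Q$ to zero):
\begin{verbatim}
import sympy as sp

r, M = sp.symbols('r M', positive=True)   # M plays the role of r_g = GM

Omega = sp.sqrt(M/r**3)                   # circular-orbit angular velocity
U0 = 1/sp.sqrt(1 - 3*M/r)                 # U^0 from U^mu U_mu = -1
Uphi = r**2*Omega*U0                      # U_phi = g_{phi phi} U^phi

# Check the quoted U'_phi = (Omega/2)(r - 6M)/(1 - 3M/r)^{3/2}
quoted = (Omega/2)*(r - 6*M)/(1 - 3*M/r)**sp.Rational(3, 2)
print(sp.simplify(sp.diff(Uphi, r) - quoted))   # 0

# Check e^{-Q} = U^0, with dQ/dr = -U_phi U^phi (ln Omega)'
dQdr = -Uphi*(Omega*U0)*sp.diff(sp.log(Omega), r)
Q = sp.integrate(dQdr, r)                 # integration constant set to zero
print(sp.simplify(sp.exp(-Q) - U0))             # 0
\end{verbatim}
With these expressions checked, we return to the behaviour of the solutions near the ISCO.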
The matching of the $x<0$ and $x>0$ WKB solutions is already enough to show that unstable modes have significant amplitudes only inside the ISCO, a physically very sensible result. For stable ($s<0$) solutions, it is clear from our general solution (\ref{wkb2}) that the exponential cut-off behaviour now occurs inside the ISCO, $x<0$, while for $x>0$, the bulk of the disc hosts a spectrum of spatially-oscillatory, temporally-decaying modes. \subsubsection{Local ISCO structure for nonvanishing ${\W}$} The ISCO is an apparent singularity of our equation, which can, however, be treated rigorously. In the local vicinity of the ISCO $r=r_I$ equation (\ref{27q}) may be written \begin{equation} sY = \left(W^r_{\ \phi}\over U^0U''_\phi\right)_{\!I}{d\over dx}\left( {1\over x}{dY\over dx}\right), \end{equation} where $x=r-r_I$, $U'_\phi(r)=U''_\phi (r_I)x$, and the notation $()_I$ means that $W^r_{\ \phi}$, $U^0$, and $U''_\phi$ are all evaluated at the ISCO $r=r_I$. Note that the $Q$ term is subdominant and has actually disappeared from the local ISCO-centred equation. (It will reappear as part of a locally determined normalisation factor.) We shall first assume that $\W(r_I)$ does not vanish, with the ultimate intent of showing the opposite: on physical grounds, it must vanish in a thin disc. With finite $\W(r_I)$, the (unnormalised) solution to this equation for $s>0$ is \begin{equation}\label{ai} Y={\rm Ai}'(k x), \quad k\equiv \left(sU^0 U''_\phi\over W^r_{\ \phi}\right)^{1/3}_{\!I}, \end{equation} where ${\rm Ai}'$ is the derivative of the Airy function. As in our WKB solution (\ref{wkb3}), positive values of the argument correspond to exponentially cut-off behaviour (the solution not chosen, Bi$'$, rises exponentially), whereas negative values correspond to oscillatory behaviour. (See Figure 1.) The ``dispersion relation'' we have found, the $k$-definition of equation (\ref{ai}), may be written \begin{equation}\label{disp} s= \left( W^r_{\ \phi}\over U^0U''_\phi\right)_I k^3, \end{equation} and exhibits violent instabilities on the smallest scales. This is a compelling reason to seek physically viable solutions with the ISCO boundary condition $W^r_{\ \phi}=0$. Before we do, however, we note a point of some mathematical consequence. The WKB solution (\ref{wkb1}) depends upon large $|sU^0U'_\phi /W^r_{\ \phi}|$ for its validity, whereas the local solution merely requires $x\ll r_I$. {\it These are not mutually exclusive restrictions.} There is no reason why they both cannot be valid in an overlapping domain. In this shared asymptotic regime, the two solutions must take one and the same form. To verify this is indeed so, note that the large argument expansion of the Ai$'$ function is (up to an overall normalisation): \begin{equation}\label{41a} {\rm Ai}'(kx)\rightarrow x^{1/4} \exp\left[-{2\over 3}(kx)^{3/2}\right], \end{equation} which is exactly the same form as equation (\ref{wkb3}) in the limit $r\rightarrow r_I$, $U'_\phi\rightarrow U''_\phi x$ (once again up to an overall normalisation): $$ \left( W^r_{\ \phi}\ U'_\phi\over e^{2Q} U^0\right)^{1/4} \exp \left[ -\int_0^x\left( sU^0U'_\phi \over W^r_{\ \phi } \right)^{1/2}\, dx\right]\rightarrow $$ \begin{equation}\label{42b} {\rm constant}\ \times\ x^{1/4}\exp\left[-{2\over 3}(k x)^{3/2}\right]. \end{equation} Equations (\ref{41a}) and (\ref{42b}) have exactly the same functional form, as was sought. 
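As a quick numerical aside, the large-argument form (\ref{41a}) can be checked directly (Python with SciPy; the sample points are arbitrary, and the factor $-1/2\sqrt{\pi}$ is the standard normalisation that the text absorbs into an overall constant):
\begin{verbatim}
import numpy as np
from scipy.special import airy

for z in [2.0, 4.0, 8.0, 16.0]:
    aip = airy(z)[1]      # scipy's airy returns (Ai, Ai', Bi, Bi')
    asym = -z**0.25*np.exp(-2.0*z**1.5/3.0)/(2.0*np.sqrt(np.pi))
    print(z, aip/asym)    # the ratio tends to 1 as z grows
\end{verbatim}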
\subsubsection{A uniformly valid solution} The agreement between the two solutions in an overlapping asymptotic zone suggests the possibility that there may be a single analytic formula that is valid everywhere. Such a solution is known to exist for a certain class of ``one-turning-point problems,'' in quantum mechanical solutions of the Schr\"odinger equation (Bender \& Orszag 1978). Rather than derive this function, it is simplest just to write it down, and then verify that it reduces to each of our asymptotic forms in the appropriate limits. Define $X$ by \begin{equation}\label{XX} X= \left[ {3\over 2} \int_{r_I}^r \left( sU^0U'_\phi\over \W\right)^{1/2}dr\right]^{2/3}. \end{equation} Then, our (unnormalised) uniformly valid solution is \begin{equation}\label{uni} Y = e^{-Q/2}\left( \W U'_\phi\over XU^0\right)^{1/4} {\rm Ai}'(X). \end{equation} To verify this, we assume first that $s>0$. In the limit $X\gg 1$, Ai$'(X)$ has the asymptotic form $$ {X^{1/4} \over 2\sqrt{\pi}} \exp \left(-{2\over 3} X^{3/2}\right) = {X^{1/4}\over2\sqrt{\pi}}\exp \left[ -\int_{r_I}^r \left( sU^0U'_\phi \over \W \right)^{1/2}\, dr\right], $$ so the $X^{1/4}$ factors cancel in (\ref{uni}), and we are led directly to equation (\ref{wkb3}) for $Y$. Next, when $r\rightarrow r_I$ and $U'_\phi>0$, we expand $U'_\phi=xU''_\phi$, and $X$ becomes $$ X=\left[ {3\over 2} \left( sU^0U''_\phi\over \W\right)_{\!I}^{1/2}\int_0^x x^{1/2}\,dx\right]^{2/3} = kx, $$ and (\ref{uni}) then reduces to equation (\ref{ai}): $Y\sim {\rm Ai}'(kx)$, since $U'_\phi/X$ is locally equal to the constant $U''_\phi/k$. Finally, when $x<0$ away from the ISCO, then $U'_\phi<0$. Multiply $U'_\phi$ by unity, written as $-e^{i\pi}$. Then, $$ X= e^{i\pi/3}\left[ {3\over 2} \int_{r_I}^r \left(-{ sU^0U'_\phi\over \W}\right)^{1/2}dr\right]^{2/3}. $$ Switching the limits $r$ and $r_I$, this is the same as $$ X = -\left[ {3\over 2} \int_r^{r_I} \left( - sU^0U'_\phi\over \W\right)^{1/2}dr\right]^{2/3}<0, $$ i.e., $X$ is a purely real negative quantity, despite all of the complex-valued exponents and nested fractional powers. Then, use the standard large negative argument expansion of Ai$'(X)$ (Bender \& Orszag 1978): \begin{equation} {\rm Ai}'(X)\rightarrow {(-X)^{1/4}\over \sqrt{\pi}}\sin\left[ {2\over 3}(-X)^{3/2} + {\pi\over 4}\right]. \end{equation} It is easy to see that with equation (\ref{uni}), by adjusting $A$ and $\Phi$, the above leads to a precise match with equation (\ref{wkb4}). Equation (\ref{uni}) is therefore a uniformly valid solution to equation (\ref{fund}). Figure 1 shows an explicit solution for Schwarzschild geometry with $\W\propto r^{1/2}$, chosen for ease of analytics. (Recall that $\W$ correlates angular momentum and radial velocity fluctuations, so it makes some sense for it to increase slowly with $r$.) In this case, the needed integral over $(U^0U'_\phi/\W)^{1/2}$ is (see the end of \S 3.1): $$ \int {\sqrt{r'-6}\over {r'-3}}dr' =2\sqrt{r'-6} - 2\sqrt{3}\tan^{-1}\left(r'-6\over 3\right)^{1/2} $$ where $r'$ is $r/r_g$. We conclude with a final formula for the disc surface density $\Sigma(r,t)$: \begin{equation} \Sigma = e^{-Q/2}\, \left( U'_\phi\over g^2(\W)^3 XU^0\right)^{1/4} {\rm Ai}'(X)\, \exp(st), \end{equation} with $X$ given by (\ref{XX}). 
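For readers wishing to reproduce the qualitative content of Figure 1, a minimal numerical sketch follows (Python with NumPy and SciPy; the choices $r_g=1$, $s=1$ and $\W=0.05\,r^{1/2}$ are illustrative assumptions of the sketch, and the $X$ integral is evaluated by quadrature rather than with the analytic form above):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

M = 1.0                 # r_g = GM = 1
rI = 6.0*M              # Schwarzschild ISCO
s = 1.0                 # modal growth rate; sets the scale of X

U0 = lambda r: (1.0 - 3.0*M/r)**-0.5
dUphi = lambda r: 0.5*np.sqrt(M/r**3)*(r - 6.0*M)/(1.0 - 3.0*M/r)**1.5
W = lambda r: 0.05*np.sqrt(r)           # W^r_phi, arbitrary amplitude

def X_of_r(r):
    # Equation (XX): real, negative for r < r_I, positive for r > r_I
    f = lambda u: np.sqrt(abs(s*U0(u)*dUphi(u)/W(u)))
    if r >= rI:
        return (1.5*quad(f, rI, r)[0])**(2.0/3.0)
    return -(1.5*quad(f, r, rI)[0])**(2.0/3.0)

def Y_of_r(r):
    # The uniformly valid solution, with e^{-Q/2} = (1 - 3M/r)^{-1/4}
    X = X_of_r(r)
    amp = (1.0 - 3.0*M/r)**-0.25*(W(r)*dUphi(r)/(X*U0(r)))**0.25
    return amp*airy(X)[1]               # airy(X)[1] = Ai'(X)

for r in np.linspace(3.2, 8.0, 9):      # grid avoids r = r_I exactly
    print("x = %+5.2f   Y = %+.4e" % (r - rI, Y_of_r(r)))
\end{verbatim}
The output is oscillatory with growing amplitude as $x$ decreases toward the photon orbit at $x=-3$, and is exponentially cut off for $x>0$, as in the figure.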
\begin{figure} \centering \includegraphics[width=6cm,clip=true, trim=0cm 0cm 0cm 0cm]{Figure1.pdf} \caption{Plot of the function $Y$, an unstable mode near the ISCO, located at $x=0$ in the figure. The mode shown corresponds to $\W\propto r^{1/2}$, chosen for ease of calculation. The spatial response is confined almost entirely to the region $x<0$, where the angular momentum $U_\phi$ increases inward. Note the singular behaviour near the innermost photon orbit at $x=-3$.} \label{fig1} \end{figure} \subsection{Modal solutions for vanishing $\W(r_I)$} \subsubsection{Exterior region, $r>r_I$} If there is any finite stress at the ISCO, then there are extremely unstable modes present on small scales. This is a compelling argument in favour of the customary boundary condition of setting $\W=0$ for $x\le0$. Let us see how this removes the unstable behaviour. We shall assume that $\W(r_I)$ vanishes. In equilibrium, $\zeta\propto H$. From the definitions of $H$ and $\zeta$ in (\ref{33b}), it follows that $\W \propto x^2$. The question then is: what are the solutions of (\ref{fund}) near the ISCO with this behaviour for $\W$? Set the local stress $\W=\W''x^2/2$. Then, the local ISCO equation is \begin{equation} sY = \left({\W}''\over 2U^0U''_\phi\right)_{\!I}x^2{d\over dx}\left( {1\over x}{dY\over dx}\right), \end{equation} or \begin{equation} Y''-{Y'\over x} + \beta {Y\over x}=0, \qquad \beta=- \left(2sU^0U''_\phi\over \W''\right)_{\!I}. \end{equation} By equation (\ref{yy}), $Y$ itself must now vanish at $x=0$. If $s>0$, then there are two formal solutions to this equation in the region $x\ge 0$: \begin{equation} xI_2(2\sqrt{|\beta| x}) , \quad xK_2(2\sqrt{|\beta |x}). \end{equation} But the $I_2$ solution is not well-behaved for large $x$, and the $K_2$ solution does not vanish at $x=0$, so in fact there are no solutions compatible with the boundary conditions. In other words, {\it there are no unstable $s>0$ solutions for $x\ge 0$.} Consider next $s<0$. Then, the solution satisfying the vanishing $Y$ boundary condition at $x=0$ is \begin{equation}\label{J2} Y=xJ_2(2\sqrt{\beta x}), \end{equation} where $J_2$ is the Bessel function of order $2$. The corresponding solution with the $Y_2$ Bessel function does not vanish at $x=0$. Hence, there is a well-determined stable set of modal solutions with vanishing $\W$ at the ISCO. Once again, there is an overlap zone near the ISCO in which the WKB solution is valid together with the small $x$ local solution. The large argument expansion of (\ref{J2}) is \begin{equation}\label{largj} Y\rightarrow -\left({2x^3\over \pi^2\beta}\right)^{1/4} \sin \left(2\sqrt{\beta x}+{\pi\over 4}\right). \end{equation} The WKB solution follows from (\ref{wkb4}), now {\it outside} the ISCO: \begin{equation}\label{sinus2} Y= A\left(- U'_\phi \W\over e^{2Q} s U^0\right)^{1/4}\! \sin\left[\int^{r}_{r_I} \left(- sU^0U'_\phi \over \W \right)^{1/2}\! dr \!+\! \Phi\right], \end{equation} where $A$ and $\Phi$ are once again arbitrary. In the limit $r\rightarrow r_I$, we require $U'_\phi\rightarrow U''_\phi x$ and $\W\rightarrow\W''x^2/2$. The integral in (\ref{sinus2}) is then precisely $2\sqrt{\beta x}$. With the proper choice of $A$ and $\Phi$, there is a complete agreement of functional form between (\ref{sinus2}) and (\ref{largj}). Finally, there is once again a simple, uniformly valid solution. With $X$ now defined by \begin{equation} X = \int^{r}_{r_I} \left(U^0U'_\phi \over \W \right)^{1/2}\!
dr, \end{equation} the desired $r>r_I$ solution is \begin{equation}\label{YYY} Y = e^{-Q/2}\left({U_\phi' X^2\W\over U^0}\right)^{1/4}J_2(\sqrt{-s}X). \end{equation} To verify this, simply expand the above: first for large $\sqrt{-s}X$ (recovering [\ref{sinus2}]), then for small $x$ (recovering [\ref{J2}]), and then simultaneously for large $X$ and small $x$ (recovering [\ref{largj}]). It is readily seen that the function (\ref{YYY}) reduces to all proper asymptotic forms. \subsubsection{Interior region, $r<r_I$} For $r<r_I$, there are no {\it stable} solutions that are well-behaved. The unstable $s>0$, but spatially well-behaved, interior solution is now easy to construct, since it has precisely the same mathematical form as the exterior solution. Moreover, vanishing at $x=0$ together with its first derivative, this solution seems to join smoothly onto the stable exterior solution. The smoothness is maintained even though the growth rate is different on either side of $x=0$! How does it make sense to have a global ``mode'' with two different growth rates, one with positive $s$, the other with negative $s$, in different regions? Of course a single mode cannot have different growth rates in different disc regions. What we have been discussing is in reality a superposition of two modes. This points to the resolution of our problem. The location $x=0$ is a branching singularity of the governing equation, and lacks a unique prescription for traversing it. It is an ``improper node.'' All modal solutions have vanishing $Y$ and $dY/dx$ at $x=0$. In particular, a smooth solution to our problem is that equation (\ref{YYY}) holds for $x>0$, and $Y=0$ for $x<0$, a stable mode that lives entirely in the exterior bulk of the disc. Similar considerations hold for its ``dual,'' unstable solution, in this case with vanishing $Y$ for $x\ge 0$. Thus the answer to the question posed at the end of the previous paragraph is that the peculiar global solution described is not, in fact, one mode, but a superposition of two. What is perhaps unusual is that each mode vanishes identically where the other is present! The stable exterior mode is unique, and for astrophysical purposes, {\em the} mode of interest. \subsection{Green's function solution} With $Y$ given by equation (\ref{YYY}), we may construct more general solutions by superposing modes. Formally, we may write \begin{equation}\label{G1} Y(r,t) = \int A(s)\, Y(\sqrt{s})e^{-st}\, ds \end{equation} where $A(s)$ is whatever appropriate function we choose. (For convenience, we have replaced $s$ by $-s$ in this role as a dummy variable, and the explicit $s$ dependence is exhibited in $Y$.) Consider next the integral (Gradshteyn \& Ryzhik 2014): $$ \int^\infty_0 J_p(\sqrt{s}X)J_p(\sqrt{s}X_0) e^{-st}\, ds = $$ \begin{equation}\label{G2} {1\over t}\exp\left(-X^2-X_0^2\over 4t\right)I_p\left(XX_0\over 2t\right), \end{equation} where $J_p$ is the Bessel function of order $p$ and $I_p$ the corresponding modified Bessel function. In the limit $t\rightarrow 0$, this integral represents a delta-function-like concentration at $X=X_0$, which then spreads as $t$ increases. This behaviour, together with its symmetry in $X$ and $X_0$, is what we seek for a Green's function response, initially concentrated at $X=X_0$. We are assuming that our global WKB solution holds over all $s$ in the integral, an assumption that must break down at $s=0$. 
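A numerical spot-check of this tabulated integral is straightforward. The brief sketch below (Python with SciPy; the values of $X$, $X_0$ and $t$ are arbitrary) compares a direct quadrature of the left side against the closed form on the right for $p=2$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, iv

X, X0, t = 1.3, 2.1, 0.7    # arbitrary test values

lhs = quad(lambda s: jv(2, np.sqrt(s)*X)*jv(2, np.sqrt(s)*X0)*np.exp(-s*t),
           0.0, np.inf, limit=200)[0]
rhs = np.exp(-(X**2 + X0**2)/(4.0*t))*iv(2, X*X0/(2.0*t))/t
print(lhs, rhs)             # the two agree to quadrature accuracy
\end{verbatim}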
In the limit of small $\W$, however, the breakdown of the WKB approximation at $s=0$ affects only the detailed behaviour at very late times; the small $s$ contribution to the integral is otherwise negligible. Combining the results of equations (\ref{YYY}), (\ref{G1}), and (\ref{G2}) allows us to write down the (unnormalised) Green's function solution to our equation. With $X=X(x)$ and $X_0=X(x_0),$ $$ G(x, t;x_0) = \left({U_\phi' X^2\W\over e^{2Q} U^0}\right)_{x=x_0}^{1/4} \left({U_\phi' X^2\W\over e^{2Q} U^0}\right)^{1/4} \times $$ \begin{equation}\label{Gf} {1\over t} \exp -\left[(X-X_0)^2\over 4t\right]e^{-XX_0/2t}\, I_2\left(XX_0\over 2t\right). \end{equation} At early times $t\rightarrow 0$, the asymptotic behaviour of the terms on the final line of (\ref{Gf}) simplifies to: \begin{equation} \rightarrow {1\over (\pi t XX_0)^{1/2} } \exp -\left[(X-X_0)^2\over 4t\right]. \end{equation} This takes on the classic form for the diffusion of an initially very localised concentration. The {\it local-frame} surface emissivity is then given directly by (\ref{10b}): \begin{equation} {\cal F} = -{1\over 2\sqrt{g}} G(x,t;x_0) U^0\Omega'. \end{equation} \section{Discussion} In this paper, we have derived a form of the thin disc diffusion equation that is valid for general relativistic spacetimes. We assume only that the metric tensor is axisymmetric, so that our equation is suitable for both the Schwarzschild and Kerr geometries. Remarkably, the metric itself enters into the calculation only in the form of a determinant, which is then absorbed as a multiplicative factor of our surface density variable. It then disappears entirely from the calculation. The physics of an evolving thin relativistic disc compels unstable modes trapped inside $r=r_I$ to rapidly destroy their host in this ``Rayleigh-unstable'' zone. Quasi-stable equilibrium circular orbits are mathematical fantasies here: without a retaining potential, the orbits simply plunge. The exterior modal solutions, by contrast, are always stable. On the other hand, by imposing the boundary condition of a vanishing stress tensor at $x=0$ ($\W \sim x^2$), stable modes exist {\it exclusively} in the stable region, vanishing together with their first derivatives as $x\rightarrow 0$ from positive values. Modes with finite $\W$ at $x=0$ must penetrate, at least exponentially, into the plunging region, and in thin disc models would not be supported. By using a combination of WKB techniques, local analysis and matched asymptotic expansions, it is possible to solve very generally the disc diffusion equation in terms of quadratures. This is the scope of the current paper. Using these methods, we have been able to calculate the Green's function solution, which is expected to be valid up until very late times when a quasi-equilibrium is reached. These findings may be useful as numerical diagnostics, but only if the thin disc condition is well-satisfied, a limit that has yet to be convincingly simulated. The most interesting astrophysical application of this work is likely to be to black hole transients. These include dramatic state changes in which the inner regions of the disc are thought to disappear and then reform, and tidal disruption events in which a disc forms and subsequently accretes from the debris of a mangled star. In principle, these events may be modelled by the one-dimensional evolution equation (\ref{fund}) with appropriate boundary conditions, and the time-dependent surface emission calculated in the observer's reference frame. 
These interesting possibilities will be the subject of future investigations. \section*{Acknowledgements} It is a pleasure to acknowledge important discussions with W. Potter during the early formative stages of this work, and very constructive comments from C. McKee, M. Rees and C. Gammie on an earlier manuscript. I am grateful for support from the Royal Society in the form of a Wolfson Research Merit Award, and from STFC (grant number ST/N000919/1).
\section{Introduction}\label{intro} Despite over 10 years of controversy, the rooting trick\cite{Bernard:2006qt} proposed for reducing the taste structure of staggered fermions\cite{Kogut:1974ag,Susskind:1976jm,Sharatchandra:1981si} is not valid\cite{Creutz:2007yg,Creutz:2007rk}. The issue is that the exact chiral symmetries inherent in the formulation are incompatible with the well known chiral anomaly. As the algorithm remains popular and these issues are somewhat subtle, this talk concentrates on two of the more blatant issues that arise when the $SU(3)$ flavor symmetry is broken via unequal quark masses. First, different elements of the usual $SU(3)$ pseudo-scalar octet appear with different taste degeneracies. Second, a large number of spurious pseudo-scalars appear far from any physical particle masses. Section \ref{sec-1} reviews the standard sigma model picture of the pseudo-scalar mass dependence on the underlying quark masses. Section \ref{sec-2} augments that argument to include the taste degeneracies inherent in the staggered approach. Section \ref{sec-3} briefly discusses how rooting can work with multiple copies, or replicas, of a valid lattice fermion formulation, such as Wilson fermions\cite{Wilson:1975id}. Section \ref{sec-4} contrasts that argument with staggered fermions, which are not replicas of equivalent fermions. Here the inherent propagator structure forces the spurious states of the unrooted theory to survive. Finally Section \ref{sec-5} reiterates the two main points mentioned above. \section{Quark masses and the pseudo-scalar spectrum}\label{sec-1} Consider three flavor QCD and the usual pseudo-scalar octet \begin{equation} \begin{matrix} & K_0 & & K_+ & \cr \cr \pi_- && \pi_0,\eta &&\pi_+\cr \cr & K_- & & \overline K_0 & \cr \end{matrix}. \end{equation} This section reviews the lowest-order non-linear sigma model prediction for the mass dependence of these particles on the underlying quark masses. For simplicity, ignore the $\eta^\prime$ on the grounds that it acquires a large mass through the anomaly. Begin with the usual picture of spontaneous chiral symmetry breaking giving a quark condensate \begin{equation} \langle\overline\psi\psi\rangle=v \end{equation} where $\psi$ denotes the quark fields. The non-linear sigma model is an effective theory for fluctuations around this expectation value \begin{equation} \overline\psi_L^j\psi_R^k \sim v\ \Sigma^{jk}. \end{equation} Here the roman superscripts denote the quark flavors, which run from 1 to 3, or equivalently lie in $\{ u,d,s\}$. Ignoring radial fluctuations, the matrix $\Sigma$ is taken to lie in the group $SU(3)$. Introduce the eight Gell-Mann matrices $\lambda_\alpha$ normalized \begin{equation} {\rm Tr} \lambda_\alpha \lambda_\beta = 2\delta_{\alpha\beta}. \end{equation} Contact with the pseudo-scalar fields follows from \begin{equation} \Sigma=\exp(i\pi_\alpha \lambda_\alpha/f_\pi). \end{equation} Here $f_\pi$ is a phenomenological constant of about 93 MeV. The kinetic term for the effective field $\Sigma$ takes the form \begin{equation} L_0={f_\pi^2\over 4}{\rm Tr}(\partial_\mu \Sigma^\dagger \partial_\mu \Sigma). 
\end{equation} Expanding this to second order in the pion fields gives their effective kinetic term \begin{equation} L_0={\rm const}+{1\over 2} \partial_\mu\pi_\alpha \partial_\mu\pi_\alpha+\ldots \end{equation} \begin{figure}[thb] \centering \includegraphics[width=.7\hsize,clip]{spectrum.eps} \caption{The pseudo-scalar spectrum for fixed quark masses as predicted by the effective sigma model. The quark masses are indicated on the x axis.} \label{fig-1} \end{figure} When the quark masses vanish, the starting theory has two chiral symmetries under the rotations \begin{equation} \begin{matrix} &\psi_L\rightarrow \psi_L\ g_L\cr &\psi_R\rightarrow \psi_R\ g_R\cr \end{matrix} \end{equation} where $g_L$ and $g_R$ are independent global elements of $SU(3)$. In the effective theory this symmetry takes the form \begin{equation} \Sigma\rightarrow g_L^\dagger\ \Sigma\ g_R. \end{equation} The spontaneous breaking of this symmetry gives the usual octet of pseudo-scalars. Quark masses break both the flavor $SU(3)$ and the above chiral symmetries.\footnote{Electromagnetic effects are ignored here.} For the effective theory add a mass term and consider \begin{equation} L=L_0- {vf_\pi^2\over 4} {\rm Re\ Tr}(m\ \Sigma) \end{equation} with $m$ a 3 by 3 matrix. Chiral rotations of the form $m\rightarrow g_R^\dagger\ m\ g_L$ allow $m$ to be put in diagonal form \begin{equation} m=\begin{pmatrix} m_u & 0 & 0 \cr 0 & m_d & 0 \cr 0 & 0 & m_s \cr \end{pmatrix}. \end{equation} Expanding $L$ to quadratic order in the pion fields gives \begin{equation} L={\rm const}+{1\over 2}\partial_\mu \pi_\alpha\partial_\mu \pi_\alpha +{1\over 2}\pi_\alpha M_{\alpha\beta}\pi_\beta \end{equation} where the 8 by 8 meson mass matrix $M$ takes the form \begin{equation} M_{\alpha\beta} = {\rm Re\ Tr}\ \lambda_\alpha m \lambda_\beta. \end{equation} From this it is elementary algebra to obtain the octet masses \begin{equation} \begin{matrix} &M_{\pi_+}^2= \ M_{\pi_-}^2\propto {m_u+m_d \over 2}\hfill\cr &M_{K_+}^2= \ M_{K_-}^2\propto {m_u+m_s \over 2} \hfill\cr &M_{K_0}^2= \ M_{\overline K_0}^2\propto { m_d+m_s \over 2} \hfill \cr & M_{\pi_0}^2 \propto\ {1\over 3} \bigg(m_u+m_d+m_s -\sqrt{m_u^2+m_d^2+m_s^2-m_um_d-m_um_s-m_dm_s}\bigg)\cr &M_{\eta}^2 \propto \ {1\over 3} \bigg(m_u+m_d+m_s +\sqrt{m_u^2+m_d^2+m_s^2-m_um_d-m_um_s-m_dm_s}\bigg)\cr \end{matrix}. \end{equation} Note that this involves solving a quadratic equation for the $\pi_0$--$\eta$ mixing arising because $M_{38}$ does not vanish. This spectrum is qualitatively sketched in Fig. \ref{fig-1}. \section{Including taste degeneracy}\label{sec-2} Motivated by the four tastes inherent in staggered fermions, introduce a factor of $N_t=4$ degeneracy for each quark flavor. This leaves us with 12 distinct quark species\cite{Lee:1999zxa}. To keep the algebra simple, assume that this ``taste'' symmetry is exact for each of the original three ``flavors.'' Before the breaking of flavor by the quark masses, the chiral symmetry becomes $SU(12)\otimes SU(12)$. Thus there should be $143=12^2-1$ pseudo-Goldstone bosons. The 8 by 8 meson mass matrix becomes 143 by 143, which will now be diagonalized.\footnote{As the goal is an eventual reduction to the normal 3 flavor theory, ignore the possibility of the confining theory reverting to a conformal one.} This diagonalization is simplified since there are actually three distinct $SU(4)$ taste groups, one for each of the flavors $u,\ d,\ s$. 
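The single-taste diagonalization above is easy to confirm numerically. The following sketch (Python with NumPy; the quark mass values are illustrative placeholders, not fits) builds $M_{\alpha\beta}={\rm Re\ Tr}\ \lambda_\alpha m \lambda_\beta$ from the Gell-Mann matrices and checks its eigenvalues against the closed forms quoted above; the matrix eigenvalues carry an overall factor of $2$ relative to the $\propto$ normalisation used in the text:
\begin{verbatim}
import numpy as np

# The eight Gell-Mann matrices, normalized so Tr(l_a l_b) = 2 delta_ab
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1;   l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2])/np.sqrt(3)

mu, md, ms = 0.002, 0.005, 0.1          # illustrative quark masses
m = np.diag([mu, md, ms])

# Meson (mass)^2 matrix M_ab = Re Tr(l_a m l_b)
M = np.array([[np.trace(l[a] @ m @ l[b]).real for b in range(8)]
              for a in range(8)])
S = mu + md + ms
R = np.sqrt(mu**2 + md**2 + ms**2 - mu*md - mu*ms - md*ms)
expected = sorted([mu+md, mu+md, mu+ms, mu+ms, md+ms, md+ms,
                   2*(S - R)/3, 2*(S + R)/3])
print(np.allclose(np.sort(np.linalg.eigvalsh(M)), expected))   # True
\end{verbatim}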
The existence of a separate taste group for each flavor makes it possible to classify the pseudo-scalar mesons in terms of their representations under each of these groups. The relevant $SU(4)$ representations are the $1,4, \overline 4, 15$ in analogy to the $SU(3)$ representations $1,3,\overline 3,8$. The kaons and charged pions are particularly easy to treat. Each involves two distinct flavors, $q$ and $q^\prime$. The mesons appear in the representation $(4_q, \overline 4_{{\overline q}^\prime})$. Thus they form multiplets of 16 mesons each. The meson masses squared are proportional to the average of their constituent masses, {\em i.e.}, $M^2\propto {1\over 2}(m_q+m_{q^\prime})$. With four kaons and two charged pions, this accounts for $6\times16=96$ of our total 143 expected pseudo-Goldstone particles. Now turn to the neutral mesons, those for which a quark is combined with its anti-quark. For this use the breakdown $ 4\otimes \overline 4 \rightarrow 1 \oplus 15$. For each flavor, begin with a taste 15 plus a taste singlet. Remarkably, the taste 15 combinations for each of the flavors cannot mix. This is because each flavor has its own taste group. This gives \begin{itemize} \item a taste 15 of {$i\overline u \gamma_5 u$} states with $M^2\propto m_u$, \item a taste 15 of {$i\overline d \gamma_5 d$} states with $M^2\propto m_d$, \item a taste 15 of {$i\overline s \gamma_5 s$} states with $M^2\propto m_s$. \end{itemize} None of these has any analogue in the spectrum of Fig. \ref{fig-1}. \begin{figure}[thb] \centering \includegraphics[width=.7\hsize,clip]{spectrum2.eps} \caption{The pseudo-scalar spectrum for the multi-taste theory. The degeneracies of the various states are indicated below the line. The 15-plets, denoted by open circles, do not appear in the single taste theory. Note that the mesons with physical masses do not have a common taste multiplicity.} \label{fig-2} \end{figure} Finally consider the taste singlet combinations $\overline uu,\overline dd, \overline ss$. From these three, the flavor singlet combination is the $\eta^\prime$. As stated earlier, this is heavy and ignored here. The remaining two combinations have the same mixing matrix as discussed earlier for the single taste theory. These give rise to the masses quoted earlier \begin{equation} \begin{matrix} & M_{\pi_0}^2 \propto\ {1\over 3} \bigg(m_u+m_d+m_s -\sqrt{m_u^2+m_d^2+m_s^2-m_um_d-m_um_s-m_dm_s}\bigg)\cr &M_{\eta}^2 \propto \ {1\over 3} \bigg(m_u+m_d+m_s +\sqrt{m_u^2+m_d^2+m_s^2-m_um_d-m_um_s-m_dm_s}\bigg).\cr \end{matrix} \end{equation} This relation for the $\eta$ mass in the multi-taste theory appears in Ref. \cite{Aubin:2003mg}. All anticipated states are now identified, {\em i.e.}, $143=16\times 6+15\times 3+2$. The resulting spectrum is sketched in Fig. \ref{fig-2}. The important points to note in comparing this with Fig. \ref{fig-1} are that the taste degeneracies of the physical states depend on which element of the octet one is observing, and there are three multiplets of 15 particles each that have no correspondence in the single taste theory. \section{Rooting replicas}\label{sec-3} Consider a theory with 4 replicas of a valid fermion formulation, such as Wilson fermions. A reduction of the four taste theory down to one taste with the standard rooting trick \begin{equation} |D|\longrightarrow |D^4|^{1/4} \end{equation} is indeed valid. Once the cutoff is in place, $D$ is a finite matrix and this is a mathematical identity.\footnote{This technically requires $|D|$ real and non-negative. 
Interesting cases where this is not true are not considered here.\cite{Creutz:2013xfa}} Ignoring doublers, which have been made heavy by the Wilson term, the quark propagator has a single low-energy pole per species. On varying the replica factor $N_t$ away from 4, the state degeneracies evolve as \begin{equation} \begin{matrix} 16&\rightarrow &N_t^2 \longrightarrow_{N_t\rightarrow 1}& 1 \cr 15&\rightarrow& N_t^2-1 \longrightarrow_{N_t\rightarrow 1}& 0 \cr \end{matrix} \end{equation} and we obtain the proper spectrum. \section{Staggered fermions}\label{sec-4} Now turn to the case of staggered fermions. In this theory, the extra species are not replicas. The doublers all appear in chiral pairs. Whether one roots or not, the propagator always has four light poles. This means that the spurious 15 multiplets will remain in the spectrum even after rooting. Furthermore, the exact chiral symmetry of the staggered approach requires at least one member of each of these taste-15 multiplets to become a Goldstone boson in the chiral limit. Even if we allow for taste breaking, some remnants of the spurious multiplets must remain. So if the approach is fundamentally flawed, why do previous calculations with this method frequently appear to be fairly accurate? The issues are connected with so-called ``disconnected diagrams.'' These are fundamental to the mixing between the strange and the light quarks inherent in the eta meson. Most previous staggered calculations have concentrated on particles dominated by valence quarks, ones that propagate without such direct mixing. For these, the problems are swept into the sea quarks. Sea quarks are known to contribute of order ten percent relative to results in the valence\cite{Weingarten:1980hx} or quenched\cite{Hamber:1981zn} approximation, where the sea is ignored. Furthermore, the sea quark contributions will primarily differ because of incorrect multiplicities in the ``pion cloud.'' In the isospin limit the staggered theory has 63 degenerate pions. This is reduced to $63/16$ effective pions after the rooting trick. This compares with the physical cloud composed of 3 pions. Thus the final error for valence physics is expected to be reduced to a few percent. Larger problems are expected when disconnected diagrams are crucial. This should be particularly serious for the physics of the eta and eta-prime mesons as well as for isospin breaking effects. \section{Summary}\label{sec-5} This discussion raises two issues that practitioners of rooted-staggered fermions should address: \begin{enumerate} \item How can the differing taste multiplicities of pseudo-scalar-octet members be reconciled? \item How can the three unphysical taste-15 multiplets with unphysical masses be eliminated? \end{enumerate} Without answers to these questions, the approach can at best be regarded as an uncontrolled approximation to QCD. \include{references} \end{document}
\section{Introduction} First passage percolation was introduced by Hammersley and Welsh \cite{HammersleyWelsh} in 1965. They defined in this model a random pseudo-metric that has been intensively studied since then. We will say a few words about it in Section \ref{s:T}, but this random metric is not the subject of this paper. The study of maximal flows in first passage percolation on $\mathbb{Z}^d$ was initiated by Grimmett and Kesten \cite{GrimmettKesten84} in 1984 for dimension $2$ and by Kesten \cite{Kesten:flows} in 1987 for higher dimensions. This interpretation of the model of first passage percolation has been studied much less than the one in terms of random distances. One of the reasons is the added difficulty of dealing with this interpretation, in which the study of the random paths that are the geodesics is replaced by the study of random cuts or hypersurfaces, objects which should be thought of as $(d-1)$-dimensional. Consider a large piece of material represented by a large box in $\mathbb{Z}^d$ and equip the edges with random i.i.d. capacities representing the maximum amount of flow that each edge can bear. Typically, one is interested in the maximal flow that can cross the box from one side to the other. This question was addressed notably in \cite{Kesten:flows} and \cite{RossignolTheret08b} where one can find laws of large numbers and large deviation principles when the dimensions of the box grow to infinity. We refer to section~\ref{sec:background} for a more precise picture of the background, but let us stress for the moment that in those works, moment assumptions were made on the capacities. It is however interesting for modelling purposes to remove this assumption, allowing even infinite capacities which would represent microscopic defects where capacities are of a different order of magnitude than elsewhere. The first achievement of the present work, Theorem~\ref{t:CV}, is to prove a law of large numbers for maximal flows without any moment assumption, allowing infinite capacities under the assumption that the probability that an edge has infinite capacity is less than the critical parameter of percolation in $\mathbb{Z}^d$. Once such a result is obtained, one may wonder in which way the limit obtained in this law of large numbers, the so-called flow constant, depends on the capacity distribution put on the edges. The second achievement of this article, Theorem~\ref{thmcont}, is to show the continuity of the flow constant. One application of this continuity result could be the study of maximal flows in an inhomogeneous environment when capacities are not identically distributed but their distribution depends smoothly (at the macroscopic scale) on the location of the edges. The rest of the paper is organized as follows. In section~\ref{sec:background}, we give the necessary definitions and background, state our main results and explain in detail the strategy of the proof. The law of large numbers is proved in section~\ref{s:CV} and the continuity result is shown in section~\ref{s:cont}. Between those two sections, section~\ref{s:ssadd} is a technical intermezzo devised to express the flow constant as the limit of a subadditive object. The reason why we need it will be described at length in section~\ref{s:T}. \section{Definitions, background and main results} \label{sec:background} \subsection[Maximal flows]{Definition of the maximal flows} We use many notations introduced in \cite{Kesten:flows} and \cite{RossignolTheret08b}. 
Given a probability measure $G$ on $[0,+\infty]$, we equip the graph $(\mathbb{Z}^d, \mathbb{E}^d)$ with an i.i.d. family $(t_G(e), e\in \mathbb{E}^d)$ of random variables of common distribution $G$. Here $\mathbb{E}^d$ is the set of all the edges between nearest neighbors in $\mathbb{Z}^d$ for the Euclidean distance. The variable $t_G(e)$ is interpreted as the maximal amount of water that can cross the edge $e$ per second. Consider a finite subgraph $\Omega=(V_\Omega,E_\Omega)$ of $(\mathbb{Z}^d,\mathbb{E}^d)$ (or a bounded subset of $\mathbb{R}^d$ that we intersect with $(\mathbb{Z}^d, \mathbb{E}^d)$ to obtain a finite graph), which represents the piece of rock through which the water flows, and let $\mathfrak{G}^1$ and $\mathfrak{G}^2$ be two disjoint subsets of vertices in $\Omega$: $\mathfrak{G}^1$ (resp. $\mathfrak{G}^2$) represents the sources (resp. the sinks) through which the water can enter (resp. escape from) $\Omega$. A possible stream inside $\Omega$ between $\mathfrak{G}^1$ and $\mathfrak{G}^2$ is a function $\vec f : \mathbb{E}^d \mapsto \mathbb{R}^d$ such that for all $e\in \mathbb{E}^d$, \begin{itemize} \item $\| \vec f(e) \|_2$ is the amount of water that flows through $e$ per second, \item $\vec f(e) / \| \vec f(e)\|_2$ is the direction in which the water flows through $e$. \end{itemize} For instance, if the endpoints of $e$ are the vertices $x$ and $y$, which are at Euclidean distance $1$, then $\vec f(e) / \| \vec f(e)\|_2$ can be either the unit vector $\vec{xy}$ or the unit vector $\vec{yx}$. A stream $\vec f$ inside $\Omega$ between $\mathfrak{G}^1$ and $\mathfrak{G}^2$ is $G$-admissible if and only if it satisfies the following constraints: \begin{itemize} \item {\em the node law:} for every vertex $x$ in $\Omega \smallsetminus (\mathfrak{G}^1 \cup \mathfrak{G}^2)$, we have $$\sum_{y\in \mathbb{Z}^d \,:\, e=\langle x,y\rangle \in \mathbb{E}^d \cap \Omega } \| \vec f (e)\|_2 \left( \mathds{1}_{\vec f (e)/\| \vec f (e)\|_2 = \vec{xy}} - \mathds{1}_{\vec f (e)/\| \vec f (e)\|_2 = \vec{yx}} \right) \,=\, 0 \,,$$ {\em i.e.}, there is no loss of fluid inside $\Omega$; \item {\em the capacity constraint:} for every edge $e$ in $\Omega$, we have $$ 0 \,\leq \, \| \vec f(e) \|_2 \,\leq \, t_G(e) \,,$$ {\em i.e.}, the amount of water that flows through $e$ per second cannot exceed its capacity $t_G(e)$. \end{itemize} Since the capacities are random, the set of $G$-admissible streams inside $\Omega$ between $\mathfrak{G}^1$ and $\mathfrak{G}^2$ is also random. With each such $G$-admissible stream $\vec f$, we associate its flow defined by $$ \flow (\vec f) \,=\, \sum_{x \in \mathfrak{G}^1} \,\sum_{y \in \Omega \smallsetminus \mathfrak{G}^1 \,:\, e=\langle x,y\rangle \in \mathbb{E}^d } \| \vec f (e) \|_2 \left( \mathds{1}_{\vec f (e) /\| \vec f (e)\|_2= \vec{xy}} - \mathds{1}_{\vec f (e)/\| \vec f (e)\|_2 = \vec{yx}} \right) \,.$$ This is the amount of water that enters $\Omega$ through $\mathfrak{G}^1$ per second (we count it negatively if the water escapes from $\Omega$). 
By the node law, equivalently, $\flow (\vec f)$ is equal to the amount of water that escapes from $\Omega$ through $\mathfrak{G}^2$ per second: $$ \flow (\vec f) \,=\, \sum_{x \in \mathfrak{G}^2} \,\sum_{y \in \Omega \smallsetminus \mathfrak{G}^2 \,:\, e=\langle x,y\rangle \in \mathbb{E}^d } \| \vec f (e) \|_2 \left( \mathds{1}_{\vec f (e)/\| \vec f (e)\|_2 = \vec{yx}} - \mathds{1}_{\vec f (e)/\| \vec f (e)\|_2 = \vec{xy}} \right) \,.$$ The maximal flow from $\mathfrak{G}^1$ to $\mathfrak{G}^2$ in $\Omega$ for the capacities $(t_G(e), e\in \mathbb{E}^d)$, denoted by $\phi_G (\mathfrak{G}^1 \rightarrow \mathfrak{G}^2 \textrm{ in }\Omega)$, is the supremum of the flows of all admissible streams through $\Omega$: $$ \phi_G (\mathfrak{G}^1 \rightarrow \mathfrak{G}^2 \textrm{ in }\Omega ) \,=\, \sup \{ \flow (\vec f) \,:\, \vec f \textrm{ is a $G$-admissible stream inside $\Omega$ between $\mathfrak{G}^1$ and $\mathfrak{G}^2$} \} \,.$$ It is not so easy to deal with admissible streams, but there is an alternative description of maximal flows we can work with. We define a path from $\mathfrak{G}^1$ to $\mathfrak{G}^2$ in $\Omega$ as a finite sequence $(v_0, e_1, v_1, \dots ,e_n, v_n)$ of vertices $(v_i)_{0\leq i \leq n}$ and edges $(e_i)_{1\leq i \leq n}$ such that $v_0\in\mathfrak{G}^1$, $v_n\in\mathfrak{G}^2$ and $e_i = \langle v_{i-1},v_{i}\rangle\in E_\Omega$ for any $1\leq i \leq n$. We say that a set of edges $E \subset E_\Omega$ cuts $\mathfrak{G}^1$ from $\mathfrak{G}^2$ in $\Omega$ (or is a cutset, for short) if there is no path from $\mathfrak{G}^1$ to $\mathfrak{G}^2$ in $(V_\Omega,E_\Omega\setminus E)$. We associate with any set of edges $E$ its capacity $T_G(E)$ defined by $T_G(E) = \sum_{e\in E} t_G(e)$. The max-flow min-cut theorem (see \cite{Bollobas}), a result of graph theory, states that $$ \phi_G (\mathfrak{G}^1 \rightarrow \mathfrak{G}^2 \textrm{ in }\Omega) \,=\, \min \{ T_G(E) \,:\, E \textrm{ cuts $\mathfrak{G}^1$ from $\mathfrak{G}^2$ in $\Omega$} \}\,. $$ The idea of this theorem is quite intuitive: the maximal flow is limited by edges that are jammed, {\em i.e.}, that are crossed by an amount of water per second which is equal to their capacities. These jammed edges form a cutset, otherwise there would be a path of edges from $\mathfrak{G}^1$ to $\mathfrak{G}^2$ through which a higher amount of water could circulate. Finally, some of the jammed edges may not limit the flow since other edges, before or after them on the trajectory of water, already limit the flow, thus the maximal flow is given by the minimal capacity of a cutset. Kesten \cite{Kesten:flows} presented this interpretation of first passage percolation as a higher dimensional version of classical first passage percolation. To understand this point of view, let us associate with each edge $e$ a small plaquette $e^*$, {\em i.e.}, a $(d-1)$-dimensional hypersquare whose sides have length $1$ and are parallel to the edges of the graph, and which is normal to $e$ and cuts $e$ at its middle. We associate with the plaquette $e^*$ the capacity $t_G (e)$ of the edge $e$ to which it corresponds. With a set of edges $E$ we associate the set of the corresponding plaquettes $E^*=\{ e^* \,:\, e \in E \}$. Roughly speaking, if $E$ cuts $\mathfrak{G}^1$ from $\mathfrak{G}^2$ in $\Omega$ then $E^*$ is a ``surface'' of plaquettes that disconnects $\mathfrak{G}^1$ from $\mathfrak{G}^2$ in $\Omega$ -- we do not try to give a rigorous definition of the term surface here. 
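The max-flow min-cut duality is also the natural way to compute these flows in practice. The short sketch below (Python with the NetworkX library, which is an assumption of the sketch; the box size and the exponential capacity distribution are illustrative choices) computes the maximal flow through a two-dimensional box from its left side to its right side, and checks it against the minimal cutset capacity:
\begin{verbatim}
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n = 10                                  # the box is {0,...,n}^2

# Grid graph with i.i.d. capacities t_G(e), here G = Exp(1)
G = nx.grid_2d_graph(n + 1, n + 1)
for u, v in G.edges():
    G[u][v]["capacity"] = rng.exponential(1.0)

# Merge the left side into one source and the right side into one sink
G.add_node("s"); G.add_node("t")
for y in range(n + 1):
    G.add_edge("s", (0, y), capacity=float("inf"))
    G.add_edge((n, y), "t", capacity=float("inf"))

flow_value, _ = nx.maximum_flow(G, "s", "t")
cut_value, _ = nx.minimum_cut(G, "s", "t")
print(flow_value, cut_value)            # equal, by max-flow min-cut
\end{verbatim}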
In dimension $2$, the plaquette $e^*$ associated with the edge $e$ is in fact the dual edge of $e$ in the dual graph of $\mathbb{Z}^2$. A ``surface'' of plaquettes is thus very similar to a path in the dual graph of $\mathbb{Z}^2$. The study of maximal flows in first passage percolation is equivalent, through the max-flow min-cut theorem, to the study of the minimal capacities of cutsets. When we compare this to the classical interpretation of first passage percolation, the study of geodesics (which are paths) is replaced by the study of minimal cutsets (which are rather hypersurfaces). In this sense, the study of maximal flow is a higher-dimensional version of classical first passage percolation. We now define two specific maximal flows through cylinders that are of particular interest. Let $A$ be a non-degenerate hyperrectangle, {\em i.e.}, a rectangle of dimension $d-1$ in $\mathbb{R}^d$. Let $\vec{v}$ be one of the two unit vectors normal to $A$. For a positive real $h$, denote by $\cyl (A,h)$ the cylinder of basis $A$ and height $2h$ defined by \begin{equation} \label{e:defcyl} \cyl (A,h) \,=\, \{ x + t \vec{v} \,:\, x \in A \,, \, t \in [-h, h] \} \,. \end{equation} Let $B_1(A,h)$ (resp. $B_2(A,h)$) be (a discrete version of) the top (resp. the bottom) of this cylinder, more precisely defined by \begin{align*} B_1(A,h) & \,=\, \{ x \in \mathbb{Z}^d \cap \cyl (A,h) \,:\, \exists y \notin \cyl(A,h) \,,\, \langle x ,y \rangle \in \mathbb{E}^d \textrm{ and $\langle x ,y \rangle$ intersects } A+h \vec{v} \}\,,\\ B_2(A,h) & \,=\, \{ x \in \mathbb{Z}^d \cap \cyl (A,h) \,:\, \exists y \notin \cyl(A,h) \,,\, \langle x,y \rangle \in \mathbb{E}^d \textrm{ and $\langle x ,y \rangle$ intersects } A-h \vec{v} \}\,. \end{align*} We denote by $\phi_G(A,h)$ the maximal flow from the top to the bottom of the cylinder $\cyl(A,h)$ in the direction $\vec{v}$, defined by $$ \phi_G(A,h)\,=\, \phi_G(B_1(A,h) \rightarrow B_2(A,h) \textrm{ in } \cyl(A,h) ) \,. $$ We denote by $\mathcal{H}^{d-1}$ the Hausdorff measure in dimension $d-1$: for $A = \prod_{i=1}^{d-1} [k_i, l_i] \times \{c\}$ with $k_i < l_i, c \in \mathbb{R}$, we have $\mathcal{H}^{d-1} (A) = \prod_{i=1}^{d-1} (l_i - k_i)$. We can expect that $\phi_G (A,h)$ grows asymptotically linearly in $\mathcal{H}^{d-1} (A)$ when the dimensions of the cylinder go to infinity, since $\mathcal{H}^{d-1} (A)$ is the area of the surface through which the water can enter the cylinder or escape from it. However, $\phi_G (A,h)$ is not easy to deal with. Indeed, by the max-flow min-cut theorem, $\phi_G (A,h)$ is equal to the minimal capacity of a set of edges that cuts $B_1(A,h)$ from $B_2(A,h)$ in the cylinder. The dual of this set of edges is a surface of plaquettes whose boundary on the sides of $\cyl (A, h)$ is completely free. This implies that the union of cutsets between the top and the bottom of two adjacent cylinders is not itself a cutset between the top and the bottom of the union of the two cylinders. Thus the maximal flow $\phi_G (A,h)$ does not have a property of subadditivity, which is the key tool in the study of classical first passage percolation. This is the reason why we define another maximal flow through $\cyl (A,h)$, for which subadditivity is recovered. The set $\cyl (A,h) \smallsetminus A$ has two connected components, denoted by $C_1 (A,h)$ and $C_2 (A,h)$.
For $i=1,2$, we denote by $C_i' (A,h)$ the discrete boundary of $C_i (A,h)$ defined by \begin{equation} \label{e:deftau1} C_i' (A,h) \,=\, \{ x \in \mathbb{Z}^d \cap C_i (A,h) \,:\, \exists y \notin \cyl(A,h) \,,\, \langle x ,y \rangle \in \mathbb{E}^d \}\,. \end{equation} We denote by $\tau_G (A,h)$ the maximal flow from the upper half part of the boundary of the cylinder to its lower half part, {\em i.e.}, \begin{equation} \label{e:deftau2} \tau_G (A,h)\,=\, \phi_G (C'_1(A,h) \rightarrow C'_2(A,h) \textrm{ in } \cyl(A,h) ) \,. \end{equation} By the max-flow min-cut theorem, $\tau_G (A,h)$ is equal to the minimal capacity of a set of edges that cuts $C_1'(A,h)$ from $C'_2 (A,h)$ inside the cylinder. To such a cutset $E$ corresponds a dual set of plaquettes $E^*$ whose boundary has to be very close to $\partial A$, the boundary of the hyperrectangle $A$. We say that a cylinder is straight if $\vec{v} = \vec{v}_0 := (0, 0 , \dots , 1)$ and if there exist $k_i, l_i , c \in \mathbb{Z}$ such that $k_i<l_i$ for all $i$ and $A = A (\vec k, \vec l) = \prod_{i=1}^{d-1} [k_i, l_i] \times \{c\}$. In this case, for $c=0$ and $k_i \leq 0 < l_i$, the family of variables $(\tau_G ( A(\vec k , \vec l), h) )_{\vec k, \vec l}$ is subadditive, since the minimal cutsets in adjacent cylinders can be glued together along the common side of these cylinders. \subsection[Background]{Background on maximal flows} A straightforward application of ergodic subadditive theorems in the multiparameter case (see Krengel and Pyke \cite{KrengelPyke} and Smythe \cite{Smythe}) leads to the following result. \begin{prop} Let $G$ be a probability measure on $[0,+\infty[$ such that $\int_{\mathbb{R}^+} x \, dG(x) <\infty$. Let $A = \prod_{i=1}^{d-1} [k_i, l_i] \times \{0\}$ with $k_i \leq 0 < l_i \in \mathbb{Z}$. Let $h: \mathbb{N} \rightarrow \mathbb{R}^+$ be such that $\lim_{p\rightarrow \infty} h(p) = +\infty$. Then there exists a constant $\nu_G (\vec{v}_0)$, that does not depend on $A$ or $h$, such that $$ \lim_{p\rightarrow \infty} \frac{\tau_G (pA, h(p))}{ \mathcal{H}^{d-1} (pA) } \,=\, \nu_G (\vec{v}_0) \quad \textrm{a.s. and in }L^1\,. $$ \end{prop} This result has been stated in a slightly different way by Kesten in \cite{Kesten:flows}. He considered the more general case of flows through cylinders whose dimensions go to infinity at different speeds in each direction, but only in dimension $d=3$. The constant $\nu_G (\vec{v}_0)$ obtained here is the equivalent of the time constant $\mu_G(e_1)$ defined in the context of random distances (see Section \ref{s:T}), and by analogy we call it the flow constant. As suggested by classical first passage percolation, a constant $\nu_G (\vec{v})$ can be defined in any direction $\vec{v} \in \mathbb{S}^{d-1}$, where $\mathbb{S}^{d-1} =\{x\in \mathbb{R}^d \,:\, \|x\|_2 = 1 \}$. This is not trivial, since subadditivity is lost when we look at tilted cylinders, due to the discretization of the boundary of the cylinders. Moreover, classical ergodic subadditive theorems cannot be used if the direction $\vec{v}$ is not rational, {\em i.e.}, if there does not exist an integer $M$ such that $M \vec{v}$ has integer coordinates. However, these obstacles can be overcome and the two authors proved in \cite{RossignolTheret08b} the following law of large numbers. \begin{thm} \label{t:oldCV} Let $G$ be a probability measure on $[0, +\infty[$ such that $\int_{\mathbb{R}^+} x \, dG(x) <\infty$.
For any $\vec{v} \in \mathbb{S}^{d-1}$, for any non-degenerate hyperrectangle $A$ normal to $\vec{v}$, for any function $h : \mathbb{N} \mapsto \mathbb{R}^+$ satisfying $\lim_{p\rightarrow +\infty} h(p) =+\infty$, there exists a constant $\nu_G (\vec{v}) \in [0,+\infty[$ (independent of $A$ and $h$) such that $$ \lim_{p\rightarrow \infty} \frac{\tau_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, \nu_G (\vec{v}) \quad \textrm{in }L^1\,. $$ If moreover the origin of the graph belongs to $A$, or if $\int_{\mathbb{R}^+} x^{1+1/(d-1)} \, dG(x) <\infty$, then $$ \lim_{p\rightarrow \infty} \frac{\tau_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, \nu_G (\vec{v}) \quad \textrm{a.s.} $$ If the cylinder is {\em flat}, {\em i.e.}, if $\lim_{p\rightarrow \infty} h(p) /p =0$, then the same convergences hold also for $\phi_G (pA, h(p))$. \end{thm} When the origin of the graph belongs to $A$, and for an increasing function $h$ for instance, the cylinder $\cyl (pA, h(p))$ is completely included in the cylinder $\cyl ((p+1)A, h(p+1))$. The mean of the capacities of the edges inside $\cyl (pA, h(p))$ converges a.s. when $p$ goes to infinity as soon as $\int_{\mathbb{R}^+} x \, dG(x) <\infty$ by a simple application of the law of large numbers, and Theorem \ref{t:oldCV} states that $\tau_G (pA, h(p))/\mathcal{H}^{d-1} (pA)$ converges a.s. under the same hypothesis. On the other hand, when the origin of the graph does not belong to $A$, the cylinders $\cyl (pA, h(p))$ and $\cyl ((p+1)A, h(p+1))$ may be completely disjoint. The a.s. convergence of the mean of the capacities of the edges included in $\cyl (pA, h(p))$ when $p$ goes to infinity then follows from results on complete convergence of arrays of random variables, see for instance \cite{Gut92,Gut85}. This kind of result requires a stronger moment condition on the law of the random variables we consider, namely we need that $\int_{\mathbb{R}^+} x^{1+1/(d-1)} \, dG(x) <\infty$. Theorem \ref{t:oldCV} states that $\tau_G (pA, h(p))/\mathcal{H}^{d-1} (pA)$ converges a.s. in this case under the same hypothesis on the moments of $G$. Let $p_c(d)$ be the critical parameter of Bernoulli bond percolation on $(\mathbb{Z}^d, \mathbb{E}^d)$. Zhang investigated in \cite{Zhang} the positivity of $\nu_G$ and proved the following result. \begin{thm} \label{t:oldnul} Let $G$ be a probability measure on $[0, +\infty[$ such that $\int_{\mathbb{R}^+} x \, dG(x) <\infty$. Then $$ \nu_G (\vec{v}) \,>\,0 \quad \iff \quad G(\{0\}) \,<\, 1-p_c(d) \,. $$ \end{thm} The asymptotic behavior of the maximal flows $\phi_G (pA, h(p))$ in non-flat cylinders ({\em i.e.}, when $h(p)$ is not negligible in comparison with $p$) is more difficult to study since these flows are not subadditive. In the case of straight cylinders (and even in a non-isotropic case, {\em i.e.}, when the dimensions of the cylinders go to infinity at different speeds in every direction), Kesten \cite{Kesten:flows} and Zhang \cite{Zhang2017} proved that $\phi_G (pA, h(p)) / \mathcal{H}^{d-1} (pA)$ also converges a.s. towards $\nu_G (\vec{v}_0)$, under some moment condition on $G$. The behavior of $\phi_G (pA, h(p))$ is different in tilted and non-flat cylinders; we do not go into details and refer to \cite{RossignolTheret09b} (for $d=2$) and to \cite{CerfTheret09geoc,CerfTheret09supc,CerfTheret09infc,CerfTheretStream} in a more general setting.
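These laws of large numbers are also easy to observe numerically. The sketch below is illustrative only (Python with networkx; the exponential capacities, the seed, and the height function $h(p)=2(\log p)^2$, which satisfies $\lim_p h(p)/\log p = +\infty$ and $\lim_p h(p)/p = 0$, are all arbitrary choices): for growing $p$ it computes the rescaled flow $\phi_G(pA,h(p))/\mathcal{H}^{1}(pA)$ from the top to the bottom of flat straight cylinders in dimension $d=2$, a quantity which should stabilize near the flow constant $\nu_G(\vec v_0)$.

\begin{verbatim}
# Illustrative sketch only (d = 2): rescaled flow through flat straight
# cylinders.  Requires networkx; the capacity law is an arbitrary choice.
import networkx as nx
import math, random

random.seed(1)

def flow_per_unit_area(p):
    h = max(2, int(2 * math.log(p) ** 2))    # an arbitrary mild height
    D = nx.DiGraph()
    for x in range(p + 1):
        for y in range(-h, h + 1):
            for dx, dy in ((1, 0), (0, 1)):
                u, v = (x, y), (x + dx, y + dy)
                if v[0] <= p and v[1] <= h:
                    c = random.expovariate(1.0)
                    D.add_edge(u, v, capacity=c)
                    D.add_edge(v, u, capacity=c)
    for x in range(p + 1):                   # glue the top and the bottom
        D.add_edge("top", (x, h), capacity=float("inf"))
        D.add_edge((x, -h), "bot", capacity=float("inf"))
    value, _ = nx.maximum_flow(D, "top", "bot")
    return value / p                         # phi_G(pA, h(p)) / H^1(pA)

for p in (8, 16, 32):
    print(p, flow_per_unit_area(p))          # should stabilize near nu_G
\end{verbatim}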
We stress the fact that for all the results mentioned above, a moment assumption is required on the probability measure $G$ on $[0,+\infty[$: $G$ must at least have a finite mean. \subsection{Main results} Our first goal is to extend the previous results to probability measures $G$ on $[0,+\infty[$ that are not integrable, and even to probability measures $G$ on $[0,+\infty]$ under the hypothesis that $G(\{+\infty\}) < p_c(d)$. For any probability measure $G$ on $[0, +\infty]$, for all $K>0$, we define $G^K = {\mathds{1}}_{[0,K[} G + G([K,+\infty[) \delta_K$, {\em i.e.}, $G^K$ is the law of $\min(t_G(e), K)$ for any edge $e$. Then we define \begin{equation} \label{defnu} \forall \vec{v} \in \mathbb{S}^{d-1} \,, \quad \nu_G (\vec{v}) \,:=\, \lim_{K \rightarrow \infty} \nu_{G^K} (\vec{v}) \,. \end{equation} Throughout the paper, we shall say that a function $h : \mathbb{N} \mapsto \mathbb{R}^+$ is \emph{mild} if \begin{equation} \lim_{p\rightarrow +\infty} h(p) / \log p =+\infty\text{ and }\lim_{p\rightarrow \infty} h(p) /p =0\;. \end{equation} We prove the following law of large numbers for cylinders with mild height functions. \begin{thm} \label{t:CV} For any probability measure $G$ on $[0,+\infty]$ such that $G(\{+\infty\}) < p_c(d)$, for any $\vec{v} \in \mathbb{S}^{d-1}$, for any non-degenerate hyperrectangle $A$ normal to $\vec{v}$, for any mild function $h$, we have $$ \lim_{p\rightarrow \infty} \frac{\phi_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, \nu_G (\vec{v}) \quad\textrm{a.s.} $$ Moreover, for every $\vec{v} \in \mathbb{S}^{d-1}$, $$\nu_G (\vec{v}) < +\infty$$ and $$ \nu_G (\vec{v}) \,>\,0 \quad \iff \quad G(\{0\}) \,<\, 1-p_c(d) \,. $$ \end{thm} \begin{rem} If $G$ is integrable, the constant $\nu_G$ defined in \eqref{defnu} is thus consistent with the definition given by Theorem \ref{t:oldCV}. \end{rem} We also want to establish the continuity of the function $G \mapsto \nu_G (\vec{v})$ when we equip the set of probability measures on $[0,+\infty]$ with the topology of weak convergence; in fact, these two questions are linked, as we will see in Section \ref{s:T}. More precisely, let $(G_n)_{n\in \mathbb{N}}$ and $G$ be probability measures on $[0,+\infty]$. We say that $G_n$ converges weakly towards $G$ when $n$ goes to infinity, and we write $G_n \overset{d}{\rightarrow} G$, if for any continuous bounded function $f: [0,+\infty] \mapsto \mathbb{R}^+$ we have $$ \lim_{n \rightarrow +\infty} \int_{[0,+\infty]} f \, dG_n \,=\, \int_{[0,+\infty]} f \, dG \,. $$ Equivalently, $G_n \overset{d}{\rightarrow} G$ if and only if $\lim_{n \rightarrow \infty} G_n([t,+\infty]) = G([t,+\infty])$ for all $t\in \mathbb{R}^+$ at which the map $s\mapsto G([s,+\infty])$ is continuous. \begin{thm} \label{thmcont} Suppose that $G$ and $(G_n)_{n\in \mathbb{N}}$ are probability measures on $[0,+\infty]$ such that $G(\{+\infty\}) < p_c(d)$ and for all $n\in \mathbb{N}$, $G_n (\{+\infty\}) < p_c(d)$. If $G_n \overset{d}{\rightarrow} G$, then $$ \lim_{n\rightarrow \infty} \sup_{\vec{v} \in \mathbb{S}^{d-1}} \left\vert \nu_{G_n} (\vec v) - \nu_G (\vec v) \right\vert \,=\, 0 \,.$$ \end{thm} \subsection[Time constant]{About the existence and the continuity of the time constant} \label{s:T} First passage percolation was introduced by Hammersley and Welsh \cite{HammersleyWelsh} in 1965 with a different interpretation of the variables associated with the edges. We consider the graph $(\mathbb{Z}^d, \mathbb{E}^d)$ and we associate with the edges of the graph a family of i.i.d.
random variables $(t_F(e), e\in \mathbb{E}^d)$ with common distribution $F$, as previously for $G$, but we now interpret the variable $t_F(e)$ as the time needed to cross the edge $e$ (we call it the passage time of $e$). If $\gamma$ is a path, we define the passage time of $\gamma$ as $T_F(\gamma) = \sum_{e\in \gamma} t_F(e)$. Then the passage time between two points $x$ and $y$ in $\mathbb{Z}^d$, {\em i.e.}, the minimum time needed to go from $x$ to $y$ for the passage times $(t_F(e), e\in \mathbb{E}^d)$, is given by $$ T_F(x,y) \,=\, \inf \{ T_F(\gamma) \,:\, \gamma \textrm{ is a path from $x$ to $y$} \} \,.$$ This defines a random pseudo-distance on $\mathbb{Z}^d$ (the only property that can be missing is the separation property). This random distance has been and is still intensively studied. A reference work is Kesten's lecture notes \cite{Kesten:StFlour}. Auffinger, Damron and Hanson very recently wrote the survey \cite{AuffingerDamronHanson} that provides an overview of results obtained in the 80's and 90's, describes the recent advances and gives a collection of old and new open questions. Fix $e_1 = (1,0, \dots , 0)$. Thanks to a subadditive argument, Hammersley and Welsh \cite{HammersleyWelsh} and Kingman \cite{Kingman68} proved that if $d=2$ and $F$ has finite mean, then $\lim_{n\rightarrow \infty} T_F(0,ne_1) / n $ exists a.s. and in $L^1$, the limit is a constant denoted by $\mu_F(e_1) $ and called the time constant. The moment condition was improved some years later by several people independently, and the study was extended to any dimension $d\geq 2$ (see for instance Kesten's Saint-Flour lecture notes \cite{Kesten:StFlour}). The convergence to the time constant can be stated as follows. \begin{thm} If $\mathbb{E}[\min (t_F(1),\dots , t_{F}(2d))] < \infty$ where $(t_F(i), i\in \{1,\dots , 2d\})$ are i.i.d. with distribution $F$ on $[0,+\infty[$, there exists a constant $\mu_F(e_1) \in \mathbb{R}^+$ such that $$ \lim_{n\rightarrow \infty} \frac{T_F(0,n e_1)}{ n} \,=\, \mu_F(e_1) \quad \textrm{a.s. and in } L^1\,.$$ Moreover, the condition $\mathbb{E}[\min (t_F(1),\dots , t_{F}(2d))] < \infty$ is necessary for this convergence to hold a.s. or in $L^1$. \end{thm} This convergence can be generalized by the same arguments, and under the same hypothesis, to rational directions: there exists a homogeneous function $\mu_F : \mathbb{Q}^d \rightarrow \mathbb{R}^+$ such that for all $x \in \mathbb{Z}^d$, we have $\lim_{n\rightarrow \infty} T_F(0, nx) / n = \mu_F (x)$ a.s. and in $L^1$. The function $\mu_F$ can be extended to $\mathbb{R}^d$ by continuity (see \cite{Kesten:StFlour}). These results can be extended by considering a law $F$ on $[0,+\infty[$ which does not satisfy any moment condition, at the price of obtaining weaker convergence. This work was performed successfully by Cox and Durrett \cite{CoxDurrett} in dimension $d=2$ and then by Kesten \cite{Kesten:StFlour} in any dimension $d\geq 2$. More precisely, they proved that there always exists a function $\hat \mu_F : \mathbb{R}^d \rightarrow \mathbb{R}^+$ such that for all $x \in \mathbb{Z}^d$, we have $\lim_{n\rightarrow \infty} T_F(0, nx) / n =\hat \mu_F (x)$ in probability. If $\mathbb{E}[\min (t_F(1),\dots , t_{F}(2d))] < \infty$ then $\hat \mu_F = \mu_F$. The function $\hat \mu_F$ is built as the a.s. limit of a more regular sequence of times $\hat T_F (0,nx) /n$ that we now describe roughly. They consider an $M\in \mathbb{R}^+$ large enough so that $ F( [0,M])$ is very close to $1$.
Thus the percolation $({\mathds{1}}_{\{ t_F(e) \leq M \}}, e \in \mathbb{E}^d)$ is highly supercritical, so if we denote by $\mathcal{C}_{F,M}$ its infinite cluster, each point $x\in \mathbb{Z}^d$ is a.s. surrounded by a small contour $S(x) \subset \mathcal{C}_{F,M}$. They define $\hat T_F (x,y) = T_F(S(x), S(y))$ for $x,y \in \mathbb{Z}^d$. The times $\hat T _F(0,x)$ have good moment properties, thus $\hat \mu_F(x)$ can be defined as the a.s. and $L^1$ limit of $\hat T_F (0,nx) /n$ for all $x\in \mathbb{Z}^d$ by a classical subadditive argument; then $\hat \mu_F$ can be extended to $\mathbb{Q}^d$ by homogeneity, and finally to $\mathbb{R}^d$ by continuity. The convergence of $T_F(0, nx)/n$ towards $\hat \mu_F (x)$ in probability is a consequence of the fact that $T_F$ and $\hat T_F$ are close enough. It is even possible to consider a probability measure $F$ on $[0,+\infty]$ under the hypothesis that $F([0,+\infty[) > p_c(d)$. This was first done by Garet and Marchand in \cite{GaretMarchand04} and then by Cerf and the second author in \cite{CerfTheretForme}. We concentrate on \cite{CerfTheretForme}, where the setting is closer to the one we consider here. To prove the existence of a time constant for a probability measure $F$ on $[0,+\infty]$ such that $F([0,+\infty[) > p_c(d)$, Cerf and the second author exhibit a quite intuitive object that is still subadditive. For $x\in \mathbb{Z}^d$, $\tilde \mu_F (x)$ is defined by a subadditive argument as the limit of $T_F(f_M(0), f_M(nx))/n$ a.s. and in $L^1$, where $M$ is a real number large enough such that $F([0,M]) >p_c(d)$, and for $z\in \mathbb{Z}^d$, $f_M(z)$ is the point of $\mathcal{C}_{F,M}$ which is closest to $z$. The convergence of $T_F(0, nx)/n$ towards $\tilde \mu_F (x)$ still holds, but in a very weak sense: $T_F(0, nx)/n$ converges in fact in distribution towards $\theta_F^2 \delta_{\tilde\mu_F (x)} + (1-\theta_F^2) \delta_{+\infty}$, where $\theta_F$ is the probability that the connected component of $0$ in the percolation $(\mathds{1}_{t_F(e) < \infty}, e\in \mathbb{E}^d)$ is infinite. For short, all these constants ($\hat \mu_F, \tilde \mu_F$ and $\mu_F$) being equal when they are defined, we denote all of them by $\mu_F$. Once the time constant is defined, a natural question is whether it varies continuously with the distribution of the passage times of the edges. This question has been answered positively by Cox and Kesten \cite{Cox,CoxKesten,Kesten:StFlour} for probability measures on $[0,+\infty[$. \begin{thm} Let $F$, $F_n$ be probability measures on $[0,+\infty[$. If $F_n$ converges weakly towards $F$, then for every $x\in \mathbb{R}^d$, $$\lim_{n\rightarrow \infty} \mu_{F_n} (x) \,=\, \mu_F (x) \,.$$ \end{thm} Cox \cite{Cox} first proved this result in dimension $d=2$ with an additional hypothesis of uniform integrability: he supposed that all the probability measures $F_n$ were stochastically dominated by a probability measure $H$ with finite mean. To remove this hypothesis of uniform integrability in dimension $d=2$, Cox and Kesten \cite{CoxKesten} used the regularized passage times and the technology of the contours introduced by Cox and Durrett \cite{CoxDurrett}. Kesten \cite{Kesten:StFlour} extended these results to any dimension $d\geq 2$. The key step of their proofs is the following lemma. \begin{lem} \label{l:key} Let $F$ be a probability measure on $\mathbb{R}^+$, and let $F^K = \mathds{1}_{[0,K[} F + F([K,+\infty[) \delta_K$ be the distribution of the passage times $t_F(e)$ truncated at $K$.
Then for every $x\in \mathbb{R}^d$, $$ \lim_{K \rightarrow \infty} \mu_{F^K} (x) \,=\, \mu_F (x) \,. $$ \end{lem} To prove this lemma, they consider a geodesic $\gamma$ from $0$ to a fixed vertex $x$ for the truncated passage times $\min (t_{F}(e), K)$. When looking at the original passage times $t_F (e)$, some edges along $\gamma$ may have an arbitrarily large passage time: to recover a path $\gamma'$ from $0$ to $x$ such that $T_F (\gamma')$ is not too large in comparison with $T_{F^K} (\gamma)$, they need to bypass these bad edges. They construct the bypass of a bad edge $e$ inside the contour $S(e) \subset \mathcal{C}_{F,M}$ of the edge $e$, thus they bound the passage time of this bypass by $M \carde (S(e))$ where $\carde (S(e))$ denotes the number of edges in $S(e)$. More recently, Garet, Marchand, Procaccia and the second author extended in \cite{GaretMarchandProcacciaTheret} these results to the case where the probability measures considered are defined on $[0,+\infty]$, as soon as the percolation of edges with finite passage times is supercritical. To this end, they needed to perform a rescaling argument, since for $M$ large enough the percolation of edges with passage times smaller than $M$ can be chosen supercritical, but not highly supercritical as required to use the technology of the contours. The study of the existence of the time constant without any moment condition and the study of the continuity of the time constant with respect to the distribution of the passage times of the edges are closely related. Indeed, in the given proofs of the continuity of the time constant, the following results are used: \begin{itemize} \item the time constant $\mu_F$ is the a.s. limit of a subadditive process, \item this subadditive process is integrable (for any distribution $F$ of the passage times, even with infinite mean), \item this subadditive process is monotone with respect to the distribution of the passage times. \end{itemize} Moreover, the technology used to prove the key Lemma \ref{l:key} (using the contours) is directly inspired by the study of the existence of the time constant without any moment condition. The proof of the continuity of the flow constant, Theorem \ref{thmcont}, that we propose in this paper is heavily influenced by the proofs of the continuity of the time constant given in \cite{CoxKesten,Kesten:StFlour,GaretMarchandProcacciaTheret}. The real difficulty of our work is to extend the definition of the flow constant to probability measures with infinite mean; once this is done, it is harmless to admit probability measures $G$ on $[0,+\infty]$ such that $G(\{+\infty\}) <p_c(d)$, and we do not even have to use a renormalization argument. We choose to define the flow constant $\nu_G$ via \eqref{defnu} so that the result equivalent to Lemma \ref{l:key} in our setting is given by the precise definition of $\nu_G$. However, two major issues remain: \begin{itemize} \item[$(i)$] prove that $\nu_G$ is indeed the limit of some quite natural sequence of maximal flows, \item[$(ii)$] prove that $\nu_G$ can be recovered as the limit of a nice subadditive process. \end{itemize} The first point, $(i)$, is precisely the object of Theorem \ref{t:CV}, which we prove in Section \ref{s:CV}. Unsurprisingly, the difficulties that we avoid when proving the result equivalent to Lemma \ref{l:key} for the flow constant reappear in the proof of this convergence, see Proposition \ref{p:tronquer}.
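To make the parallel with the time constant concrete, both objects of this comparison, the passage times $T_F$ and their truncated versions $T_{F^K}$, are easy to simulate. The sketch below is illustrative only (Python with networkx; the Pareto passage times, which have infinite mean, and the finite window of half-width $w$ around the segment $[0, ne_1]$, which only yields an upper bound on $T_F(0,ne_1)$, are arbitrary choices): it computes $T_F(0,ne_1)/n$ and $T_{F^K}(0,ne_1)/n$ by Dijkstra's algorithm, the truncated time being the quantity controlled by Lemma \ref{l:key}.

\begin{verbatim}
# Illustrative sketch only: passage times T_F(0, n e_1)/n and their
# truncation at K, computed by Dijkstra's algorithm inside a finite
# window of Z^2.  Requires networkx; F is Pareto with infinite mean.
import networkx as nx
import random

random.seed(2)
n, w, K = 30, 10, 5.0
G = nx.grid_2d_graph(n + 1, 2 * w + 1)   # finite window around [0, n e_1]
for e in G.edges:
    t = random.paretovariate(0.5)        # t_F(e), with E[t_F(e)] = +infinity
    G.edges[e]["t"] = t                  # original passage time
    G.edges[e]["tK"] = min(t, K)         # truncated passage time t_{F^K}(e)

src, dst = (0, w), (n, w)                # 0 and n e_1, shifted into the window
T  = nx.dijkstra_path_length(G, src, dst, weight="t")
TK = nx.dijkstra_path_length(G, src, dst, weight="tK")
print(TK / n, T / n)                     # T_{F^K} <= T_F by construction
\end{verbatim}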
The maximal flows that converge towards $\nu_G$ are perhaps the most natural ones, {\em i.e.}, maximal flows from the top to the bottom of flat cylinders, and the convergence holds a.s., {\em i.e.}, in a strong sense, which is quite satisfying. It is worth noticing that in fact, to prove the a.s. convergence in tilted cylinders when $\nu_G=0$ (see Proposition \ref{p:zeroter}), we use the continuity of the flow constant; without this property, we obtain only a convergence in probability. However, to obtain a convergence (at least in probability) of these maximal flows towards $\nu_G$, we do not have to exhibit a subadditive process converging towards $\nu_G$. The existence of such a nice subadditive process, {\em i.e.}, the point $(ii)$ above, is nevertheless needed to prove the continuity of the flow constant. In Section \ref{s:ssadd}, we define such a process and prove its convergence towards $\nu_G$ (see Theorem \ref{t:ssadd}). Finally in Section \ref{s:cont} we prove the continuity of the flow constant, Theorem \ref{thmcont}. Before starting these proofs, we give in the next section some additional notations. \subsection{More notations} We need to introduce a few more notations that will be useful. Given a unit vector $\vec{v} \in \mathbb{S}^{d-1}$ and a non-degenerate hyperrectangle $A$ normal to $\vec{v}$, $\hyp (A)$ denotes the hyperplane spanned by $A$ defined by $$ \hyp (A) \,=\, \{ x+ \vec w \,:\, x\in A \,,\, \vec w \cdot \vec{v} =0 \} $$ where $\cdot$ denotes the usual scalar product on $\mathbb{R}^d$. For a positive real $h$, we already defined $\cyl (A,h)$ as the cylinder of height $2h$ with base $A-h\vec{v}$ and top $A+h\vec{v}$, see Equation \eqref{e:defcyl}. It will sometimes be useful to consider the cylinder $\cyl^{\vec{v}} (A,h)$ with height $h$, base $A$ and top $A+h\vec{v}$, {\em i.e.}, $$ \cyl^{\vec{v}} (A,h) \,=\, \{ x+t \vec{v} \,:\, x\in A\,,\, t\in [0,h] \} \,,$$ and the maximal flow $\phi_G^{\vec{v}} (A,h)$ from the discrete version of its top $$ B_1^{\vec{v}}(A,h) \,=\, \{ x \in \mathbb{Z}^d \cap \cyl^{\vec{v}} (A,h) \,:\, \exists y \notin \cyl^{\vec{v}}(A,h) \,,\, \langle x ,y \rangle \in \mathbb{E}^d \textrm{ and $\langle x ,y \rangle$ intersects } A+h \vec{v} \} $$ to the discrete version of its bottom $$ B^{\vec{v}}_2(A,h) \,=\, \{ x \in \mathbb{Z}^d \cap \cyl^{\vec{v}} (A,h) \,:\, \exists y \notin \cyl^{\vec{v}}(A,h) \,,\, \langle x,y \rangle \in \mathbb{E}^d \textrm{ and $\langle x ,y \rangle$ intersects } A \}\,.$$ Some sets can be seen as sets of edges or vertices, thus when looking at their cardinality it is convenient to specify whether we count the number of edges or the number of vertices in the set. The notation $\carde (\cdot)$ denotes the number of edges in a set whereas $\cardv (\cdot)$ denotes the number of vertices. Given a probability measure $G$ on $[0,+\infty]$, a constant $K\in ]0,+\infty[$ and a vertex $x\in \mathbb{Z}^d$ (respectively an edge $f\in \mathbb{E}^d$), we denote by $C_{G,K} (x)$ (resp. $C_{G,K} (f)$) the connected component of $x$ (resp. the union of the connected components of the two endpoints of $f$) in the percolation $(\mathds{1}_{t_G(e) > K}, e\in \mathbb{E}^d)$, which can be seen as an edge set and as a vertex set.
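In the straight two-dimensional case, the discrete cylinders just introduced and their discrete tops and bottoms can be enumerated explicitly; the following plain Python sketch (illustrative only, with arbitrary dimensions $a$ and $h$) does so for $A=[0,a]\times\{0\}$ and $\vec v = \vec v_0 = (0,1)$, for which $\cyl^{\vec v}(A,h) = [0,a]\times[0,h]$.

\begin{verbatim}
# Illustrative sketch only (d = 2, straight case): the discrete cylinder
# cyl^v(A, h) for A = [0, a] x {0} and v = (0, 1), with its discrete top
# B_1 and bottom B_2.  The values of a and h are arbitrary.
a, h = 5, 3
cyl = {(i, j) for i in range(a + 1) for j in range(h + 1)}

# x belongs to the top (resp. bottom) if some edge <x, y> with y outside
# the cylinder crosses the face A + h v (resp. the face A).
B1 = {(i, h) for i in range(a + 1)}   # edge to (i, h + 1) crosses A + h v
B2 = {(i, 0) for i in range(a + 1)}   # edge to (i, -1) crosses A

assert B1 <= cyl and B2 <= cyl and not (B1 & B2)
print(sorted(B1), sorted(B2))
\end{verbatim}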
For any vertex set $C \subset \mathbb{Z}^d$, we denote by $\diam (C)$ the diameter of $C$, $\diam (C) = \sup\{ \|x-y\|_2 \,:\, x,y\in C \cap \mathbb{Z}^d \}$, by $\partial_{\textrm{e}} C$ its exterior edge boundary defined by $$ \partial_{\textrm{e}} C \,=\, \{ e = \langle x,y \rangle \in \mathbb{E}^d \,:\, x\in C \,,\, y\notin C \textrm{ and there exists a path from $y$ to infinity in } \mathbb{Z}^d \smallsetminus C\}\,, $$ and by $\partial_{\textrm{v}} C$ its exterior vertex boundary defined by $$ \partial_{\textrm{v}} C \,=\, \{ x\in \mathbb{Z}^d \,:\, x\notin C \,,\, \exists y\in C \textrm{ s.t. } \langle x,y \rangle \in \mathbb{E}^d\}\,. $$ Given a set $E$ of edges, we can also define its diameter $\diam (E)$ as the diameter of the vertex set made of the endpoints of the edges of $E$. We also define its exterior $\ext (E)$ by $$ \ext (E) \,=\, \{ x\in \mathbb{Z}^d \,:\, \textrm{there exists a path from $x$ to infinity in }\mathbb{E}^d \smallsetminus E \} $$ and its interior $$ \intt (E) \,=\, \mathbb{Z}^d \smallsetminus \ext (E)\,. $$ Notice that by definition, $C \subset \intt (\partial_{\textrm{e}} C)$ and if $C$ is bounded and $x \in \intt (\partial_{\textrm{e}} C)$, then $ \partial_{\textrm{e}} C$ separates $x$ from infinity. For any vertices $x$ and $y$, for any probability measure $G$ on $[0,+\infty]$ and any $K\in ]0,+\infty]$, one of the three following situations occurs: \begin{itemize} \item[$(i)$] $\partial_{\textrm{e}} C_{G,K} (x)=\partial_{\textrm{e}} C_{G,K} (y)$; \item[$(ii)$] $\intt (\partial_{\textrm{e}} C_{G,K} (x)) \cap \intt (\partial_{\textrm{e}} C_{G,K} (y)) = \emptyset $; \item[$(iii)$] $ \intt (\partial_{\textrm{e}} C_{G,K} (x)) \subset \intt (\partial_{\textrm{e}} C_{G,K} (y))$, or $ \intt (\partial_{\textrm{e}} C_{G,K} (y) )\subset \intt (\partial_{\textrm{e}} C_{G,K} (x))$. \end{itemize} Case $(i)$ corresponds to the case where $x$ and $y$ are connected in the percolation $(\mathds{1}_{t_G(e) > K}, e\in \mathbb{E}^d)$, whereas cases $(ii)$ and $(iii)$ correspond to the case where $x$ and $y$ are not connected, thus their connected components for this percolation are disjoint. Case $(iii)$ corresponds to the case where $x\in \intt( \partial_{\textrm{e}} C_{G,K} (y))$ (thus $C_{G,K} (x)$ is nested in a hole of $C_{G,K} (y)$ inside $\intt (\partial_{\textrm{e}} C_{G,K} (y))$) or conversely, whereas case $(ii)$ corresponds to the case where $x\in \ext( \partial_{\textrm{e}} C_{G,K} (y))$ and $y\in \ext (\partial_{\textrm{e}} C_{G,K} (x))$. For any subset $\mathcal{C}$ of $\mathbb{R}^d$ and any $h\in \mathbb{R}^+$, we denote by $\mathcal{E}_{G,K} (\mathcal{C}, h)$ the following event \begin{equation} \label{e:E} \mathcal{E}_{G,K} (\mathcal{C}, h) \,=\, \bigcap_{x\in \mathcal{C} \cap \mathbb{Z}^d} \{ \diam (C_{G,K} (x)) < h \}\,, \end{equation} and by $\mathcal{E}'_{G,K} (\mathcal{C}, h)$ the corresponding event involving edges instead of vertices \begin{equation} \label{e:E'} \mathcal{E}'_{G,K} (\mathcal{C}, h) \,=\, \bigcap_{e\in \mathcal{C} \cap \mathbb{E}^d} \{ \diam (C_{G,K} (e)) < h \}\,. \end{equation} In what follows $c_d$ denotes a constant that depends only on the dimension $d$ and may change from one line to another. Notice that for any finite and connected set $C$ of vertices, $\carde (\partial_{\textrm{e}} C) \leq c_d \cardv (C)$.
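These notions are conveniently computed by a breadth-first exploration of the percolation $(\mathds{1}_{t_G(e)>K}, e\in\mathbb{E}^d)$. The following plain Python sketch is illustrative only: it works in a finite window of $\mathbb{Z}^2$ (so paths to infinity are not tested, and the exterior boundaries above are approximated by the full vertex boundary), explores $C_{G,K}(x)$, and returns its cardinality, its diameter and its vertex boundary; the parameters $N$ and $K$ and the exponential capacities are arbitrary choices.

\begin{verbatim}
# Illustrative sketch only: BFS exploration of C_{G,K}(x) for the
# percolation (1_{t_G(e) > K}, e in E^2), inside the finite window
# [-N, N]^2 (so "paths to infinity" are not tested).
import random
from collections import deque

random.seed(3)
N, K = 20, 1.5
capacities = {}

def t_G(e):
    # i.i.d. exponential capacities, sampled lazily and cached per edge
    if e not in capacities:
        capacities[e] = random.expovariate(1.0)
    return capacities[e]

def edge(u, v):
    # normalized representation of the undirected edge <u, v>
    return (min(u, v), max(u, v))

def cluster(x):
    # connected component of x among edges e with t_G(e) > K
    C, queue = {x}, deque([x])
    while queue:
        u = queue.popleft()
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + d[0], u[1] + d[1])
            if max(abs(v[0]), abs(v[1])) <= N and v not in C \
               and t_G(edge(u, v)) > K:
                C.add(v)
                queue.append(v)
    return C

C = cluster((0, 0))
diam = max(((p[0] - q[0])**2 + (p[1] - q[1])**2)**0.5 for p in C for q in C)
boundary = {(u[0] + d[0], u[1] + d[1]) for u in C
            for d in ((1, 0), (-1, 0), (0, 1), (0, -1))} - C
print(len(C), diam, len(boundary))
\end{verbatim}

Since $\mathbb{P}(t_G(e) > K) = e^{-K}$ is well below $p_c(2) = 1/2$ for this choice of $K$, the explored clusters are typically small, in line with the exponential bounds used later in the proofs.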
For two probability measures $H$ and $G$ on $[0,+\infty]$, we define the following stochastic domination relation: \begin{equation} \label{e:defdomsto} G\preceq H \quad \iff \quad \forall t\in [0,+\infty[ \quad G([t,+\infty]) \leq H([t,+\infty])\,. \end{equation} In what follows, we always build the capacities of the edges for different distributions by coupling, using a family of i.i.d. random variables with uniform distribution on $]0,1[$ and the pseudo-inverse of the distribution function of these distributions. Thus the stochastic comparison between probability measures $H$ and $G$ on $[0,+\infty]$ implies a simple comparison between the corresponding capacities of the edges: \begin{equation} \label{e:couplage} G\preceq H \quad \Longrightarrow \quad \forall e\in \mathbb{E}^d \quad t_G(e) \leq t_H (e) \,. \end{equation} \section{Convergence of the maximal flows} \label{s:CV} This section is devoted to the proof of Theorem \ref{t:CV}. \subsection{Properties of $\nu_G$} First we investigate the positivity of $\nu_G$ as defined by \eqref{defnu}. \begin{prop} \label{p:trivial} Let $G$ be a probability measure on $[0, +\infty]$ such that $G(\{+\infty\})<p_c(d)$. For every $\vec{v} \in \mathbb{S}^{d-1}$, we have $$ \nu_G (\vec{v}) = 0 \quad \iff \quad G(\{0\})\geq 1-p_c(d)\,. $$ \end{prop} \begin{dem} By the above coupling, see Equation \eqref{e:couplage}, for any such probability measure $G$, for any $0<K_1\leq K_2$, for any $\vec{v}\in \mathbb{S}^{d-1}$, for any hyperrectangle $A$ and any $h\in \mathbb{R}^+$, we have $$ \phi_{G^{K_1}} (A,h) \,\leq \, \phi_{G^{K_2}} (A,h) \,.$$ By definition of $\nu_{G^K} (\vec{v})$ (see Theorem \ref{t:oldCV}), this proves that $K\mapsto \nu_{G^K} (\vec{v})$ is non-decreasing. Thus $\nu_G (\vec{v}) = 0$ if and only if for every $K\in \mathbb{R}^+$, $\nu_{G^K} (\vec{v}) = 0$. By Theorem \ref{t:oldnul}, we know that $\nu_{G^K} (\vec{v}) = 0$ if and only if $G^K (\{0\}) \geq 1-p_c(d)$. But $G^K (\{0\})=G (\{0\})$ for all $K>0$, thus Proposition \ref{p:trivial} is proved. \end{dem} We now state a stochastic domination result, in the spirit of Fontes and Newman \cite{FontesNewman}, which will be useful to prove that $\nu_G$ is finite, and will be used again in Section~\ref{subsec:truncating}. \begin{lem} \label{lem:domsto} Let $W=\{x_1,\ldots,x_n\}$ be a finite subset of $\mathbb{Z}^d$. Consider an i.i.d. Bernoulli bond percolation on $\mathbb{Z}^d$. For $i=1,\ldots,n$, define $Z_i = Z(x_i)$ to be $\cardv (C(x_i))$, where $C(x_i)$ is the connected component of $x_i$ for the underlying percolation. Let $Y_1=Z_1$ and define recursively $Y_i$ for $i=2,\ldots,n$ by $$Y_{i}=\left\lbrace\begin{array}{ll}Z_{i}&\text{ if }x_i\not\in \cup_{j=1}^{i-1}C(x_j)\\0 & \text{ if }x_i\in \cup_{j=1}^{i-1}C(x_j) \,.\end{array}\right.$$ Let also $(X_i, i\in \{1,\ldots,n\})$ be a family of i.i.d. random variables distributed as $Z_1 = \cardv (C(x_1))$. Then, for all $a$, $a_1,\ldots,a_n$ in $\mathbb{R}$, $$\mathbb{P}\left(\sum_{i=1}^nY_i\geq a\text{ and }\forall i=1,\ldots ,n,\;Y_i\geq a_i\right)\leq \mathbb{P}\left(\sum_{i=1}^n X_i\geq a\text{ and }\forall i=1,\ldots ,n,\;X_i\geq a_i \right)\,.$$ \end{lem} \begin{dem} For any $i$, let $\mathcal{F}_i$ be the sigma-field generated by the successive exploration of $C(x_1)$, $C(x_2)$, \ldots, $C(x_i)$. The conditional distribution of $C(x_{i})$ given $\mathcal{F}_{i-1}$ is the same as its conditional distribution given $\bigcup_{j=1}^{i-1}C(x_j)$.
Then, conditionally on the event $\{\bigcup_{j=1}^{i-1}C(x_j)=B\}$, $Y_i=0$ if $x_i\in B$, and $Y_i$ is distributed like the cardinality of the cluster of $x_i$ in $\mathbb{Z}^d\setminus B$ if $x_i\not \in B$. Thus the distribution of $Y_i$ conditionally on $\mathcal{F}_{i-1}$ is stochastically dominated by that of $X_i$. A straightforward induction gives the result. \end{dem} We now state that the constant $\nu_G$ is finite. \begin{prop} \label{p:finitude} For any probability measure $G$ on $[0, +\infty]$ such that $G(\{+\infty\})<p_c(d)$, for any $\vec{v}\in \mathbb{S}^{d-1}$, $\nu_G (\vec{v}) < +\infty$. \end{prop} \begin{dem} Let $G$ be a probability measure on $[0,+\infty]$ such that $G(\{+\infty\})<p_c(d)$. Let $\vec{v}\in \mathbb{S}^{d-1}$ be a unit vector, let $A$ be a non-degenerate hyperrectangle normal to $\vec{v}$ containing the origin $0$ of the graph, and let $h: \mathbb{N} \mapsto \mathbb{R}^+$ be mild. Let $K_0 <\infty$ be large enough such that $G(]K_0,+\infty]) < p_c(d)$. We recall that for every $x\in \mathbb{Z}^d$, $C_{G,K_0} (x)$ is the connected component of $x$ in the percolation $(\mathds{1}_{t_G(e)> K_0}, e \in \mathbb{E}^d)$. We recall that $\mathcal{E}_{G,K_0}(\cyl(pA,h(p)),h(p))$ denotes the event $$ \mathcal{E}_{G,K_0}(\cyl(pA,h(p)),h(p)) \,=\, \bigcap_{ x \in \cyl(pA, h(p))\cap \mathbb{Z}^d} \{ \diam (C_{G,K_0} (x)) < h(p) \} \,.$$ To every $x\in B_2(pA, h(p))$, the bottom of the cylinder $\cyl(pA, h(p))$, we associate $S(x) = \partial_{\textrm{e}} C_{G,K_0} (x)$. Some of the sets $S(x)$ may be equal, thus we denote by $(S_i)_{i=1,\dots , r}$ the collection of disjoint edge sets we obtain (notice that by construction for every $i\neq j$, $S_i \cap S_j = \emptyset$). For every $i\in \{1,\dots , r\}$, let $z_i \in B_2 (pA, h(p))$ be such that $S_i = S(z_i)$. We consider the set of edges $$ E(p) \,=\, \bigcup_{i=1}^{r} \left( S_i \cap \cyl(pA, h(p)) \right) \,. $$ On the event $\mathcal{E}_{G,K_0}(\cyl(pA,h(p)),h(p))$, the set $E(p)$ is a cutset that separates the top $B_1(pA, h(p))$ from the bottom $B_2(pA, h(p))$ of $\cyl(pA, h(p))$. Indeed, let $\gamma = (x_0, e_1, x_1 , \dots ,e_n, x_n )$ be a path from the bottom to the top of $\cyl(pA, h(p))$. There exists $i\in \{1,\dots ,r\}$ such that $x_0 \in \intt (S_i) = \intt ( \partial_{\textrm{e}} C_{G,K_0} (z_i))$. Since $z_i \in B_2(pA, h(p))$ and $x_n \in B_1(pA, h(p))$ we get $\|z_i-x_n\|_2 \geq 2h(p) - 2 \geq h(p)$ (at least for $p$ large enough), thus on $\mathcal{E}_{G,K_0}(\cyl(pA,h(p)),h(p))$ we know that $x_n \notin \intt (\partial_{\textrm{e}} C_{G,K_0} (z_i))$. Let $$ k_0 \,=\, \min \{ k \in \{0,\dots , n\} \,:\, x_k \notin \intt (\partial_{\textrm{e}} C_{G,K_0} (z_i)) \} \,.$$ Then $k_0 \in \{1, \dots , n\}$, $x_{k_0} \notin \intt (\partial_{\textrm{e}} C_{G,K_0} (z_i))$ and $x_{k_0 -1} \in \intt (\partial_{\textrm{e}} C_{G,K_0} (z_i))$, thus $e_{k_0} \in \partial_{\textrm{e}} C_{G,K_0} (z_i) = S_i $. Since $e_{k_0} \in \gamma \subset \cyl(pA, h(p))$, we conclude that $e_{k_0} \in E(p) \cap \gamma$, thus $E(p)$ cuts the top from the bottom of $\cyl (pA, h(p))$. For any vertex $x$, by definition of $C_{G,K_0} (x)$ we know that if $e\in \partial_{\textrm{e}} C_{G,K_0} (x)$ then $t_G(e) \leq K_0$. By definition of $\phi_G (pA, h(p))$, we deduce that on the event $\mathcal{E}_{G,K_0} (\cyl(pA,h(p)),h(p))$ we have \begin{equation*} \phi_{G} (pA, h(p)) \,\leq \, T_G (E(p)) \,\leq \, K_0 \carde (E(p)) \,.
\end{equation*} For every $\beta >0$, we obtain that \begin{align*} \mathbb{P} [\phi_{G} & (pA, h(p)) \geq \beta \mathcal{H}^{d-1} (pA)] \\ & \,\leq \, \mathbb{P}[ \mathcal{E}_{G,K_0} (\cyl(pA,h(p)),h(p)) ^c] + \mathbb{P} \left[\carde (E(p)) \geq \frac{\beta \mathcal{H}^{d-1} (pA)}{K_0} \right]\\ & \,\leq \, \cardv (\cyl(pA, h(p))\cap \mathbb{Z}^d ) \mathbb{P} [\diam (C_{G,K_0} (0)) \geq h(p)] + \mathbb{P} \left[ \sum_{i=1}^{r} \carde (S_i) \geq \frac{\beta \mathcal{H}^{d-1} (pA)}{K_0} \right]\,. \end{align*} We now want to use the stochastic comparison given by Lemma \ref{lem:domsto}. Consider the set of vertices $W=B_2(pA,h(p))$, the percolation $(\mathds{1}_{t_G(e) >K_0} , e\in \mathbb{E}^d)$, and associate with each vertex $x\in W$ the variable $Z(x) = \cardv (C_{G,K_0}(x))$. We put an order on $W$ and build the variables $(Y(x), x\in W)$ as in Lemma \ref{lem:domsto}. Then $$ \sum_{i=1}^r \carde (S_i) \,=\, \sum_{i=1}^r \carde (\partial_{\textrm{e}} C_{G,K_0} (z_i)) \,\leq\, c_d \sum_{i=1}^r \cardv (C_{G,K_0} (z_i) ) \,=\, c_d \sum_{i=1}^r Z (z_i) \,\leq\, c_d \sum_{x\in W} Y(x) $$ since the vertices $z_i$ have been chosen in $W$ such that the sets $C_{G,K_0} (z_i)$ are disjoint. By Lemma \ref{lem:domsto}, noticing that $\cardv (W) \leq c_d \lfloor \mathcal{H}^{d-1} (pA) \rfloor$, we obtain \begin{align*} \mathbb{P} [\phi_{G} (pA, h(p)) \geq \beta \mathcal{H}^{d-1} (pA)] & \,\leq \, c_d \mathcal{H}^{d-1} (pA) h(p) \, \mathbb{P} [\diam (C_{G,K_0} (0)) \geq h(p)] \\ & \quad + \mathbb{P} \left[ \sum_{i=1}^{c_d \lfloor \mathcal{H}^{d-1} (pA) \rfloor} X_i \geq \frac{\beta \mathcal{H}^{d-1} (pA)}{K_0c_d} \right] \end{align*} where the variables $X_i$ are i.i.d. with the same distribution as $\cardv ( C_{G,K_0} (0))$. Since $G(]K_0, +\infty]) < p_c(d)$, the percolation $(\mathds{1}_{t_G(e) > K_0}, e\in \mathbb{E}^d)$ is subcritical, thus $$ \mathbb{P} [X_1 \geq k] \,\leq \, \kappa_1 e^{-\kappa_2 k} $$ and \begin{equation} \label{e:*} \mathbb{P} [\diam (C_{G,K_0} (0)) \geq k] \,\leq\, \kappa_1 e^{-\kappa_2 k} \,, \end{equation} where $\kappa_i$ are constants depending only on $d$ and $G(]K_0, +\infty])$, see for instance Theorems (6.1) and (6.75) in \cite{grimmettt:percolation}. Thus there exists $\lambda (G,d) >0$ such that $\mathbb{E}[\exp (\lambda X_1)] < \infty$, and we get \begin{align} \label{e:hop3} \mathbb{P} [\phi_{G} (pA, h(p)) \geq \beta \mathcal{H}^{d-1} (pA)] & \,\leq \, c_d \mathcal{H}^{d-1} (pA) h(p) \kappa_1 e^{-\kappa_2 h(p)} \nonumber \\ &\quad \quad + \mathbb{E}[\exp (\lambda X_1)]^{c_d \mathcal{H}^{d-1} (pA)} e^{-\lambda \beta \mathcal{H}^{d-1} (pA)/K_0}\,. \end{align} Since $\lim_{p\rightarrow \infty} h(p)/\log p = +\infty$, the first term of the right-hand side of \eqref{e:hop3} vanishes when $p$ goes to infinity. We can choose $\beta (G,d)$ large enough such that the second term of the right-hand side of \eqref{e:hop3} vanishes too when $p$ goes to infinity, and we get $$ \lim_{p\rightarrow \infty} \mathbb{P} [\phi_{G} (pA, h(p)) \geq \beta \mathcal{H}^{d-1} (pA)] \,=\, 0\,.$$ Since for every $K\in \mathbb{R}^+$, $\phi_{G^K} (pA, h(p)) \leq \phi_G (pA, h(p))$ by coupling (see Equation \eqref{e:couplage}), we get for the same $\beta$ that \begin{equation} \label{e:hop1} \forall K\in \mathbb{R}^+\,, \quad \lim_{p\rightarrow \infty} \mathbb{P} [\phi_{G^K} (pA, h(p)) \geq \beta \mathcal{H}^{d-1} (pA)] \,=\, 0\,.
\end{equation} By Theorem \ref{t:oldCV}, we know that for every $K\in \mathbb{R}^+$, \begin{equation} \label{e:hop2} \nu_{G^K} (\vec{v}) \,=\, \lim_{p\rightarrow \infty} \frac{\phi_{G^K} (pA, h(p)) }{ \mathcal{H}^{d-1} (pA)} \qquad \textrm{a.s.} \end{equation} Combining \eqref{e:hop1} and \eqref{e:hop2} we conclude that $ \nu_{G^K} (\vec{v}) \leq \beta$ for all $K$, thus $\nu_G(\vec{v}) = \lim_{K\rightarrow \infty} \nu_{G^K} (\vec{v}) \leq \beta < \infty $. This ends the proof of Proposition \ref{p:finitude}. \end{dem} Finally we state that $\nu_G$ satisfies a weak triangle inequality. \begin{prop} \label{p:cvx} Let $G$ be a probability measure on $[0,+\infty]$ such that $G(\{+\infty\})<p_c(d)$. Let $(ABC)$ be a non-degenerate triangle in $\mathbb{R}^d$ and let $\vec{v}_A, \vec{v}_B$ and $\vec{v}_C$ be the exterior normal unit vectors to the sides $[BC], [AC], [AB]$ in the plane spanned by $A,B,C$. Then $$ \mathcal{H}^1 ([AB]) \nu_G (\vec{v}_C) \,\leq\, \mathcal{H}^1 ([AC]) \nu_G (\vec{v}_B) + \mathcal{H}^1 ([BC]) \nu_G (\vec{v}_A) \,.$$ As a consequence, the homogeneous extension of $\nu_G$ to $\mathbb{R}^d$, defined by $$ \nu_G(0) \,=\, 0 \quad \textrm{and} \quad \forall \vec{w}\in \mathbb{R}^d\smallsetminus\{0\} \,, \,\, \nu_G(\vec{w}) \,=\, \|\vec{w}\|_2 \nu_G \left( \frac{\vec{w}}{\|\vec{w}\|_2} \right) $$ is a convex function. \end{prop} This proposition is a direct consequence of the corresponding property already known for $G^K$ for all $K$, see Proposition 4.5 in \cite{RossignolTheret08b} (see also Proposition 11.6 and Corollary 11.7 in \cite{Cerf:StFlour}). \subsection{Truncating capacities} \label{subsec:truncating} We first need a new definition. Given a probability measure $G$ on $[0, +\infty]$, a unit vector $\vec{v} \in \mathbb{S}^{d-1}$, a non-degenerate hyperrectangle $A$ normal to $\vec{v}$ and a height function $h:\mathbb{N} \rightarrow \mathbb{R}^+$, we denote by $E_{G} (pA, h(p))$ a (random) cutset that separates the top from the bottom of the cylinder $\cyl (pA, h(p))$ with minimal capacity, {\em i.e.}, such that $\phi_G (pA, h(p)) = T_G (E_G (pA, h(p)))$, and with minimal cardinality among these cutsets, ties being broken by a deterministic rule. Furthermore, in this section, if $E\subset \mathbb{E}^d$ is a set of edges and $C=\cyl(A,h)$ a cylinder, we shall say that $E$ \emph{ cuts $C$ efficiently} if it cuts the top of $C$ from its bottom and no proper subset of $E$ does. Notice that $E_{G} (pA, h(p))$ cuts $\cyl (pA, h(p))$ efficiently. \begin{prop} \label{p:tronquer} Let $G$ be a probability measure on $[0,+\infty]$ such that $G(\{+\infty\})<p_c(d)$. Then, for any $\varepsilon>0$ and $\alpha >0$, there exist constants $K_1 $ and $C<1$ such that for every $K\geq K_1$, every unit vector $\vec{v}\in \mathbb{S}^{d-1}$, every non-degenerate hyperrectangle $A$ normal to $\vec{v}$, every mild height function $h: \mathbb{N} \mapsto \mathbb{R}^+$, and for every $p\in \mathbb{N}^+$ large enough, we have $$\mathbb{P}\left[ \phi_{G}(pA, h(p))\geq \phi_{G^K}(pA, h(p))+\varepsilon p^{d-1}\text{ and }\carde (E_{G^K}(pA, h(p)) )\leq \alpha p^{d-1} \right] \leq C^{h(p)}\,.$$ \end{prop} Let us say a few words about the proof before starting it. Proposition \ref{p:tronquer} is the equivalent of Lemma \ref{l:key} in the study of the time constant. The proof of Proposition \ref{p:tronquer} is thus inspired by the proof of Lemma \ref{l:key}. The spirit of the proof is the following: we consider a cutset $E$ which is minimal for the truncated $G^K$-capacities.
Our goal is to construct a new cutset $E'$ whose $G$-capacity is not much larger than the $G^K$-capacity of $E$. To obtain this cutset $E'$, we remove from $E$ the edges with huge $G$-capacities, and replace them by some local cutsets whose $G$-capacity is well behaved. In fact, the construction of these local modifications of $E$ is in a sense more natural when dealing with cutsets rather than geodesics. Before embarking on the proof of Proposition~\ref{p:tronquer}, let us state a lemma related to renormalization of cuts. For a fixed $L\in \mathbb{N}^*$, we define $\Lambda_L = [-L/2,L/2]^d $, and we define the family of $L$-boxes by setting, for $\mathbf{i}\in \mathbb{Z}^d$, \begin{equation} \label{e:Lambda} \Lambda_L (\mathbf{i}) \,=\, \{ x+L \mathbf{i} \,:\, x\in \Lambda_L \} \,. \end{equation} The box $\Lambda_L (\mathbf{i})$ is the translate of the box $\Lambda_L$ by the vector $L \mathbf{i}\in \mathbb{Z}^d$. A lattice animal is a finite $\mathbb{Z}^d$-connected set of vertices. For $E\subset\mathbb{E}^d$, let $$\Gamma(E) \,=\, \{ \mathbf{j} \in \mathbb{Z}^d \,:\, E \cap \Lambda_L(\mathbf{j}) \neq \emptyset \} $$ be the set of all $L$-boxes that $E$ intersects. \begin{lem} \label{lem:controleanimal} Let $A$ be a non-degenerate hyperrectangle and $h$ a positive real number. Let $l(A,h)$ denote the minimum of the edge-lengths of $\cyl (A,h)$. Suppose that $E\subset\mathbb{E}^d$ cuts $\cyl(A, h)$ efficiently. Then, $\Gamma(E)$ is a lattice animal. Furthermore, there exists a constant $c_d$ depending only on $d$ such that if $l(A,h)\geq c_d L$, \begin{equation} \label{e:controleanimal} \cardv (\Gamma (E)) \,\leq\, c_d \frac{\carde (E)}{L} \,. \end{equation} \end{lem} \begin{dem} Let us prove that $\Gamma(E)$ is a lattice animal. Since $E$ cuts $\cyl (A,h)$ efficiently, we know that $E$ is, in a suitable sense, connected. More precisely, let us associate with any edge $e\in \mathbb{E}^d$ a small plaquette that is a hypersquare of side length $1$, that is normal to $e$, that cuts $e$ in its middle and whose sides are parallel to the coordinate hyperplanes. We associate with $E$ the set $E^*$ of all the plaquettes associated with the edges of $E$, and we can see $E^*$ as a subset of $\mathbb{R}^d$. Then $E^*$ is connected in $\mathbb{R}^d$ (see Lemma 3.17 in \cite{Kesten:flows} for dimension $3$; the proof can be adapted to any dimension). Thus $\Gamma(E)$ is $\mathbb{Z}^d$-connected. Now, let us prove~\eqref{e:controleanimal}. We shall denote by $\Lambda'_L(\mathbf{i})$ the enlarged box \begin{equation} \label{e:Lambda'} \Lambda'_L (\mathbf{i}) \,=\, \{ x+L \mathbf{i} \,:\, x\in \Lambda_{3L} \} \,=\, \bigcup_{ \mathbf{j} \in \mathbb{Z}^d \,:\, \|\mathbf{i}-\mathbf{j} \|_{\infty} \leq 1} \Lambda_L (\mathbf{j}) \,. \end{equation} First of all we prove that for every $\mathbf{i}\in \mathbb{Z}^d$, if $ E \cap \Lambda_L(\mathbf{i}) \neq \emptyset$, then $\carde (E \cap \Lambda'_L(\mathbf{i})) \geq L/2$. Let $e$ be an edge in $E \cap \Lambda_L(\mathbf{i})$. Since $E\smallsetminus \{e\}$ is not a cutset, there exists a path $\gamma = (x_0, e_1, x_1, \dots , e_n, x_n)$ in $\cyl(A, h)$ from the top to the bottom of $\cyl(A, h)$ such that $\gamma $ does not intersect $E\smallsetminus \{e\}$. Since $E$ is a cutset, this implies that $e\in \gamma$. We will prove that locally, inside $\Lambda'_L(\mathbf{i})\smallsetminus \Lambda_L(\mathbf{i})$, the set $E$ must contain at least $L/2$ edges.
To do so, we shall remove $e$ from $\gamma$ and construct of order $L$ possible bypasses of $e$ for $\gamma$ inside $\Lambda'_L(\mathbf{i})\smallsetminus \Lambda_L(\mathbf{i})$, {\em i.e.}, $L/2$ disjoint paths $\gamma'$ such that $\gamma' \subset \Lambda'_L(\mathbf{i})\smallsetminus \Lambda_L(\mathbf{i})$ and the concatenation of the two parts of $\gamma\smallsetminus \{e\}$ and of $\gamma'$ creates a path in $\cyl(A, h)$ from the top to the bottom of $\cyl(A, h)$, see Figure \ref{f:trunc2}. \begin{figure}[h!] \centering \input{trunc2.pdf_t} \caption{The path $\gamma$ and the sets $V_k$ ($d=2$).} \label{f:trunc2} \end{figure} For all $k\in [L/2, 3L/2] \cap \mathbb{N}$, let $V_k$ be the set of vertices that lie on the faces of $\Lambda_{2k} (\mathbf{i})$, {\em i.e.}, $$ V_k \,=\, \{x+L\mathbf{i} \,:\, x\in \partial \Lambda_{2k} \} $$ and let $E_k$ be the set of edges between vertices in $V_k$, $$ E_k \,=\, \{ \langle x, y \rangle \in \mathbb{E}^d \,:\, x,y \in V_k\} \,.$$ When looking at Figure \ref{f:trunc2}, one sees that the graph $(V_k,E_k)$ forms a kind of shell that surrounds the box $\Lambda_L(\mathbf{i})$. Then any two points $x,y\in V_k$ are connected by a path in $(V_k,E_k)$, and if $x,y$ both belong to $\cyl (A,h)$ and $h$ is at least $c_dL$ for some constant $c_d$ depending only on the dimension, then $x$ and $y$ are also connected by a path in $(V_k\cap \cyl (A,h),E_k\cap \cyl (A,h))$. Let $k\in [L/2, 3L/2] \cap \mathbb{N}$. We claim that the set $(\gamma\smallsetminus \{e\} )\cup (E_k \cap \cyl(A, h) )$ contains a path from the top to the bottom of $\cyl(A, h)$. Let us assume this for the moment, and finish the proof of the lemma. Since the set $(\gamma\smallsetminus \{e\} )\cup (E_k \cap \cyl(A, h) )$ contains a path from the top to the bottom of $\cyl(A, h)$, we know that $E_k$ must intersect the cutset $E$. Since the sets $E_k$ are disjoint, we conclude that $$ \carde (E \cap \Lambda'_L(\mathbf{i})) \,\geq\, \card ([L/2, 3L/2] \cap \mathbb{N}) \,\geq L/2\,. $$ This implies that \begin{align*} \frac{L}{2} \cardv (\Gamma (E)) & \,\leq\, \sum_{\mathbf{i}\in \Gamma(E)} \carde (E \cap \Lambda'_L(\mathbf{i})) \,\leq\, \sum_{\mathbf{i} \in \mathbb{Z}^d} \carde (E \cap \Lambda'_L(\mathbf{i}))\\ & \,\leq\, \sum_{\mathbf{i} \in \mathbb{Z}^d} \sum_{\mathbf{j} \in \mathbb{Z}^d\,:\, \|\mathbf{i} - \mathbf{j} \|_{\infty} \leq 1} \carde (E \cap \Lambda_L(\mathbf{j}))\\ & \,\leq\, \sum_{\mathbf{j} \in \mathbb{Z}^d} \carde (E \cap \Lambda_L(\mathbf{j})) \card (\{ \mathbf{i} \in \mathbb{Z}^d \,:\, \|\mathbf{i} - \mathbf{j} \|_{\infty} \leq 1\})\\ & \,\leq\, 3^d \sum_{\mathbf{j} \in \mathbb{Z}^d} \carde (E \cap \Lambda_L(\mathbf{j})) \,\leq \, c_d \carde (E)\,. \end{align*} It remains to prove the claim we have left aside, {\em i.e.}, that the set $(\gamma\smallsetminus \{e\} )\cup (E_k \cap \cyl(A, h) )$ contains a path from the top to the bottom of $\cyl(A, h)$. Suppose first that $x_0$ and $x_n$ do not belong to $\Lambda_{2k} (\mathbf{i})$. Then let $$ l_1 \,=\, \min \{ l \,:\, x_l \in \Lambda_{2k} (\mathbf{i}) \} \quad \textrm{and} \quad l_2 \,=\, \max \{ l\,:\, x_l \in \Lambda_{2k} (\mathbf{i}) \} \,,$$ see Figure \ref{f:trunc2}. There exists a path $\gamma'$ from $x_{l_1}$ to $x_{l_2}$ in $(V_k\cap \cyl (A,h),E_k\cap \cyl (A,h))$. We can now concatenate the paths $(x_0, e_1, \dots, x_{l_1})$, $\gamma'$ and $(x_{l_2}, \dots, x_n)$ to obtain a path from the top to the bottom of $\cyl(A, h)$. Suppose now that $x_0 \in \Lambda_{2k} (\mathbf{i})$.
Thus, if $l(A,h)$ is at least $c_d L$ for some constant $c_d$ depending only on the dimension, $x_n \notin \Lambda_{2k} (\mathbf{i})$ and there exists a vertex $y \in V_k \cap B_1 (A,h)$ ($B_1 (A, h)$ is the top of the cylinder). We define as previously $$l_2 \,=\, \max \{ l\,:\, x_l \in \Lambda_{2k}(\mathbf{i}) \} \,.$$ There exists a path $\gamma''$ from $y$ to $x_{l_2}$ in $(V_k\cap \cyl (A,h),E_k\cap \cyl (A,h))$, and we can concatenate $\gamma''$ with $(x_{l_2}, \dots, x_n)$ to obtain a path from the top to the bottom of $\cyl(A, h)$. We can perform the symmetric construction if $x_n \in \Lambda_{2k} (\mathbf{i})$. Thus for every $k\in [L/2, 3L/2] \cap \mathbb{N}$ the set $(\gamma\smallsetminus \{e\} )\cup (E_k \cap \cyl(A, h) )$ contains a path from the top to the bottom of $\cyl(A, h)$. \end{dem} {\bf Proof of Proposition~\ref{p:tronquer}:} Let $G$ be a probability measure on $[0,+\infty]$ such that $G(\{+\infty\})<p_c(d)$. We use the natural coupling $t_{G^K} (e) = \min (t_G(e),K)$ for all $e\in \mathbb{E}^d$. Let $K_0$ be such that $G(]K_0,+\infty])<p_c(d)$. We shall modify $E_{G^K}(pA, h(p))$ around the edges having too large $G$-capacities in order to obtain a cut whose capacity is close enough to $\phi_{G^K}(pA,h(p))$ (for $K$ large enough). We recall that $C_{G,K_0}(f)$ is the connected component of the edge $f$ in the percolation $(\mathds{1}_{t_{G}(e)> K_0}, e\in \mathbb{E}^d)$. For short, we write $S(e) = \partial_{\textrm{e}} C_{G,K_0} (e)$, the edge-boundary of $C_{G,K_0}(e)$ separating $e$ from infinity, see Figure \ref{f:trunc1}. \begin{figure}[h!] \centering \input{trunc1.pdf_t} \caption{The cutset $E_{G^K}(pA, h(p))$ and the set $S(e)$ for $e\in F(p)$ ($d=2$).} \label{f:trunc1} \end{figure} Define also $$F(p) \,=\, F_{G,K}(pA, h(p)) \,=\,\{e\in E_{G^K}(pA, h(p))\;:\;t_G(e)\geq K\}\;.$$ We collect all the sets $(S(e), e\in F(p))$. As in the proof of Proposition \ref{p:finitude}, from this collection we keep only one copy of each distinct edge set. We obtain a collection $(S_i)_{i=1, \dots , r}$ of disjoint edge sets. For every $i\in \{1,\dots, r\}$, let $f_i \in F(p)$ be such that $S_i = S(f_i)$. Let us define $$E'(p)\, =\, E_{G,K}'(pA,h(p))= \left( E_{G^K}(pA, h(p))\setminus F(p) \right) \cup \bigcup_{i=1}^{r}\left(S_i \cap\cyl(pA,h(p))\right)\;.$$ We consider the event $$\mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p))\,=\,\bigcap_{e\in \cyl(pA,h(p))\cap \mathbb{E}^d} \{\diam C_{G,K_0}(e) < h(p)\}\,.$$ First, we claim that on the event $\mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p))$, the set $E'(p)$ cuts the top from the bottom of $\cyl(pA,h(p))$. Indeed, suppose that $\gamma$ is a path in $\cyl(pA,h(p))$ joining its bottom to its top. Since $E_{G^K}(pA, h(p))$ is a cutset, there is an edge $e$ in $E_{G^K}(pA, h(p))\cap\gamma$. If $e$ does not belong to $F(p)$, then $e$ belongs to $E'(p)$ and thus $\gamma$ intersects $E'(p)$. If $e$ belongs to $F(p)$, denote by $x$ (resp. $y$) a vertex belonging to $\gamma$ and to the top of $\cyl(pA,h(p))$ (resp. to $\gamma$ and to the bottom of $\cyl(pA,h(p))$), and let $i\in \{1,\dots , r\}$ be such that $e\in \intt (S_i) = \intt (S(f_i)) $. On the event $\mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p))$, $x$ and $y$ cannot both belong to $\intt (S(f_i))$, otherwise $\diam C_{G,K_0}(f_i)$ would be at least $2h(p)-2 \geq h(p)$ (at least for $p$ large enough). Thus, $\gamma$ contains at least one vertex in $\ext S(f_i)$ and one vertex (any endpoint of $e$) in $ \intt (S(f_i))$.
Thus, at least one edge $e'$ of $\gamma$ must be in $S(f_i)$, and since $\gamma$ is included in $\cyl(pA,h(p))$, $e'$ must be in $\cyl(pA,h(p))\cap S(f_i)$. Thus $e'\in E'(p)$ and this proves that $E'(p)$ cuts the top from the bottom of $\cyl(pA, h(p))$. Now, on the event $\mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p))$ we get \begin{equation} \label{e:raf1} \phi_G(pA,h(p))\leq \phi_{G^K}(pA,h(p))+K_0 \, \sum_{i=1}^r \carde ( S_i ) \,. \end{equation} Moreover, still on the event $\mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p))$, we notice that if we replace a single edge $e$ of $F(p)$ by $\left(S(e) \cap\cyl(pA,h(p))\right)$ in $E_{G^K}(pA, h(p))$ we obtain a new set of edges that is still a cutset between the top and the bottom of $\cyl(pA, h(p))$ (this could be proved by a similar but simpler argument than the one presented to prove that $E'(p)$ is a cutset). By minimality of the capacity of $E_{G^K}(pA, h(p))$ among such cutsets, we deduce that \begin{equation} \label{e:raf2} \forall e \in F(p) \,,\quad K_0 \carde (S(e)) \,\geq \, K \,. \end{equation} We recall that $\carde (S(e)) \leq c_d \cardv (C_{G,K_0} (e))$. Furthermore, notice that if $e=\langle z^1,z^2 \rangle$, then $C_{G,K_0}(e) = C_{G,K_0}(z^1) \cup C_{G,K_0}(z^2) $ and $\cardv (C_{G,K_0}(e))\leq \cardv (C_{G,K_0}(z^1))+\cardv (C_{G,K_0}(z^2))$. Consequently, $$\max_{j=1,2} \cardv (C_{G,K_0}(z^j)) \geq \cardv (C_{G,K_0}(e)) /2 \,.$$ Let us denote by $B$ the event whose probability we want to bound from above: $$B:=\{ \phi_G (pA,h(p))\geq \phi_{G^K}(pA,h(p))+\varepsilon p^{d-1}\textrm{ and }\carde (E_{G^K}(pA, h(p)) )\leq \alpha \, p^{d-1}\}$$ and for positive $\beta$, $\gamma$ and for $x_1,\ldots,x_k$ in $\mathbb{Z}^d$, let $$B_{G,K_0}(x_1,\ldots,x_k;\beta,\gamma):=\left[\begin{array}{c}( C_{G,K_0}(x_i) )_{1\leq i\leq k}\text{ are pairwise disjoint,}\\ \sum_{i=1}^k \cardv (C_{G,K_0}(x_i)) \geq \beta \\ \text{ and }\forall i=1,\ldots,k,\; \cardv (C_{G,K_0}(x_i)) \geq \gamma \end{array} \right]\;.$$ If $E\subset \mathbb{E}^d$ and $x\in \mathbb{Z}^d$, we say that $x\in E$ if and only if $x$ is an endpoint of an edge $e$ that belongs to $E$. We obtain in this way \begin{align*} \mathbb{P} & \left[B\cap \mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p)) \right] \\ & \,\leq \,\mathbb{P}\left[ \begin{array}{c} \exists E \subset \mathbb{E}^d \;:\; \carde (E )\leq \alpha\, p^{d-1}\,,\, E \textrm{ cuts }\cyl (pA,h(p))\textrm{ efficiently}\,,\\ \exists k\geq 0\,,\, \exists x_1,\ldots, x_k\in E\text{ such that } B_{G,K_0}\left(x_1,\ldots,x_k;\frac{ \varepsilon p^{d-1}}{2 c_d K_0},\frac{K}{2 c_d K_0}\right)\textrm{ holds} \end{array}\right]\,. \end{align*} As in the proof of the continuity of the time constant given by Cox and Kesten in \cite{CoxKesten}, we need a renormalization argument to localize these vertices $x_1,\ldots, x_k$ in a region of the space whose size can be controlled. For a given $\mathbf{i}\in \mathbb{Z}^d$ and $k\in \mathbb{N}^*$, we denote by $\mathcal{A}(\mathbf{i}, k)$ the set of all lattice animals of size $k\in \mathbb{N}^*$ containing $\mathbf{i}$. If $k\in \mathbb{R}^+$, then we write $\mathcal{A}(\mathbf{i}, k)$ instead of $\mathcal{A}(\mathbf{i}, \lfloor k \rfloor)$ for short, where $\lfloor k \rfloor \in \mathbb{N}$ and satisfies $\lfloor k \rfloor \leq k < \lfloor k \rfloor+1$. Let $E \subset \mathbb{E}^d$ be such that $E$ cuts $\cyl (pA,h(p))$ efficiently. Let us denote by $u\in \mathbb{R}^d$ one of the corners of $pA$.
We can find a path from the top to the bottom of $\cyl (pA, h(p))$ that is located near any of the vertical sides of $\cyl (pA, h(p))$; more precisely, there exists a constant $c_d$ depending only on $d$ such that the top and the bottom of $\cyl(pA,h(p))$ are connected in $V(u, h(p)) := \{ u+l\vec{v} + \vec w \,:\, l\in [-h(p), h(p)] \,,\, \|\vec w\|_2 \leq c_d \} \cap \cyl (pA, h(p))$. Thus any cutset $E$ must contain at least one edge in $V(u,h(p))$. We denote by $\mathbf{I} (pA, h(p))$ the set of $L$-boxes that intersect $V(u,h(p))$:
$$ \mathbf{I} (pA, h(p)) \,=\, \{ \mathbf{i}\in \mathbb{Z}^d \,:\, \Lambda_L(\mathbf{i}) \cap V(u,h(p))\neq \emptyset \} \,.$$
Then $\Gamma(E)$ must intersect $ \mathbf{I} (pA, h(p))$, and $\cardv ( \mathbf{I} (pA, h(p))) \leq c_d h(p)/L$. Furthermore, Lemma~\ref{lem:controleanimal} ensures that for $p$ large enough,
$$ \cardv (\Gamma (E)) \,\leq\, c_d \frac{\carde (E)}{L} \,. $$
From all these remarks, we conclude that if $E$ cuts $\cyl (pA,h(p))$ efficiently and if $\carde (E )\leq \alpha\, p^{d-1}$, then
$$\Gamma(E) \,\in\, \bigcup_{\mathbf{i}\in \mathbf{I} (pA, h(p))} \bigcup_{k\leq c_d \alpha p^{d-1}/L} \mathcal{A}(\mathbf{i}, k) \,.$$
Notice that for any $\Gamma \in \mathcal{A}(\mathbf{i}, k)$ with $k\leq c_d \alpha p^{d-1}/L$, there exists $\Gamma' \in \mathcal{A}(\mathbf{i}, c_d \alpha p^{d-1}/L)$ such that $\Gamma\subset \Gamma'$. Thus if $\carde (E )\leq \alpha\, p^{d-1}$, we obtain that there exists $\mathbf{i}\in \mathbf{I} (pA, h(p))$ and $\Gamma \in \mathcal{A}(\mathbf{i}, c_d \alpha p^{d-1}/L)$ such that $\Gamma (E) \subset \Gamma$. For any lattice animal $\Gamma$, we denote by $\Gamma_L$ the union of the boxes associated to this vertex set, {\em i.e.},
$$ \Gamma_L \,=\,\bigcup_{\mathbf{j} \in \Gamma} \Lambda_L (\mathbf{j}) \,.$$
We obtain
\begin{align*}
\mathbb{P} & \left[ B \cap \mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p)) \right] \\
&\,\leq \, \sum_{\mathbf{i}\in \mathbf{I} (pA, h(p))} \sum_{\Gamma \in \mathcal{A}(\mathbf{i}, c_d \alpha p^{d-1}/L)} \mathbb{P} \left[ \begin{array}{c} \exists k\geq 0\,,\, \exists \textrm{ vertices }x_1,\ldots, x_k\in \Gamma_L\text{ such that }\\ B_{G,K_0}\left(x_1,\ldots,x_k;\frac{ \varepsilon p^{d-1}}{2 c_d K_0},\frac{K}{2 c_d K_0}\right)\textrm{ holds} \end{array} \right]\\
&\,\leq \, \sum_{\mathbf{i}\in \mathbf{I} (pA, h(p))} \sum_{\Gamma \in \mathcal{A}(\mathbf{i}, c_d \alpha p^{d-1}/L)} \sum_{k\in \mathbb{N}^*} \sum_{x_1,\ldots, x_k\in \Gamma_L} \mathbb{P} \left[B_{G,K_0}\left(x_1,\ldots,x_k;\frac{ \varepsilon p^{d-1}}{2 c_d K_0},\frac{K}{2 c_d K_0}\right) \right] \,.
\end{align*}
We now use the stochastic comparison given by Lemma \ref{lem:domsto}. We consider the set of vertices $W=\{x_1,\dots, x_k\} $, the percolation $(\mathds{1}_{t_G(e)>K_0}, e\in \mathbb{E}^d)$ and associate to each vertex $x_i$ the variable $Z_i = Z (x_i) = \cardv (C_{G,K_0}(x_i))$. We build the variables $(Y_i, 1\leq i \leq k)$ as in Lemma \ref{lem:domsto} and let $(X_i, 1\leq i \leq k)$ be i.i.d. random variables with the same distribution as $\cardv (C_{G,K_0}(e))$.
Then by Lemma \ref{lem:domsto} we obtain for $\lambda>0$,
\begin{align*}
\mathbb{P} \left[ B_{G,K_0}(x_1,\ldots,x_k;\beta,\gamma) \right] & \,= \, \mathbb{P} \left[ \sum_{i=1}^k Y_i \geq \beta \text{ and }\forall i=1,\ldots,k,\; Y_i \geq \gamma \right] \\
& \,\leq \, \mathbb{P} \left[\sum_{i=1}^k X_i \geq \beta \text{ and }\forall i=1,\ldots,k,\; X_i \geq \gamma \right] \\
& \,\leq \, e^{-\lambda \beta} \mathbb{E}\left[ e^{\lambda X_1 }\mathds{1}_{X_1 \geq \gamma} \right]^k\\
& \,\leq \, e^{-\lambda \beta} \mathbb{E}\left[ e^{2\lambda X_1 }\right]^{\frac{k}{2}}\mathbb{P}\left[X_1 \geq \gamma \right]^{\frac{k}{2}}
\end{align*}
where we used the Cauchy-Schwarz inequality. Thus, we get
\begin{align}
\label{e:raf4}
\mathbb{P} & \left[ B \cap \mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p)) \right] \nonumber\\
&\,\leq \, \sum_{\mathbf{i}\in \mathbf{I} (pA, h(p))} \sum_{\Gamma \in \mathcal{A}(\mathbf{i}, c_d \alpha p^{d-1}/L)} \sum_{k\in \mathbb{N}^*} \sum_{x_1,\ldots, x_k\in \Gamma_L} e^{-\lambda \frac{\varepsilon p^{d-1}}{2 c_d K_0}} \mathbb{E} \left[ e^{2 \lambda X_1 } \right]^{\frac{k}{2}} \mathbb{P} \left[ X_1 \geq \frac{K}{2 c_d K_0} \right] ^{\frac{k}{2}}\nonumber \\
& \,\leq \,e^{-\lambda \frac{\varepsilon p^{d-1}}{2 c_dK_0}} c_d \frac{h(p)}{L} c_d^{ \frac{\alpha p^{d-1}}{L}} \sum_{k\in \mathbb{N}^*} \left( \begin{array}{c} c_d \alpha p^{d-1}L^{d-1} \\ k \end{array} \right) \mathbb{E} \left[ e^{2 \lambda X_1 } \right]^{\frac{k}{2}} \mathbb{P} \left[X_1 \geq \frac{K}{2 c_d K_0} \right] ^{\frac{k}{2}}\nonumber \\
& \,\leq \, c_d \frac{h(p)}{L} \left[ e^{-\lambda \frac{\varepsilon }{2 c_d K_0}} c_d^{ \frac{\alpha }{L}} \left( 1 + \mathbb{E} \left[ e^{2 \lambda X_1 } \right]^{\frac{1}{2}} \mathbb{P} \left[X_1 \geq \frac{K}{2 c_d K_0} \right] ^{\frac{1}{2}} \right)^{c_d \alpha L^{d-1}} \right]^{p^{d-1}} \,.
\end{align}
Now, since $G(]K_0,+\infty])<p_c(d)$, we can first choose $\lambda = \lambda (G, d)$ such that
$$\mathbb{E}\left[ e^{2\lambda X_1 } \right]<\infty \,.$$
Then, we choose $L = L(G, d, \alpha, \varepsilon)$ large enough such that
$$e^{-\lambda \frac{\varepsilon }{2 c_d K_0}}c_d^\frac{\alpha}{L}<1\,.$$
Finally, we choose $K_1 = K_1 (G,d,\alpha, \varepsilon)$ such that:
\begin{equation}
\label{e:raf5}
C(G,d,\alpha, \varepsilon)\,:=\, e^{-\lambda \frac{\varepsilon }{2 c_d K_0}}c_d^\frac{\alpha}{L}\left( 1 + \mathbb{E}[e^{2 \lambda X_1}]^{\frac{1}{2}} \mathbb{P} \left[X_1\geq \frac{K_1}{2 c_d K_0} \right] ^{\frac{1}{2}} \right)^{c_d \alpha L^{d-1}}\,<\,1 \,.
\end{equation}
Combining \eqref{e:raf4} and \eqref{e:raf5} we get
\begin{eqnarray*}
\mathbb{P} \left[ B \cap \mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p)) \right] &\leq& c_d \frac{h(p)}{L} C(G,d,\alpha, \varepsilon)^{p^{d-1}}\\
& \leq & C(G,d,\alpha, \varepsilon)^{\frac{1}{2} p^{d-1}}
\end{eqnarray*}
for every $K\geq K_1 (G,d,\alpha, \varepsilon)$ and for every $p$ large enough, since $\lim_{p\rightarrow \infty} h(p)/p =0$. For every edge $e=\langle x,y \rangle$, since
$$\diam (C_{G,K_0} (e)) \,\leq\, \diam (C_{G,K_0} (x)) + \diam (C_{G,K_0} (y)) +1 \,, $$
it is easy to show that
$$ \mathbb{P}[\mathcal{E}'_{G,K_0} (\cyl(pA,h(p)), h(p))^c]\,\leq\, c_d \mathcal{H}^{d-1}(pA) h(p) \kappa_1 e^{-\kappa_2 h(p)} \,\leq \,e^{-\frac{\kappa_2}{2} h(p)} $$
for some positive constants $\kappa_i $ (see \eqref{e:*}) and for $p$ large enough since $\lim_{p\rightarrow \infty} h(p) / \log p = +\infty$. Since $\lim_{p\rightarrow \infty}h(p)/p =0$, this ends the proof of Proposition \ref{p:tronquer}.
{\flushright$\blacksquare$\\}

\begin{rem}
The result of Proposition \ref{p:tronquer} applies, with the same constants depending on $G$, to any probability measure $H$ on $[0,+\infty]$ such that $H \preceq G$ (we recall that the stochastic comparison $H\preceq G$ is defined in \eqref{e:defdomsto}).
\end{rem}

\subsection[Proof of the convergence I]{Proof of the convergence I: case $G(\{0\})<1-p_c(d)$}

To prove Theorem \ref{t:CV}, we shall consider the two situations $G(\{0\})<1-p_c(d)$ and $G(\{0\})\geq 1-p_c(d)$. The purpose of this section is to prove the following proposition, which corresponds to the statement of Theorem \ref{t:CV} in the case where $G(\{0\})<1-p_c(d)$.

\begin{prop}
\label{p:positif}
For any probability measure $G$ on $[0, +\infty]$ such that $G(\{+\infty\})<p_c(d)$ and $G(\{0\}) < 1-p_c(d)$, for any $\vec{v} \in \mathbb{S}^{d-1}$, for any non-degenerate hyperrectangle $A$ normal to $\vec{v}$, for any mild function $h : \mathbb{N} \mapsto \mathbb{R}^+$, we have
$$ \lim_{p\rightarrow \infty} \frac{\phi_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, \nu_G(\vec{v}) \quad \textrm{a.s.}$$
\end{prop}

\begin{dem}
Let $\vec{v}\in \mathbb{S}^{d-1}$, let $A$ be a non-degenerate hyperrectangle normal to $\vec{v}$, and let $h:\mathbb{N}^* \mapsto \mathbb{R}^+$ be mild. Let $G$ be a probability measure on $[0,+\infty]$ such that $G(\{+\infty\})<p_c(d)$. Since $d,G,\vec{v}, A, h$ are fixed, we will often omit the dependence on these parameters in the notation. In this section, we suppose that $G(\{ 0 \}) < 1-p_c(d)$. For any fixed $K\in \mathbb{R}^+$, we know by Theorem~\ref{t:oldCV} that a.s.
$$ \liminf_{p\rightarrow \infty} \frac{\phi_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,\geq\, \lim_{p\rightarrow \infty} \frac{\phi_{G^K} (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, \nu_{G^K} (\vec{v}) \,, $$
thus
$$ \liminf_{p\rightarrow \infty} \frac{\phi_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,\geq\, \sup_K \nu_{G^K} (\vec{v}) \,=\, \nu_G (\vec{v}) \,. $$
It remains to prove that a.s.,
$$ \limsup_{p\rightarrow \infty} \frac{\phi_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,\leq\, \nu_G(\vec{v})\,. $$
We claim that it is sufficient to prove that
\begin{equation}
\label{e:flop1}
\forall \varepsilon >0\,,\, \exists K(\varepsilon)\quad \sum_{p\geq1} \mathbb{P} [\phi_G(pA, h(p)) \geq \phi_{G^K} (pA, h(p)) + \varepsilon \mathcal{H}^{d-1} (pA)] \,<\, +\infty \,.
\end{equation}
Indeed, if \eqref{e:flop1} is satisfied, then by Borel-Cantelli and Theorem \ref{t:oldCV} we obtain that
$$ \forall \varepsilon >0\,,\, \exists K(\varepsilon)\,,\, \textrm{a.s.}\quad \limsup_{p\rightarrow \infty} \frac{\phi_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,\leq\, \nu_{G^{K}} (\vec{v})+ \varepsilon \,\leq\, \nu_G(\vec{v}) + \varepsilon $$
and Proposition \ref{p:positif} is proved.
Let us now prove \eqref{e:flop1}. For every $\varepsilon>0$, every $\alpha >0$ and every $\beta >0$, we have
\begin{align}
\label{e:flop2}
\mathbb{P} [ & \phi_G(pA, h(p)) \geq \phi_{G^K} (pA, h(p)) + \varepsilon \mathcal{H}^{d-1} (pA)] \nonumber \\
&\,\leq\, \mathbb{P} [\{\phi_G(pA, h(p)) \geq \phi_{G^K} (pA, h(p)) + \varepsilon \mathcal{H}^{d-1} (pA)\} \cap \{ \carde (E_{G^K} (pA, h(p))) \leq \alpha p^{d-1} \}] \nonumber\\
&\qquad + \mathbb{P} [\{\carde (E_{G^K} (pA, h(p))) > \alpha p^{d-1} \} \cap \{ \phi_{G^K} (pA,h(p)) \leq \beta \mathcal{H}^{d-1} (pA )\}] \nonumber\\
&\qquad + \mathbb{P} [\phi_{G^K} (pA,h(p))> \beta \mathcal{H}^{d-1} (pA ) ] \,.
\end{align}
By \eqref{e:hop3}, since $\phi_{G^K} (pA,h(p))\leq \phi_{G} (pA,h(p))$ by coupling (see Equation \eqref{e:couplage}) and $\lim_{p\rightarrow \infty} h(p)/\log p = +\infty$, we know that we can choose $\beta = \beta (G,d)$ such that for any $K\in \mathbb{R}^+$, the last term of the right hand side of \eqref{e:flop2} is summable in $p$. Given this $\beta (G,d)$, by Zhang's Theorem 2 in \cite{Zhang2017}, as adapted in Proposition 4.2 in \cite{RossignolTheret08b}, we know that since all the probability measures $G^K$ coincide on a neighborhood of $0$, we can choose a constant $\alpha (G,d)$ such that for any $K\in \mathbb{R}^+$ the second term of the right hand side of \eqref{e:flop2} is summable in $p$. Given this $\alpha (G,d)$, by Proposition \ref{p:tronquer}, we know that there exist some constants $C=C(G,d,\varepsilon)<1$ and $K_1(G,d,\varepsilon)$ such that for every $K\geq K_1(G,d,\varepsilon)$ and for all $p$ large enough
\begin{equation}
\label{e:flop3}
\mathbb{P} [\{\phi_G(pA, h(p)) \geq \phi_{G^K} (pA, h(p)) + \varepsilon \mathcal{H}^{d-1} (pA)\} \cap \{ \carde (E_{G^K} (pA, h(p))) \leq \alpha p^{d-1} \}] \,\leq\, C^{h(p)}\,.
\end{equation}
The right hand side of \eqref{e:flop3} is summable in $p$ since $\lim_{p\rightarrow \infty} h(p)/\log p = +\infty$. This concludes the proof of \eqref{e:flop1}, thus the convergence in Theorem \ref{t:CV} is proved when $G(\{0\})<1-p_c(d)$.
\end{dem}

\subsection[Proof of the convergence II]{Proof of the convergence II: case $G(\{0\})\geq 1-p_c(d)$}

It remains to prove that the convergence in Theorem \ref{t:CV} holds when $G(\{0\})\geq 1-p_c(d)$, {\em i.e.}, when $\nu_G =0$. We first deal with straight cylinders. For $A = \prod_{i=1}^{d-1} [0,k_i] \times \{0\}$ (with $k_i>0$ for all $i$) and $h \in \mathbb{N}$, we denote by $\phi^{\vec{v}_0}_G (A,h) $ the maximal flow from the top $B_1^{\vec{v}_0}(A,h) = (\prod_{i=1}^{d-1} [0,k_i] \times \{h\}) \cap \mathbb{Z}^d$ to the bottom $B_2^{\vec{v}_0}(A,h)=(\prod_{i=1}^{d-1} [0,k_i] \times \{0\})\cap \mathbb{Z}^d$ in the cylinder $\cyl^{\vec{v}_0}(A,h)=\prod_{i=1}^{d-1} [0,k_i] \times [0,h]$ for $\vec{v}_0 = (0,\dots, 0,1)$. We recall the definition of the event $\mathcal{E}_{G,K} (\mathcal{C}, h)$, for any subset $\mathcal{C}$ of $\mathbb{R}^d$ and any $h\in \mathbb{R}^+$, that was given in \eqref{e:E}:
$$ \mathcal{E}_{G,K} (\mathcal{C}, h) \,=\, \bigcap_{x\in \mathcal{C} \cap \mathbb{Z}^d} \{ \diam (C_{G,K} (x)) < h \}\,.$$

\begin{prop}
\label{p:zero}
Let $G$ be a probability measure on $[0, +\infty]$ such that $G(\{+\infty\})<p_c(d)$ and $G(\{0\}) \geq 1-p_c(d)$. Let $A = \prod_{i=1}^{d-1} [0,k_i] \times \{0\}$ (with $k_i>0$ for all $i$).
For any function $h : \mathbb{N} \mapsto \mathbb{N}$ satisfying $\lim_{p\rightarrow +\infty} h(p) /\log p =+\infty$, we have
$$ \lim_{p\rightarrow \infty} \frac{\phi^{\vec{v}_0}_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, 0 \quad \textrm{a.s.}$$
Moreover, if $G(]K_0,+\infty]) < p_c(d)$, then we also have
$$ \lim_{p\rightarrow \infty} \frac{\phi^{\vec{v}_0}_G (pA, h(p)) \mathds{1}_{ \mathcal{E}_{G,K_0}( \cyl^{\vec{v}_0}(pA,h(p)) ,h(p))}}{ \mathcal{H}^{d-1} (pA)} \,=\, 0 \quad \textrm{in }L^1\,. $$
\end{prop}

This result is in fact a generalization of Zhang's Theorem \ref{t:oldnul}, and the strategy of the proof is indeed largely inspired by Zhang's proof. However, we need to work a little harder, because we do not have good integrability assumptions. We thus re-use here some ideas that appeared in the proof of Proposition \ref{p:finitude}. Notice that $\phi_G (pA, h(p))$ itself may not be integrable in general (it can even be infinite with positive probability).

\begin{dem}
We shall construct a particular cutset with an idea quite similar to the one we used in the proof of Proposition \ref{p:finitude}. Let $K_0$ be large enough to have $G(]K_0,+\infty]) < p_c(d)$. Let $\ell\in \mathbb{N}^*$. Let $\mathbb{H}$ be the half-space $\mathbb{Z}^{d-1}\times \mathbb{N}$. For any $x\in \mathbb{Z}^d$, $D_1 \subset \mathbb{Z}^d$ and $D_2 \subset \mathbb{Z}^d$, let us denote by $\left\{x \overset{D_2}{\underset{G,0}{\longleftrightarrow}} D_1 \right\}$ the event
$$ \left\{x \overset{D_2}{\underset{G,0}{\longleftrightarrow}} D_1 \right\} \,=\, \{ x \textrm{ is connected to $D_1$ by a path $\gamma \subset D_2$ s.t. } \forall e\in \gamma \,,\, t_G(e)>0 \} \,.$$
For any $x\in \mathbb{Z}^{d-1}\times \{0\}$, we define the event
$$ F_{x,\ell} \,=\, \left\{ x \overset{\mathbb{H}}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell\} \right\} \,.$$
Let $x\in A$. If $F_{x,\ell}^c$ occurs, we associate with $x$ the set $\partial_{\textrm{e}} C_{G,0} (x)$, which is by definition made of edges with null capacity. If $F_{x,\ell}$ occurs, we associate with $x$ the set $ \partial_{\textrm{e}} C_{G,K_0} (x)$, see Figure \ref{f:zero}.
\begin{figure}[h!]
\centering
\input{zero.pdf_t}
\caption{The construction of the cutset $E'(A,\ell)$ in $\cyl^{\vec{v}_0} (A,\ell)$ ($d=2$).}
\label{f:zero}
\end{figure}
We consider the set
$$ E'(A,\ell) \,=\, \left [\left(\bigcup_{x\in A\cap \mathbb{Z}^d \,,\, F_{x,\ell}^c} \partial_{\textrm{e}} C_{G,0} (x) \right) \cup \left( \bigcup_{x\in A\cap \mathbb{Z}^d \,,\, F_{x,\ell}} \partial_{\textrm{e}} C_{G,K_0} (x) \right) \right] \cap \cyl^{\vec{v}_0} (A,\ell) \,. $$
We consider the good event
$$ \mathcal{E}_{G,K_0}\left( \cyl^{\vec{v}_0}(A, \ell) ,\ell \right)\,=\, \bigcap_{ x \in \cyl^{\vec{v}_0}(A, \ell) \cap \mathbb{Z}^d} \{ \diam (C_{G,K_0} (x)) < \ell \} \,.$$
We claim that on $ \mathcal{E}_{G,K_0}( \cyl^{\vec{v}_0}(A, \ell) ,\ell)$, the set $E'(A,\ell)$ cuts the top $B_1^{\vec{v}_0}(A,\ell)$ from the bottom $B_2^{\vec{v}_0}(A,\ell)$ in the cylinder $\cyl^{\vec{v}_0}(A,\ell)$. Let $\gamma = (x_0, e_1, x_1 , \dots ,e_n, x_n )$ be a path from the bottom to the top of the cylinder $\cyl^{\vec{v}_0}(A,\ell)$. If $F_{x_0,\ell}^c$ occurs, then since $x_n \in \mathbb{Z}^{d-1} \times \{\ell\}$, we have $x_n \in \ext (\partial_{\textrm{e}} C_{G,0} (x_0))$, thus $\gamma$ has to use an edge in $\partial_{\textrm{e}} C_{G,0} (x_0)\cap \cyl^{\vec{v}_0} (A,\ell)$.
If $F_{x_0,\ell}$ occurs, on $ \mathcal{E}_{G,K_0}(\cyl^{\vec{v}_0} (A,\ell), \ell)$ we know that $x_n \in \ext (\partial_{\textrm{e}} C_{G,K_0} (x_0))$, thus $\gamma$ must contain an edge in $\partial_{\textrm{e}} C_{G,K_0} (x_0) \cap \cyl^{\vec{v}_0}(A,\ell)$. We conclude that on $ \mathcal{E}_{G,K_0}(\cyl^{\vec{v}_0}(A,\ell),\ell)$, $E'(A,\ell)$ is indeed a cutset from the top to the bottom of $\cyl^{\vec{v}_0}(A,\ell)$. Thus on $ \mathcal{E}_{G,K_0}( \cyl^{\vec{v}_0}(A,\ell),\ell)$,
\begin{equation}
\label{e:pif1}
\phi^{\vec{v}_0}_G (A,\ell) \,\leq\, K_0 \sum_{x\in A\cap \mathbb{Z}^d} \mathds{1}_{F_{x,\ell}} \carde \left( \partial_{\textrm{e}} C_{G,K_0} (x) \right) \,\leq\, c_d K_0 \sum_{x\in A\cap \mathbb{Z}^d} \mathds{1}_{F_{x,\ell}} \cardv \left( C_{G,K_0} (x) \right) \,.
\end{equation}
For every $x\in \mathbb{Z}^{d-1} \times \{0\}$, let us define
$$ R^\ell _x \,=\, \mathds{1}_{F_{x,\ell}} \cardv \left( C_{G,K_0} (x) \right) \,, $$
and for every $D \in \mathcal J = \{ \prod_{i=1}^{d-1} [l_i,l_i']\,:\, \forall i \,,\,0\leq l_i\leq l_i' \}$, let us define
$$ X^\ell_D \,=\, \sum_{x\in D\times \{0\} \cap \mathbb{Z}^d} R^\ell_x \,.$$
For every $\ell$, the process $(X^\ell_D, D \in \mathcal J)$ is a discrete additive process. By classical multiparameter ergodic theorems (see for instance Theorem 2.4 in \cite{Akcoglu} and Theorem 1.1 in \cite{Smythe}), if $\mathbb{E}[R^\ell_0] <\infty$, then there exists an integrable random variable $X^\ell$ such that for every $D = \prod_{i=1}^{d-1} [0,k_i]$ (with $k_i\in \mathbb{N}^*$ for all $i$),
\begin{equation}
\label{e:pif2}
\lim_{p\rightarrow \infty} \frac{1}{\cardv (pD \cap \mathbb{Z}^{d-1})} X^\ell_{pD} =X^\ell \quad \textrm{a.s. and in }L^1
\end{equation}
and $\mathbb{E}[X^\ell] = \mathbb{E}[R_0^\ell]$. Moreover, by ergodicity $X^\ell$ is constant a.s., so
\begin{equation}
\label{e:pif3}
X^\ell \,=\, \mathbb{E}[R_0^\ell] \qquad \textrm{a.s.}
\end{equation}
We need to control the expectation of $R_0^\ell$ to apply these ergodic theorems. For all $r>0$, by independence of the edge weights, we have
\begin{align*}
\mathbb{P} [ R_0^\ell = r] & \,\leq \, \mathbb{P}[ F_{0,\ell} \cap \{ \cardv ( C_{G,K_0} (0)) = r \} ] \\
& \,\leq \, \sum_{C\,:\, \cardv (C)= r} \,\sum_{v\in \partial_{\textrm{v}} C \cap \mathbb{H}} \mathbb{P} \left[ \{C_{G,K_0} (0) = C\} \cap \left\{ v \overset{\mathbb{H}\smallsetminus C}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell\} \right\} \right] \\
& \,\leq \, \sum_{C\,:\, \cardv (C) = r} \,\sum_{v\in \partial_{\textrm{v}} C \cap \mathbb{H}} \mathbb{P} \left[ C_{G,K_0} (0) = C \right] \mathbb{P}\left[ v \overset{\mathbb{H}\smallsetminus C}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell\} \right] \\
& \,\leq \, \sum_{C\,:\, \cardv (C)= r} \,\sum_{v\in \partial_{\textrm{v}} C \cap \mathbb{H}} \mathbb{P} \left[ C_{G,K_0} (0) = C \right] \mathbb{P}\left[ 0 \overset{\mathbb{H}}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell-(r+1)\} \right] \\
& \,\leq \, c_d\, r\, \mathbb{P}\left[ \cardv ( C_{G,K_0} (0)) = r \right]\mathbb{P}\left[ 0 \overset{\mathbb{H}}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell-(r+1)\} \right]\,.
\end{align*}
Using the fact that $k\mapsto \mathbb{P}\left[ 0 \overset{\mathbb{H}}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{k\} \right] $ is non-increasing, we get
\begin{align*}
\mathbb{E}[R_0^\ell] & \,=\, \sum_{r\in \mathbb{N}} r \,\mathbb{P} [ R_0^\ell = r]\\
& \,\leq \, \sum_{r\leq \ell/2} c_d \,r^2 \, \mathbb{P}\left[ \cardv ( C_{G,K_0} (0)) = r \right] \mathbb{P}\left[ 0 \overset{\mathbb{H}}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell-(r+1)\} \right]\\
& \qquad + \sum_{r> \ell/2} c_d\, r^2\, \mathbb{P}\left[ \cardv ( C_{G,K_0} (0)) = r \right] \mathbb{P}\left[ 0 \overset{\mathbb{H}}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell-(r+1)\} \right]\\
& \,\leq\, c_d \,\mathbb{P}\left[ 0 \overset{\mathbb{H}}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell/2 - 1\} \right] \sum_{r\in \mathbb{N}} r^2\, \mathbb{P}\left[ \cardv ( C_{G,K_0} (0)) = r \right]\\
&\qquad + c_d \sum_{r> \ell/2} r^2 \,\mathbb{P}\left[ \cardv ( C_{G,K_0} (0)) = r \right] \\
& \,\leq\, c_d \,\mathbb{P}\left[ 0 \overset{\mathbb{H}}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell/2 - 1\} \right] \mathbb{E} \left[ \cardv ( C_{G,K_0} (0)) ^2 \right] \\
& \qquad + c_d\, \mathbb{E} \left[ \cardv ( C_{G,K_0} (0)) ^2 \mathds{1}_{ \cardv ( C_{G,K_0} (0)) > \ell/2 }\right] \,.
\end{align*}
Since we chose $K_0$ such that $G(]K_0,+\infty]) < p_c(d)$, we know that $\mathbb{E} \left[ \cardv ( C_{G,K_0} (0)) ^2 \right] < \infty$ (see for instance Theorem (6.75) in \cite{grimmettt:percolation}), thus $\mathbb{E}[R_0^\ell] < \infty$ and the multiparameter ergodic theorems mentioned above apply to get \eqref{e:pif2} and \eqref{e:pif3}. Moreover, by the dominated convergence theorem, we obtain that
$$ \lim_{\ell\rightarrow \infty} \mathbb{E} \left[ \cardv ( C_{G,K_0} (0)) ^2 \mathds{1}_{ \cardv ( C_{G,K_0} (0)) > \ell/2 }\right] \,=\, 0\,. $$
It is known that at criticality, there is no infinite cluster in the percolation in a half-space, see \cite{grimmettt:percolation}, Theorem (7.35). Thus $G(\{0\}) \geq 1-p_c(d)$ implies that
$$ \lim_{\ell\rightarrow \infty} \mathbb{P}\left[ 0 \overset{\mathbb{H}}{\underset{G,0}{\longleftrightarrow}} \mathbb{Z}^{d-1} \times \{\ell/2 - 1\} \right] \,=\, \mathbb{P} \left[ 0 \textrm{ is connected to $\infty$ in }(\mathds{1}_{t_G(e)>0},e\in\mathbb{H}) \right] \,=\, 0\,.$$
Thus for all $\eta >0$ we can choose $\ell ^\eta$ large enough so that for every $\ell\geq \ell^{\eta}$, $\mathbb{E}[R_x^\ell]<\eta$. For every height function $h:\mathbb{N} \mapsto \mathbb{R}^+$ such that $\lim_{n\rightarrow \infty} h (n) =+\infty$, let $p_0$ be large enough such that for all $p\geq p_0$, $h(p) \geq \ell ^{\eta}$. The function $\ell\mapsto R_x^\ell$ is non-increasing, thus for every $D=\prod_{i=1}^{d-1} [0,k_i]$ ($k_i>0$) we have a.s.
\begin{align*}
0 & \,\leq \, \limsup_{p\rightarrow \infty} \frac{1}{\cardv (pD \cap \mathbb{Z}^{d-1})} X_{pD}^{h(p)} \,\leq \, \limsup_{p\rightarrow \infty} \frac{1}{\cardv (pD \cap \mathbb{Z}^{d-1}) } X_{pD}^{\ell^{\eta}} \,=\, \mathbb{E} [X^{\ell^\eta}] \,\leq\, \eta \,,
\end{align*}
thus
\begin{equation}
\label{e:step2}
\lim_{p\rightarrow \infty} \frac{X_{pD}^{h(p)} }{\mathcal{H}^{d-1}(pD\times \{0\})} \,=\, 0 \quad \textrm{a.s.}
\end{equation}
We now return to the study of $\phi^{\vec{v}_0}_G (pA, h(p))$. We recall that we supposed $\lim_{p\rightarrow \infty} h(p) / \log p =+\infty$.
As in the proof of Proposition \ref{p:finitude} (see \eqref{e:*}), since $G(]K_0,+\infty])<p_c(d)$, we have
\begin{align*}
\sum_{p\in \mathbb{N}} \mathbb{P} [ \mathcal{E}_{G,K_0}( \cyl^{\vec{v}_0}(pA,h(p)),h(p))^c ] &\,\leq \, \sum_{p\in \mathbb{N}}c_d \mathcal{H}^{d-1} (pA) h(p) \mathbb{P} [\diam (C_{G,K_0} (0)) \geq h(p)]\\
& \,\leq \, \sum_{p\in \mathbb{N}}c_d \mathcal{H}^{d-1} (pA) h(p) \kappa_1 e^{-\kappa_2 h(p)} \\
& \,<\, +\infty \,,
\end{align*}
thus by Borel-Cantelli we know that
\begin{equation}
\label{e:step3}
\textrm{a.s., for all $p$ large enough,} \quad \mathcal{E}_{G,K_0}( \cyl^{\vec{v}_0}(pA,h(p)),h(p)) \textrm{ occurs.}
\end{equation}
Proposition \ref{p:zero} is proved by combining \eqref{e:pif1}, \eqref{e:step2} and \eqref{e:step3}.
\end{dem}

We now extend Proposition \ref{p:zero} to the study of any tilted cylinder. We will bound the maximal flow through a tilted cylinder by maximal flows through straight boxes at an intermediate scale. Unfortunately, at this stage we are not able to prove that the convergence holds almost surely. However, we prove that the convergence holds in a weaker sense, namely in probability. We will upgrade this convergence in Proposition \ref{p:zeroter}.

\begin{prop}
\label{p:zerobis}
For any probability measure $G$ on $[0, +\infty]$ such that $G(\{+\infty\})<p_c(d)$ and $G(\{0\}) \geq 1-p_c(d)$, for any $\vec{v} \in \mathbb{S}^{d-1}$, for any non-degenerate hyperrectangle $A$ normal to $\vec{v}$, for any function $h : \mathbb{N} \mapsto \mathbb{R}^+$ satisfying $\lim_{p\rightarrow +\infty} h(p) / \log p =+\infty$, we have
$$ \lim_{p\rightarrow \infty} \frac{\phi_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, 0 \quad \textrm{in probability.}$$
\end{prop}

\begin{dem}
Fix $A$, $\vec{v}$ and $h$, and consider $p$ large enough. Since $\phi_G(A,h)$ is non-increasing in $h$, we can suppose that $h(p) \leq p$ for all $p$. We will bound $\phi_G (pA, h(p))$ by maximal flows through straight boxes at an intermediate scale $L$, $1\leq L \leq p$ (in what follows $L$ will depend on $p$). For a fixed $L\in 2 \mathbb{N}^*$, we chop $\mathbb{Z}^d$ into (almost) disjoint $L$-boxes as we already did in the proof of Proposition \ref{p:tronquer}. We recall the definitions of the $L$- and $3L$-boxes given in \eqref{e:Lambda} and \eqref{e:Lambda'}: let $\Lambda_L = [-L/2,L/2]^d $, for $\mathbf{i}\in \mathbb{Z}^d$ we have
$$ \Lambda_L (\mathbf{i}) \,=\, \{ x+L \mathbf{i} \,:\, x\in \Lambda_L \} \quad \textrm{and} \quad \Lambda'_L (\mathbf{i}) \,=\, \{ x+L \mathbf{i} \,:\, x\in \Lambda_{3L} \} \,.$$
For every $L\in \mathbb{N}^*$, for every $\mathbf{i}\in \mathbb{Z}^d$, let $\phi_G (L,\mathbf{i})$ be the maximal flow from $\partial \Lambda_L (\mathbf{i}) \cap \mathbb{Z}^d$ to $\partial \Lambda'_L (\mathbf{i}) \cap \mathbb{Z}^d$ in $\Lambda'_L(\mathbf{i}) \smallsetminus \Lambda_L(\mathbf{i})$. By the max-flow min-cut Theorem,
$$ \phi_G (L,\mathbf{i}) \,=\, \min \{ T_G (E) \,:\, E \subset \mathbb{E}^d \,,\, E \textrm{ cuts $\partial \Lambda_L (\mathbf{i}) \cap \mathbb{Z}^d$ from $\partial \Lambda'_L (\mathbf{i}) \cap \mathbb{Z}^d$ in $\Lambda'_L(\mathbf{i}) \smallsetminus \Lambda_L(\mathbf{i})$} \} \,,$$
{\em i.e.}, roughly speaking, $ \phi_G (L,\mathbf{i})$ is the minimal capacity of a cutset in the annulus $\Lambda'_L (\mathbf{i}) \smallsetminus \Lambda_L (\mathbf{i})$. For every $\mathbf{i}\in \mathbb{Z}^d$, let $E_G(L,\mathbf{i})$ be a minimal cutset for $\phi_G (L,\mathbf{i})$.
We choose $L =L(p)$ such that $h(p)$ is large in comparison with $L$, in the sense that no $3L$-box can intersect both the top and the bottom of $\cyl (pA, h(p))$. Thus we can choose $L(p) = 2 \lfloor h(p)/c_d \rfloor$ for some constant $c_d$ depending only on the dimension. Let $\mathbf{J} = \mathbf{J}(pA, L)$ be the set of indices of all the $L$-boxes that are intersected by the hyperrectangle $pA$ (see Figure \ref{f:zerobis}):
$$ \mathbf{J} \,=\, \{ \mathbf{i}\in \mathbb{Z}^d \,:\, pA \cap \Lambda_L (\mathbf{i}) \neq \emptyset \}\,. $$
\begin{figure}[h!]
\centering
\input{zerobis.pdf_t}
\caption{The cylinder $\cyl(pA,h(p))$ and the boxes $\Lambda_L (\mathbf{j}), \mathbf{j}\in \mathbf{J}$ ($d=2$).}
\label{f:zerobis}
\end{figure}
Let us prove that inequality \eqref{e:comp1} holds:
\begin{equation}
\label{e:comp1}
\phi_G (pA, h(p)) \,\leq \, \sum_{\mathbf{i}\in \mathbf{J}} \phi_G (L,\mathbf{i})
\end{equation}
by proving that $\cup_{\mathbf{i}\in \mathbf{J}} E_G(L,\mathbf{i}) $ is a cutset for $\phi_G(pA, h(p))$. Let $\gamma = (x_0, e_1, x_1, \dots , e_n, x_n)$ be a path from the top to the bottom of $\cyl(pA, h(p))$. Since $pA \subset \cup_{\mathbf{i}\in \mathbf{J}} \Lambda_L (\mathbf{i}) $ and $\gamma$ (seen as a continuous curve) must intersect $pA$, there exists $\mathbf{i} \in \mathbf{J}$ such that $\gamma \cap \Lambda_L (\mathbf{i}) \neq \emptyset $. Since $h(p)$ is large in comparison with $L$, $\gamma$ cannot be included in $ \Lambda'_L (\mathbf{i}) $, thus $\gamma$ contains a path from $\partial \Lambda_L (\mathbf{i}) \cap \mathbb{Z}^d$ to $\partial \Lambda'_L (\mathbf{i}) \cap \mathbb{Z}^d$ in $\Lambda'_L(\mathbf{i}) \smallsetminus \Lambda_L(\mathbf{i})$, thus by definition it must intersect $E_G(L,\mathbf{i})$ (see Figure \ref{f:zerobis}). This proves that $\cup_{\mathbf{i}\in \mathbf{J}} E_G(L,\mathbf{i}) $ is a cutset for $\phi_G(pA, h(p))$, thus
$$\phi_G (pA, h(p)) \,\leq \, \sum_{\mathbf{i}\in \mathbf{J}} T_G(E_G(L,\mathbf{i}) )\,=\, \sum_{\mathbf{i}\in \mathbf{J}} \phi_G (L,\mathbf{i}) \,.$$
It remains to compare $\phi_G (L,\mathbf{i})$ for any fixed $\mathbf{i}\in \mathbb{Z}^d$ with maximal flows through straight cylinders. Fix $\mathbf{i}\in \mathbb{Z}^d$. For every $k\in \{1,\dots , d\}$, define
$$ \widetilde{\Lambda}_L (\mathbf{i}, k , +) \,=\, \{ x+L\mathbf{i} \,:\, x\in [-3L/2, 3L/2]^{k-1} \times [L/2, 3L/2] \times [-3L/2, 3L/2]^{d-k} \} $$
and
$$ \widetilde{\Lambda}_L (\mathbf{i}, k , -) \,=\, \{ x+L\mathbf{i} \,:\, x\in [-3L/2, 3L/2]^{k-1} \times [-3L/2, -L/2] \times [-3L/2, 3L/2]^{d-k} \} \,,$$
see Figure \ref{f:zerobis2}.
\begin{figure}[h!]
\centering
\input{zerobis2.pdf_t}
\caption{The boxes $\Lambda_L (\mathbf{i}), \Lambda_L'(\mathbf{i})$ and $\widetilde{\Lambda}_L (\mathbf{i}, k , l)$ for $k\in \{1,2\}$ and $l\in\{+,-\}$ ($d=2$).}
\label{f:zerobis2}
\end{figure}
Let $\phi_G (L, \mathbf{i}, k , +)$ (resp. $\phi_G (L, \mathbf{i}, k , -)$) be the maximal flow in $ \widetilde{\Lambda}_L (\mathbf{i}, k , +)$ (resp. $ \widetilde{\Lambda}_L (\mathbf{i}, k , -)$) from its top $[-3L/2, 3L/2]^{k-1} \times\{ 3L/2 \} \times [-3L/2, 3L/2]^{d-k}\cap \mathbb{Z}^d$ (resp. $[-3L/2, 3L/2]^{k-1} \times\{ -3L/2 \} \times [-3L/2, 3L/2]^{d-k}\cap \mathbb{Z}^d$) to its bottom $[-3L/2, 3L/2]^{k-1} \times\{ L/2 \} \times [-3L/2, 3L/2]^{d-k}\cap \mathbb{Z}^d$ (resp. $[-3L/2, 3L/2]^{k-1} \times\{ -L/2 \} \times [-3L/2, 3L/2]^{d-k}\cap \mathbb{Z}^d$), and let $E_G (L, \mathbf{i}, k , +)$ (resp. $E_G (L, \mathbf{i}, k , -)$) be a corresponding minimal cutset.
We claim that for every $L\in \mathbb{N}^*$ and every $\mathbf{i}\in \mathbb{Z}^d$, the set $\cup_{l=+,-} \cup_{k=1}^d E_G (L, \mathbf{i}, k , l)$ cuts $\partial (\Lambda_L (\mathbf{i})) \cap \mathbb{Z}^d$ from $\partial (\Lambda'_L (\mathbf{i}) ) \cap \mathbb{Z}^d$ in $\Lambda'_L(\mathbf{i}) \smallsetminus \Lambda_L(\mathbf{i})$, and thus that we have
\begin{equation}
\label{e:comp2}
\phi_G (L,\mathbf{i}) \,\leq\, \sum_{l=+,-} \sum_{k=1}^d \phi_G (L, \mathbf{i}, k , l) \,.
\end{equation}
We now prove this claim. Let $\gamma = (x_0, e_1, x_1, \dots , e_n, x_n)$ be a path from $\partial (\Lambda_L (\mathbf{i})) \cap \mathbb{Z}^d$ to $\partial (\Lambda'_L (\mathbf{i}) ) \cap \mathbb{Z}^d$ in $\Lambda'_L(\mathbf{i}) \smallsetminus \Lambda_L(\mathbf{i})$. Let
$$ j \,=\, \inf \{ i\in \{0,\dots , n \} \,:\, x_i \in \partial (\Lambda'_L(\mathbf{i})) \} \,. $$
Then there exist $k\in \{1,\dots, d\}$ and $l\in\{+,-\}$ such that
$$ x_{j} \,\in\, \{ x+L\mathbf{i} \,:\, x\in [-3L/2, 3L/2]^{k-1} \times\{ l\, 3L/2 \} \times [-3L/2, 3L/2]^{d-k} \} \,,$$
thus $x_{j} \in \widetilde{\Lambda}_L (\mathbf{i}, k , l)$. Let
$$ j' \,=\, \inf \{ i\leq j \,:\, \forall i' \in \{i,\dots , j \} \,,\, x_{i'} \in \widetilde{\Lambda}_L (\mathbf{i}, k , l) \} \,.$$
Then by definition of $j'$ we know that $x_{j'}\in \widetilde{\Lambda}_L (\mathbf{i}, k , l)$ but $x_{j'}$ has a neighbor outside $\widetilde{\Lambda}_L (\mathbf{i}, k , l)$. By definition of $j$, since $j'<j$, $x_{j'}$ can only be on one side of the boundary of $\widetilde{\Lambda}_L (\mathbf{i}, k , l)$, precisely $x_{j'}\in \{ x+L\mathbf{i} \,:\, x\in [-3L/2, 3L/2]^{k-1} \times\{ l\, L/2 \} \times [-3L/2, 3L/2]^{d-k} \}$. Thus the portion of $\gamma$ between $x_{j'}$ and $x_{j}$ is a path from the bottom to the top of $\widetilde{\Lambda}_L (\mathbf{i}, k , l)$, thus it must intersect $E_G (L, \mathbf{i}, k , l)$. This proves that $\cup_{l=+,-} \cup_{k=1}^d E_G (L, \mathbf{i}, k , l)$ cuts $\partial (\Lambda_L (\mathbf{i})) \cap \mathbb{Z}^d$ from $\partial (\Lambda'_L (\mathbf{i}) ) \cap \mathbb{Z}^d$ in $\Lambda'_L(\mathbf{i}) \smallsetminus \Lambda_L(\mathbf{i})$, thus \eqref{e:comp2} is proved. Combining \eqref{e:comp1} and \eqref{e:comp2}, we obtain that for every $L$, for every $p$ large enough,
\begin{equation}
\label{e:comp3}
\phi_G (pA, h(p)) \,\leq \, \sum_{\mathbf{i}\in \mathbf{J}} \sum_{l=+,-} \sum_{k=1}^d \phi_G (L, \mathbf{i}, k , l) \,.
\end{equation}
For short, for $K_0$ such that $G(]K_0,+\infty])<p_c(d)$, we denote by $\mathcal E (L,\mathbf i, k, l)$ the event $\mathcal{E}_{G,K_0}(\widetilde{\Lambda}_L (\mathbf{i}, k, l), L )$ defined by
$$ \mathcal E (L,\mathbf i, k, l) := \bigcap_{x\in \widetilde{\Lambda}_L (\mathbf{i}, k, l) \cap \mathbb{Z}^d } \{ \diam (C_{G,K_0} (x)) < L \} $$
for any $\mathbf{i}\in \mathbf{J}$, $k\in \{1,\dots, d\}$ and $l\in\{+,-\}$. On the one hand, by symmetry and by invariance of the model under translations by vectors with integer coordinates, we have, for any such $(\mathbf i, k, l)$,
$$ \mathbb{E} [ \phi_G (L, \mathbf{i}, k , l) \mathds{1}_{\mathcal E (L,\mathbf i, k, l)}] \,=\, \mathbb{E} [ \phi_G (L, \mathbf{0}, d , +) \mathds{1}_{\mathcal E (L,\mathbf 0, d, +)}] \,=\, \mathbb{E} [ \phi^{\vec{v}_0}_G (LD,L ) \mathds{1}_{\mathcal E_{G,K_0} (\cyl^{\vec{v}_0}(LD,L),L)}] $$
with $D = [0, 3]^{d-1} \times \{0\}$ and $\phi^{\vec{v}_0}$ defined as in Proposition \ref{p:zero}. By Proposition \ref{p:zero} we know that
\begin{equation}
\label{e:comp4}
\lim_{L \rightarrow \infty } \frac{\mathbb{E} [ \phi^{\vec{v}_0}_G (LD,L ) \mathds{1}_{\mathcal E_{G,K_0} (\cyl^{\vec{v}_0}(LD,L),L)}] }{L^{d-1}} \,=\, 0 \,.
\end{equation}
On the other hand, let $A^1$ be a hyperrectangle slightly larger than $A$, namely
$$ A^1 \,=\, \{ x+\vec w \,:\, x\in A \,,\, \|\vec w \|_{\infty} \leq 1 \,,\, \vec w \cdot \vec{v} =0 \} \,. $$
We recall that the event $\mathcal{E}_{G,K_0 }(\cyl(pA^1,h(p)), L)$ is defined by
$$ \mathcal{E}_{G,K_0 }(\cyl(pA^1,h(p)), L) \,=\, \bigcap_{x\in \cyl (pA^1,h(p)) \cap \mathbb{Z}^d } \{ \diam (C_{G,K_0} (x)) < L \} \,.$$
Notice that for all $\mathbf{i}\in \mathbf{J}$, we have $\Lambda'_L(\mathbf{i}) \subset \cyl (pA^1,h(p))$, at least for $p$ large enough, since we chose $L=L(p) = 2 \lfloor h(p)/c_d \rfloor$ for some constant $c_d$ depending only on the dimension. Then
$$ \mathcal{E}_{G,K_0 }(\cyl(pA^1,h(p)), L)\,\subset \, \bigcap_{\mathbf{i}\in \mathbf{J}} \bigcap_{l=+,-} \bigcap_{k=1}^d \mathcal{E}(L,\mathbf i, k, l) \,.$$
By \eqref{e:comp3} we obtain
\begin{align*}
\frac{\mathbb{E} [\phi_G (pA, h(p)) \mathds{1}_{\mathcal{E}_{G,K_0 }(\cyl(pA^1,h(p)), L) }]} {p^{d-1}} & \,\leq\, \frac{ \mathbb{E} [\phi_G (pA, h(p)) \mathds{1}_{ \cap_{\mathbf{i}\in \mathbf{J}} \cap_{l=+,-} \cap_{k=1}^d \mathcal{E}(L,\mathbf i, k, l)}]}{p^{d-1}} \\
& \,\leq\, \frac{1}{p^{d-1}} \sum_{\mathbf{i}\in \mathbf{J}} \sum_{l=+,-} \sum_{k=1}^d \mathbb{E}[ \phi_G (L, \mathbf{i}, k , l) \mathds{1}_{ \mathcal{E}(L,\mathbf i, k, l)} ]\\
& \,\leq\, \frac{2d L^{d-1} \card (\mathbf{J})}{p^{d-1}} \frac{\mathbb{E} [ \phi^{\vec{v}_0}_G (LD,L ) \mathds{1}_{\mathcal E_{G,K_0} (\cyl^{\vec{v}_0}(LD,L),L)}] }{L^{d-1}}\,,
\end{align*}
and it remains to notice that $L=L(p)$ goes to infinity and that $\card (\mathbf{J}) L(p)^{d-1} / p^{d-1}$ remains bounded when $p$ goes to infinity to conclude by \eqref{e:comp4} that
\begin{equation}
\label{e:comp5}
\lim_{p\rightarrow \infty} \frac{\mathbb{E} [\phi_G (pA, h(p)) \mathds{1}_{ \mathcal{E}_{G,K_0 }(\cyl (pA^1,h(p)), L) } ]}{p^{d-1}} \,=\, 0 \,.
\end{equation}
For every $\eta >0$, we obtain as in \eqref{e:*} that
\begin{align*}
\mathbb{P} [\phi_G (pA, h(p))& \geq \eta p^{d-1}] \\
&\,\leq\, \mathbb{P} [\mathcal{E}_{G,K_0 }(\cyl(pA^1,h(p)), L) ^c] + \mathbb{P} [\phi_G (pA, h(p)) \mathds{1}_{ \mathcal{E}_{G,K_0 }(\cyl(pA^1,h(p)), L) } \geq \eta p^{d-1}] \\
&\,\leq\, c_d \mathcal{H}^{d-1} (pA^1) h(p) \kappa_1 e^{-\kappa_2 L(p)}+ \eta^{-1} \frac{\mathbb{E} [\phi_G (pA, h(p)) \mathds{1}_{ \mathcal{E}_{G,K_0 }(\cyl(pA^1,h(p)), L) } ]}{p^{d-1}} \,,
\end{align*}
which goes to zero when $p$ goes to infinity since $h(p) \leq p $, $L(p) = 2 \lfloor h(p)/c_d \rfloor$ and $\lim_{p\rightarrow \infty} h(p)/\log (p) =+\infty$.
\end{dem}

\subsection[Proof of the convergence III]{Proof of the convergence III: end of the proof of Theorem \ref{t:CV}}
\label{s:retourzero}

At this stage, to finish the proof of Theorem \ref{t:CV} it remains to strengthen the mode of convergence in Proposition \ref{p:zerobis}. This can be done easily using the continuity of $G\mapsto \nu_G$, {\em i.e.}, Theorem \ref{thmcont}.
\begin{prop}
\label{p:zeroter}
We suppose that $G\mapsto \nu_G(\vec{v})$ is continuous, {\em i.e.}, that if $G$ and $(G_n)_{n\in \mathbb{N}}$ are probability measures on $[0,+\infty]$ such that $G(\{+\infty\}) < p_c(d)$ and for all $n\in \mathbb{N}$, $G_n (\{+\infty\}) < p_c(d)$ and $G_n \overset{d}{\rightarrow} G$, then
$$ \lim_{n\rightarrow \infty} \sup_{\vec{v} \in \mathbb{S}^{d-1}} \left\vert \nu_{G_n} (\vec v) - \nu_G (\vec v) \right\vert \,=\, 0 \,.$$
For any probability measure $G$ on $[0, +\infty]$ such that $G(\{+\infty\})<p_c(d)$ and $G(\{0\}) \geq 1-p_c(d)$, for any $\vec{v} \in \mathbb{S}^{d-1}$, for any non-degenerate hyperrectangle $A$ normal to $\vec{v}$, for any function $h : \mathbb{N} \mapsto \mathbb{R}^+$ satisfying $\lim_{p \rightarrow +\infty} h(p) / \log p =+\infty$, we have
$$ \lim_{p\rightarrow \infty} \frac{\phi_G (pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, 0 \quad \textrm{a.s.}$$
\end{prop}

\begin{dem}
Let $\vec{v}\in \mathbb{S}^{d-1}$. Let $G$ be a probability measure on $[0,+\infty]$ such that $G(\{+\infty\})<p_c(d)$ and $G(\{ 0 \}) \geq 1-p_c(d)$. By Proposition \ref{p:trivial}, this implies that $\nu_G(\vec{v}) =0$. Let $A$ be a non-degenerate hyperrectangle normal to $\vec{v}$, and let $h:\mathbb{N}^* \mapsto \mathbb{R}^+$ be such that $\lim_{p\rightarrow \infty} h(p)/\log p =+\infty$. Suppose first that $h$ also satisfies $\lim_{p\rightarrow \infty} h(p)/ p =0$. For any $\varepsilon >0$, we denote by $^\varepsilon G $ the distribution of the variables $t_G(e) + \varepsilon$. Obviously $^\varepsilon G (\{0\}) = 0 < 1-p_c(d)$, thus Proposition \ref{p:positif} states that
$$ \lim_{p\rightarrow \infty} \frac{\phi_{^\varepsilon G}(pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, \nu_{^\varepsilon G} (\vec{v}) \quad \textrm{a.s.}$$
Moreover, by coupling (see Equation \eqref{e:couplage}), $ \phi_{G}(pA, h(p)) \leq \phi_{^\varepsilon G}(pA, h(p))$, thus
\begin{equation}
\label{e:casnul}
\forall \varepsilon >0 \quad \limsup_{p\rightarrow \infty} \frac{\phi_{G}(pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,\leq\, \nu_{^\varepsilon G} (\vec{v}) \quad \textrm{a.s.}
\end{equation}
To conclude, we use the continuity of $G\mapsto \nu_G$: since $^\varepsilon G \overset{d}{\longrightarrow} G$ when $\varepsilon $ goes to $0$, we obtain that
\begin{equation}
\label{e:casnul2}
\lim_{\varepsilon \rightarrow 0} \nu_{^\varepsilon G} (\vec{v}) \,=\, \nu_G(\vec{v}) \,=\, 0 \,.
\end{equation}
Combining \eqref{e:casnul} and \eqref{e:casnul2} we obtain that
$$ \lim_{p\rightarrow \infty} \frac{\phi_{G}(pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,=\, 0 \qquad \textrm{a.s.}$$
If $h$ does not satisfy $\lim_{p\rightarrow \infty} h(p)/ p =0$, define $\tilde{h} (p) = \min (h(p), \sqrt{p})$ for all $p\in \mathbb{N}^*$. Then $\tilde h$ is mild, thus we just proved that
$$ \lim_{p\rightarrow \infty} \frac{\phi_{G}(pA, \tilde{h}(p))}{\mathcal{H}^{d-1} (pA)} \,=\, 0 \qquad \textrm{a.s.}$$
Moreover, since $\tilde{h} (p) \leq h(p)$ for all $p\in \mathbb{N}^*$, any cutset from the top to the bottom of $\cyl(pA, \tilde{h}(p))$ is also a cutset from the top to the bottom of $\cyl(pA, h(p))$, thus by the max-flow min-cut Theorem we obtain that $\phi_{G}(pA, \tilde{h}(p)) \geq \phi_{G}(pA, h(p))$ for all $p\in \mathbb{N}^*$. This allows us to conclude that
$$ \limsup_{p\rightarrow \infty} \frac{\phi_{G}(pA, h(p))}{\mathcal{H}^{d-1} (pA)} \,\leq\, \lim_{p\rightarrow \infty} \frac{\phi_{G}(pA, \tilde{h}(p))}{\mathcal{H}^{d-1} (pA)} \,=\, 0 \qquad \textrm{a.s.}$$
This ends the proof of Proposition \ref{p:zeroter}.
\end{dem}

\begin{rem}
It is worth noticing that this proof does not use Proposition \ref{p:zero} or Proposition \ref{p:zerobis} directly. However, we need these intermediate results to prove the continuity of $G\mapsto \nu_G$ that we use here.
\end{rem}

\section{Subadditivity}
\label{s:ssadd}

As mentioned in Section~\ref{s:T}, expressing the flow constant as the limit of a subadditive and integrable object is crucial to prove its continuity. This is the purpose of the present section. The first idea is to take the capacity of a cut which in a sense separates a hyperrectangle $A$ from infinity in a half-space. This will ensure subadditivity. However, in order to have a chance to compare it to the flows that we used so far, one needs the cut to stay at a small enough distance from $A$ so that it will be flat in the limit. In addition, to ensure good integrability properties, one needs this distance to be large enough so that one may find enough edges with bounded capacity to form a cutset. These constraints lead to searching for a cutset in a \emph{slab} of random height, whose height is defined in \eqref{e:defH}.

Let $\vec{v} \in \mathbb{S}^{d-1}$, and let $A$ be any non-degenerate hyperrectangle normal to $\vec{v}$. For any $h$, we denote by $\slab (A,h, \vec{v})$ the cylinder whose base is the hyperplane spanned by $A$ and of height $h$ (possibly infinite), {\em i.e.}, the subset of $\mathbb{R}^d$ defined by
$$ \slab (A,h,\vec{v}) \,=\, \{ x + r \vec{v} \,:\, x \in \hyp (A) \,,\, r\in [0,h] \} \,.$$
Let $V(A)$ be the following set of vertices in $\mathbb{Z}^d$, which is a discretized version of $A$:
$$ V(A) \,=\, \{ x\in \mathbb{Z}^d \cap \slab (A,\infty,\vec{v})^c \,:\, \exists y \in \mathbb{Z}^d \cap \slab (A,\infty,\vec{v}) \,,\, \langle x,y \rangle \in \mathbb{E}^d \textrm{ and $\langle x,y \rangle$ intersects $A$}\}\,. $$
Let $W(A,h, \vec{v})$ be the following set of vertices in $\mathbb{Z}^d$, which is a discretized version of $\hyp (A + h\vec{v})$:
$$ W(A,h, \vec{v}) \,=\, \{ x\in \mathbb{Z}^d \cap \slab (A,h,\vec{v}) \,:\, \exists y \in \mathbb{Z}^d \cap ( \slab (A,\infty,\vec{v}) \smallsetminus \slab (A,h,\vec{v})) \,,\, \langle x,y \rangle \in \mathbb{E}^d \}\,. $$
We say that a path $\gamma = (x_0, e_1, x_1, \dots, e_n, x_n)$ goes from $A$ to $\hyp (A+h\vec{v})$ in $\slab (A,h,\vec{v})$ if $x_0 \in V(A)$, $x_n \in W(A,h, \vec{v})$ and for all $ i\in\{1,\dots , n\}$, $x_i \in \slab (A,h,\vec{v})$ (see Figure \ref{f:sub1}).
\begin{figure}[h!]
\centering
\input{sub1.pdf_t}
\caption{A path $\gamma $ from $A$ to $\hyp (A+h\vec{v})$ in $\slab (A,h,\vec{v})$ and a corresponding cutset ($d=2$).}
\label{f:sub1}
\end{figure}
We say that a set of edges $E$ cuts $A$ from $\hyp (A+h\vec{v})$ in $\slab (A,h,\vec{v})$ if $E$ contains at least one edge of any path $\gamma $ that goes from $A$ to $\hyp (A+h\vec{v})$ in $\slab (A,h,\vec{v})$. For any probability measure $F$ on $[0,+\infty]$ such that $F(\{+\infty\}) < p_c(d)$, for any $K_0\in \mathbb{R}$ such that $F([K_0, +\infty])<p_c(d)$, we define the random height $H_{F,K_0}(A)$ as
\begin{equation}
\label{e:defH}
H_{F,K_0}(A) \,=\, \inf \left\{ h\geq \mathcal{H}^{d-1} (A) ^{\frac{1}{2(d-1)}} \,:\, \begin{array}{c}\exists E \subset \mathbb{E}^d \textrm{ s.t. } \forall e\in E \,,\, t_F(e) \leq K_0\\ \text{ and $E$ cuts $A$ from $\hyp(A+h\vec{v})$}\\ \text{ in $\slab(A,h,\vec{v})$} \end{array}\right\}\,.
\end{equation}
We will say a few words about the definition of $H_{F,K_0} (A)$ in Remark \ref{r:H}, after the proof of the first result of this section, namely Theorem \ref{t:ssadd}. We finally define the alternative random maximal flow $\widetilde{\phi}_{G, F,K_0} (A)$ by
\begin{equation}
\label{e:varphi}
\widetilde{\phi}_{G,F,K_0} (A) \,=\, \inf \left\{T_G(E) \,:\, \begin{array}{c} E\subset \mathbb{E}^d \textrm{ and $E$ cuts $A$ from $\hyp(A+H_{F,K_0}(A) \vec{v})$}\\ \textrm{in $\slab(A,H_{F,K_0} (A),\vec{v})$ } \end{array} \right\} \,.
\end{equation}
Notice that we do not know whether the infimum in the definition \eqref{e:varphi} of $\widetilde{\phi}_{G,F,K_0} (A)$ is achieved. The purpose of using two different distributions $F$ and $G$ in the definition of $ \widetilde{\phi}_{G,F,K_0} (A)$ is to have monotonicity in $G$, which will be used later, in the proof of Proposition~\ref{propupper}. Finally, we say that a direction $\vec{v} \in \mathbb{S}^{d-1}$ is rational if there exists $M\in \mathbb{R}^+$ such that $M\vec{v}$ has rational coordinates.

Now, we will prove that these flows $\widetilde{\phi}_{G,F,K_0} (A)$, properly rescaled, converge for large hyperrectangles towards $\nu_G(\vec{v})$ (as defined in \eqref{defnu}), and thus obtain an alternative definition of $\nu_G(\vec{v})$. This will be done in two steps. First we prove the convergence of $\widetilde{\phi}_{G,F,K_0} (pA)/\mathcal{H}^{d-1} (pA)$ towards some limit $\widetilde{\nu}_{G,F,K_0} (\vec{v})$ by a subadditivity argument in Theorem~\ref{t:ssadd}. Then, we compare $\widetilde{\phi}_{G,F,K_0} (pA)$ with $\phi_{G} (pA, h(p))$ to prove that $\widetilde{\nu}_{G,F,K_0}(\vec{v}) = \nu_G (\vec{v})$, and this is done in Proposition~\ref{prop:ssadd}.

\begin{thm}
\label{t:ssadd}
Let $G$ be a probability measure on $[0,+\infty]$ such that $G(\{+\infty\}) < p_c(d)$. For any probability measure $F$ on $[0,+\infty]$ such that $F(\{+\infty\}) < p_c(d)$ and $G\preceq F$, for any $K_0\in \mathbb{R}$ such that $F(]K_0, +\infty])<p_c(d)$, for any rational $\vec{v} \in \mathbb{S}^{d-1}$, there exists a non-degenerate hyperrectangle $A$ (depending on $\vec{v}$ but neither on $G, F$ nor on $K_0$) which is normal to $\vec{v}$ and contains the origin of the graph $\mathbb{Z}^d$ such that
$$\widetilde{\nu}_{G,F,K_0}(\vec{v}):= \inf_{p\in\mathbb{N}^*} \frac{\mathbb{E}[\widetilde{\phi}_{G, F,K_0} (pA)]}{\mathcal{H}^{d-1} (pA)}<\infty$$
and
$$ \lim_{p\rightarrow \infty} \frac{\widetilde{\phi}_{G,F,K_0} (pA)}{\mathcal{H}^{d-1} (pA)} \,=\,\widetilde{\nu}_{G,F,K_0}(\vec{v})\quad \text{a.s. and in }L^1\;.$$
\end{thm}

\begin{dem}
Let $G$ be a probability measure on $[0,+\infty]$ such that $G(\{+\infty\}) < p_c(d)$. Let $F$ be a probability measure on $[0,+\infty]$ such that $F(\{+\infty\}) < p_c(d)$ and $G\preceq F$. Let $K_0\in \mathbb{R}$ be such that $F(]K_0, +\infty])<p_c(d)$. We consider a fixed rational $\vec{v} \in \mathbb{S}^{d-1}$ and $H$, the hyperplane normal to $\vec{v}$ containing $0$. Since $\vec{v}$ is rational, there exists an orthogonal basis of $H$ made of vectors with integer coordinates; let us call it $(\vec{f}_1,\ldots,\vec{f}_{d-1})$. Then, we take $A$ to be the hyperrectangle built on the origin and this basis: $A=\{\sum_{i=1}^{d-1}\lambda_i\vec{f}_i\;:\;\forall i, \;\lambda_i\in [0,1]\}$.
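For concreteness, here is a purely illustrative example of this construction (it is not used in the sequel). In dimension $d=2$, for the rational direction $\vec{v} = \frac{1}{5}(3,4)$, the hyperplane $H$ is the line normal to $\vec{v}$ containing $0$, and one may take $\vec{f}_1 = (-4,3)$, which has integer coordinates and satisfies
$$ \vec{f}_1 \cdot \vec{v} \,=\, \frac{1}{5}\left( (-4)\times 3 + 3\times 4 \right) \,=\, 0 \,;$$
the hyperrectangle $A$ is then the segment joining $0$ to $(-4,3)$.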
Notice that the model is invariant under translations by $\vec{f}_i$ for any $i$, in the sense that the flow $\widetilde{\phi}_{G,F,K_0} (A+\vec{f}_i)$ with capacities $t$ is equal to the flow $\widetilde{\phi}_{G,F,K_0} (A)$ with capacities $t'$ defined by $t'(\langle x,y\rangle)=t(\langle x+\vec{f}_i,y+\vec{f}_i\rangle)$. Moreover, if $A_1,\dots, A_k$ are hyperrectangles included in $\hyp (A)$ with disjoint interiors and such that $B=\cup_{i=1}^k A_i$ is also a hyperrectangle, we claim that
\begin{equation}
\label{e:ssadd}
\widetilde{\phi}_{G,F,K_0} (B) \,\leq\, \sum_{i=1}^k \widetilde{\phi}_{G,F,K_0} (A_i) \,.
\end{equation}
Indeed, first notice that if $B_1, B_2$ are hyperrectangles normal to $\vec{v}$ such that $B_1 \subset B_2$, then by definition any set of edges $E$ that cuts $B_2$ from $\hyp (B_2 + h\vec{v})$ in $\slab (B_2, h, \vec{v})$ also cuts $B_1$ from $\hyp (B_1 + h\vec{v}) = \hyp (B_2 + h\vec{v})$ in $\slab (B_1, h, \vec{v}) = \slab (B_2, h, \vec{v})$, thus $H_{F,K_0}(B_2) \geq H_{F,K_0} (B_1)$. Thus if $B=\cup_{i=1}^k A_i$ then $H_{F,K_0}(B) \geq \max_{1\leq i \leq k} H_{F,K_0} (A_i)$. For all $i\in \{1,\dots , k\}$, let $E_i$ be a set of edges that cuts $A_i$ from $\hyp (A_i + H_{F,K_0} (A_i) \vec{v})$ in $\slab (A_i, H_{F,K_0} (A_i) , \vec{v})$. Let us prove that $\cup_{i=1}^k E_i$ cuts $B$ from $\hyp (B + H_{F,K_0} (B) \vec{v})$ in $\slab (B, H_{F,K_0} (B) , \vec{v})$. Let $\gamma =(x_0, e_1, x_1, \dots, e_n, x_n) $ be a path from $B$ to $\hyp (B + H_{F,K_0} (B) \vec{v})$ in $\slab (B,H_{F,K_0}(B),\vec{v})$. Since $B=\cup_{i=1}^k A_i$, we have $V(B) = \cup_{i=1}^k V(A_i)$, thus $x_0 \in V(A_j)$ for some $j\in \{1,\dots , k\}$. If $x_n\in W(A_j,H_{F,K_0} (A_j), \vec{v}) $, then $\gamma$ is a path from $A_j$ to $\hyp (A_j + H_{F,K_0} (A_j) \vec{v})$ in $\slab (A_j, H_{F,K_0} (A_j) , \vec{v})$, thus $\gamma$ contains an edge of $E_j$. Otherwise, since $H_{F,K_0} (A_j) \leq H_{F,K_0} (B)$, we know that $m:=\inf\{ p\in\{1,\dots ,n\} \,:\, x_p \notin \slab (A_j, H_{F,K_0} (A_j), \vec{v})\} \leq n$, thus $\gamma$ contains a path $(x_0, e_1, x_1, \dots, x_{m-1})$ from $A_j$ to $\hyp (A_j + H_{F,K_0} (A_j) \vec{v})$ in $\slab (A_j, H_{F,K_0} (A_j) , \vec{v})$, and we conclude again that $\gamma$ contains an edge of $E_j$. Inequality \eqref{e:ssadd} follows by optimizing on $T_G(E_i)$ for all $i$.

We now prove that $\widetilde{\phi}_{G,F,K_0} (A)$ has good integrability properties. For any $x\in V(A)$ we consider the connected component of $x$ in $\slab(A, \infty, \vec{v})$ for the percolation $(\mathds{1}_{t_F(e)>K_0})$, {\em i.e.},
$$ C^{\vec{v}}_{F,K_0} (x) \,=\, \left\{ y\in \slab(A, \infty, \vec{v}) \,:\,\begin{array}{c} \textrm{$y$ is connected to $x$ by a path $\gamma = (x_0, e_1,\dots , x_n)$}\\ \textrm{s.t. $\forall i\in \{1,\dots, n\}$, $x_i \in \slab(A, \infty, \vec{v}) $ and $t_F(e_i) > K_0$}\end{array} \right\}\,.$$
By definition of $H_{F,K_0} (A)$, we know that any path $\gamma$ in $ \slab(A,H_{F,K_0} (A),\vec{v})$ from $A$ to $\hyp (A + H_{F,K_0} (A)\vec{v})$ must contain at least one edge $e$ such that $t_F(e)\leq K_0$. Thus $\gamma$ cannot be included in $\cup_{x\in V(A)} C^{\vec{v}}_{F,K_0} (x)$, and this implies that $\gamma$ must contain at least one edge $e$ that belongs to $\partial_{\textrm{e}} (\cup_{x\in V(A)} C^{\vec{v}}_{F,K_0} (x) )$. This edge $e$ satisfies (by the coupling relation \eqref{e:couplage}) $t_G(e)\leq t_F(e) \leq K_0$.
Thus, comparing clusters in the slab with clusters in the full space, we obtain
\begin{align*}
\mathbb{E}[ \widetilde{\phi}_{G,F,K_0} (A)] & \,\leq \, K_0 \sum_{x\in V(A)} \mathbb{E}[ \carde (\partial_{\textrm{e}} C^{\vec{v}}_{F,K_0} (x) )] \,\leq \, c_d K_0 \sum_{x\in V(A)}\mathbb{E}[ \cardv ( C^{\vec{v}}_{F,K_0} (x) )]\\
& \,\leq \, c_d K_0 \sum_{x\in V(A)} \mathbb{E}[\cardv ( C_{F,K_0} (x) )] \,\leq \, c_d K_0 \cardv(V(A)) \mathbb{E}[\cardv ( C_{F,K_0} (0) )] \,,
\end{align*}
and
$$ \frac{\mathbb{E}[ \widetilde{\phi}_{G,F,K_0} (A)] }{\mathcal{H}^{d-1} (A)} \,\leq \, c_d K_0 \frac{\cardv(V(A))}{\mathcal{H}^{d-1} (A)} \mathbb{E}[\cardv ( C_{F,K_0} (0) )] \,\leq \,c'_d K_0 \mathbb{E}[\cardv ( C_{F,K_0} (0) )] \,<\, +\infty $$
uniformly in $A$. We can thus apply a multi-parameter ergodic theorem (see for instance Theorem~2.4 in \cite{Akcoglu} and Theorem~1.1 in \cite{Smythe}) to deduce that there exists a constant $\widetilde{\nu}_{G,F,K_0}(\vec{v})$ (that depends on $\vec{v}$ but not on $A$ itself) such that
$$ \widetilde{\nu}_{G,F,K_0}(\vec{v}) \,=\, \inf_{p\in\mathbb{N}^*} \frac{\mathbb{E}[\widetilde{\phi}_{G,F,K_0} (pA)]}{\mathcal{H}^{d-1} (pA)}\,=\, \lim_{p\rightarrow \infty} \frac{\widetilde{\phi}_{G,F,K_0} (pA)}{\mathcal{H}^{d-1} (pA)} \quad\textrm{a.s. and in }L^1 \,. $$
\end{dem}

We now state that the limit $\widetilde{\nu}_{G,F,K_0}(\vec{v})$ appearing in Theorem \ref{t:ssadd} is in fact equal to $\nu_G(\vec{v})$. We stress that in the proof of Proposition~\ref{prop:ssadd} below, we will use the convergence in probability of rescaled flows in tilted cylinders towards the flow constant, stated above in Propositions~\ref{p:positif} and~\ref{p:zerobis}.

\begin{rem}
\label{r:H}
We will see in the proof of Proposition~\ref{prop:ssadd} below that with large probability, $H_{F,K_0}(pA)$ equals $\mathcal{H}^{d-1} (pA) ^{\frac{1}{2(d-1)}} $ for $p$ large. Since $\widetilde{\phi}_{G,F,K_0} (pA)$ depends on $F$ only through $H_{F,K_0}(pA)$, which is then equal to $\mathcal{H}^{d-1} (pA) ^{\frac{1}{2(d-1)}} $, it is thus natural that the limit in Theorem~\ref{t:ssadd} does not depend on $F$. Moreover, notice that the function $h:p\mapsto \mathcal{H}^{d-1} (pA) ^{\frac{1}{2(d-1)}}$ is mild: this is why we chose to include the lower bound $\mathcal{H}^{d-1} (pA) ^{\frac{1}{2(d-1)}}$ in the definition \eqref{e:defH} of $H_{F,K_0}(pA)$.
\end{rem}

\begin{prop}
\label{prop:ssadd}
For any fixed rational $\vec{v} \in \mathbb{S}^{d-1}$, any probability measure $F$ on $[0,+\infty]$ such that $F(\{+\infty\}) < p_c(d)$ and $G\preceq F$, and any $K_0\in \mathbb{R}$ such that $F(]K_0, +\infty])<p_c(d)$,
$$\widetilde{\nu}_{G,F,K_0}(\vec{v}) = \nu_G(\vec{v})\;.$$
\end{prop}

\begin{dem}
We first prove that $\nu_G(\vec{v}) \leq \widetilde{\nu}_{G,F,K_0} (\vec{v})$. We associate with the fixed rational $\vec{v} \in \mathbb{S}^{d-1}$ the same hyperrectangle $A$ as in the proof of Theorem \ref{t:ssadd}. We consider the function $h(p) = \mathcal{H}^{d-1} (pA) ^{\frac{1}{2(d-1)}} $. Then $h$ is mild. Thus we can apply Proposition \ref{p:positif} or \ref{p:zerobis} to state that
\begin{equation}
\label{e:sens11}
\lim_{p\rightarrow \infty} \frac{\phi_G (pA, h(p))}{ \mathcal{H}^{d-1} (pA)} \,=\, \nu_G(\vec{v}) \quad \textrm{in probability.}
\end{equation}
Moreover, let $\gamma = (x_0, e_1, \dots, e_n, x_n)$ be a path from the bottom to the top of $\cyl(pA, h(p))$ inside $\cyl(pA,h(p))$. Let $k=\max \{ j\geq 0 \,:\, x_j \notin \slab (pA,\infty,\vec{v}) \}$.
Then $x_k \in V(pA)$ and the truncated path $\gamma' = (x_k, e_{k+1}, \dots , x_n)$ is a path from $pA$ to $\hyp (pA+h(p) \vec{v})$ in $\slab (pA,h(p),\vec{v})$. On the event $\{H_{F,K_0} (pA) = h(p)\}$, we conclude that any set of edges $E$ that cuts $pA$ from $\hyp(pA+H_{F,K_0}(pA) \vec{v})$ in $\slab(pA,H_{F,K_0} (pA),\vec{v})$ also meets every path from the bottom to the top of $\cyl (pA, h(p))$, thus on the event $\{H_{F,K_0} (pA) = h(p)\} $ we have
\begin{equation}
\label{e:sens12}
\phi_G (pA, h(p)) \leq \widetilde{\phi}_{G,F,K_0}(pA) \,.
\end{equation}
Combining \eqref{e:sens12} and \eqref{e:*} we obtain
\begin{align}
\label{e:sens13}
\mathbb{P}[ \phi_G (pA, h(p)) > \widetilde{\phi}_{G,F,K_0}(pA) ]& \,\leq \, \mathbb{P} [H_{F,K_0} (pA) > h(p) ] \nonumber \\
& \,\leq\, \mathbb{P}[ \exists x\in V(pA) \,:\, \diam (C_{F,K_0} (x) ) \geq h(p)/2 ] \nonumber \\
& \,\leq \, \cardv (V(pA)) \mathbb{P}[\diam (C_{F,K_0} (0) ) \geq h(p)/2] \nonumber \\
& \,\leq \, c_d \mathcal{H}^{d-1} (pA) \kappa_1 e^{-\kappa_2 h(p)}
\end{align}
which goes to zero when $p$ goes to infinity since $\lim_{p\rightarrow \infty} h(p)/\log p =+\infty$. Combining Theorem~\ref{t:ssadd}, \eqref{e:sens11} and \eqref{e:sens13} we conclude that $\nu_G(\vec{v}) \leq \widetilde{\nu}_{G,F,K_0} (\vec{v})$.

We now prove that $\nu_G(\vec{v}) \geq \widetilde{\nu}_{G,F,K_0} (\vec{v})$. We associate with the fixed rational $\vec{v} \in \mathbb{S}^{d-1}$ the same hyperrectangle $A$ as in the proof of Theorem~\ref{t:ssadd}, $A= \{ \sum_{i=1}^{d-1}\lambda_i\vec{f}_i\;:\;\forall i, \;\lambda_i\in [0,1] \}$. Let $(\vec{w}_1, \dots , \vec{w}_{d-1}) = (\vec{f}_1 / \|\vec{f}_1\|_2 , \dots , \vec{f}_{d-1} / \|\vec{f}_{d-1}\|_2)$: it is an orthonormal basis of the orthogonal complement of $\vec{v}$ made of rational vectors. We want to construct a set of edges that cuts $pA$ from $\hyp(pA+H_{F,K_0}(pA) \vec{v})$ in $\slab(pA,H_{F,K_0} (pA),\vec{v})$ by gluing together cutsets from the top to the bottom of different cylinders. For any fixed $\eta >0$, we slightly enlarge the hyperrectangle $A$ by considering
$$ A^\eta \,=\, \{ x+\vec w \,:\, x\in A \,,\, \|\vec w \|_{\infty} \leq \eta \,,\, \vec w \cdot \vec{v} =0 \} \,.$$
Let $h(p) = \mathcal{H}^{d-1} (pA) ^{\frac{1}{2(d-1)}} $ as previously. We consider the cylinder $\cyl^{\vec{v}}(pA^\eta , h(p))$, and a minimal cutset $E_0 (p,A,\eta )$ between the top $B_1^{\vec{v}}(pA^\eta , h(p))$ and the bottom $B_2^{\vec{v}} (pA^\eta , h(p))$ of this cylinder. To obtain a set of edges that cuts $pA$ from $\hyp (pA + H_{F,K_0} (pA) \vec{v})$ in $\slab (pA, H_{F,K_0} (pA), \vec{v})$, we need to add to $E_0 (p,A,\eta)$ some edges that prevent the flow from escaping from $\cyl^{\vec{v}} (pA^\eta , h(p))$ through its vertical sides, see Figure \ref{f:sub2}.
\begin{figure}[h!]
\centering
\input{sub2.pdf_t}
\caption{The construction of a cutset that separates $pA$ from $\hyp (pA + H_{F,K_0} (pA) \vec{v})$ in $\slab (pA, H_{F,K_0} (pA), \vec{v})$ (here $d=2$ and the cutsets $E_0 (p,A,\eta ), E^+_1 (p,A,\eta)$ and $E^-_1 (p,A,\eta)$ are represented {\em via} their dual as surfaces).}
\label{f:sub2}
\end{figure}
For $i\in \{1,\dots , d-1\}$, let $D^+_i (p,A,\eta)$ and $D^-_i (p,A,\eta)$ be the two $(d-1)$-dimensional sides of $\partial (\cyl(p A^\eta, h(p)))$ that are normal to $\vec{w}_i$, and such that $D^+_i (p,A,\eta)$ is the translate of $D^-_i (p,A,\eta)$ by the vector $f (i,p,A,\eta) \vec{w}_i$ for some $f(i,p,A,\eta) >0$.
We consider the cylinder $\cyl^{-\vec{w}_i} (D^+_i (p,A,\eta), p^{1/4} )$ (resp. $\cyl^{+\vec{w}_i} (D^-_i (p,A,\eta), p^{1/4} )$) and a minimal cutset $E^+_i (p,A,\eta)$ (resp. $E^-_i (p,A,\eta)$) from the top to the bottom of $\cyl^{-\vec{w}_i} (D^+_i (p,A,\eta), p^{1/4} )$ (resp. from the top to the bottom of $\cyl^{+\vec{w}_i} (D^-_i (p,A,\eta), p^{1/4} )$) in the direction $\vec{w}_i$. We emphasize the fact that the lengths of the sides of $\cyl^{-\vec{w}_i} (D^+_i (p,A,\eta), p^{1/4} )$ and $\cyl^{+\vec{w}_i} (D^-_i (p,A,\eta), p^{1/4} )$ do not grow to infinity at the same rate in $p$. We shall prove the following three properties:
\begin{itemize}
\item[$(i)$] For every $\eta >0$, at least for $p$ large enough, the set of edges $F(p,A,\eta)$ defined by
$$ F(p,A,\eta) \,:=\, E_0 (p,A,\eta ) \cup \left(\bigcup_{i=1}^{d-1} E^+_i (p,A,\eta) \right) \cup \left(\bigcup_{i=1}^{d-1} E^-_i (p,A,\eta) \right)$$
cuts $pA$ from $\hyp (pA+ H_{F,K_0} (pA) \vec{v})$ in $\slab (pA, H_{F,K_0} (pA), \vec{v})$.
\item[$(ii)$] For every $\eta >0$,
$$ \lim_{p\rightarrow \infty} \frac{\phi^{\vec{v}}_G (p A^\eta , h(p) )}{\mathcal{H}^{d-1} ( p A^\eta)} \,=\, \lim_{p\rightarrow \infty} \frac{T_G (E_0 (p,A,\eta ))}{\mathcal{H}^{d-1} ( p A^\eta)} \,=\, \nu_G(\vec{v}) \quad \textrm{in probability}\,.$$
\item[$(iii)$] For all $i\in \{1,\dots , d-1\}$, for all $l\in \{+,-\}$, for all $\eta>0$, we have
$$ \lim_{p\rightarrow \infty} \frac{\phi^{-l \vec{w}_i}_G (D^l_i (p,A,\eta), p^{1/4} )}{p^{d-1}} \,=\, \lim_{p\rightarrow \infty} \frac{ T_G (E^l_i (p,A,\eta)) }{p^{d-1}} \,=\, 0 \quad \textrm{in probability}\,.$$
\end{itemize}
Before proving these three properties, we show how they help us to conclude the proof. By property $(i)$ we know that for every $\eta>0$, for $p$ large enough,
\begin{equation}
\label{e:sens21}
\frac{\widetilde{\phi}_{G,F,K_0} (pA)}{\mathcal{H}^{d-1} (pA)} \,\leq \, \frac{\mathcal{H}^{d-1} (A^\eta)}{ \mathcal{H}^{d-1} (A)} \frac{T_G (E_0 (p,A,\eta ))}{\mathcal{H}^{d-1} ( p A^{\eta})} +\frac{1}{\mathcal{H}^{d-1} (A)} \sum_{i=1}^{d-1} \sum_{l=+,-} \frac{ T_G (E^l_i (p,A,\eta)) }{p^{d-1} } \,.
\end{equation}
By Theorem~\ref{t:ssadd}, we know that the left hand side of \eqref{e:sens21} converges a.s. to $\widetilde{\nu}_{G,F,K_0}(\vec{v})$ when $p$ goes to infinity. By properties $(ii)$ and $(iii)$, we know that the right hand side of \eqref{e:sens21} converges in probability to $\nu_G(\vec{v}) \mathcal{H}^{d-1} (A^\eta)/ \mathcal{H}^{d-1} (A)$, which is arbitrarily close to $\nu_G(\vec{v})$ when $\eta$ goes to $0$. We conclude that $\widetilde{\nu}_{G,F,K_0}(\vec{v}) \leq \nu_G(\vec{v})$.

To conclude the proof of Proposition~\ref{prop:ssadd} it remains to prove the properties $(i)$, $(ii)$ and $(iii)$. The proof of property $(i)$ is very similar to the proof of inequality \eqref{e:comp2}, so we just recall the underlying idea of this proof without giving all the details again. Indeed, if $\gamma$ is a path from $pA$ to $\hyp (pA+ H_{F,K_0} (pA) \vec{v})$ in $\slab (pA, H_{F,K_0} (pA), \vec{v})$, then $\gamma$ starts at a vertex of $V(pA)$, its next vertex is inside the cylinder $\cyl^{\vec{v}} (p A^{\eta/2} , h(p)/2) $ (for a fixed $\eta>0$, at least for $p$ large enough) and then after a finite number of steps it has to leave the cylinder $\cyl^{\vec{v}} (p A^{\eta} , h(p)) $ by its top or by one of its vertical faces.
If it leaves by the top of this cylinder it must contain an edge of $E_0 (p,A, \eta)$, and if it leaves by one of its vertical faces it must contain an edge of one of the $E^l_i (p,A,\eta)$. Property $(ii)$ is a straightforward application of Proposition \ref{p:positif} or \ref{p:zerobis}. Property $(iii)$ is a bit more delicate to prove since we cannot apply Proposition \ref{p:positif} or \ref{p:zerobis} here. Indeed, for given $i\in \{1,\dots, d-1\}$ and $l\in\{+,-\}$, the base of the cylinder $\cyl^{-l \vec{w}_i} (D^l_i (p,A,\eta), p^{1/4} )$, namely $D^l_i (p,A,\eta)$, grows at speed $p$ in $(d-2)$ directions and at speed $h(p)$ (of order $ p^{\frac{1}{2(d-1)}}$) in one direction. We did not take into account this kind of anisotropic growth in our study (contrary to Kesten in \cite{Kesten:StFlour} and Zhang in \cite{Zhang,Zhang2017}). We can conjecture that $ \phi^{-l \vec{w}_i} (D^l_i (p,A,\eta), p^{1/4} )$ grows linearly with $p^{d-2} h(p)$, with a multiplicative constant given precisely by $\nu_G(\vec{w}_i)$, but this cannot be deduced easily from what has already been proved. However, we do not need such a precise result. We recall that the definition of the event $\mathcal{E}_{G,K_0}(\cdot,\cdot)$ was given in \eqref{e:E}. Mimicking the proof of inequality \eqref{e:hop3} in the proof of Proposition \ref{p:finitude}, we obtain that for a constant $K(A,d,\eta)$, for variables $(X_i)$ that are i.i.d. with the same distribution as $\carde (C_{G,K_0} (0))$, we have \begin{align} \label{e:pfff} \mathbb{P} [ & \phi^{-l \vec{w}_i} (D^l_i (p,A,\eta), p^{1/4} ) \geq \beta p^{d-2} h(p) ]\nonumber\\ & \,\leq\, \mathbb{P} \left[\mathcal{E}_{G,K_0} \left( \cyl^{-l \vec{w}_i} (D^l_i (p,A,\eta), p^{1/4} ), \frac{p^{1/4}}{2} \right) ^c\right] + \mathbb{P} \left[\sum_{i=1}^{K(A,d,\eta) \lfloor p^{d-2} h(p) \rfloor } X_i \geq c_dK_0^{-1} \beta p^{d-2} h(p) \right] \nonumber\\ & \,\leq\, K(A,d,\eta) p^{d-2} h(p) p^{1/4} \kappa_1 e^{-\kappa_2 p^{1/4} /2} + \mathbb{E}[\exp(\lambda X_1)]^{K(A,d,\eta) p^{d-2} h(p)} e^{-\lambda \beta c_dK_0^{-1} p^{d-2} h(p)} \end{align} where $\lambda (G,d)>0$ satisfies $\mathbb{E}[\exp(\lambda X_1)]<\infty$. Since $h(p)$ is of order $p^{\frac{1}{2(d-1)}}$, the first term of the right hand side of \eqref{e:pfff} vanishes when $p$ goes to infinity. We can choose $\beta (G,d,A)$ large enough such that the second term of the right hand side of \eqref{e:pfff} vanishes too when $p$ goes to infinity. This is enough to conclude that property $(iii)$ holds. \end{dem} \section{Continuity of $G \mapsto \nu_G $} \label{s:cont} This section is devoted to the proof of Theorem \ref{thmcont}. To prove this theorem we mimic the proof of the corresponding property for the time constant, see \cite{Cox}, \cite{CoxKesten}, \cite{Kesten:StFlour} and \cite{GaretMarchandProcacciaTheret}. We stress that the proof relies heavily on the following two facts: \begin{itemize} \item[$(i)$] $\nu_G (\vec{v}) $ can be seen without any moment condition as the limit of a subadditive process that has good properties of monotonicity, \item[$(ii)$] $ \lim_{K\rightarrow \infty} \nu_{G^K} (\vec{v}) = \nu_G(\vec{v})$. \end{itemize} We stated Theorem \ref{t:ssadd} to get $(i)$. Property $(ii)$ is a direct consequence of the definition \eqref{defnu} of $\nu_G$ itself, but we then had to work to prove that the constant $\nu_G$ defined this way is indeed the limit of some rescaled flows.
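Both facts are exploited through the monotone coupling of the edge capacities: for every law at stake, the capacity of an edge is obtained by applying the pseudo-inverse of its distribution function to one common uniform random variable, so that stochastic domination and truncation become almost sure pointwise comparisons. A minimal illustrative sketch of this coupling (ours, in Python; the function names are not from any percolation library):
\begin{verbatim}
import numpy as np

def coupled_capacities(quantile_fns, n_edges, seed=0):
    # One family of uniforms on ]0,1[ realizes every law at once:
    # t_G(e) = G^{-1}(U_e).  If G1 is stochastically dominated by G2,
    # then G1^{-1}(u) <= G2^{-1}(u), so capacities are ordered a.s.
    u = np.random.default_rng(seed).uniform(size=n_edges)
    return [np.array([q(x) for x in u]) for q in quantile_fns]

def truncate_at(t, K):
    # Capacities for the truncated law G^K (mass above K moved to K)
    # are obtained from those of G by a pointwise minimum.
    return np.minimum(t, K)
\end{verbatim}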
With $(i)$ and $(ii)$ in hand, the following proof of Theorem \ref{thmcont} is quite classical and easy, since we already have all the appropriate results to perform it efficiently. \subsection{Preliminary lemmas} Let $G$ (resp. $G_n,n\in \mathbb{N}$) be a probability measure on $[0,+\infty]$ such that $G(\{+\infty\})<p_c(d)$ (resp. $G_n(\{+\infty\})<p_c(d)$). We define the function $\mathcal{G} : t \in [0,+\infty[ \mapsto G([t,+\infty])$ (respectively $\mathcal{G}_n(t) = G_n ([t,+\infty]) $) that characterizes $G$ (resp. $G_n$). Notice that stochastic domination between probability measures can be easily characterized with these functions: $$ G_1 \succeq G_2 \quad \Leftrightarrow \quad \forall t\in[0,+\infty[ \,, \,\, \mathcal{G}_1 (t) \geq \mathcal{G}_2(t) \,. $$ We recall that we always build the capacities of the edges for different laws by coupling, using a family of i.i.d. random variables with uniform law on $]0,1[$ and the pseudo-inverse of the distribution function of these laws. Thanks to this coupling, we get the following classical convergence result (see for instance Lemma 2.10 in \cite{GaretMarchandProcacciaTheret}). \begin{lem} \label{lemcouplage} Let $G$, $(G_n)_{n\in \mathbb{N}}$ be probability measures on $[0,+\infty]$. We define the capacities $t_G(e)$ and $t_{G_n} (e)$ of each edge $e\in \mathbb{E}^d$ by coupling. If $G_n \overset{d}{\rightarrow} G$ then $$ a.s.\,, \,\, \forall e \in \mathbb{E}^d \,, \quad \lim_{n\rightarrow \infty} t_{G_n} (e) \,=\, t_G (e) \,. $$ \end{lem} By the coupling relation \eqref{e:couplage}, Theorem \ref{t:oldCV} and the definition \eqref{defnu} of $\nu_G$, we also trivially get the following monotonicity result. \begin{lem} \label{lemmonotonie} Let $G$, $F$ be probability measures on $[0, +\infty]$ such that $G(\{+\infty\})<p_c(d)$ and $F(\{+\infty\})<p_c(d)$. If $G \preceq F$, then for all $\vec{v} \in \mathbb{S}^{d-1}$ we have $\nu_G(\vec{v}) \leq \nu_{F} (\vec{v})$. \end{lem} In what follows, it will be useful to be able to exhibit a probability measure that stochastically dominates (or is stochastically dominated by) any probability measure $G_n$ of a convergent sequence of probability measures, thus we recall this known result (see for instance Lemma 5.3 in \cite{GaretMarchandProcacciaTheret}). \begin{lem} \label{lemsuiteprobas} Suppose that $G$ and $(G_n)_{n\in \mathbb{N}}$ are probability measures on $[0,+\infty]$ such that $G (\{+\infty\}) <p_c(d)$, $G_n (\{+\infty\}) <p_c(d)$ for every $n$ and $G_n \overset{d}{\rightarrow} G$. There exists a probability measure $F^+$ on $[0,+\infty]$ such that $F^+(\{+\infty\})<p_c(d)$, $G_n \preceq F^+$ for all $n\in \mathbb{N}$ and $G \preceq F^+$. \end{lem} \subsection{Upper bound} This is the easy part of the proof. It relies on the expression of $\nu_G (\vec{v})$ as the infimum of a sequence of expectations. \begin{prop} \label{propupper} Suppose that $G$ and $(G_n)_{n\in \mathbb{N}}$ are probability measures on $[0,+\infty]$ such that $G(\{+\infty\}) < p_c(d)$ and for all $n\in \mathbb{N}$, $G_n (\{+\infty\}) < p_c(d)$. If $G_n \succeq G$ for all $n\in \mathbb{N}$ and $G_n \overset{d}{\rightarrow} G$, then for any rational direction $\vec{v} \in \mathbb{S}^{d-1}$, $$\limsup_{n\rightarrow \infty} \nu_{G_n} (\vec v) \,\leq \, \nu_G (\vec v)\,.$$ \end{prop} \begin{dem} Let $\vec{v} \in \mathbb{S}^{d-1}$ be a rational vector, and let $A$ be the non-degenerate hyperrectangle normal to $\vec{v}$ given by Theorem~\ref{t:ssadd}.
By Lemma \ref{lemsuiteprobas} we know that there exists a probability measure $F^+$ on $[0,+\infty]$ such that $F^+(\{+\infty\})<p_c(d)$, $G_n \preceq F^+$ for all $n\in \mathbb{N}$ and $G \preceq F^+$. Let $K_0$ be large enough such that $F^+ (]K_0,+\infty])< p_c(d)$. Let $k\in \mathbb{N}^*$. We recall that the definition of $H_{F^+,K_0} (kA)$ is given in \eqref{e:defH}. Let $E_k$ be a set of edges that cuts $kA$ from $\hyp (kA + H_{F^+,K_0} (kA) \vec{v} )$ in $\slab (kA,H_{F^+,K_0} (kA), \vec{v} )$. By coupling (see Equation \eqref{e:couplage}) we know that $\widetilde{\phi}_{G, F^+,K_0} (k A) \leq \widetilde{\phi}_{G_n, F^+,K_0} (k A)$, and by Lemma \ref{lemcouplage} we have a.s. \begin{align*} T_{G} (E_k) & \,=\, \sum_{e\in E_k} t_{G} (e) \,=\, \lim_{n\rightarrow \infty} \sum_{e\in E_k} t_{G_n} (e)\\ & \,\geq \, \limsup_{n\rightarrow \infty} \widetilde{\phi}_{G_n, F^+,K_0} (k A) \,\geq \, \liminf_{n\rightarrow \infty} \widetilde{\phi}_{G_n, F^+,K_0} (k A) \, \geq\, \widetilde{\phi}_{G, F^+,K_0} (k A) \,, \end{align*} thus by optimizing over $E_k$ we obtain that $$ \forall k\in \mathbb{N}^* \,,\quad a.s. \,, \qquad \lim_{n\rightarrow \infty} \widetilde{\phi}_{G_n, F^+,K_0} (k A) \,=\, \widetilde{\phi}_{G, F^+,K_0} (k A) \,.$$ Moreover, for all $k\in \mathbb{N}^*$, we also have by coupling (see Equation \eqref{e:couplage}) that $\widetilde{\phi}_{G_n, F^+,K_0} (k A) \leq \widetilde{\phi}_{F^+, F^+,K_0} (k A)$ which is integrable. The dominated convergence theorem implies that \begin{equation} \label{eqTCD} \forall k\in \mathbb{N}^* \,, \qquad \lim_{n\rightarrow \infty} \mathbb{E}\left[ \widetilde{\phi}_{G_n, F^+,K_0} (k A)\right] \,=\, \mathbb{E} \left[ \widetilde{\phi}_{G, F^+,K_0} (k A)\right] \,. \end{equation} By Theorem \ref{t:ssadd} we know that $\nu_G (\vec{v}) = \lim_{k\rightarrow \infty} \mathbb{E} \left[ \widetilde{\phi}_{G, F^+,K_0} (k A) \right] / \mathcal{H}^{d-1} (kA)$, thus for all $\varepsilon >0$ there exists $k_0$ such that $$ \nu_G (\vec{v}) \,\geq\, \frac{\mathbb{E} \left[ \widetilde{\phi}_{G, F^+,K_0} (k_0 A) \right] }{\mathcal{H}^{d-1} (k_0 A)} - \varepsilon \,.$$ Using \eqref{eqTCD} we know that there exists $n_0$ such that for all $n\geq n_0$ we have $$ \frac{\mathbb{E} \left[ \widetilde{\phi}_{G, F^+,K_0} (k_0 A) \right] }{\mathcal{H}^{d-1} (k_0 A)} \,\geq \, \frac{\mathbb{E} \left[ \widetilde{\phi}_{G_n, F^+,K_0} (k_0 A) \right] }{\mathcal{H}^{d-1} (k_0 A)} - \varepsilon \,. $$ By Theorem \ref{t:ssadd} we also know that $\nu_{G_n} (\vec{v}) = \inf_{k} \mathbb{E} \left[\widetilde{\phi}_{G_n, F^+,K_0} (k A) \right] / \mathcal{H}^{d-1} (k A)$, thus we obtain that for any $\varepsilon >0$, for all $n$ large enough, $$ \nu_{G} (\vec{v}) \,\geq \, \nu_{G_n} (\vec{v}) - 2 \varepsilon \,.$$ This concludes the proof of Proposition \ref{propupper}. \end{dem} \subsection{The compact case} This section is devoted to the proof of the continuity of the flow constant in the particular case where all the probability measures we consider have the same compact support $[0,R]$ for some finite $R$. \begin{prop} \label{propcompact} Suppose that $G$ and $(G_n)_{n\in \mathbb{N}}$ are probability measures on $[0,R]$ for some fixed $R\in [0,+\infty[$. If $G_n \overset{d}{\rightarrow} G$, then for any rational $\vec{v} \in \mathbb{S}^{d-1}$, $$ \lim_{n\rightarrow \infty} \nu_{G_n} (\vec v) \,= \, \nu_G (\vec v)\,.$$ \end{prop} Notice that since the probability measure $G$ (resp. $G_n$) we consider has compact support, the flow constant $\nu_G (\vec{v})$ (resp.
$\nu_{G_n} (\vec v)$) is defined via Theorem \ref{t:oldCV} as the limit when $k$ goes to infinity of the rescaled maximal flows $\tau_G (kA,k)$ (resp. $\tau_{G_n} (kA,k)$) defined in \eqref{e:deftau2} for a hyperrectangle $A$ normal to $\vec v$. To prove Proposition \ref{propcompact}, we follow the strategy sketched by Kesten in his proof of the continuity of the time constant in \cite{Kesten:StFlour}, which avoids the use of Cox's earlier work \cite{Cox}. Let us briefly describe how to handle the main difficulty, namely proving that $\liminf_{n\rightarrow \infty} \nu_{G_n} (\vec{v}) \geq \nu_G (\vec{v})$. First, one reduces easily to the case where $G_n$ is stochastically dominated by $G$ for any $n$. Then, $0\leq \tau_{G_n} (kA,k) \leq \tau_{G} (kA,k)$ and $$\tau_{G} (kA,k)-\tau_{G_n} (kA,k)\leq \sum_{e\in E_n(k)}(t_G(e)-t_{G_n}(e))\;,$$ where $E_n(k)$ is any minimal cutset for $\tau_{G_n} (kA,k)$. If one is able to control the size of $E_n(k)$, showing that the probability that it exceeds $\beta k^{d-1}$ decreases exponentially fast in $k^{d-1}$ for some $\beta$, then a standard union bound and a large deviation argument, together with the fact that $\mathbb{E}[\exp(t_G(e)-t_{G_n}(e))]$ is close to $1$ for $n$ large enough, will show that $\sum_{e\in E_n(k)}(t_G(e)-t_{G_n}(e))$ is less than $\varepsilon k^{d-1}$ uniformly in $n$ large enough. Thus we need a control on the size of a minimal cutset for $\tau_{ G_n} (kA,k)$ in the spirit of Theorem~1 in \cite{Zhang2017}, but uniformly in $n$ large enough. This was done in Proposition~4.2 in \cite{RossignolTheret08b}, but this proposition requires the sequence of distribution functions of $(G_n)$ to coincide on a neighborhood of $0$, at least for $n$ large enough, and this does not follow from our assumptions. Fortunately, by inspecting the proof of Theorem~1 in \cite{Zhang2017} one may see that the conclusion of Proposition~4.2 in \cite{RossignolTheret08b} holds uniformly in $n$ under weaker assumptions, stated in the following lemma. \begin{lem} \label{l:zhang2017bis} Suppose that $(G_n)_{n\in \mathbb{N}}$ is a sequence of probability measures on $[0,+\infty[$ such that \begin{equation} \label{eq:hypZhang1} \limsup_{n\rightarrow +\infty} G_n(]0,\varepsilon[)\xrightarrow[\varepsilon\rightarrow 0]{}0 \end{equation} and \begin{equation} \label{eq:hypZhang2}\limsup_{n\rightarrow +\infty} G_n(\{0\})<1-p_c(d)\;. \end{equation} For any $k$, let us denote by $E_n (k)$ a minimal cutset for $\tau_{G_n} (kA,k)$ -- if there is more than one such cutset we choose (with a deterministic rule) one of those cutsets with minimal cardinality. Then, there exist positive constants $C$, $D_1$, $D_2$ and an integer $n_0$ such that \begin{align} \forall n\geq n_0\,, \forall \beta >0, \forall k\in\mathbb{N},\; \mathbb{P}\left[\begin{array}{c}\carde (E_n(k)) \geq \beta k^{d-1} \\ \text{ and } \tau_{G_n} (kA,k) \leq \beta Ck^{d-1} \end{array}\right] \,\leq\, D_1 e^{-D_2 k^{d-1}}\,.\nonumber \end{align} \end{lem} {\textbf{Proof of Proposition~\ref{propcompact}}~:\\} Let $\vec{v} \in \mathbb{S}^{d-1}$ be a rational direction. Let $G$, $(G_n)_{n\in \mathbb{N}}$ be probability measures on $[0,R]$ for some $R \in [0,+\infty[$. We define $\underline \mathcal{G} _n = \min (\mathcal{G}, \mathcal{G}_n)$ (resp. $\overline \mathcal{G} _n = \max (\mathcal{G}, \mathcal{G}_n)$), and we denote by $\underline G_n$ (resp. $\overline G_n$) the corresponding probability measure on $[0,R]$.
Then $\underline G_n \preceq G \preceq \overline G_n $ and $\underline G_n \preceq G_n \preceq \overline G_n $ for all $n\in \mathbb{N}$, $\underline G_n \overset{d}{\rightarrow} G$ and $\overline G_n \overset{d}{\rightarrow} G$. To conclude that $\lim_{n\rightarrow \infty} \nu_{G_n} (\vec{v}) = \nu_G (\vec{v})$, it is thus sufficient to prove that \begin{itemize} \item[$(i)$] $\limsup_{n\rightarrow \infty} \nu_{\overline G_n} (\vec{v}) \leq \nu_G (\vec{v})$, and \item[$(ii)$] $\liminf_{n\rightarrow \infty} \nu_{\underline G_n} (\vec{v}) \geq \nu_G (\vec{v})$. \end{itemize} Inequality $(i)$ is a straightforward consequence of Proposition \ref{propupper}. If $\nu_G (\vec{v}) =0$, then inequality $(ii)$ is trivial and we can conclude the proof. From now on we suppose that $\nu_G (\vec{v}) >0$. By \cite{Zhang} (see Theorem \ref{t:oldnul} above), we know that $\nu_G (\vec{v}) >0 \iff G(\{0\}) < 1-p_c(d)$. Thanks to the coupling (see equation \eqref{e:couplage}), we know that for every edge $e\in \mathbb{E}^d$, we have $ t_{\underline G_n} (e) \leq t_G (e)$. Let $A$ be a non-degenerate hyperrectangle normal to $\vec{v}$ that contains the origin of the graph and such that $\mathcal{H}^{d-1} (A) = 1$. We recall that $\tau_G (kA, k)$ is defined in Equation \eqref{e:deftau2}. It denotes the maximal flow for the capacities $(t_G(e))$ from the upper half part $C_1'(kA, k)$ to the lower half part $C_2'(kA, k)$ of the boundary of $\cyl (kA, k)$ as defined in Equation \eqref{e:deftau1}, and it is equal to the minimal $G$-capacity of a set of edges that cuts the upper half part from the lower half part of the boundary of $\cyl (kA, k)$ in this cylinder. Since we work with integrable probability measures $G$ and $G_n$, we know by Theorem \ref{t:oldCV} that a.s. \begin{equation} \label{e:hop} \nu_G (\vec{v}) \,=\, \lim_{k\rightarrow \infty} \frac{\tau_G (kA, k)}{k^{d-1}} \quad \textrm{and} \quad \nu_{\underline G_n} (\vec{v})\,=\, \lim_{k\rightarrow \infty} \frac{\tau_{\underline G_n} (kA, k)}{k^{d-1}} \,. \end{equation} Now, let us denote by $E_n (k)$ a minimal cutset for $\tau_{\underline G_n} (kA,k)$ as in Lemma~\ref{l:zhang2017bis}. According to Kesten's Lemma~3.17 in \cite{Kesten:StFlour}, any such minimal cutset with minimal cardinality is associated with a set of plaquettes which is a connected subset of $\mathbb{R}^d$; we will say that $E_n(k)$ is $\circ$-connected. Let $x\in \partial A$. There exists a constant $\hat c_d$, depending only on the dimension, such that for every $k\in \mathbb{N}^*$ there exists a path from the upper half part to the lower half part of the boundary of $\cyl (kA, k)$ that lies in the Euclidean ball of center $kx$ and radius $\hat c_d$. We denote by $F(k)$ the set of the edges of $\mathbb{E}^d$ both of whose endpoints belong to this ball. Then $E_n (k)$ must contain at least one edge of $F(k)$, and $\carde (F(k)) \leq \hat c'_d$ for some constant $\hat c'_d$. Moreover, given a fixed edge $e_0$, the number of $\circ$-connected sets of $m$ edges containing $e_0$ is bounded by $\tilde c_d^m$ for some finite constant $\tilde c_d$ (see the proof of Lemma~2.1 in \cite{Kesten:StFlour}, which uses (5.22) in \cite{kesten:perco}).
Thus for every $k \in \mathbb{N}^*$, for every $\beta,C, \varepsilon >0$ we have \begin{align} \label{eqcompact} \mathbb{P} [& \tau_{\underline G_n} (kA,k) \leq \tau_{G} (kA,k) - \varepsilon k^{d-1} ] \nonumber \\ & \,\leq \, \mathbb{P} [\tau_{\underline G_n} (kA,k) > \beta C k^{d-1}] + \mathbb{P}[ \carde (E_n (k) ) \geq \beta k^{d-1} \textrm{ and } \tau_{\underline G_n} (kA,k) \leq \beta Ck^{d-1} ]\nonumber \\ & \quad + \sum_{{\tiny \begin{array}{c}E \textrm{ $\circ$-connected set of edges containing}\\ \textrm{some edge in } F(k) \textrm{ s.t. } \carde (E) \leq \beta k^{d-1} \end{array}}} \mathbb{P} \left[ \sum_{e\in E} (t_G(e) - t_{\underline G_n} (e)) \geq \varepsilon k^{d-1} \right]\nonumber \\ & \,\leq \, \mathbb{P} [\tau_{G} (kA,k) > \beta C k^{d-1}] + \mathbb{P}[ \carde (E_n (k)) \geq \beta k^{d-1} \textrm{ and } \tau_{\underline G_n} (kA,k) \leq \beta Ck^{d-1} ]\nonumber \\ & \quad + c_d^{\beta k^{d-1}} \mathbb{P} \left[ \sum_{i=1}^{\lfloor \beta k^{d-1} \rfloor} (t_G(e_i) - t_{\underline G_n} (e_i)) \geq \varepsilon k^{d-1} \right]\,, \end{align} where $(e_i)_{i\ge 1}$ is a collection of distinct edges and $c_d$ is a constant depending only on $d$. Let us prove that the sequence $(\underline G_n)$ satisfies conditions \eqref{eq:hypZhang1} and \eqref{eq:hypZhang2} of Lemma~\ref{l:zhang2017bis}. On one hand, since $\underline G_n \overset{d}{\rightarrow} G$ we have $\limsup_{n\rightarrow \infty} \underline G_n (\{0\}) \leq G(\{0\}) < 1-p_c(d) $, thus condition \eqref{eq:hypZhang2} is satisfied. On the other hand, we know that $\underline G_n \preceq G$, {\em i.e.}, $\underline \mathcal{G}_n \leq \mathcal{G}$, thus for all $n\in \mathbb{N}$ we have $$\underline G_n (\{0\}) \,=\, 1-\lim_{p\rightarrow \infty} \underline \mathcal{G}_n (1/p) \,\geq\, 1-\lim_{p\rightarrow \infty} \mathcal{G} (1/p) \,=\, G (\{0\}) \,,$$ and we conclude that \begin{equation*} \lim_{n\rightarrow \infty}\underline G_n (\{0\}) \,=\, G(\{0\}) \,. \end{equation*} Let $\varepsilon\in ]0,+\infty[$ be such that $G(\{\varepsilon\}) =0$. Then $\underline G_n \overset{d}{\rightarrow} G$ implies that $\lim_{n\rightarrow \infty} \underline \mathcal{G}_n (\varepsilon) = \mathcal{G}(\varepsilon) $, thus $$ \underline G_n (]0,\varepsilon[) \,=\, 1- \underline \mathcal{G}_n (\varepsilon) - \underline G_n (\{0\}) \, \xrightarrow[n\rightarrow \infty]{} 1- \mathcal{G} (\varepsilon) - G (\{0\}) \,=\, G (]0,\varepsilon[) \,,$$ and condition \eqref{eq:hypZhang1} follows from the fact that $\lim_{\varepsilon \rightarrow 0} G(]0,\varepsilon[) = 0$. Now, we can use Lemma~\ref{l:zhang2017bis} to obtain the following uniform control: \begin{align} \label{eqzhang} \exists C, D_1, D_2,n_0 & \textrm{ such that } \forall n\geq n_0\,, \forall \beta >0 , \forall k\nonumber \\ & \mathbb{P}[\carde (E_n (k)) \geq \beta k^{d-1} \textrm{ and } \tau_{\underline G_n} (kA,k) \leq \beta Ck^{d-1} ] \,\leq\, D_1 e^{-D_2 k^{d-1}}\,. \end{align} \medskip We can easily bound $\tau_{G} (kA,k)$ by $\sum_{e\in E_k} t_G (e) \leq R \carde (E_k)$, where $E_k$ is a deterministic cutset of cardinality smaller than $c_d k^{d-1}$; choose, for instance, $E_k$ as the set of all edges in $\cyl(kA, k)$ that are at Euclidean distance smaller than $2$ from $kA$. For any fixed $C>0$, since for every edge $e$ we have $t_{G} (e)\leq R$, there exists a constant $\beta $ such that \begin{equation} \label{eqborne} \forall k\in \mathbb{N}^* \,,\quad \mathbb{P} [\tau_{G} (kA,k) > \beta C k^{d-1}] \,=\, 0 \,.
\end{equation} By Markov's inequality, for any $\alpha >0 $ we have $$ c_d^{\beta k^{d-1}} \mathbb{P} \left[ \sum_{i=1}^{\lfloor \beta k^{d-1} \rfloor} (t_G(e_i) - t_{\underline G_n} (e_i)) \geq \varepsilon k^{d-1} \right] \,\leq \, \left( c_d \exp \left( \frac{-\alpha \varepsilon}{\beta } \right) \mathbb{E} \left[ \exp \left( \alpha (t_{G} (e) - t_{\underline G_n} (e) ) \right) \right]\right)^{\beta k^{d-1}}\,. $$ For any fixed $\varepsilon>0$ and $\beta <\infty$, we can choose $\alpha=\alpha (\varepsilon)$ large enough so that $c_d \exp (-\alpha \varepsilon /\beta ) \leq 1/4$. Then, by Lemma \ref{lemcouplage} we know that $\lim_{n\rightarrow \infty} t_{ \underline G_n}(e) = t_G (e)$ a.s., and since $t_{G} (e) \leq R$ we can use the dominated convergence theorem to state that for $n$ large enough, $$ \mathbb{E} \left[ \exp \left( 2\alpha (t_{G} (e) - t_{ \underline G_n} (e) ) \right) \right] \,\leq\, 2\,.$$ Since, by the Cauchy-Schwarz inequality, $\mathbb{E} [ \exp ( \alpha (t_{G} (e) - t_{\underline G_n} (e) ) ) ] \leq \mathbb{E} [ \exp ( 2\alpha (t_{G} (e) - t_{\underline G_n} (e) ) ) ]^{1/2} \leq \sqrt{2}$, each term of the series below is then bounded by $(\sqrt{2}/4)^{\beta k^{d-1}}$, and we get \begin{equation} \label{eqsum1} \sum_{k>0} c_d^{\beta k^{d-1}} \mathbb{P} \left[ \sum_{i=1}^{\lfloor \beta k^{d-1} \rfloor} (t_G(e_i) - t_{\underline G_n} (e_i)) \geq \varepsilon k^{d-1} \right] \,<\, +\infty\,. \end{equation} Combining \eqref{eqcompact}, \eqref{eqzhang}, \eqref{eqborne} and \eqref{eqsum1}, we obtain that for every $\varepsilon >0$, for all $n$ large enough, $$ \sum_{k>0} \mathbb{P} [ \tau_{\underline G_n} (kA,k) \leq \tau_{G} (kA,k) - \varepsilon k^{d-1} ]\,<\,+\infty \,.$$ By Borel-Cantelli, we obtain that for every $\varepsilon >0$, for all $n$ large enough, a.s., for all $k$ large enough, $$ \tau_{\underline G_n}(kA,k) > \tau_{G}(kA,k) - \varepsilon k^{d-1}\,,$$ thus by \eqref{e:hop} for every $\varepsilon >0$, for all $n$ large enough, we have $$ \nu_{\underline{G}_n} (\vec{v}) \,\geq \,\nu_G (\vec{v}) - \varepsilon \,. $$ This proves inequality $(ii)$ and ends the proof of Proposition \ref{propcompact}. {\flushright$\blacksquare$\\} \subsection{Proof of Theorem \ref{thmcont}} \begin{dem} We first prove convergence in a fixed rational direction $\vec{v} \in \mathbb{S}^{d-1}$. We follow the structure of the proof of Proposition \ref{propcompact}. Let $G$, $(G_n)_{n\in \mathbb{N}}$ be probability measures on $[0,+\infty[$. We want to prove that \begin{equation} \label{eqdirectionnelle} \lim_{n\rightarrow \infty} \nu_{G_n} (\vec{v}) = \nu_G (\vec{v})\,. \end{equation} We define $\underline G_n$ and $\overline G_n$ as in the proof of Proposition \ref{propcompact}, and we must show \begin{itemize} \item[$(i)$] $\limsup_{n\rightarrow \infty} \nu_{\overline G_n} (\vec{v}) \leq \nu_G (\vec{v})$, and \item[$(ii)$] $\liminf_{n\rightarrow \infty} \nu_{\underline G_n} (\vec{v}) \geq \nu_G (\vec{v})$. \end{itemize} Inequality $(i)$ is still a straightforward consequence of Proposition \ref{propupper}. For every $K>0$, we define as previously $G^K = {\mathds{1}}_{[0,K[} G + G([K,+\infty[) \delta_K$ and $\underline{G}_n^K = {\mathds{1}}_{[0,K[} \underline{G}_n + \underline{G}_n([K,+\infty[) \delta_K$. Since $\underline{G}_n^K \preceq \underline{G}_n$, we know by Lemma \ref{lemmonotonie} that $\nu_{\underline{G}_n^K} \leq \nu_{\underline{G}_n}$. For every $K>0$, since $\underline G_n^K \overset{d}{\rightarrow} G^K$, using Proposition \ref{propcompact} we obtain that \begin{equation} \label{eqtroncfinal} \liminf_{n\rightarrow \infty} \nu_{\underline G_n} (\vec{v}) \,\geq \, \lim_{n\rightarrow \infty} \nu_{\underline G_n^K} (\vec{v}) \,=\, \nu_{G^K} (\vec{v})\,.
\end{equation} By the definition \eqref{defnu} of $\nu_G (\vec{v})$ we know that $\lim_{K\rightarrow \infty} \nu_{ G^K} (\vec{v}) = \nu_{ G} (\vec{v})$. This concludes the proof of $(ii)$, thus \eqref{eqdirectionnelle} is proved. We consider the homogeneous extension of $\nu_G$ to $\mathbb{R}^d$ defined in Proposition~\ref{p:cvx}. By Proposition~\ref{p:cvx}, for all $x, y\in \mathbb{R}^d$, we have $\nu_G ( x) \leq \nu_G (x-y) + \nu_G(y)$ and $\nu_G ( y) \leq \nu_G (x-y) + \nu_G(x)$ thus \begin{equation} \label{eqpropnu1} |\nu_G (x) - \nu_G (y)| \,\leq \, \nu_G (x-y)\,. \end{equation} Moreover for all $x=(x_1,\dots ,x_d)$, we have \begin{align} \label{eqpropnu2} \nu_G (x) & \,\leq \, \nu_G ((x_1, 0, \dots , 0)) + |\nu_G ((x_1, x_2, 0 , \dots , 0)) - \nu_G ((x_1, 0, \dots , 0)) | \nonumber \\ & \qquad + \dots + |\nu_G ((x_1, \dots , x_d)) - \nu_G ((x_1, \dots , x_{d-1}, 0))|\nonumber \\ & \,\leq \, \nu_G (( x_1, 0, \dots , 0 )) + \nu_G ((0, x_2, 0 , \dots , 0 )) +\dots + \nu_G ((0 , \dots , 0, x_d ))\nonumber \\ & \,\leq \, \|x\|_1 \nu_G((1, 0, \dots , 0)) \,. \end{align} Combining \eqref{eqpropnu1} and \eqref{eqpropnu2}, we obtain that for all $x,y\in \mathbb{R}^d$, \begin{equation} \label{e:ajout} |\nu_G (x) - \nu_G (y)| \,\leq \, \|x-y\|_1\nu_G ((1, 0, \dots , 0))\,. \end{equation} The same holds for $\nu_{G_n}$. Since $\lim_{n\rightarrow \infty}\nu_{G_n}((1, 0, \dots , 0)) = \nu_G ((1, 0, \dots , 0))$, there exists $n_0$ such that for all $n\geq n_0$, we have $\nu_{G_n}((1, 0, \dots , 0)) \leq 2 \nu_G ((1, 0, \dots , 0))$. For every $n\geq n_0$, we have \begin{equation} \label{eqpropnu3} \forall \vec u, \vec{v} \in \mathbb{S}^{d-1}\,, \quad |\nu_{G_n} (\vec u) - \nu_{G_n} (\vec{v})| \,\leq \, 2 \|\vec u - \vec{v} \|_1\nu_G ((1, 0, \dots , 0))\,. \end{equation} Fix $\varepsilon >0$. Inequalities \eqref{e:ajout} and \eqref{eqpropnu3} imply that there exists $\eta >0$ such that $$ \sup \{ |\nu_F (\vec u) - \nu_F (\vec{v})| \,:\, \vec u , \, \vec{v} \in \mathbb{S}^{d-1} ,\,\, \|\vec u - \vec{v} \|_1 \leq \eta,\,\, F \in \{G,G_n, n\geq n_0\}\} \,\leq\, \varepsilon \,. $$ There exists a finite set $(\vec{v}_1, \dots , \vec{v}_m)$ of rational directions in $\mathbb{S}^{d-1}$ such that $$ \mathbb{S}^{d-1} \,\subset \, \bigcup_{i=1}^m \{\vec u \in \mathbb{S}^{d-1} \,:\, \|\vec u - \vec{v}_i \|_1 \leq \eta \} \,.$$ Thus $$ \limsup_{n\rightarrow \infty} \sup_{\vec u \in \mathbb{S}^{d-1}} | \nu_{G_n} (\vec u) - \nu_G (\vec u) | \,\leq \, 2 \varepsilon + \lim_{n\rightarrow \infty} \max_{i \in \{1,\dots , m\} } | \nu_{G_n} (\vec{v}_i) - \nu_G (\vec{v}_i) | \,, $$ and thanks to \eqref{eqdirectionnelle} this ends the proof of Theorem \ref{thmcont}. \end{dem} \paragraph{Acknowledgements.} The second author would like to thank Rapha\"el Cerf for stimulating discussions on this topic. The authors would like to thank an anonymous referee for many valuable comments that helped to improve the quality of the paper. \bibliographystyle{plain}
Since the introduction of renormalization group (RG) theory \cite{wilson_rg}, there has been strong interest in methods to compute the renormalized coupling constants and the critical exponents in a non-perturbative fashion. This goal has been achieved with the Monte Carlo (MC) RG approach of Swendsen. In 1979, he introduced a method to compute the critical exponents, which did not require explicit knowledge of the renormalized Hamiltonian \cite{mcrg}. A few years later, he solved the problem of calculating the renormalized coupling constants, using an equality due to Callen \cite{callen} to write the correlation functions in a form explicitly depending on the couplings. By imposing that the standard MC expression of a correlation function and its corresponding Callen form be equal, he derived equations whose iterative solution led to the coupling constants \cite{mcrg_rc}. Finding the renormalized Hamiltonian is an example of an inverse statistical mechanics problem \cite{invising}. MCRG has been used successfully in many applications, but difficulties related to sampling efficiency may be severe. Typically, the evaluation of the correlation functions near a critical point suffers from critical slowing down and is affected by large sampling errors in large systems. This difficulty can be alleviated with ingenious cluster algorithms \cite{clustermc}, which, however, are limited to specific models. Here we present an MCRG framework based on a variational principle for a biasing potential acting on the coarse grained degrees of freedom of an RG transformation. In our approach, the coupling constants and the critical exponents derive from the same unifying principle. Swendsen's formulae emerge as a special case, but our scheme also leads to formulations exempt from critical slowing down. In addition, it permits a variational estimate of the effect of truncating the Hamiltonian. Although the approach is rather general, here we limit ourselves, for concreteness, to lattice models with discrete spin degrees of freedom, $\{\bm \sigma\}$. A generic Hamiltonian has the form \begin{equation} \label{eq:hamiltonian} H(\bm \sigma) = \sum_{\alpha} K_\alpha S_\alpha(\bm \sigma), \end{equation} where the $K_\alpha$ are coupling constants and the $S_\alpha$ are operators acting on the spins $\bm\sigma$, such as sums or products of spins or combinations thereof. RG considers a flow in the space of Hamiltonians (\ref{eq:hamiltonian}) under scale transformations that reduce the linear size of the original lattice by a factor $b$. The rescaled degrees of freedom take the same discrete values as the original spins, to which they are related by a coarse graining transformation, $\bm\sigma' = \tau(\bm\sigma)$. For example, $\tau$ can be the block spin transformation of Kadanoff \cite{block2}. The distribution of the $\bm\sigma'$ is obtained from the distribution of the $\bm\sigma$ by tracing out the original degrees of freedom while keeping the $\bm \sigma'$ fixed: \begin{equation} \label{eq:distribution} p(\bm \sigma') = \frac{\sum_{\bm \sigma} \delta_{\tau(\bm\sigma), \bm\sigma'} e^{-H(\bm \sigma)}}{Z} = \frac{e^{-H'(\bm \sigma')}}{Z'} . \end{equation} Here $\delta$ is the discrete Kronecker delta, $Z$ and $Z'$ are partition functions that ensure the normalization of the corresponding distributions.
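For concreteness, a majority-rule block-spin map of the kind adopted later in this work is simple to state in code. The following sketch is ours and purely illustrative (in Python; it is not the production code):
\begin{verbatim}
import numpy as np

def block_spin(sigma, b=3, rng=None):
    # Majority-rule coarse graining tau: each b x b block of a 2D
    # array of +/-1 spins maps to one block spin sigma'.  Ties
    # (possible only for even b) are broken at random.
    rng = rng or np.random.default_rng(0)
    L = sigma.shape[0]
    assert L % b == 0, "lattice size must be a multiple of b"
    s = sigma.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    out = np.sign(s)
    ties = out == 0
    out[ties] = rng.choice([-1, 1], size=int(ties.sum()))
    return out.astype(int)

# example: one coarse-graining step on a random 9 x 9 configuration
sigma = np.random.default_rng(1).choice([-1, 1], size=(9, 9))
sigma_prime = block_spin(sigma, b=3)   # 3 x 3 block spins
\end{verbatim}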
While the partition function $Z'$ is invariant under RG transformations, the renormalized Hamiltonian $H'$ is not, except at fixed points of the RG flow: \begin{equation} Z = \sum_{\bm \sigma} e^{-H(\bm \sigma)} = \sum_{\bm\sigma'} e^{-H'(\bm \sigma')} = Z' \end{equation} and \begin{equation} H'(\bm\sigma') = -\log \sum_{\bm\sigma} \delta_{\tau(\bm \sigma), \bm\sigma'} e^{-H(\bm\sigma)} \end{equation} Repeated {\it ad infinitum}, the RG transformations generate a flow in the space of Hamiltonians, in which all possible coupling terms appear, unless forbidden by symmetry. For example, in an Ising model with no magnetic field, only even spin products appear. The space of coupling terms is, in general, infinite-dimensional. However, perturbative and non-perturbative calculations suggest that only a finite number of couplings should be sufficient for a given degree of accuracy. In the proximity of a critical point, the distribution (\ref{eq:distribution}) of the block spins $\bm\sigma'$ displays a divergent correlation length, giving rise to critical slowing down of local MC updates. This can be avoided by modifying the distribution of the $\bm\sigma'$: one adds to the Hamiltonian $H'(\bm\sigma')$ a biasing potential $V(\bm\sigma')$ that forces the biased distribution of the block spins, $p_V(\bm\sigma')$, to be equal to a chosen {\it target distribution}, $p_t(\bm\sigma')$. For instance, $p_t$ can be the constant probability distribution. Then the $\bm\sigma'$ have the same probability at each lattice site and act as uncorrelated spins, even in the vicinity of a critical point. It turns out that $V(\bm\sigma')$ obeys a powerful variational principle that facilitates the sampling of the Landau free energy \cite{varyfes}. In the present context, we define the functional $\Omega[V]$ of the biasing potential $V(\bm\sigma')$ by: \begin{equation} \Omega [V] = \log\frac{ \sum_{\bm \sigma'} e^{-[H'(\bm \sigma') + V(\bm \sigma')]}}{\sum_{\bm \sigma'} e^{-H'(\bm \sigma')}}+ \sum_{\bm \sigma'} p_t (\bm \sigma') V(\bm \sigma'), \end{equation} where $p_t(\bm \sigma')$ is a known normalized target probability distribution. As demonstrated in \cite{varyfes}, the following properties hold: \begin{enumerate} \item $\Omega [V]$ is a convex functional with a lower bound. \item The minimizer, $V_{\text{min}}(\bm\sigma')$, of $\Omega$ is unique up to a constant and is such that: \begin{equation} \label{eq:fes} H'(\bm \sigma') = - V_{\text{min}}(\bm \sigma') - \log p_t (\bm \sigma') + \text{constant} \end{equation} \item The probability distribution of the $\bm\sigma'$ under the action of $V_{\text{min}}$ is: \begin{equation} p_{V_{\text{min}}}(\bm \sigma') = \frac{e^{-(H'(\bm \sigma') + V_{\text{min}}(\bm \sigma'))}}{\sum_{\bm\sigma'} e^{-(H'(\bm\sigma') + V_{\text{min}} (\bm\sigma'))}} = p_t(\bm \sigma') \end{equation} \end{enumerate} The above three properties lead to the following MCRG scheme. First, we approximate $V(\bm\sigma')$ with $V_{\vec J}(\bm\sigma')$, a linear combination of a finite number of terms $S_\alpha(\bm\sigma')$ with unknown coefficients $J_\alpha$, forming a vector $\vec J = \{J_1, ..., J_\alpha, ..., J_n\}$.
\begin{equation} V_{\vec J}(\bm\sigma') = \sum_\alpha J_\alpha S_\alpha(\bm\sigma') \end{equation} Then the functional $\Omega[V]$ becomes a convex function of $\vec J$, due to the linearity of the expansion, and the minimizing vector, $\vec J_{\text{min}}$, and the corresponding $V_{\text{min}}(\bm\sigma')$ can be found with a local minimization algorithm using the gradient and the Hessian of $\Omega$: \begin{equation} \label{eq:gradient} \frac{\partial \Omega(\vec J)}{\partial J_\alpha} = - \braket{S_\alpha(\bm \sigma')}_{V_{\vec J}} + \braket{S_\alpha(\bm \sigma')}_{p_t} \end{equation} \begin{equation} \label{eq:hessian} \frac{\partial^2 \Omega(\vec J)}{\partial J_\alpha \partial J_\beta} = \braket{S_\alpha(\bm \sigma') S_\beta(\bm \sigma')}_{V_{\vec J}} - \braket{S_\alpha(\bm \sigma')}_{V_{\vec J}}\braket{S_\beta(\bm \sigma')}_{V_{\vec J}} \end{equation} Here $\braket{\cdot}_{V_{\vec J}}$ is the biased ensemble average under $V_{\vec J}$ and $\braket{\cdot}_{p_t}$ is the ensemble average under the target probability distribution $p_t$. The first average is associated with the Boltzmann factor $\exp\{-(H'(\bm \sigma') + V(\bm \sigma'))\} = \sum_{\bm\sigma} \delta_{\tau(\bm\sigma), \bm\sigma'} \exp(-H(\bm\sigma)) \exp(-V(\tau(\bm\sigma)))$ and can be computed with MC sampling. The second average can be computed analytically if $p_t$ is simple enough. The MC estimate of $\braket{\cdot}_{V_{\vec J}}$ always carries statistical noise, and possibly systematic error, so some sophistication is required in the optimization. Following \cite{varyfes}, we adopt the stochastic optimization procedure of \cite{bach}, and improve the statistics by running independent MC simulations, called {\it multiple walkers}, in parallel. For further details, consult \cite{varyfes} and the Supplementary Material (SM) \cite{sm}. The renormalized Hamiltonian $H'(\bm\sigma')$ is given by Eq. \ref{eq:fes} in terms of $V_{\text{min}}(\bm\sigma')$. Taking a constant $p_t$, we have, modulo a constant: \begin{equation} H'(\bm \sigma') = -V_{\text{min}}(\bm\sigma') = \sum_{\alpha} (-J_{\text{min}, \alpha}) S_\alpha(\bm \sigma') \end{equation} In this finite approximation the renormalized Hamiltonian has exactly the same terms as $V_{\text{min}}(\bm \sigma')$ with renormalized coupling constants \begin{equation} K'_\alpha = -J_{\text{min}, \alpha}. \end{equation} The relative importance of an operator $S_\alpha$ in the renormalized Hamiltonian can be estimated variationally in terms of the relative magnitude of the coefficient $J_{\text{min}, \alpha}$. When $J_{\text{min}, \alpha}$ is much smaller than the other components of $\vec J_{\text{min}}$, the corresponding $S_\alpha(\bm\sigma')$ is comparatively unimportant and can be ignored. The accuracy of this approximation could be quantified by measuring the deviation of $p_{V_{\text{min}}}(\bm\sigma')$ from $p_t(\bm\sigma')$. To illustrate the method, we present a study of the Ising model on a $2D$ square lattice in the absence of a magnetic field. We adopt $3 \times 3$ block spins with the majority rule. 26 coupling terms were chosen initially, including 13 two-spin and 13 four-spin products. One preliminary iteration of variational RG (VRG) was performed on a $45\times 45$ lattice starting from the nearest-neighbor Hamiltonian. The coupling terms with renormalized coupling constants smaller than 0.001 in absolute value were deemed unimportant and dropped from further calculations.
13 coupling terms, including 7 two-spin and 6 four-spin products, survived this criterion and were kept in all subsequent calculations \cite{sm}. Each calculation consisted of 5 VRG iterations starting with nearest-neighbor coupling, $K_{nn}$, only. All the subsequent iterations used the same lattice as the initial iteration. Standard Metropolis MC sampling \cite{metropolis} was adopted, and the calculations were done at least twice to ensure that statistical noise did not alter the results significantly. In Fig. \ref{fig:300_rg}, results are shown for a $300 \times 300$ lattice with two initial $K_{nn}$, equal to $0.4355$ and to $0.4365$, respectively. When $K_{nn} = 0.4365$, the renormalized coupling constants increase over the five iterations shown, and would increase more dramatically with further iterations. Similarly, they decrease when $K_{nn} = 0.4355$. Thus, the critical coupling $K_c$ should belong to the window $0.4355$--$0.4365$. The same critical window is found for the $45\times45$, $90\times 90$, $150\times 150$, and $210\times 210$ lattices \cite{sm}. Because each iteration is affected by truncation and finite-size errors, fewer iterations for the same overall rescaling factor reduce the error. For example, four VRG iterations with a $2\times2$ block have the same overall rescaling factor as a single iteration with a $16 \times 16$ block. The latter is computationally more costly than a calculation with $2\times 2$ blocks, but can still be performed with modest computational resources. Indeed, with a $16 \times 16$ block, RG iterations on a $128 \times 128$ lattice gave a critical window $0.4394$--$0.4398$ \cite{sm}, very close to the exact value, $K_c \sim 0.4407$, due to Onsager \cite{onsager}. The statistical uncertainty of the renormalized couplings from the variational method is small. Using the standard approach, Ref. \cite{mcrg2dising} found a renormalized nearest-neighbor coupling equal to $0.408 \pm 0.002$ after the first RG iteration on a $36\times 36$ lattice using a $3 \times 3$ block spin, starting with $K_{nn} = 0.4407$. This result required $5.76 \times 10^5$ MC sweeps. With our method, applied to a $300 \times 300$ lattice, starting with $K_{nn} = 0.4365$, we found a renormalized nearest-neighbor coupling equal to $0.38031 \pm 0.00002$ after $3.398 \times 10^5$ MC sweeps. The standard error in our case was computed with the block averaging method \cite{block_method}. Because \cite{mcrg2dising} used only seven coupling terms and a different initial $K_{nn}$, the renormalized couplings should not be expected to be the same in the two calculations, but a comparison of the corresponding statistical uncertainties should be meaningful. \begin{figure}[hth] \centering \includegraphics[scale=1]{300_04365} \includegraphics[scale=1]{300_04355} \caption{(color online). Variation of the renormalized coupling constants over five VRG iterations on a $300\times300$ lattice. Each iteration has 1240 variational steps, each consisting of 20 MC sweeps. 16 multiple walkers are used for the ensemble averages in Eqs. \ref{eq:gradient} and \ref{eq:hessian}. For clarity, we only show the four largest renormalized couplings after the first iteration. Full plots are reported in the SM \cite{sm}. Top: Simulation starting with $K_{nn} = 0.4365$.
Bottom: Simulation starting with $K_{nn} = 0.4355$.} \label{fig:300_rg} \end{figure} According to theory \cite{wilsonkondo}, the critical exponents are obtained from the leading eigenvalues of $\frac{\partial K'_\alpha}{\partial K_\beta}$, the Jacobian matrix of the RG transformation, at a critical fixed point. In order to find $\frac{\partial K'_\alpha}{\partial K_\beta}$ near a fixed point, we need to know how the renormalized coupling constants $K'_\alpha$ from an RG iteration on the Hamiltonian $H = \sum_\beta K_\beta S_\beta$ change when $K_\beta$ is perturbed to $K_\beta + \delta K_\beta$, for fixed target probability $p_t$ and operators $S_\alpha$. At the minimum of $\Omega$ we have $\frac{\partial \Omega}{\partial J_\gamma} = 0$, which by Eq. \ref{eq:gradient} means that for all $\gamma$: \begin{equation} \frac{\sum_{\bm \sigma} S_\gamma(\bm\sigma') e^{- \sum_\beta (K_\beta S_\beta(\bm\sigma) - K'_\beta S_\beta(\bm\sigma'))}}{\sum_{\bm\sigma} e^{- \sum_\beta (K_\beta S_\beta(\bm\sigma) - K'_\beta S_\beta(\bm\sigma'))}} = \braket{S_\gamma(\bm \sigma')}_{p_t}, \end{equation} and \begin{equation} \label{eq:2ndcondition} \begin{split} \frac{\sum_{\bm \sigma} S_\gamma(\bm\sigma') e^{- \sum_\beta ((K_\beta + \delta K_\beta) S_\beta(\bm\sigma) - (K'_\beta + \delta K'_\beta) S_\beta(\bm\sigma'))}}{\sum_{\bm\sigma} e^{- \sum_\beta ((K_\beta + \delta K_\beta) S_\beta(\bm\sigma) - (K'_\beta + \delta K'_\beta)S_\beta(\bm\sigma'))}} \\ = \braket{S_\gamma(\bm \sigma')}_{p_t}. \end{split} \end{equation} Expanding Eq. \ref{eq:2ndcondition} to linear order in $\delta K'_\alpha$ and $\delta K_\beta$, we obtain \cite{sm} \begin{equation} \label{eq:matrix_eq} A_{\beta\gamma} = \sum_\alpha \frac{\partial K'_\alpha}{\partial K_\beta} \cdot B_{\alpha\gamma}, \end{equation} where \begin{equation} \label{eq:A} A_{\beta\gamma} = \braket{S_\beta(\bm \sigma) S_\gamma(\bm \sigma')}_{V} - \braket{S_\beta(\bm \sigma)}_{V}\braket{S_\gamma(\bm\sigma')}_{V}, \end{equation} and \begin{equation} \label{eq:B} B_{\alpha\gamma} = \braket{S_\alpha(\bm \sigma') S_\gamma(\bm \sigma')}_{V} - \braket{S_\alpha(\bm\sigma')}_{V}\braket{S_\gamma(\bm \sigma')}_{V}. \end{equation} Here $\braket{\cdot}_V$ denotes the average under the biased Hamiltonian, $\widetilde{H} = \sum_\beta (K_\beta S_\beta(\bm\sigma) - K'_\beta S_\beta(\bm\sigma'))$. If we required the target average of $S_\gamma(\bm\sigma')$ to coincide with the unbiased average under $H = \sum_\beta K_\beta S_\beta$, the $K'_\beta$ would necessarily vanish and Eqs. \ref{eq:A}-\ref{eq:B} would coincide with Swendsen's formulae \cite{mcrg}. If we instead use a uniform target probability, the $\bm \sigma'$ at different sites are uncorrelated, and critical slowing down is absent. In practice, in order to compute the critical exponents, we first need to locate $K_c$. From the above calculations on the $45 \times 45$, $90 \times 90$, and $300 \times 300$ lattices with a $3 \times 3$ block spin, we expect that $K_c = 0.436$ should approximate the critical nearest-neighbor coupling in our model. Indeed, an RG iteration starting from this value gives couplings that remain essentially constant, as illustrated in Figs. S11-S13 of the SM \cite{sm}. Then, we use Eqs. \ref{eq:matrix_eq}-\ref{eq:B} to compute the Jacobian of the RG transformation by setting $K_c = 0.436$. The renormalized coupling constants after the first RG iteration represent $K_\alpha$, and those after the second RG iteration represent $K'_\alpha$.
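Once samples of the operators in the biased ensemble are available, the linear system defined by Eqs. \ref{eq:matrix_eq}-\ref{eq:B} is solved directly. A minimal post-processing sketch (ours, in Python; array names are illustrative, and the samples would come from the biased MC run):
\begin{verbatim}
import numpy as np

def rg_jacobian(S_orig, S_block):
    # S_orig : (n_samples, n_ops) values of S_beta(sigma)
    # S_block: (n_samples, n_ops) values of S_alpha(sigma')
    # Solves A = T B, with A and B the covariance matrices of the
    # text and T[beta, alpha] = dK'_alpha / dK_beta.
    dF = S_orig - S_orig.mean(axis=0)
    dC = S_block - S_block.mean(axis=0)
    A = dF.T @ dC / len(dF)
    B = dC.T @ dC / len(dC)
    return np.linalg.solve(B.T, A.T).T    # T = A B^{-1}

# leading eigenvalues give the critical exponents, y = ln(lam)/ln(b):
# lam = np.linalg.eigvals(rg_jacobian(S_orig, S_block))
\end{verbatim}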
The results for biased and unbiased ensembles are shown in Table \ref{table:critical}, which reports the leading even ($e$) and odd ($o$) eigenvalues of $\frac{\partial K'_\alpha}{\partial K_\beta}$ when including 13 coupling terms for the three $L\times L$ lattices with $L = 45, 90$, and $300$. As seen from the table, biased and unbiased calculations give slightly different eigenvalues, as one should expect, given that the respective calculations are different embodiments of the truncated Hamiltonian approximation. For $L = 300$ the results are well converged in the biased ensemble. By contrast, we were not able to obtain converged results for this lattice in the unbiased ensemble on the time scale of our simulation. The absence of critical slowing down in the biased simulation is demonstrated in Fig. \ref{fig:cortime}, which displays the time decay of a correlation function in the biased and unbiased ensembles. See also Figs. S14-S15 of the SM \cite{sm}. \begin{table}[htb] \setlength{\tabcolsep}{1em} \begin{tabular}{l l l l} \hline \hline &$L$ & $\lambda_1^e$ & $\lambda_1^o$\\ \hline unbiased & 45 & $2.970(1)$ & $7.7171(2)$ \\ &90 & $2.980(3)$ & $7.7351(1)$\\ biased & 45 & $3.045(5)$ & $7.858(4)$ \\ &90 & $3.040(7)$ & $7.870(2)$\\ &300 & $3.03(1)$ & $7.885(5)$\\ Exact & & $3$ & $7.8452$ \\ \hline \hline \end{tabular} \caption{Leading even (e) and odd (o) eigenvalues of $\frac{\partial K'_\alpha}{\partial K_\beta}$ at the approximate fixed point found with VRG, in both the unbiased and biased ensembles. The number in parentheses is the statistical uncertainty on the last digit, obtained from the standard error of 16 independent runs. 13 (5) coupling terms are used for even (odd) interactions. The calculations used $10^6$ MC sweeps for the $45 \times 45$ and $90\times 90$ lattices, and $5 \times 10^5$ sweeps for the $300 \times 300$ lattice. } \label{table:critical} \end{table} \begin{figure}[hth] \centering \includegraphics[scale=1]{cortime} \caption{(color online). Time correlation of the estimator $A = S_0(\bm\sigma)S_0(\bm\sigma')$ on $45\times45$ and $90\times 90$ lattices (Eq. \ref{eq:A}). $S_0$ is the nearest-neighbor term in the simulations of Table \ref{table:critical}.} \label{fig:cortime} \end{figure} The fixed point used for Table \ref{table:critical} is approximate, and we did not make any effort to fine-tune the approximation. Refinements could be done iteratively using Eqs. \ref{eq:matrix_eq}-\ref{eq:B}, as we will discuss in a future paper. There is an important benefit in accurately knowing the location of the fixed point, because then a single RG iteration, instead of multiple implicit iterations, would suffice to compute the Jacobian. Moreover, one could use small block spins, which have a smaller statistical uncertainty than larger block spins. In summary, we have unified the calculation of critical exponents and renormalized couplings within the same framework. A key feature of our approach is that we adopt a biased ensemble, $\braket{\cdot}_V$, for the averages. This not only simplifies the algorithm, but also enhances the sampling. In fact, the original motivation for the variational principle \cite{varyfes} was to overcome the long correlation time in first-order phase transitions. The bias potential constructed by optimizing the functional acquires a history-dependence that discourages the sampling of previously visited configurations \cite{varyfes}, thereby breaking the long correlation time of the unbiased simulation.
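The correlation times behind Fig. \ref{fig:cortime} can be quantified with a standard estimator of the normalized autocorrelation function; a rough sketch of such a diagnostic (ours, illustrative only):
\begin{verbatim}
import numpy as np

def autocorrelation(x):
    # Normalized autocorrelation C(t) of a scalar MC time series.
    dx = np.asarray(x, dtype=float) - np.mean(x)
    c = np.correlate(dx, dx, mode="full")[len(dx) - 1:]
    return c / c[0]

def integrated_time(x):
    # Crude integrated autocorrelation time, truncating the sum
    # at the first non-positive value of C(t).
    C = autocorrelation(x)
    nonpos = np.where(C <= 0)[0]
    cut = nonpos[0] if nonpos.size else C.size
    return 0.5 + C[1:cut].sum()
\end{verbatim}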
In the RG context, enhanced sampling eliminates critical slowing down. We expect that it should also be helpful in systems with deep local free energy minima, as the variational method was originally designed to deal precisely with such systems. The finite size of the simulated lattices is a source of error. If the RG iterations are carried out on a single $L\times L$ lattice, the coarse grained lattice will have size $\frac{L}{b} \times \frac{L}{b}$. Then, as noted in \cite{mcrg2dising}, the calculated renormalized couplings will have different finite-size errors on the $L \times L$ and $\frac{L}{b} \times \frac{L}{b}$ lattices. A better way, as suggested in \cite{two_lattice}, would be to perform calculations on two lattices, $L \times L$ and $\frac{L}{b} \times \frac{L}{b}$, so that the coarse grained lattice rescaled by $b^n$, at the $n$th iteration starting from $L \times L$, would coincide with the lattice rescaled by $b^{n-1}$, at the $(n-1)$th iteration starting from $\frac{L}{b} \times \frac{L}{b}$. In this way, two successive RG iterations have the same lattice size, with a significant cancellation of finite-size errors. We plan to discuss in a future paper how this idea could be implemented within VRG. In the present paper we have used a constant probability distribution $p_t$, but there is no reason to always do so. For example, in systems with continuous and unbounded degrees of freedom, like molecular systems or lattice field theory, it may be convenient to use a Gaussian distribution for $p_t$. Finally, we note that a regular term $g(K)$ always appears as the inhomogeneous part of an RG transformation \cite{nauenberg}: \begin{equation} \exp{[H'(K'; \bm \sigma') + Ng(K)]} = \sum_{\bm\sigma} \delta_{\tau(\bm\sigma), \bm\sigma'}\exp{[H(K; \bm\sigma)]} \end{equation} The $g(K)$ in this equation is precisely the thermodynamic free energy per site in the biased ensemble $\braket{\cdot}_V$, as shown in the SM \cite{sm}. It is then interesting, and somewhat surprising, that the information on the critical behavior is fully contained in the statistical behavior of $\braket{\cdot}_V$, even though $g(K)$ is a regular function and $\braket{\cdot}_V$ does not show singular behavior. All the codes used in this project were written in C\texttt{++}, and are available upon request. The authors would like to thank C. Castellani and L. Pietronero for discussions. Partial support for this work was provided by the Department of Energy under Grant no. DE-FG02-05ER46201. \bibliographystyle{apsrev}
\section{Introduction} Convection occurs in the interiors of many astrophysical bodies and must be sustained against viscous and ohmic dissipation. This dissipation is often neglected in astrophysical models, e.g., in standard stellar 1D evolution codes \citep[e.g.,][]{ChabrierBaraffe1997,Paxtonetal2011}, though its effects have lately been considered in a few specific contexts \citep[e.g.,][]{BatyginStevenson2010,Browningetal2016}. Astrophysical convection often occurs over many scale heights. While for incompressible fluids the contribution of dissipative heating to the internal energy budget is negligible \citep{Kundu1990}, \citet{Hewittetal1975} (hereafter HMW) showed that in strongly stratified systems, it is theoretically possible for the rate of dissipative heating to exceed the luminosity. This was supported numerically by \citet{JarvisMcKenzie1980} for the case of a compressible liquid with infinite Prandtl number, $Pr$ (the ratio of viscous and thermal diffusivities), appropriate for models of the Earth's interior. In this study we aim to establish the magnitude of dissipation for conditions more akin to those encountered in stellar interiors. Specifically, we consider dissipation in a stratified gas at finite $Pr$, and examine how the total heating changes as system parameters are varied. To begin, we briefly review some relevant thermodynamic considerations that underpin our work. \subsection{Thermodynamic constraints on dissipative heating}\label{Hewitt} For a volume $V$ of convecting fluid enclosed by a surface $S$ with associated magnetic field $\mathbf{B}$, in which the normal component of the fluid velocity $\mathbf{u}$ vanishes on the surface, and either all components of $\mathbf{u}$, or the tangential stress, also vanish on the surface, local conservation of energy requires that the rate of change of total energy equal the sum of the net inward flux of energy and the rate of internal heat generation (e.g., by radioactivity or nuclear reactions). This implies \begin{align}\label{consofE}\frac{\partial}{\partial{t}}\left(\rho{e}+\frac{1}{2}\rho{u}^2\right.&\left.+\frac{B^2}{2\mu_0}-\rho\Psi\right)=-\nabla\cdot\left(\rho\left(e+\frac{1}{2}u^2-\Psi\right)\mathbf{u}\right.\nonumber\\&\left.+\frac{(\mathbf{E}\times\mathbf{B})}{\mu_0}+P\mathbf{u}-\bm\tau\cdot\mathbf{u}-k\nabla{T}\right)+H\end{align} where $\rho$ is the fluid density, $e$ is the internal energy of the fluid, $\Psi$ is the gravitational potential that satisfies $\mathbf{g}=\nabla\Psi$, $P$ is the pressure, $\tau_{ij}$ is the contribution to the total stress tensor from irreversible processes, $k$ is the thermal conductivity, $T$ is the temperature, $H$ is the rate of internal heat generation, and $\frac{\mathbf{E}\times\mathbf B}{\mu_0}$ is the Poynting flux ($\mathbf{E}$ is the electric field and $\mu_0$ is the permeability of free space). Integrating (\ref{consofE}) over $V$ gives the global relation \begin{equation}\label{Fbal} \int_Sk\frac{\partial{T}}{\partial{x_i}}\,dS_i+\int_VH\,dV=0, \end{equation} assuming both a steady state and that the electric current, $\mathbf{j}$, vanishes everywhere outside $V$. Equation (\ref{Fbal}) implies that the net flux out of $V$ is equal to the total rate of internal heating. Viscous and ohmic heating do not contribute to the overall heat flux: dissipative heating terms do not appear in equation (\ref{Fbal}).
To examine dissipative heating, we consider the internal energy equation: \begin{equation}\label{internal} \rho\left(\frac{\partial{e}}{\partial{t}}+(\mathbf{u}\cdot\nabla)e\right)=\nabla(k\nabla{T})-P(\nabla\cdot\mathbf{u})+\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}+\frac{j^2}{\sigma}+H \end{equation} where $\sigma$ is the conductivity of the fluid. Integrating over $V$, and assuming a steady state, (\ref{internal}) becomes \begin{equation}\label{Phibal} \int_V(\mathbf{u}\cdot\nabla)P\,dV+\Phi =0. \end{equation} Here \begin{equation}\label{Phi} \Phi=\int_V\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}+\frac{j^2}{\sigma}\,dV \end{equation} is the total dissipative heating rate including viscous and ohmic heating terms. Equation (\ref{Phibal}) implies that the global rate of dissipative heating is cancelled by the work done against the pressure gradient. Equation (\ref{Phibal}) is only equivalent to HMW's equation (22) when considering an ideal gas (so that $\alpha{T}=1$, where $\alpha$ is the coefficient of thermal expansion); however, in arriving at (\ref{Phibal}), we made no assumption about the fluid being a gas. \citet{AlboussiereRicard2013,AlboussiereRicard2014} note that this inconsistency arises because HMW assume $c_p$ to be constant in their derivation, which is not valid when $\alpha T\neq1$. Alternatively, from the first law of thermodynamics, we have \begin{equation} Tds=de-\frac{P}{\rho^2}d\rho \end{equation} where $s$ is the specific entropy, so (\ref{Phibal}) can also be written as \begin{equation}\label{Phi2} \Phi=\int_V\rho{T}(\mathbf{u}\cdot\nabla)s\,dV=-\int_V\rho{s}(\mathbf{u}\cdot\nabla)T\,dV \end{equation} where we have invoked mass continuity in a steady state ($\nabla\cdot(\rho\mathbf{u})=0$). Hence the global dissipation rate can also be thought of as being balanced by the work done against buoyancy \citep{JonesKuzanyan2009}. HMW used the entropy equation to derive an upper bound for the dissipative heating rate in a steadily convecting fluid that is valid for any equation of state or stress-strain relationship. For the case of convection in a plane layer, that upper bound is \begin{equation}\label{bound} \frac{\Phi}{L_u}<\frac{T_{max}-T_u}{T_u} \end{equation} where $L_u$ is the luminosity at the upper boundary, $T_{max}$ is the maximum temperature and $T_u$ is the temperature on the upper boundary. One consequence of this bound is that, for large enough thermal gradients, the dissipative heating rate may exceed the heat flux through the layer; this is perhaps counter-intuitive, but is thermodynamically permitted, essentially because the dissipative heating remains in the system's internal energy \citep[see e.g.,][]{Backus1975}. The above considerations should hold for both ohmic and viscous dissipation. However, HMW further considered the simple case of viscous heating in a liquid (neglecting magnetism) and showed that the viscous dissipation rate is not only bounded by (\ref{bound}) but also satisfies \begin{equation}\label{Hewittliq} E\equiv\frac{\Phi}{L_u}=\frac{d}{H_T}\left(1-\frac{\mu}{2}\right) \end{equation} where $d$ is the height of the convective layer, $H_T$ is the (constant) thermal scale height and $0\leq\mu\leq1$ is the fraction of internal heat generation. Interestingly, the theoretical expression (\ref{Hewittliq}) depends only on the ratio of the layer depth to the thermal scale height and the fraction of internal heat generation.
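As a concrete illustration (our numbers, not HMW's): for a layer spanning two thermal scale heights, $d = 2H_T$, with no internal heating ($\mu=0$), (\ref{Hewittliq}) gives $E=2$, i.e., a steady dissipative heating rate twice the emergent luminosity. This sits comfortably within the bound (\ref{bound}), since with $H_T$ constant the temperature varies as $e^{-z/H_T}$, so that $T_{max}/T_u=e^{d/H_T}=e^2\approx7.4$ and (\ref{bound}) would allow $\Phi/L_u$ up to $e^2-1\approx6.4$.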
As expected, (\ref{Hewittliq}) implies that the dissipative heating rate is negligible when compared with the heat flux in cases where the Boussinesq approximation is valid (i.e., when the scale heights of the system are large compared to the depth of the motion). But it follows from (\ref{Hewittliq}) that $\Phi$ is significant compared to $L_u$ if $d$ is comparable to $H_T$, i.e., if the system has significant thermal stratification. Stellar convection often lies in this regime, so it is not clear that dissipative heating can be ignored. This paper explores these theoretical predictions using simulations of stratified convection under conditions akin to those encountered in stellar interiors. Previous numerical simulations conducted by HMW considered only 2D Boussinesq convection and neglected inertial forces (infinite $Pr$ approximation); later work by \citet{JarvisMcKenzie1980} within the so-called anelastic liquid approximation considered stronger stratifications but likewise assumed a liquid at infinite $Pr$. We extend these studies by considering an ideal gas (so that $\alpha{T}=1$) at finite $Pr$, so inertial effects are important and compressibility is not negligible. In section \ref{model}, we describe the model setup before presenting results from numerical simulations. In section \ref{discussion} we offer a discussion of the most significant results that emerge before providing conclusions. \section{Simulations of dissipative convection}\label{model} \subsection{Model setup}\label{modelsec} We consider a layer of convecting fluid lying between impermeable boundaries at $z=0$ and $z=d$. We assume the thermodynamic quantities to be composed of a background, time-independent reference state and perturbations to this reference state. The reference state is taken to be a polytropic, ideal gas with polytropic index $m$ given by \begin{equation}\label{refstate} \bar{T}=T_0(1-\beta z),\,\bar\rho=\rho_0(1-\beta z)^m,\,\bar{p}=\mathcal{R}\rho_0T_0(1-\beta z)^{m+1}, \end{equation} where $\beta=\frac{g}{c_{p,0}T_0}$. Here, $g$ is the acceleration due to gravity, $c_p$ is the specific heat capacity at constant pressure, $\mathcal{R}$ is the ideal gas constant and a subscript $0$ represents the value of that quantity on the bottom boundary. $\beta$ is equivalent to the inverse temperature scale height and so is a measure of the stratification of the layer, although we shall use the more conventional \begin{equation} N_{\rho}=-m\ln(1-\beta d) \end{equation} to quantify the stratification, with $N_{\rho}$ the number of density scale heights across the layer. We assume a polytropic, monatomic, adiabatic, ideal gas, so that $m=1.5$. Here we consider only the hydrodynamic problem; i.e., all dissipation is viscous. We use anelastic equations under the Lantz-Braginsky-Roberts (LBR) approximation \citep{Lantz1992,BraginskyRoberts1995}; these are valid when the reference state is nearly adiabatic and when the flows are subsonic \citep{OguraPhillips1962,Gough1969,LantzFan1999}, as they are here.
The governing equations are then \begin{align}\frac{\partial\mathbf u}{\partial{t}}&+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\nabla\tilde{p}+\frac{gs}{c_p}\hat{\mathbf{e_z}}\nonumber\\&+\nu\left[\frac{1}{\bar\rho}\frac{\partial}{\partial{x_j}}\left(\bar\rho\left(\frac{\partial{u_i}}{\partial{x_j}}+\frac{\partial{u_j}}{\partial{x_i}}\right)\right)-\frac{2}{3\bar\rho}\frac{\partial}{\partial{x_i}}\left(\bar\rho\frac{\partial{u_j}}{\partial{x_j}}\right)\right]\end{align} \begin{equation} \nabla\cdot(\bar\rho\mathbf u)=0 \end{equation} \begin{equation}\label{energyeq} \bar\rho\bar{T}\left(\frac{\partial{s}}{\partial{t}}+(\mathbf{u}\cdot\nabla)s\right)=\nabla\cdot(\kappa\bar\rho\bar{T}\nabla{s})+\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}+H, \end{equation} where $\mathbf{u}$ is the fluid velocity, $\tilde{p}=\frac{p}{\bar\rho}$ is a modified pressure and $\nu$ is the kinematic viscosity. The specific entropy, $s$, is related to pressure and density by \begin{equation} s=c_v\ln{p}-c_p\ln\rho. \end{equation} We assume the perturbation of the thermodynamic quantities to be small compared with their reference state value. Therefore the entropy is obtained from \begin{equation} s=c_v\frac{p}{\bar{p}}-c_p\frac{\rho}{\bar\rho} \end{equation} and the linearised equation of state is \begin{equation} \frac{p}{\bar{p}}=\frac{T}{\bar{T}}+\frac{\rho}{\bar\rho}. \end{equation} In (\ref{energyeq}) $\kappa$ is the thermal diffusivity and \begin{equation}\label{tau} \tau_{ij}=\nu\bar\rho\left(\frac{\partial{u_i}}{\partial{x_j}}+\frac{\partial{u_j}}{\partial{x_i}}-\frac{2}{3}\delta_{ij}\nabla\cdot\mathbf{u}\right) \end{equation} is the viscous stress tensor ($\delta_{ij}$ is the Kronecker delta). Here, we only consider cases with $H=0$ (i.e., no internal heat generation), and instead impose a flux ($F$) at the bottom boundary. Note the LBR approximation diffuses entropy (not temperature); see \cite{Lecoanetetal2014} for a discussion of the differences. We assume a constant $\nu$ and $\kappa$. We solve these equations using the Dedalus pseudo-spectral code \citep{dedalus} with fixed flux on the lower boundary and fixed entropy on the upper boundary. We assume these boundaries to be impermeable and stress-free. We employ a sin/cosine decomposition in the horizontal, ensuring there is no lateral heat flux. We employ the semi-implicit Crank-Nicolson Adams-Bashforth numerical scheme and typically use 192 grid points in each direction with dealiasing (so that 128 modes are used). In some cases, 384 (256) grid points (modes) were used to ensure adequate resolution of the solutions. For simplicity, and to compare our results with those of HMW, we consider 2D solutions so that $\mathbf{u}=(u,0,w)$ and $\frac{\partial}{\partial{y}}\equiv0$. This also allows us to reach higher supercriticalities and $N_{\rho}$ with relative ease. As we neglect magnetism, the total dissipation rate, $\Phi$, is given by (\ref{Phi}) with $\mathbf j=0$ and $\tau_{ij}$ as given by (\ref{tau}). An appropriate non-dimensionalisation of the system allows the parameter space to be collapsed such that the dimensionless solutions (in particular $E$) are fully specified by $m$, $N_{\rho}$, $Pr$, together with $\hat{F_0}= \frac{Fd}{\kappa{c_{p,0}}\rho_0T_0}$ (a dimensionless measure of the flux applied at the lower boundary) and a flux-based Rayleigh number \citep[e.g.,][]{Duarteetal2016} \begin{equation}\label{Ra} Ra=\frac{gd^4F_{u}}{\nu\kappa^2\rho_0c_{p,0}T_0}. \end{equation} The parameters used in our simulations are given in Table \ref{table1}. 
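For reference, the viscous heating entering (\ref{energyeq}) can be reconstructed from gridded output by finite differences; the following sketch is a post-processing illustration (field names, array shapes and the uniform grid are assumptions about the data layout, and this is not how the spectral code itself evaluates the term).
\begin{verbatim}
import numpy as np

# Pointwise viscous heating tau_ij du_i/dx_j for 2D fields u(x, z), w(x, z),
# using the stress tensor of eq. "tau"; rhobar must broadcast against (nx, nz).
def viscous_heating(u, w, rhobar, nu, dx, dz):
    dudx = np.gradient(u, dx, axis=0); dudz = np.gradient(u, dz, axis=1)
    dwdx = np.gradient(w, dx, axis=0); dwdz = np.gradient(w, dz, axis=1)
    div = dudx + dwdz
    txx = nu * rhobar * (2.0 * dudx - (2.0 / 3.0) * div)
    tzz = nu * rhobar * (2.0 * dwdz - (2.0 / 3.0) * div)
    txz = nu * rhobar * (dudz + dwdx)            # symmetric: txz = tzx
    return txx * dudx + tzz * dwdz + txz * (dudz + dwdx)

# Phi (eq. "Phi" with j = 0): volume integral of the heating rate.
def total_dissipation(q, dx, dz):
    return np.trapz(np.trapz(q, dx=dz, axis=1), dx=dx, axis=0)
\end{verbatim}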
In a steady state, an expression for the luminosity $L$ at each depth $z=z'$ can be obtained by integrating the internal energy equation (\ref{energyeq}) over the volume contained between the bottom of the layer and the depth $z=z'$: \begin{align}L=&FA=\int_{V_{z'}}\nabla\cdot(\bar\rho\bar{T}s\mathbf{u})\,dV+\int_{V_{z'}}-\nabla\cdot(\kappa\bar\rho\bar{T}\nabla{s})\,dV\nonumber\\&+\int_{V_{z'}}-s\bar\rho(\mathbf{u}\cdot\nabla)\bar{T}\,dV+\int_{V_{z'}}-\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}\,dV,\label{Feqpre}\end{align} where $A$ is the surface area. The divergence theorem allows the first two integrals to be transformed into surface integrals giving \begin{align}L=&FA=\underbrace{\int_{S_{z'}}\bar\rho\bar{T}sw\,dS}_\text{$L_{conv}=AF_{conv}$}+\underbrace{\int_{S_{z'}}-\kappa\bar\rho\bar{T}\frac{\partial{s}}{\partial{z}}\,dS}_\text{$L_{cond}=AF_{cond}$}\nonumber\\&+\underbrace{\int_{V_{z'}}-s\bar\rho(\mathbf{u}\cdot\nabla)\bar{T}\,dV}_\text{$L_{buoy}=A\int_0^{z'}Q_{buoy}\,dz$}+\underbrace{\int_{V_{z'}}-\tau_{ij}\frac{\partial{u_i}}{\partial{x_j}}\,dV}_\text{$L_{diss}=A\int_0^{z'}Q_{diss}\,dz$},\label{Feq}\end{align} where the surface integrals are over the surface at height $z=z'$. The first and second terms define the horizontally-averaged heat fluxes associated with convection ($F_{conv}$) and conduction ($F_{cond}$) respectively, along with associated luminosities. The third and fourth terms define additional sources of heating and cooling ($Q_{diss}$ and $Q_{buoy}$) associated with viscous dissipation and with work done against the background stratification, respectively. These two terms must cancel in a global sense, i.e., when integrating from $z=0$ to $z=d$, but they do not necessarily cancel at each layer depth. An alternative view of the heat transport may be derived by considering the total energy equation (\ref{consofE}), which includes both internal and mechanical energy. In a steady state (with entropy diffusion), the local balance gives \begin{equation} \nabla\cdot\left(\bar\rho\left(e+\frac{1}{2}u^2-\Psi\right)\mathbf{u}+p\mathbf{u}-\bm\tau\cdot\mathbf{u}-\kappa\bar\rho\bar{T}\nabla{s}\right)=H \end{equation} which, when integrated over the volume for an ideal gas, gives \citep[see e.g.,][]{Vialletetal2013} \begin{align}L=&FA=\underbrace{\int_{S_{z'}}\bar\rho{c_p}wT'\,dS}_\text{$L_e=AF_{e}$}+\underbrace{\int_{S_{z'}}-\kappa\bar\rho\bar{T}\frac{\partial{s}}{\partial{z}}\,dS}_\text{$L_{cond}=AF_{cond}$}\nonumber\\&+\underbrace{\int_{S_{z'}}\frac{1}{2}\bar\rho|\mathbf{u}|^2w\,dS}_\text{$L_{KE}=AF_{KE}$}+\underbrace{\int_{S_{z'}}-(\tau_{ij}{u_i})\cdot{\mathbf{\hat{e}_z}}\,dS}_\text{$L_{visc}=AF_{visc}$},\label{FHeq}\end{align} defining the horizontally-averaged enthalpy flux ($F_e$), kinetic energy flux ($F_{KE}$) and viscous flux ($F_{visc}$). Note that (\ref{Feq}) and (\ref{FHeq}) are equivalent; whether decomposed in the manner of (\ref{Feq}) or the complementary fashion of (\ref{FHeq}), the transport terms must sum to the total luminosity $L$. $L_{visc}$ represents the total work done by surface forces, whereas $L_{diss}$ represents only the (negative-definite) portion of this that goes into deforming a fluid parcel and hence into heating.
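In practice the decomposition (\ref{Feq}) is evaluated from horizontally averaged profiles; a sketch of this diagnostic is given below (Python; the input profile names, the uniform vertical spacing, and the 2D layout are assumptions).
\begin{verbatim}
import numpy as np

def running_integral(q, dz):          # int_0^{z'} q dz on a uniform grid
    return np.concatenate(([0.0],
        np.cumsum(0.5 * (q[1:] + q[:-1]) * dz)))

# Inputs: (nx, nz) snapshots s, w, q_visc (= tau_ij du_i/dx_j) plus the
# z-profiles rhobar, Tbar, dTbardz, dsdz_mean; A is the horizontal area.
def thermal_transport(s, w, q_visc, rhobar, Tbar, dTbardz,
                      dsdz_mean, kappa, dz, A=1.0):
    xmean = lambda f: f.mean(axis=0)
    L_conv = A * xmean(rhobar * Tbar * s * w)
    L_cond = A * (-kappa * rhobar * Tbar * dsdz_mean)
    Q_buoy = xmean(-rhobar * s * w * dTbardz)   # work against stratification
    Q_diss = xmean(-q_visc)                     # negative-definite, see text
    return (L_conv, L_cond,
            A * running_integral(Q_buoy, dz),
            A * running_integral(Q_diss, dz))   # sum should be z-independent
\end{verbatim}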
\subsection{Relations between global dissipation rate and convective flux} For the model described in section \ref{modelsec}, equation (\ref{Phi2}) becomes \begin{align}\Phi=&-\int_V\bar\rho{s}(\mathbf{u}\cdot\nabla)\bar{T}\,dV\nonumber\\ =&\frac{g}{c_{p,0}}\int_Vs\bar\rho{w}\,dV=\frac{gA}{c_{p,0}}\int_{0}^{d}\frac{F_{conv}}{\bar{T}}\,dz.\label{phiFconv}\end{align} Often it is assumed that in the bulk of the convection zone, the total heat flux is just equal to the convective flux as defined above (i.e., $F_{conv}\approx{F}$). We show later that this is a poor assumption in strongly stratified cases, but it is reasonable for approximately Boussinesq systems. In the case $F_{conv}\approx{F}$, (\ref{phiFconv}) becomes \begin{equation} \Phi=\frac{gAF}{c_{p,0}T_0}\int_0^d\frac{1}{1-\beta{z}}\,dz=-L_u\ln(1-\beta{d}) \end{equation} and \begin{equation}\label{lower} E=-\ln(1-\beta{d})=\beta{d}+\ldots\approx\frac{d}{H_{T,0}}. \end{equation} However, in strongly stratified cases $F\approx{F_{conv}}+F_{other}$ where $F_{other}=\int_0^{z'}(Q_{buoy}+Q_{diss})\,dz$ from (\ref{Feq}), or alternatively, $F_{other}=F_{p}+F_{KE}+F_{visc}$ from (\ref{FHeq}) (the conductive flux is small in the bulk convection zone). Here $F_{p}=\frac{1}{A}\int_{S_{z'}}wp\,dS$ is the difference between the enthalpy flux $F_e$ and the convective flux $F_{conv}$. Physically, $F_{other}$ is equivalent to the steady-state transport associated with processes other than the convective flux as defined above. In this case, (\ref{phiFconv}) becomes \begin{equation}\label{phiFother} \Phi=\frac{gAF}{c_{p,0}}\int_0^d(1-\frac{F_{other}}{F})\frac{1}{\bar{T}}\,dz, \end{equation} where we note that in general $F_{other}$ is a function of depth and $(1-\frac{F_{other}}{F})\geq1$. A complete theory of convection would specify $F_{other}$ a priori, and thereby constrain the dissipative heating everywhere. In the absence of such a theory, we turn to numerical simulations to determine the magnitude of $\Phi$ for strong stratifications. \subsection{Dissipation in simulations: determined by stratification}\label{res1} We examine the steady-state magnitude of $\Phi$ for different values of $N_{\rho}$ and $Ra$. Figure \ref{fig1} shows the ratio of the global dissipation rate to the luminosity through the layer, $E=\frac{\Phi}{L_u}$, for varying stratifications. First, we highlight the difference between simulations in which the dissipative heating terms were included (red squares) and those where they were not (black circles). At weak stratification, there is not much difference in the dissipative heating rate between these cases, but differences become apparent as $N_{\rho}$ is increased. Including the heating terms in a self-consistent calculation leads to a much larger value of $E$ than if $\Phi$ is only calculated after the simulation has run (i.e., if heating is not allowed to feed back on the system). When heating terms are included, the global dissipative heating rate exceeds the flux passing through the system (i.e., $E>1$) when $N_{\rho}>1.22$. As expected, the expression for $E$, in the Boussinesq limit, given by (\ref{lower}), is a good approximation to $E$ for small $N_{\rho}$, but vastly underestimates $E$ at large $N_{\rho}$ (see Figure \ref{fig1}, dash-dot line). In the cases where the heating terms are not included, $E$ cannot exceed unity for all $N_{\rho}$.
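The divergence between (\ref{lower}) and its leading-order term is easy to quantify; the following snippet (with illustrative values of $\beta d$ only) tabulates both.
\begin{verbatim}
import numpy as np

# Compare E = -ln(1 - beta*d) (eq. "lower", which assumes F_conv ~ F)
# with its leading-order term beta*d = d/H_{T,0}.
for beta_d in (0.05, 0.30, 0.60, 0.85):
    print(beta_d, -np.log(1.0 - beta_d))
# The two agree for weak stratification and separate strongly as
# beta*d -> 1; neither captures the self-consistent heating cases.
\end{verbatim}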
That $E$ cannot exceed unity when the heating terms are not included might have been expected, since in that case none of the dissipated heat is returned to the internal energy of the system; instead, the dissipated energy is simply lost (i.e., energy is not conserved). This has the practical consequence that the flux emerging from the top of the layer is less than that input at the bottom. In these cases $E$ is very well described by the dotted line, which is given by $\frac{d}{H_{{T},0}}$, the leading-order term from the expression for $E$ in (\ref{lower}). The theoretical upper bound derived by HMW is shown on Figure \ref{fig1} by the solid black line. It is clear that all of our cases fit well within this upper bound, even at strong stratifications. This upper bound is equivalent to $\frac{d}{H_{{T},u}}$ in this system, where $H_{{T},u}$ is the value of $H_{T}$ on the upper boundary. Cases in which the heating terms were included are well described by \begin{equation}\label{myE} E=\frac{d}{\tilde{H_T}}, \end{equation} where \begin{equation}\label{htdef} \tilde{H_T} = \frac{H_{T,0}H_{T,u}}{H_{T,z^*}} \end{equation} is a modified thermal scale height involving $H_T$ at the top, bottom and at a height $z^*$, defined such that half the fluid (by mass) lies below $z^*$ and half sits above; for a uniform density fluid, $z^*=\frac{d}{2}$. This expression resembles that originally proposed by HMW, on heuristic grounds, for a gas ($E\approx\frac{d}{H_T}$); in our case $H_T$ is not constant across the layer and we find that the combination $\tilde{H_T}$ is the appropriate ``scale height'' instead. Like HMW's suggestion, it depends only on the layer depth and temperature scale heights of the system. For 2D convection, at $Pr=1$ and the $Ra$ considered here, the solutions are steady (time-independent) \citep{VincentYuen1999}; the convection takes the form of a single stationary cell occupying the layer. To assess whether the same behaviour occurs for chaotic (time-dependent) solutions, we have included some cases at $Pr=10$ (orange triangles), since then the flow is unsteady. In the cases included here, this unsteady flow is characterised by the breakup of the single coherent convection cell (seen at $Pr=1$); these time-dependent solutions also seem to be well described by the line given by (\ref{myE}). This behaviour is sampled in Figure \ref{figA1}, Supplementary Material, which shows the velocity and entropy fields in a simulation with $Pr=10$, $N_{\rho}=1.31$, $Ra=4.13\times10^8$ and $\hat F_0=0.14$. At higher $Ra$, the solutions transition to turbulence \citep[see visualisations in e.g.,][]{Rogersetal2003}. \begin{figure} \includegraphics[scale=1.03]{f1.eps} \caption{$E$ (global dissipative heating rate normalised by the luminosity) against $N_{\rho}$ for $Pr=1$ (red squares) and $Pr=10$ (orange triangles). Cases in which the dissipative heating terms were not included in equation (\ref{energyeq}) are denoted by black circles. The dash-dot line shows the expression given by (\ref{lower}) and the dotted line shows the leading order term of this expression. The solid black line shows the upper bound given by (\ref{bound}) and the dashed red line shows the expression given by (\ref{myE}).
The cases with heating agree well with the dashed red line and the cases without heating agree with the dotted black line.}\label{fig1} \end{figure} \subsection{Dissipation in simulations: independent of diffusivities}\label{2p4} The results of section \ref{res1}, specifically equation (\ref{myE}), suggest that the amount of dissipative heating is determined by the stratification, not by other parameters such as $Ra$. To probe this further, we consider whether and how $E$ changes as $Ra$ is varied. Figure \ref{fig2} shows the results for three different stratifications. For $N_{\rho}\approx0.1$, the fluid is close to being Boussinesq and it is clear that $E$ remains constant (and equal to the value given by (\ref{myE})) for many decades increase in $Ra$. This result complements that of HMW obtained from Boussinesq simulations at infinite $Pr$. For increasing $N_{\rho}$, we find that for large enough $Ra$, $E$ approaches the constant given by (\ref{myE}). That $E$ becomes independent of $Ra$ at large enough $Ra$ for all $N_{\rho}$ was also found by \citet{JarvisMcKenzie1980}, albeit for liquids at infinite $Pr$. Figure \ref{fig2} indicates that the solutions have to be sufficiently supercritical in order for the theory to be valid. It also suggests that stronger stratifications require simulations to be more supercritical in order to reach the asymptotic regime. (All the simulations displayed in Figure \ref{fig1} approach this asymptotic regime, \emph{except} possibly the uppermost point at $N_{\rho}=2.8$. That simulation has $Ra/Ra_c \approx 9 \times10^{5}$, but it is likely that still higher $Ra$ would yield somewhat greater values of $E$ at this stratification.) \begin{figure} \includegraphics[scale=1.03]{f2.eps} \caption{$E$ as a function of $\frac{Ra}{Ra_c}$ (where $Ra_c$ is the critical value of $Ra$ at which convection sets in) for $N_{\rho}=0.105$ (circles), $N_{\rho}=0.706$ (triangles) and $N_{\rho}=2.085$ (squares). In each case, for large enough $Ra$ the value of $E$ asymptotes to the value given by (\ref{myE}), indicated for each $N_{\rho}$ by the horizontal lines. The level of stratification (given by $N_{\rho}$), rather than the diffusion, determines the magnitude of the dissipative heating rate compared to the flux through the layer.}\label{fig2} \end{figure} \section{Discussion and conclusion}\label{discussion} We have demonstrated explicitly that the amount of dissipative heating in a convective gaseous layer can, for strong stratifications, equal or exceed the luminosity through the layer. A principal conclusion is that the ratio of the global viscous heating rate to the emergent luminosity is approximated by a theoretical expression dependent only on the depth of the layer and its thermal scale heights. This ratio, akin to one originally derived for a simpler system by HMW, is given (for the cases studied here) by (\ref{myE}). Interestingly, this relation does not depend on other parameters such as the Rayleigh number. Our simulations confirm that this expression holds for 2D convection in an anelastic gas, provided the convection is sufficiently supercritical. This regime is attainable in our 2D simulations, and is surely reached in real astrophysical objects, but may be more challenging to obtain in (for example) 3D global calculations \citep[e.g.,][]{FeatherstoneHindman2016,Aubertetal2017}. The dissipative heating appears in the local internal energy (or entropy) equation, in the same way as heating by fusion or radioactive decay.
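To make the prescription (\ref{myE})--(\ref{htdef}) concrete: for the polytropic reference state, $H_T(z)=(1-\beta z)/\beta$ and $z^*$ is the mass-median height, so $\tilde{H_T}$ can be evaluated directly, as in the sketch below (parameter values are illustrative assumptions).
\begin{verbatim}
import numpy as np

m, beta, d = 1.5, 1.0, 0.85
z = np.linspace(0.0, d, 4001)
rhobar = (1.0 - beta * z)**m
H_T = (1.0 - beta * z) / beta            # local thermal scale height

# z*: half the mass below, half above (cumulative trapezoid of rhobar)
mass = np.concatenate(([0.0],
    np.cumsum(0.5 * (rhobar[1:] + rhobar[:-1]) * np.diff(z))))
idx = np.searchsorted(mass, 0.5 * mass[-1])
zstar = z[idx]

H_tilde = H_T[0] * H_T[-1] / H_T[idx]    # eq. "htdef"
print(zstar, d / H_tilde)                # predicted E from eq. "myE"
\end{verbatim}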
Where this heating is large, we therefore expect it to modify the thermal structure, just as including a new source of heating or cooling would do. It must be reiterated, though, that this heating is balanced globally by equivalent cooling terms; i.e., $L_{diss}$ and $L_{buoy}$ in equation (\ref{Feq}) cancel in a global sense; no additional flux emerges from the upper boundary. Stars are not brighter because of viscous dissipation. Locally, however, these terms do \emph{not} necessarily cancel, as explored in Figure \ref{fig3}. There we show the net heating and cooling at each depth in two simulations; in Figure \ref{fig3}$a$, the fluid is weakly stratified, and in (b) it has a stratification given by $N_{\rho}=2.08$. In both cases the sum of the terms must be zero at the top and bottom of the layer, but not in between. Furthermore, in (a) the terms are small compared to the flux through the layer (typically a few \%) but in the strongly stratified case, the local heating and cooling become comparable to the overall luminosity. In general, stronger stratifications lead to stronger local heating and cooling in the fluid. \begin{figure} \includegraphics[scale=1,trim = {0mm 0mm 0mm 0mm}, clip]{f3.eps} \caption{Local heating and cooling. $F_{other}$ as a fraction of the total flux through the layer as a function of layer depth for $N_{\rho}=0.1$ in (a) and $N_{\rho}=2.08$ in (b). In (a) the local heating and cooling is only a few percent of the total flux whereas in (b) the local heating and cooling is comparable to the flux through the layer in some parts.}\label{fig3} \end{figure} In a steady state the imbalance between this local heating and cooling is equivalent to certain transport terms as discussed in section \ref{modelsec}; these are assessed for our simulations in Figure \ref{fig4}, where the terms are plotted as luminosities and labelled correspondingly. Turning first to Figure \ref{fig4}$a$, we show the components of the total flux of thermal energy (as described by (\ref{Feq})), namely $L_{conv}$, $L_{cond}$, $L_{buoy}$ and $L_{diss}$. The conductive flux is small throughout the domain except in thin boundary layers and the dissipative heating ($L_{diss}$) is comparable to the convective flux ($L_{conv}$) throughout the domain. The sum of the four transport terms is shown as the black line ($L$) and is constant across the layer depth, indicating thermal balance. Figure \ref{fig4}$b$ assesses the total energy transport using the complementary analysis of (\ref{FHeq}), using $L_{KE}$, $L_{cond}$, $L_e$ and $L_{visc}$. The primary balance is between the positive $L_e$ and the negative $L_{KE}$. Viewed in this way, the viscous flux ($L_{visc}$) is small except near the lower boundary, but (as discussed in section \ref{modelsec}) this does not necessarily mean the effect of viscous dissipation is also small. In Figure \ref{fig4}$c$ we highlight the equivalence of some transport terms by showing the term $AF_{other}$ together with its different constituent terms from either the total or thermal energy equations. As expected, $AF_{other}$ is the same in both cases; it is the sum of $L_{diss}$ and $L_{buoy}$, or equivalently, it is the sum of $L_{p}$, $L_{KE}$ and $L_{visc}$. That is, changes in the dissipative heating are reflected not just in $Q_{diss}$ (if analysing internal energy) or $F_{visc}$ (if analysing total energy); the other transport terms ($F_{KE}$, $F_p$, $F_e$, $F_{conv}$, $Q_{buoy}$) also change in response.
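The equivalence highlighted in Figure \ref{fig4}$c$ doubles as a useful numerical consistency check: the two routes to $F_{other}$ should agree to discretisation accuracy. A minimal sketch follows (the profile names are assumed inputs on a uniform grid).
\begin{verbatim}
import numpy as np

def f_other_thermal(Q_buoy, Q_diss, dz):     # route via eq. "Feq"
    q = Q_buoy + Q_diss
    return np.concatenate(([0.0],
        np.cumsum(0.5 * (q[1:] + q[:-1]) * dz)))

def f_other_total(F_p, F_KE, F_visc):        # route via eq. "FHeq"
    return F_p + F_KE + F_visc

# e.g. assert np.allclose(f_other_thermal(Qb, Qd, dz),
#                         f_other_total(Fp, Fke, Fv), atol=tol)
\end{verbatim}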
To emphasise the importance of dissipative heating in modifying the transport terms, we include in Figure \ref{fig4}$d$ the quantities $L_{KE}^{nh}$, $L_{e}^{nh}$, $L_{cond}^{nh}$ and $L_{visc}^{nh}$, i.e., the kinetic energy, enthalpy, conductive and viscous fluxes (expressed as luminosities), respectively, in the case where heating terms were not included. It is clear that these are much smaller than in the equivalent simulation with heating (Figure \ref{fig4}$b$), demonstrating explicitly that the inclusion of dissipative heating influences the other transport terms. In particular, the maximum value of the kinetic energy flux is 3.2 times larger when the heating terms are included. The black line in Figure \ref{fig4}$d$ shows that when heating is not included the flux emerging at the upper boundary is smaller than the flux imposed at the lower boundary; in this case it is approximately $27\%$ of $L$. The local heating and cooling (or, equivalently, the transport term $F_{other}$ that must arise from this in a steady state) described above is not included in standard 1D stellar evolution models, and we do not yet know what effects (if any) would arise from its inclusion. In some contexts these effects may be negligible; the total internal energy of a star is enormously greater than its luminosity $L_\star$, so even internal heating that exceeds $L_\star$ may not have a noticeable effect on the gross structure. If, however, this heating is concentrated in certain regions (e.g., because of spatially varying conductivity) or occurs in places with lower heat capacity, its impact may be more significant. \begin{figure*} \includegraphics[scale=1,trim = {0mm 0mm 0mm 0mm}, clip ]{f4.eps} \caption{(a) Luminosities $L_i$ defined in (\ref{Feq}) and their sum normalised by the total luminosity $L$. (b) Luminosities $L_i$ defined in (\ref{FHeq}) and their sum normalised by the total luminosity $L$. (c) The constituents of $L_{other}=AF_{other}=A\int_0^{z'}(Q_{buoy}+Q_{diss})\,dz=A(F_p+F_{KE}+F_{visc})$. (d) Luminosities $L_i$ defined in (\ref{FHeq}) and their sum normalised by the total luminosity at the bottom boundary $L_0$ in the case where heating terms are not included. The luminosities in (d) are significantly smaller than the equivalent ones when heating terms were included (see (b)).}\label{fig4} \end{figure*} If the results explored here also apply to the full 3D problem with rotation and magnetism -- which clearly must be checked by future calculation -- then the total dissipative heating is determined non-locally, dependent as it is on the total layer depth. Simple modifications to the mixing-length theory (which is determined locally) may not then suffice to capture it. We have begun to explore these issues by modification of a suitable 1D stellar evolution code, and will report on this in future work. \acknowledgments We acknowledge support from the European Research Council under ERC grant agreement No. 337705 (CHASM). The simulations here were carried out on the University of Exeter supercomputer, a DiRAC Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS and the University of Exeter. We also acknowledge PRACE for awarding us access to computational resources Mare Nostrum based in Spain at the Barcelona Supercomputing Center, and Fermi and Marconi based in Italy at Cineca. We thank the referee for a thoughtful review that helped to improve the manuscript.
\section{Legal probabilism and its troubles} According to \emph{legal probabilism} (LP), degrees of conviction in juridical fact-finding are to be modeled exactly the way degrees of belief are modeled in standard bayesian epistemology: by means of probability distributions satisfying the standard axioms of probability theory. \emph{Classical legal probabilism} (CLP), which originated with Bernoulli \cite{Bernoulli1713Ars-conjectandi}, adds on top of that the view according to which the criminal standard of proof beyond reasonable doubt should be equated with a certain high threshold probability of guilt (although some variants of the view admit that thresholds for different cases might differ).\footnote{Nowadays, legal scholars fond of probabilism usually subscribe to LP but not to CLP.} LP (and generally, the use of probabilistic methods in judiciary contexts) is criticized from various angles (see for example \cite{tribe1970further}, \cite{tribe1971trial}, \cite{Cohen1977The-probable-an}, \cite{Underwood1977The-thumb-on-th}, \cite{Nesson1979Reasonable-doub}, \cite{cohen1981subjective}, \cite{dant1988gambling}, \cite{wells1992naked}, \cite{Stein2005Foundations-of-}, \cite{allen2007problematic}, \cite{ho2008philosophy}, \cite{haack2011legal}). The critics of LP argue that the view is blind to various phenomena that an adequate philosophical account of legal fact-finding should explain. Some of them pertain to procedural issues \cite{Stein2005Foundations-of-}: proceedings are a back-and-forth between opposing parties, cross-examination is crucial, and yet CLP seems to take no notice of this dynamic. Some have to do with reasoning methods which are not only evidence-to-hypothesis, but also hypotheses-to-evidence \cite{wells1992naked,allen2007problematic}, and involve inference to the best explanation \cite{dant1988gambling}. A better account, arguably, is one in which the proceedings are seen as an interplay of evidence and various explanations (often called \emph{narratives}) presented by opposing parties \cite{ho2008philosophy}. Accordingly, the no plausible alternative story (NPAS) theory \cite{allen2010no} holds that the courtroom is a confrontation of competing narrations offered by the defendant and by the prosecutor, and that the narrative to be selected should be the most plausible one. The view is conceptually plausible \cite{di2013statistics} and finds support in psychological evidence \cite{pennington1991cognitive,pennington1992explaining}. The approach is also better at capturing the already listed phenomena that CLP is claimed to be blind to. From the perspective of a formal epistemologist, the key disadvantage of NPAS is that it abandons the rich toolbox of probabilistic methods and takes the key notion of plausibility to be a primitive notion which should be intuitively understood.\footnote{``I have been asked to elaborate on the meaning of “plausibility” in this theoretical structure. The difficulty in doing so is that the relative plausibility theory is a positive rather than a normative theory. For reasons I will elaborate below, “plausibility” can serve as a primitive theoretical term the meaning of which is determined by its context in the explanation of trials.'' \cite[p. 10]{allen2010no}} The goal of this paper is to develop a bayesian approach to NPAS, showing that one can embrace NPAS without giving up on probabilistic methods.
\section{Classical legal probabilism and the gatecrasher paradox} CLP is also susceptible to criticism directed at its specific claim: that there is a guilt probability threshold. One of the most well-known conceptual arguments against the idea is the following. The \emph{paradox of the gatecrasher} \cite{Cohen1977The-probable-an}\footnote{It is analogous to the \emph{prisoners in a yard scenario} \cite{Nesson1979Reasonable-doub}, where a group of prisoners commits a group killing, and it's impossible to identify the single innocent prisoner. Mathematically, the examples are pretty much the same.} has been developed to indicate that mere high probability of guilt is not enough for a conviction. A variant of the paradox goes as follows: \begin{quote} Suppose our guilt threshold is high, say at 0.99. Consider the situation in which 1000 fans enter a football stadium, and 999 of them avoid paying for their tickets. A random spectator is tried for not paying. The probability that the spectator under trial did not pay exceeds 0.99. Yet, intuitively, a spectator cannot be considered guilty on the sole basis of the number of people who did and did not pay. \end{quote} CLP fails to handle the gatecrasher paradox: whatever threshold $<1$ you propose, I can give you a variant of the gatecrasher with sufficiently many people for the probability of guilt of an arbitrary spectator to be above the guilt threshold. NPAS, on the other hand, seems to avoid the paradox: conviction of a random attendee is unjustified, because there is no real plausible story of guilt of that particular person. After all, merely quoting the statistics, on this approach, hardly counts as giving a narrative. The problem is that such considerations are left at an informal level; the requirements on what should count as a narration aren't clear, and so the considerations remain somewhat inconclusive. After all, a stubborn philosopher might still insist: \emph{look, all that is needed is a theory of what happened that is more plausible than the alternative --- my theory is, the suspect is guilty, and, given the evidence, it is more plausible than the alternative, which says that he isn't.} \section{New legal probabilism} \emph{New legal probabilism} (NLP) is an attempt to improve on the underspecificity of NPAS \cite{di2013statistics}. While still informal, the approach is more specific about the conditions that a successful accusing narration is to satisfy for a conviction beyond reasonable doubt to be justified. Di Bello identifies four key requirements. \begin{center} \begin{tabular}{lp{8.5cm}} (Evidential support) & The defendant's guilt should be sufficiently probable on the evidence, and a successful accusing narration should explain the relevant evidence. \\ (Evidential completeness) & The evidence available at trial should be complete as far as a reasonable fact-finder's expectations are concerned. \\ (Resiliency)& The prosecutor's narrative, based on the available evidence, should not be susceptible to revision given reasonably possible future arguments and evidence. \\ (Narrativity) & The narrative offered by the prosecutor should answer all the natural or reasonable questions one may have about what happened, given the content of the prosecutor's narration and the available evidence. \end{tabular}\end{center} How would NLP handle the gatecrasher?
Di Bello discusses an analogous \emph{prisoners} scenario, in which footage indicates that all but one of the prisoners participated in a group killing, but the footage doesn't allow for identification of the innocent prisoner \cite[pp. 219-222]{di2013statistics}. He argues that the conviction is unjustified on two counts. First, the narrative is grossly incomplete: ``What is the initiating event? What psychological responses did it trigger? Who participated in the killing? What were the different participants doing?\dots'' Second, the convicting scenario, Di Bello argues, isn't resilient, for it is quite plausible that more evidence might become available. Unfortunately, it is not quite clear whether the resources that NLP helps itself to when formulating these requirements indeed fall within the realm of bayesian methodology: \begin{quote} The probabilists can enrich their framework by adding probability-based accounts of evidential completeness, resiliency, and narrativity. To my knowledge, no legal probabilist has undertaken the task in any systematic way. \cite[p. 75]{di2013statistics} \end{quote} In what follows we undertake this task. \section{Preliminary technicalities} \subsection{The language and its interpretation} The \emph{object language} is a standard propositional language $\mathcal{L}$ (I assume $\wedge$ and $\neg$ are the primitive connectives; nothing serious hinges on the choice) extended to a language $\mathcal{L}^{+}$ with primitive unary operators $E, N^A_1, \dots, N^A_k$, $N^D_1, \dots, N^D_k$, and the guilt statement constant $G$. The content of the guilt statement is given in terms of a list of conditions in the background language that need to be established for a conviction to be justified. This is modeled by conditioning on the definition of guilt $\mathtt{G}$, which has the form $G\equiv g_1\wedge \cdots \wedge g_l$ for appropriate $g_1, \dots, g_l\in \mathcal{L}$. The intended interpretation of $Ep$ is \emph{$p$ is part of evidence}. The idea is that after all the evidence and all the arguments have been presented in court, the background knowledge is to be enriched by the pieces of evidence presented, thought of as sentences of $\mathcal{L}$: $\mathtt{E} = \{e_1,\dots, e_j\}\subset \mathcal{L}$. However, we are not only to extend our beliefs by $e_1,\dots, e_j$, but also by the corresponding claims about these sentences being part of evidence: $Ee_1, \dots, E e_j$. $N^A_ip$ means \emph{$p$ is part of an accusing narration $\mathtt{N}^A_i$} and $N^D_ip$ means \emph{$p$ is part of a defending narration $\mathtt{N}^D_i$}. Each narration $\mathtt{N}_i$ (in contexts in which it is irrelevant whether a narration is an accusing one or not, I will suppress the superscripts) is taken to be a finite set of sentences $n_{i1}, n_{i2}, \dots, n_{ik_{i}}$ of $\mathcal{L}^+$. $\mathtt{E}$ stands ambiguously for the set of all sentences constituting evidence and for the conjunction thereof. Which reading is meant will always be clear from the context (this convention applies to all finite sets of sentences considered in this paper). $\mathtt{E}^d$ stands for $\mathtt{E}^d=\{E\varphi \vert \varphi \in \mathtt{E}\}$ and $\mathtt{E}^-$ is $\{\neg E\varphi \vert \varphi \not \in \mathtt{E}\}$. The distinction is needed, because there is a difference between knowing that certain sentences are pieces of evidence, and knowing that no other sentence is.
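A toy implementation may help fix ideas; the sketch below (Python, with sentences represented as strings, and with $\mathtt{E}^-$ restricted to a finite stock of candidate sentences, since in the formal setting it ranges over all of $\mathcal{L}$) records the pieces of information just distinguished. All names are illustrative, not part of the formal apparatus.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceRecord:
    admitted: frozenset    # the set E of sentences admitted as evidence
    candidates: frozenset  # finite stand-in for the sentences of L

    def E_d(self):         # E^d: the claims that each e in E is evidence
        return {f"E({p})" for p in self.admitted}

    def E_minus(self):     # E^-: nothing else is evidence
        return {f"~E({p})" for p in self.candidates - self.admitted}

ev = EvidenceRecord(frozenset({"e1", "e2"}), frozenset({"e1", "e2", "e3"}))
# ev.E_d() == {"E(e1)", "E(e2)"} and ev.E_minus() == {"~E(e3)"}
\end{verbatim}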
For any narration $\mathtt{N}_i$, symbols $\mathtt{N}_i$, $\mathtt{N}_i^d$, and $\mathtt{N}_i^-$ are to be understood analogously to $\mathtt{E}$, $\mathtt{E}^d$ and $\mathtt{E}^-$. $\mathtt{N}^d$ is the (positive) \emph{description} of all the narrations, $\bigcup_i\mathtt{N}_i^d$, and $\mathtt{N}^-$= $\bigcup_i \mathtt{N}^-_i$ adds that this description is complete. \subsection{Partial probability and four thresholds} The current framework diverges from standard bayesianism in using partial probability functions rather than full probability distributions to model credences. This is motivated by the observation that fact-finders, on the one hand, are supposed to rely on their background knowledge when assessing the plausibility of a given narration, but, on the other hand, clearly cannot rely on all the biases and assumptions that they have.\footnote{Suspending our conviction about $p$ cannot be easily modeled in the standard bayesian framework, since even the most sensible candidate, $1/2$, doesn't do the job. Just to give a simple example, there is a difference between knowing that a given coin is fair and assigning probability of $1/2$ to heads, and not knowing how fair a coin is at all and assigning probability of $1/2$ to heads for this reason.} A \emph{partial conditional credence function} $\mathtt{P}$ (partially) maps $\mathcal{L^+}\times \mathcal{P}(\mathcal{L^+})$ to $[0,1]$.\footnote{Partial credence functions for conditional probabilities have been introduced in \cite{lepage2003probabilistic} and \cite{lepage2012partial}; my definition differs from that account in a few inessential aspects.} (I often write $\pr{h\vert \mathtt{E}}$ instead of $\pr{\langle h, \mathtt{E}\rangle}$). Let $\downarrow$ and $\uparrow$ stand for \emph{being defined} and \emph{being undefined} respectively. A partial probability distribution has to have an extension to a total conditional probability distribution over $\mathcal{L^+}$ satisfying the standard axioms of conditional probability. Moreover, it has to satisfy the following conditions for any $\Gamma \subseteq \mathcal{L}^+$, and any $\varphi, \psi\in \mathcal{L^+}$: \begin{align} \tag{Part-1} \label{partpr1} \pr{\top\vert \Gamma}=1 & & \pr{\bot\vert \Gamma}=0\\ \tag{Part-2} \label{partpr2} \varphi \in \Gamma \Rightarrow \df{\varphi\vert \Gamma} \\ \tag{Part-3} \label{partpr3} \df{\varphi\vert \Gamma} \Leftrightarrow \df{\neg \varphi\vert \Gamma} & & \df{\varphi \wedge \psi\vert \Gamma} \Leftrightarrow \df{\psi \wedge \varphi\vert \Gamma} \\ \tag{Part-4}\label{partpr4}\pr{\varphi \wedge \psi\vert \Gamma}>0 \Rightarrow \df{\varphi\vert \Gamma}, \df{\psi \vert \Gamma} & & \pr{\varphi\vert \Gamma}=0 \Rightarrow \df{\varphi \wedge \psi\vert \Gamma}\\ \tag{Part-5}\label{partpr5} \ndf{\varphi\vert \Gamma} \Rightarrow \ndf{\varphi \wedge \psi\vert \Gamma} & & \mbox{ unless } \pr{\psi\vert \Gamma}=0 \\ \tag{Part-6}\label{partpr6} \mbox{If } \pr{\varphi\vert \Gamma}>0, \pr{\varphi \wedge \psi\vert \Gamma}=0, & & \mbox{then }\pr{\psi\vert \Gamma}=0 \end{align} \eqref{partpr1} requires that logical truths have probability 1 and logical contradictions always have probability 0. \eqref{partpr2} requires that the probability of a claim given a set of premises that includes it is always defined. \eqref{partpr3} states that the conditional probability of a claim is defined just in case the conditional probability of its negation is, and that the order of conjuncts has no impact on whether the conditional probability of a conjunction is defined.
According to \eqref{partpr4}, the conditional probability of a conjunction can be non-zero only if the conditional probability of both conjuncts is defined. Moreover, if the conditional probability of a conjunct is 0, the conditional probability of the conjunction is defined (and, by the fact that the credence has an extension to a total conditional probability satisfying the standard axioms, we also know that it will be 0 as well). \eqref{partpr5} says that, unless that unusual circumstance occurs (a conjunct with conditional probability 0), the conditional probability of a conjunction is undefined if the conditional probability of at least one conjunct is. Finally, \eqref{partpr6} demands that if a conjunct has a conditional probability $>0$, then the conjunction has conditional probability 0 only if the other conjunct does. Since the fact-finders are supposed not to be biased and aren't informed about the trial yet, we additionally assume that the priors of guilt, of what the evidence is, and of what the narrations are, are undefined: $\ndf{G}$, $\ndf{g_1\wedge \cdots \wedge g_l}$, $\ndf{E\varphi}, \ndf{N_i\varphi}$ for any $\varphi\in \mathcal{L}^+$, and any $1\leq i\leq k$. Four types of stances that a fact-finder might take towards a claim will be considered. First, a fact-finder might consider a claim completely uncontroversial, and accept it without any further argument. Such a stance will be modeled by the credence in a given claim reaching the \emph{uncontroversial acceptability threshold}, $\mathtt{a}$. On the opposite side of the spectrum we have the \emph{negligibility threshold}, $\mathtt{n}=_{df} 1-\mathtt{a}$. Notice that $\mathtt{a}$ and $\mathtt{n}$ can't be respectively 1 and 0, for this would require complete unrevisable certainty. One more type of stance needs to be incorporated into the framework --- that of \emph{strong plausibility}. Usually there are claims that are strongly supported while not being as close to certainty as the uncontroversially acceptable ones. The kind of credence that we would normally find sufficient for acting upon in our uncertain world will be denoted by $\mathtt{s}$. The opposite will be called \emph{rejectability}, $\mathtt{r}=_{df} 1-\mathtt{s}$. Clearly we should require $\mathtt{a}>\mathtt{s}> \mathtt{r}>\mathtt{n}$. \subsection{Information and updates} After all the evidence and all the arguments have been presented in court, the background knowledge obtained consists of the pieces of evidence presented, $\mathtt{E}$, information about what is not part of evidence, $\mathtt{E}^-$, the content of the guilt statement $\mathtt{G}$, and the description of the content of a certain finite assembly of finite theories meant to defend or accuse the defendant, $\mathtt{N}^D_1,\dots, \mathtt{N}^D_k, \mathtt{N}^A_1,\dots, \mathtt{N}^A_m$ (together with what is not part of which narration). When making various assessments in the fact-finding process, one needs to conditionalize on different parts of the available information at different stages, depending on what is being assessed. The variants used are listed in the table below.
\begin{center} \begin{tabular}{|p{3.2cm}|l|l|} \hline \hline \textbf{name} & \textbf{notation} & \textbf{meaning}\\ \hline \footnotesize full & \footnotesize $\mathtt{P}^f(\varphi \vert \Gamma)$ & \footnotesize $\mathtt{P}(\varphi\vert \mathtt{E}, \mathtt{E}^d, \mathtt{E}^-,\mathtt{N}^d, \mathtt{N}^-, \mathtt{G}, \Gamma)$\\ \footnotesize n-full & \footnotesize $\mathtt{P}^{nf}(\varphi \vert \Gamma)$ & \footnotesize $\mathtt{P}(\varphi\vert \mathtt{E}, \mathtt{E}^d,\mathtt{N}^d, \mathtt{N}^-, \mathtt{G}, \Gamma)$\\ \footnotesize informed & \footnotesize $\mathtt{P}^i(\varphi \vert \Gamma)$ & \footnotesize $\mathtt{P}(\varphi\vert \mathtt{E}, \mathtt{E}^d, \mathtt{N}^d, \mathtt{G}, \Gamma)$\\ \footnotesize evidential & \footnotesize $\mathtt{P}^e(\varphi \vert \Gamma)$ & \footnotesize $\mathtt{P}(\varphi\vert \mathtt{E}, \mathtt{E}^d, \mathtt{E}^-, \mathtt{G}, \Gamma)$\\ \footnotesize argued & \footnotesize $\mathtt{P}^a(\varphi \vert \Gamma)$ & \footnotesize $\mathtt{P}(\varphi\vert \mathtt{N}^d, \mathtt{G}, \Gamma)$\\ \footnotesize play-along & \footnotesize $\mathtt{P}^{N_j}(\varphi \vert \Gamma)$ & \footnotesize $\mathtt{P}(\varphi\vert \mathtt{N}_j, \mathtt{N}^d, \mathtt{N}^-, \mathtt{G}, \Gamma)$\\ \footnotesize n-extended play-along & \footnotesize $\mathtt{P}^{nN_j}(\varphi \vert \Gamma)$ & \footnotesize $\mathtt{P}(\varphi\vert \mathtt{N}_j, \mathtt{E}, \mathtt{E}^d, \mathtt{N}^d, \mathtt{N}^-, \mathtt{G}, \Gamma)$\\ \footnotesize e-extended play-along & \footnotesize $\mathtt{P}^{eN_j}(\varphi \vert \Gamma)$ & \footnotesize $\mathtt{P}(\varphi\vert \mathtt{N}_j, \mathtt{E}, \mathtt{E}^d, \mathtt{E}^-, \mathtt{N}^d, \mathtt{G}, \Gamma)$\\ \footnotesize f-extended play-along & \footnotesize $\mathtt{P}^{fN_j}(\varphi \vert \Gamma)$ & \footnotesize $\mathtt{P}(\varphi\vert \mathtt{N}_j, \mathtt{E}, \mathtt{E}^d, \mathtt{E}^-, \mathtt{N}^d, \mathtt{N}^-, \mathtt{G}, \Gamma)$\\ \hline \hline \end{tabular} \end{center} \section{Defining conditions on a set of narrations} Think of $\mathtt{N}$ as the set of accusing and defending narrations seriously considered in the fact-finding process. In this section I list some basic conditions on a set of sets of sentences to count as a set of narrations. In the next section, I explicate the requirements used in the evaluation of narrations. \begin{align} \tag{Exclusion} \label{Exclusion} \mathtt{P}^f(\neg( \mathtt{N}_i \wedge \mathtt{N}_j))\geq \mathtt{a}, \mbox{ for } i\neq j\\ \tag{Decision} \label{Decision} \mathtt{P}^{f\mathtt{N}^A_i}(G)\geq \mathtt{a} \wedge \mathtt{P}^{f\mathtt{N}^D_k}(\neg G)\geq \mathtt{a} \\ \tag{Initial plausibility} \label{Initial plausibility} \mathtt{P}^e(\mathtt{N}_k)\geq \mathtt{n} \\ \tag{Exhaustion} \label{Exhaustion} \mathtt{P}^f( \mathtt{N}_1\vee \cdots \vee \mathtt{N_k}) \geq \mathtt{s} \end{align} \eqref{Exclusion} requires that narrations under consideration should pairwise exclude each other given what we know about the case. According to \eqref{Decision}, a defense narration should clearly state that, given all that is known, the accused is not guilty, and an accusing narration should clearly state that, given all that is known, they are. \eqref{Initial plausibility} says that we shouldn't consider narrations that are uncontroversially excluded by sensible background knowledge or by evidence. \eqref{Exhaustion} requires that it should be strongly plausible that at least one of the narrations holds, given all that we know about the case.
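Given numerical credences, these four conditions are mechanical to check. The sketch below is one way to operationalise them in Python; the credence table, its labelling scheme, and the threshold values are assumptions made purely for illustration.
\begin{verbatim}
A, S = 0.99, 0.9
R, N = 1 - S, 1 - A                     # r = 1 - s, n = 1 - a

def meets(x, t):                        # None encodes "undefined"
    return x is not None and x >= t

# cred maps (measure, query) pairs to values in [0, 1] or None, e.g.
# cred[("f", "~(N1 & N2)")] for the full measure of an exclusion claim.
def admissible(cred, ids, accusing, defending):
    exclusion = all(meets(cred[("f", f"~(N{i} & N{j})")], A)
                    for i in ids for j in ids if i < j)
    decision = (all(meets(cred[(f"fN{i}", "G")], A) for i in accusing)
            and all(meets(cred[(f"fN{k}", "~G")], A) for k in defending))
    plausible = all(meets(cred[("e", f"N{i}")], N) for i in ids)
    exhaustive = meets(cred[("f", "|".join(f"N{i}" for i in ids))], S)
    return exclusion and decision and plausible and exhaustive
\end{verbatim}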
\section{Evaluation criteria} \paragraph{Explaining evidence.} Now we're ready to look at the explication of the criteria involved in the evaluation of competing narratives. Let's start with the requirement that they should explain the evidence. After all the narrations have been presented and deployed, an accusing narration $\mathtt{N}^A_i$ should ``make sense'' of the evidence in the following way. For any item of evidence presented, $e$, if, according to $\mathtt{N}^A_i$, it is not excluded as evidence, it should be strongly plausible given $\mathtt{N}^A_i$. \begin{align} \tag{Explaining evidence A} \label{Explaining evidence A} \mbox{For any } e\in \mathtt{E}, [ \neg(\mathtt{P}^{\mathtt{N}^A_i}(\neg E e) \geq \mathtt{s}) \Rightarrow \mathtt{P}^{\mathtt{N}^A_i}(e) \geq \mathtt{s} ] \end{align} The sense in which a defending narration is supposed to explain evidence is somewhat different. After all, if the defense story is rather minimal and mostly consists in rebutting the accusations, it isn't reasonable to expect the defense to explain all pieces of evidence, as long as they aren't really used to support the opposing accusing narration. Rather, the defense should argue that the possibility of the evidence being as it is while the defense's narration is true hasn't been rejected. Thus, we put the condition on a defending narration $\mathtt{N}^D_k$ as follows: \begin{align} \tag{Explaining evidence D}\label{Explaining evidence D} \mbox{For any } e\in \mathtt{E}, \mbox{ if there is } \mathtt{N}^A_i \\ \nonumber \mbox{such that } \mathtt{P}(\mathtt{N}^A_i\vert e)> \mathtt{P}(\mathtt{N}^A_i), \\ \nonumber \mbox{then } \mathtt{P}^{\mathtt{N}^D_k}(e)\geq \mathtt{r}. \end{align} \paragraph{Missing evidence.} The intuition here is that sometimes, given the narration and whatever evidence we already have, certain evidence should be available, but it isn't. For instance, in a drunk driving case the fact-finders would naturally expect a breathalyzer result, and in a murder case evidence as to how the victim was killed is needed. \begin{align} \tag{Missing evidence} \label{Missing evidence} \mathbf{ME}(\mathtt{N}_i) \Leftrightarrow & \mbox{ for some } \varphi_1, \dots, \varphi_u \not \in \mathtt{E}: \\ \nonumber & [\mathtt{P}^{nN_i}(E(\varphi_1)\vee \cdots \vee E( \varphi_u)) \geq \mathtt{s} ] \end{align} The disjunction above is there to ensure generality: it might be the case that some evidence from among a group of possible pieces of evidence would be needed, without any particular piece of evidence being expected. \paragraph{Gaps.} Sometimes a narration should be more specific, given what it says and what we already know. For instance, an accusing narration might be required to specify how the victim was attacked, or a defending narration should specify where the defendant was at the time of the crime.
Accordingly, we say that $\mathtt{N}_i$ is \textbf{gappy} ($\mathbf{G}(\mathtt{N}_i)$) just in case there are claims that the narration should choose from and yet doesn't: \begin{align} \tag{Gap} \label{Gap} \mathbf{G}(\mathtt{N}_i) \Leftrightarrow & \mbox{ for some } \varphi_1, \dots, \varphi_u \not \in \mathtt{N}_i\\ \nonumber & \mathtt{P}^{f\mathtt{N}_i}(\varphi_1 \vee\cdots \vee \varphi_u)\geq \mathtt{s} \wedge\\ \nonumber & \mathtt{P}^{eN_i}(N_i(\varphi_1)\vee \cdots \vee N_i(\varphi_u))\geq \mathtt{s} \end{align} \paragraph{Dominating accusing narration.} An accusing narration $\mathtt{N}^A_i$ \emph{dominates} the set of all accusing narrations $\mathbb{N}^A$ just in case it doesn't miss any evidence, it doesn't contain any gap, in light of all available information and evidence it is at least as likely as any other accusing narration, and it is strongly plausible, given all available information: \begin{align} \tag{Domination} \label{Domination} \mathbf{D}(\mathtt{N}^A_i) \Leftrightarrow & \neg \mathbf{ME}(\mathtt{N}^A_i) \wedge \neg \mathbf{G}(\mathtt{N}^A_i) \wedge \\ \nonumber & \mathtt{P}^f( \mathtt{N}^A_i)\geq \mathtt{P}^f( \mathtt{N}^A_j ) \mbox{ for all } j \neq i \wedge\\ \nonumber & \mathtt{P}^f(\mathtt{N}^A_i) \geq \mathtt{s} \end{align} \paragraph{Resiliency.} A dominating narration $\mathtt{N}^A_i$ is \emph{resilient} ($\mathbf{R}(\mathtt{N}^A_i)$) just in case there is no non-negligible potential evidence that might undermine it, at least in light of all we know (minus the negative description of the evidence, to avoid triviality) --- that is, no $\varphi$ with $\mathtt{P}^{nf}(E\varphi)\geq \mathtt{n}$ --- such that if $\mathtt{E}$ were modified to $\mathtt{E}\cup\{\varphi\}$, $\mathtt{N}^A_i$ would no longer dominate. \paragraph{Conviction beyond reasonable doubt.} A defense narration $\mathtt{N}^D_k$ \emph{raises reasonable doubt} ($\mathbf{RD}(\mathtt{N}^D_k)$) if it has no gaps, and hasn't been rejected given all that we know: \begin{align} \tag{Reasonable doubt} \label{Reasonable doubt} \mathbf{RD}(\mathtt{N}^D_k) \Leftrightarrow & \neg \mathbf{G}(\mathtt{N}^D_k) \wedge \mathtt{P}^f(\mathtt{N}^D_k) \geq \mathtt{r} \end{align} Accordingly, we say that a conviction is \emph{beyond reasonable doubt} if it is justified by a resilient dominating narration and no defense narration raises reasonable doubt. \section{Looking at the gatecrasher paradox} Once we've formulated the framework, it will be instructive to use it to look at the gatecrasher paradox, and to observe how the requirements and distinctions introduced help us obtain better insight. Let's creatively call the suspects 1, 2, 3, \dots, 1000. Consider the situation in which the accusing narration is simply \emph{$1$ gatecrashed}, $\mathtt{g}_1$, and the defending narration is simply \emph{1 didn't gatecrash}, $\neg \mathtt{g}_1$. Suppose we have $\prcon{g_1}{e}{}=999/1000$ and $\prcon{\neg g_1}{e}{}=1/1000$. \begin{center}\begin{tabular}{|ll|} \hline \footnotesize \textbf{Variant 1} & \footnotesize $\mathtt{N}^A=\mathtt{g}_1$, $\mathtt{N}^D=\neg \mathtt{g}_1$.
\\ \hline \end{tabular}\end{center} One might try to counter this formulation with the following strategy: \begin{center}\begin{tabular}{|lp{6cm}|} \hline \footnotesize \textbf{Strategy 1: Extreme thresholds}& \footnotesize Take $\mathtt{P}^f(\cdot)$ to be $\mathtt{P}(\cdot \vert \mathtt{e})$, and claim $\mathtt{r}<1/1000$ and $\mathtt{s}>999/1000$.\\ \hline \end{tabular}\end{center} The strategy isn't too successful, though --- it's \emph{ad hoc} and it isn't immune to other versions of the paradox, where the number of people is tweaked to push the guilt probability above whatever threshold was picked in Strategy 1. Perhaps, a better approach would be this: \begin{center} \begin{tabular}{|lp{6cm}|}\hline \footnotesize \textbf{Strategy 2: Full credence vs. statistics}& \footnotesize Take $\mathtt{P}^f(\cdot)$ to \textbf{not} be $\mathtt{P}(\cdot \vert \mathtt{e})$, and claim $\mathtt{P}^f(g_1)$ is insufficiently high. \\ \hline \end{tabular} \end{center} Perhaps there is something to saying that the posterior credences don't have to match the statistical probabilities (as Kaye \cite{kaye1979paradox} suggests). But merely saying so doesn't provide a principled explanation: the fact that we're not inclined to accept a claim that has high statistical probability is the \emph{explanandum}, not the \emph{explanans}. \begin{center} \begin{tabular}{|lp{7cm}|}\hline \footnotesize \textbf{Strategy 3: realistic gaps and non-resiliency} & \footnotesize There are multiple natural questions that the accusing narration fails to answer; it also fails to ensure that no future evidence is likely which might overturn the decision.\\ \hline \end{tabular} \end{center} This is, pretty much, the strategy pursued in \cite{di2013statistics}. While it is a perfectly valid strategy if the question is how a case like the gatecrasher would be handled in reality, this is not the end of the story. The gatecrasher, or at least a variant thereof, can be viewed rather as an abstract thought experiment formulated to make a conceptual point. And when a philosopher formulates an abstract thought experiment, they're free to set it up any way they want, without too much attention paid to the level of realism, as long as appropriate conceptual constraints are satisfied. In particular, one might simply make it part of the description of the situation that no further evidence can be obtained despite everyone's best attempts, and no realistic considerations about further details can play a role. No witnesses can come and testify as to the character of 1, because he's been living under a rock and knows nobody. No question about the exact location is natural, because the stadium is somewhat unusual in having no gates, and in fact the 999 people who didn't pay for tickets jumped the fence, in an evenly distributed manner. No surveillance was possible because an accident in the local atomic plant fried all electronics in the area, etc. Call such a variant of the paradox the \emph{abstract gatecrasher}. The question now is: putting aside the issues with realism and supposing the abstract gatecrasher is immune to them, can we think of more purely epistemic or deontic reasons why the accusing narration in the abstract gatecrasher is not sufficient for a conviction beyond reasonable doubt? One issue that may come to mind even from a more abstract perspective is that it doesn't seem that the accusing narration $\mathtt{N}_1=\mathtt{g}_1$ explains the evidence.
\begin{center} \begin{tabular}{|lp{7cm}|}\hline \footnotesize \textbf{Strategy 5: Unexplained evidence} &\footnotesize The narration relies on $\mathtt{e}$ without entailing that it shouldn't be evidence, and yet $\mathtt{P}^{N_1}(\mathtt{e} ) \not \geq \mathtt{s}$. \\ \hline \end{tabular} \end{center} The mere fact that 1 decided to gatecrash, while suggesting that he might've done it with a bunch of friends, certainly doesn't make the claim that so did the other 998 people strongly plausible. This, however, can be fairly easily fixed by modifying the accusing narration to \emph{1 gatecrashed, and so did 998 other people}. Formally, this comes to using \begin{center}\begin{tabular}{|ll|} \hline \footnotesize \textbf{Variant 2} & $\mathtt{N}'_1=\{\mathtt{g}_1, \mathtt{e}\}$ \\ \hline\end{tabular}\end{center} With this modification, quite clearly, $\mathtt{P}^{N'_1}(\mathtt{e}) \geq \mathtt{s}$, and so Strategy 5 doesn't beat the paradox with this narration as the accusing narration. Can we do any better? \section{Epistemological commitment issues} Imagine the sides have the following exchange: \begin{center}\begin{tabular}{lp{10cm}} Defense: & So you claim that the suspect is responsible for the damage?\\ Prosecution: & Yes.\\ D: & And you agree that the claim that the sidewalk was unusually slippery that day due to an oil truck failure is at least as likely as that the suspect is responsible? \\ P: & Of course. \\ D: & And, given that you're accusing my client, you think you're in position to claim you know he is responsible, correct?\\ P: & Yes, that's correct.\\ D: & Are you not, then, in position to know that the sidewalk was unusually slippery that day due to an oil truck failure?\\ P: & $\dots$ \end{tabular} \end{center} I hope the reader shares the intuition that responding ``no'' to the above question would be a sign of a serious cognitive failure. In general, it seems that we'd normally expect the prosecution to accept any claim relevant to the case that, given the evidence and the prosecution's narration, is at least as likely as the guilt statement that they're putting forward. This motivates the following requirement: \begin{center} \begin{tabular}{lp{9cm}} (Commitment) & For any $\varphi$ relevant to the case, if $\prtext{\varphi}{f\mathtt{N}^A_i}\geq \prtext{G}{f\mathtt{N}^A_i} $, then $\mathtt{P}^{eN_i}(N_i(\varphi))\geq \mathtt{s}$. \end{tabular} \end{center} If the reader prefers to do so, we might leave the notion of relevance at the intuitive level. We don't have to, though --- it could be explicated along the following lines. First, a set of sentences is relevant for the case if it is consistent with the background knowledge and there is a narration such that its posterior probability given all background knowledge together with that set is different from its posterior probability given all background knowledge only. A set of sentences is a minimal relevant set if no proper subset thereof is a relevant set. A sentence is relevant if it or its negation is a member of a minimal relevant set. Now, how does (Commitment) apply to the gatecrasher? Take any $\mathtt{g}_i$ where $i\neq 1$.
Consider an argument analogous to the previous one: \begin{center}\begin{tabular}{lp{9cm}} Defense: & So you claim that 1 is guilty?\\ Prosecution: & Yes.\\ D: & And you agree 2 is at least as likely to be guilty as 1? \\ P: & Of course. \\ D: & So you're in position to claim you know 1 is guilty, correct?\\ P: & Yes, that's correct.\\ D: & Are you not, then, in position to know that 2 is guilty as well?\\ P: & $\dots$ \end{tabular} \end{center} Again, it seems that a negative answer to the above question would be irrational. (If you worry about $\mathtt{g}_i$ being relevant, wait for the development of the argument.) Observe now that in the case of the gatecrasher, the accusing narration already discussed, $\mathtt{N}'_1$, is in the somewhat unusual lottery-paradox-like situation that, given all that is known (and the narration itself), any other suspect is at least as likely to be guilty as suspect 1. Now we can run the argument to the effect that $\mathtt{N}'_1$ fails to satisfy the conditions for a conviction beyond reasonable doubt. Given the relevance of all $\mathtt{g}_i$ ($i\neq 1$), (Commitment) entails that for any $i\neq 1$ we have \begin{align} \tag{Step 1} \label{Step 1} \mathtt{P}^{e{\mathtt{N}'}_1}({\mathtt{N}'}_1(\mathtt{g}_i))\geq \mathtt{s}. \end{align} Since $\mathtt{N}'_1$ is an accusing narration delivering $\mathtt{g}_1$, \eqref{Decision} gives $\mathtt{P}^{f\mathtt{N'}_1}(\mathtt{g}_1)\geq \mathtt{a}$; since any other suspect is at least as likely to be guilty as suspect 1, and $\mathtt{a}>\mathtt{s}$, we have \begin{align} \tag{Step 2} \label{Step 2} \mathtt{P}^{f\mathtt{N'}_1}(\mathtt{g}_i)\geq \mathtt{a}> \mathtt{s}. \end{align} The very description of $\mathtt{N}'_1$ entails: \begin{align} \tag{Step 3} \label{Step 3} \mathtt{g}_i \not \in \mathtt{N}'_1. \end{align} Steps (1-3) taken together, however, constitute the defining elements of \eqref{Gap} (for the slightly degenerate case where $\varphi_1 \vee \cdots \vee \varphi_u$ simply is $\mathtt{g}_i$). This means that $\mathtt{N}'_1$ is gappy, $\mathbf{G}(\mathtt{N}'_1)$, and so it fails to justify a conviction beyond reasonable doubt. What happens, on the other hand, if $\mathtt{N}'_1$ is replaced with $\mathtt{N}^+_1$: an accusing narration which results from taking the smallest narration containing $\mathtt{N}'_1$ and closed under (Commitment) with respect to all $\mathtt{g}_i$, $i\neq 1$, so that: \begin{align} \tag{Bite the bullet} \label{Bite the bullet} \mathtt{N}^+_1 = \{\mathtt{g}_1, \mathtt{e}\}\cup \{ \mathtt{g}_k\vert k\neq 1\}? \end{align} Then the resulting narration simply becomes highly implausible: $\mathtt{e}$, and therefore also $\mathtt{N}^+_1$, entails that exactly one person is innocent, but also, for each particular suspect $u$, $\mathtt{N}^+_1$ insists on $u$ being guilty. Such a narration doesn't even satisfy \eqref{Initial plausibility}, not to mention failing to be a dominating one (for which strong plausibility is required). By the way, this is why each $\mathtt{g}_i$ is relevant: together they change the outcome. This is also the reason why they're relevant in the more technical sense: the set of all $\mathtt{g}_i$s contradicts the evidence, while no proper subset thereof does. So, to wrap up the discussion: in a sense there is no single simple diagnosis of the gatecrasher, for the following reasons. First of all, there is no single gatecrasher, but two main variants thereof: a realistic one and a very abstract one.\footnote{To be honest, there are multiple variants, depending on how unrealistic we're required to be.
But let's stay content with two extreme cases.} Second, some supposed solutions don't work against any of them. Third, sometimes there is more than one reason why a variant fails. Fourth, different variants can be seen as failing for different reasons (although the reasons that apply to the abstract one apply to the realistic one as well; it's just that deploying them against the realistic gatecrasher is overkill). \bibliographystyle{eptcs}
\section{Introduction} Properties of rare isotopes that inhabit remote regions of the nuclear landscape at and beyond the particle driplines are at the forefront of nuclear structure and reaction research \cite{Transactions,RISAC,Dob07,For13,Bal14,NSACLRP2015}. The next generation of rare isotope beam facilities will provide unique data on dripline systems that will test theory, highlight shortcomings, and identify areas for improvement. The challenge for nuclear theory is to develop methodologies to reliably calculate and understand the properties and dynamics of new physical systems whose character is shaped by large neutron-to-proton asymmetries and low-lying reaction thresholds. Here, dripline systems are of particular interest as they can exhibit exotic radioactive decay modes such as two-nucleon emission~\cite{Pfutzner12,Pfutzner13,Thoennessen04,Blank08,Grigorenko11,Olsen13,Kohley2013}. Theories of such nuclei must take into account their open quantum nature. Theoretically, a powerful suite of $A$-body approaches based on inter-nucleon interactions provides a quantitative description of light and medium-mass nuclei and their reactions \cite{Elhatisari15,Navr16,Kumar17}. To unify nuclear bound states with resonances and the scattering continuum within one consistent framework, advanced continuum shell-model approaches have been introduced \cite{Michel09,Hagen12,Papadimitriou13}. Microscopic models of exotic nuclear states have been supplemented by a suite of powerful, albeit more phenomenological, models based on effective degrees of freedom such as cluster structures. While such models provide a ``lower resolution'' picture of the nucleus, they can be extremely useful for interpreting experimental data, guiding future measurements, and informing more microscopic approaches. The objective of this work is to develop a new three-body method to describe both reaction and structure aspects of two-particle emission. A prototype system of interest is the two-neutron-unbound ground state of $^{26}$O \cite{Lunderberg2012,Kohley2013,Kondo2016}. According to theory, $^{26}$O exhibits dineutron-type correlations \cite{Grigorenko2015,Kondo2016,Hagino2016,Hagino16a,Fossez2017}. To describe such a system, a nuclear model should be based on a fine-tuned interaction capable of reproducing particle-emission thresholds, a sound many-body method, and the capability to treat bound and unbound states simultaneously. If one considers bound three-body systems, few-body models are very useful~\cite{Braaten06}, especially models based on the Lagrange-mesh technique~\cite{Baye1994} or the cluster-orbital shell model (COSM)~\cite{Suzuki1988}. However, for the description of resonances, the outgoing wave function in the asymptotic region needs to be treated very carefully. For example, one can divide the coordinate space into internal and asymptotic regions, where R-matrix theory~\cite{Descouvemont2006,Lovell17}, the microscopic cluster model \cite{Damman09}, and the diagonalization of the Coulomb interaction~\cite{Grigorenko2009} can be used. Other useful techniques include the Green's function method~\cite{Hagino2016} and complex scaling \cite{Aoyama06,Kruppa2014}. Our strategy is to construct a precise three-body framework for weakly bound and unbound systems, similar in spirit to the Gamow shell model (GSM)~\cite{Michel2002}. An attractive feature of the GSM is that -- by employing the Berggren ensemble \cite{Berggren1968} -- it treats bound, scattering, and outgoing Gamow states on the same footing.
Consequently, energies and decay widths are obtained simultaneously as the real and imaginary parts of the complex eigenenergies of the shell model Hamiltonian \cite{Michel09}. In this study, we develop a three-body Gamow coupled-channel (GCC) approach in Jacobi coordinates with the Berggren basis. Since the Jacobi coordinates allow for the exact treatment of nuclear wave functions in both the nuclear and asymptotic regions, and as the Berggren basis explicitly takes into account continuum effects, a comprehensive description of weakly bound three-body systems can be achieved. As the GSM is based on COSM coordinates, a recoil term appears due to the center-of-mass motion. Hence, it is of interest to compare Jacobi- and COSM-based frameworks for the description of weakly bound and resonant nuclear states. This article is organized as follows. Section~\ref{model} contains the description of models and approximations. In particular, it lays out the new GCC approach and the GSM model used for benchmarking, and defines the configuration spaces used. The results for $A=6$ systems and $^{26}$O are contained in Sec.~\ref{results}. Finally, the summary and outlook are given in Sec.~\ref{summary}. \section{The Model}\label{model} \subsection{Gamow Coupled Channel approach} In the three-body GCC model, the nucleus is described in terms of a core and two valence nucleons (or clusters). The GCC Hamiltonian can be written as: \begin{equation} \hat{H} = \sum^3_{i=1}\frac{ \hat{\vec{p}}^2_i}{2 m_i} +\sum^3_{i>j=1} V_{ij}(\vec{r}_{ij})-\hat{ T}_{\rm c.m.}, \end{equation} where $V_{ij}$ is the interaction between clusters $i$ and $j$, including central, spin-orbit and Coulomb terms, and $\hat{T}_{\rm c.m.}$ stands for the kinetic energy of the center of mass. An unwanted feature of three-body models is the appearance of Pauli-forbidden states arising from the lack of antisymmetrization between core and valence particles. In order to eliminate the Pauli-forbidden states, we implemented the orthogonal projection method \cite{Saito69,Kuk78,Descouvemont2003} by adding to the GCC Hamiltonian the Pauli operator \begin{equation} \label{Pauli} \hat{Q}= \Lambda \sum_c |\varphi ^{j_c m_c} \rangle \langle \varphi ^{j_c m_c}|, \end{equation} where $\Lambda$ is a constant and $| \varphi^{j_c m_c} \rangle$ is a two-body state involving forbidden single-particle (s.p.) states of core nucleons. At large values of $\Lambda$, Pauli-forbidden states appear at high energies, so that they are effectively suppressed.
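As a minimal numerical illustration of this projection technique (a toy, not the actual GCC implementation), the sketch below shows how adding $\Lambda\hat{Q}$ expels a forbidden component from the low-energy spectrum; the $3\times3$ Hamiltonian, the forbidden state, and all numbers are invented for demonstration.

\begin{verbatim}
# Toy orthogonal projection: H + Lambda * |phi><phi| pushes the
# Pauli-forbidden component up to ~Lambda.  All numbers invented.
import numpy as np

H = np.array([[-5.0, 0.5, 0.0],
              [ 0.5, 1.0, 0.3],
              [ 0.0, 0.3, 3.0]])      # toy Hamiltonian (MeV)
phi = np.array([1.0, 0.0, 0.0])       # toy Pauli-forbidden state
Lam = 1.0e4                           # large projection constant

H_proj = H + Lam * np.outer(phi, phi)
print(np.linalg.eigvalsh(H))          # spectrum with the intruder state
print(np.linalg.eigvalsh(H_proj))     # intruder expelled to high energy
\end{verbatim}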
\begin{figure}[htb] \includegraphics[width=0.3\textwidth]{jacobi.pdf} \caption{\label{Jacobi} Jacobi coordinates in a three-body system.} \end{figure} In order to describe three-body asymptotics and to eliminate the spurious center-of-mass motion exactly, we express the GCC model in the relative (Jacobi) coordinates \cite{Nav00,Descouvemont2003,Navr16,Lovell17}: \begin{equation} \begin{aligned} \vec{x} &= \sqrt{\mu _{ij}} (\vec{r}_i - \vec{r}_j),\\ \vec{y} &= \sqrt{\mu _{(ij)k}} \left(\vec{r}_k - \frac{A_i\vec{r}_i + A_j\vec{r}_j}{A_i + A_j}\right),\\ \end{aligned} \end{equation} where $\vec{r}_i$ is the position vector of the $i$-th cluster, $A_i$ is the $i$-th cluster mass number, and $\mu _{ij}$ and $\mu _{(ij)k}$ are the reduced masses associated with $\vec{x}$ and $\vec{y}$, respectively: \begin{equation} \begin{aligned} \mu _{ij} &= \frac{A_iA_j}{A_i+A_j},\\ \mu _{(ij)k} &= \frac{(A_i+A_j)A_k}{A_i+A_j+A_k}.\\ \end{aligned} \end{equation} As one can see in Fig.~\ref{Jacobi}, Jacobi coordinates can be expressed in T- and Y-type variants, each associated with a complete basis set. In practice, it is convenient to calculate the matrix elements of the two-body interaction individually in T- and Y-type coordinates, and then transform them to a single Jacobi set. To describe the transformation between different types of Jacobi coordinates, it is convenient to introduce the basis of hyperspherical harmonics (HH) \cite{Ripelle83,Kievsky08}. The hyperspherical coordinates are constructed from the five-dimensional hyperangular coordinates $\Omega_{5}$ and the hyperradial coordinate $\rho=\sqrt{x^2 + y^2}$. The transformation between different sets of Jacobi coordinates is given by the Raynal-Revai coefficients~\cite{Raynal1970}. Expressed in HH, the total wave function can be written as \cite{Descouvemont2003}: \begin{equation} \Psi ^{JM\pi} (\rho, \Omega_5) = \rho ^{-5/2} \sum_{\gamma K} \psi ^{J\pi}_{\gamma K}(\rho) \mathcal {Y} ^{JM}_{\gamma K} (\Omega_5), \end{equation} where $K$ is the hyperspherical quantum number and $\gamma = \{s_1,s_2,s_3,S_{12},S,\ell_x,\ell_y,L\}$ is a set of quantum numbers other than $K$. The quantum numbers $s$ and $\ell$ stand for spin and orbital angular momentum, respectively, $\psi ^{J\pi}_{\gamma K}(\rho)$ is the hyperradial wave function, and $\mathcal {Y} ^{JM}_{\gamma K} (\Omega_5)$ is the hyperspherical harmonic. The resulting Schr\"{o}dinger equation for the hyperradial wave functions can be written as a set of coupled-channel equations:\begin{widetext} \begin{equation} \label{CC} \begin{aligned} \left[ -\frac{\hbar^2}{2m}\left(\frac{d^2}{d\rho^2} - \frac{(K+3/2)(K+5/2)}{\rho^2} \right)-\tilde{E} \right] \psi ^{J\pi}_{\gamma K}(\rho)& \\ + \sum_{K'\gamma'} V^{J\pi}_{K'\gamma', K\gamma}(\rho) \psi ^{J\pi}_{\gamma'K'}(\rho) &+\sum_{K'\gamma'}\int_0^\infty W_{K'\gamma', K\gamma}(\rho,\rho')\psi ^{J\pi}_{\gamma'K'}(\rho')d\rho'=0, \end{aligned} \end{equation} \end{widetext} where \begin{equation} V^{J\pi}_{K'\gamma', K\gamma}(\rho) = \langle\mathcal {Y} ^{JM}_{\gamma' K'}| \sum^3_{i>j=1} V_{ij}(\vec{r}_{ij})| \mathcal {Y} ^{JM}_{\gamma K} \rangle \end{equation} and \begin{equation} W_{K'\gamma', K\gamma}(\rho,\rho') = \langle\mathcal {Y} ^{JM}_{\gamma' K'} | \hat{Q} |\mathcal {Y} ^{JM}_{\gamma K} \rangle \end{equation} is the non-local potential generated by the Pauli projection operator (\ref{Pauli}).
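For concreteness, the short sketch below evaluates the T-type Jacobi coordinates and the hyperradius defined above for a core-plus-two-neutron configuration; the positions are arbitrary demonstration values, and the mass numbers correspond to the $^{24}$O$+n+n$ picture used later in the text.

\begin{verbatim}
# T-type Jacobi coordinates and hyperradius rho = sqrt(x^2 + y^2);
# positions (fm) are demonstration values only.
import numpy as np

def jacobi_T(r_i, r_j, r_k, A_i, A_j, A_k):
    mu_ij  = A_i * A_j / (A_i + A_j)
    mu_ijk = (A_i + A_j) * A_k / (A_i + A_j + A_k)
    x = np.sqrt(mu_ij) * (r_i - r_j)
    y = np.sqrt(mu_ijk) * (r_k - (A_i*r_i + A_j*r_j) / (A_i + A_j))
    return x, y

r1 = np.array([1.0, 0.0, 0.0])   # valence neutron 1
r2 = np.array([0.0, 1.0, 0.0])   # valence neutron 2
rc = np.array([0.0, 0.0, 0.0])   # core (A = 24)
x, y = jacobi_T(r1, r2, rc, 1.0, 1.0, 24.0)
rho = np.sqrt(x @ x + y @ y)     # hyperradial coordinate
print(x, y, rho)
\end{verbatim}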
In order to treat the positive-energy continuum space precisely, we use the Berggren expansion technique for the hyperradial wave function: \begin{equation} \label{Berggren_exp_GCC} \psi ^{J\pi}_{\gamma K}(\rho) = \sum_{\rm n} C^{J\pi M}_{\gamma {\rm n} K} \mathcal{B} ^{J\pi}_{\gamma {\rm n}}(\rho), \end{equation} where $\mathcal{B} ^{J\pi}_{\gamma {\rm n}}(\rho)$ represents a s.p. state belonging to the Berggren ensemble~\cite{Berggren1968}. The Berggren ensemble defines a basis in the complex momentum plane, which includes bound, decaying, and scattering states. The completeness relation for the Berggren ensemble can be written as: \begin{equation} \begin{aligned} \sum_{{\rm n}\in b,d} \mathcal{B}_{\rm n}(k_{\rm n},\rho)\mathcal{B}_{\rm n}(k_{\rm n},\rho^\prime) + & \int_{L^+}\mathcal{B}(k,\rho)\mathcal{B}(k,\rho^\prime)dk \\ & = \delta(\rho-\rho^\prime), \end{aligned} \end{equation} where $b$ denotes the bound states, $d$ denotes the decaying resonant (or Gamow) states lying in the fourth quadrant of the complex-$k$ plane between the real-$k$ momentum axis and the contour $L^+$, and $L^+$ represents the complex-$k$ scattering continuum. For numerical purposes, ${L^+}$ has to be discretized, e.g., by adopting the Gauss-Legendre quadrature~\cite{Hagen2006a}. In principle, the contour ${L^+}$ can be chosen arbitrarily as long as it encompasses the resonances of interest. If the contour $L^+$ is chosen to lie along the real $k$-axis, the Berggren completeness relation reduces to the Newton completeness relation \cite{Newton82} involving bound and real-momentum scattering states. To calculate radial matrix elements with the Berggren basis, we employ exterior complex scaling~\cite{Gyarmati1971}, where integrals are calculated along a complex radial path: \begin{align} \langle\mathcal{B}_{\rm n}|&V(\rho)|\mathcal{B}_{\rm m}\rangle =\int_0^R\mathcal{B}_{\rm n}(\rho)V(\rho)\mathcal{B}_{\rm m}(\rho)d\rho \\ &+ \int_0^{+\infty}\mathcal{B}_{\rm n}(R+\rho e^{i\theta})V(R+\rho e^{i\theta})\mathcal{B}_{\rm m}(R+\rho e^{i\theta})d\rho.\nonumber \end{align} For potentials that decrease as $O(1/\rho^2)$ (centrifugal potential) or faster (nuclear potential), $R$ should be sufficiently large to bypass all singularities, and the scaling angle $\theta$ is chosen so that the integral converges; see Ref.~\cite{Michel2003} for details. As the Coulomb potential is not square-integrable, its matrix elements diverge when $k_n = k_m$. A practical solution is provided by the so-called ``off-diagonal method'' proposed in Ref.~\cite{Michel2011}. Basically, a small offset $\pm \delta k$ is added to the linear momenta $k_n$ and $k_m$ of the scattering wave functions involved, so that the resulting diagonal Coulomb matrix element converges. \subsection{Gamow Shell Model} In the GSM, expressed in COSM coordinates, one deals with the center-of-mass motion by adding a recoil term ($\hat{\vec{p}}_1\cdot\hat{\vec{p}}_2/m_nA_{\rm core}$)~\cite{Suzuki1988,Michel2002}. The GSM Hamiltonian is diagonalized in a basis of Slater determinants built from the one-body Berggren ensemble. In this case, it is convenient to deal with the Pauli principle by eliminating spurious excitations at the level of the s.p. basis. In practice, one just needs to construct a valence s.p. space that does not contain the orbits occupied in the core. This is equivalent to the projection technique used in GCC, wherein the Pauli operator (\ref{Pauli}) expressed in Jacobi coordinates has a two-body character. The treatment of the interactions is the same in GSM and GCC.
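A toy check may help visualize the exterior complex scaling integral written above. In the sketch below, the basis functions are taken as pure outgoing waves $e^{ik\rho}$ with ${\rm Im}\,k<0$ and $V(\rho)=e^{-\rho}$; the momenta, radius, and angle are invented, and the Jacobian $e^{i\theta}$ of the change of variable is written out explicitly. The rotated path reproduces the analytically continued value $1/(1-i(k_n+k_m))$ even though the integrand grows along the real axis.

\begin{verbatim}
# Toy exterior complex scaling: <B_n|V|B_m> with B(k,rho) = exp(i k rho),
# Im k < 0, and V(rho) = exp(-rho).  All parameter values are invented.
import numpy as np
from scipy.integrate import quad

kn = km = 0.3 - 0.6j              # Gamow-like momenta
R, theta = 5.0, 1.0               # scaling radius and angle

def f(z):                         # integrand B_n(z) V(z) B_m(z)
    return np.exp(1j*(kn + km)*z - z)

def cquad(g, a, b):               # complex-valued quadrature helper
    re = quad(lambda t: np.real(g(t)), a, b)[0]
    im = quad(lambda t: np.imag(g(t)), a, b)[0]
    return re + 1j*im

inner = cquad(f, 0.0, R)
outer = cquad(lambda t: f(R + t*np.exp(1j*theta))*np.exp(1j*theta),
              0.0, np.inf)        # rotated exterior path, with Jacobian
print(inner + outer)              # ~ -0.5 + 1.5j
print(1/(1 - 1j*(kn + km)))       # analytic continuation, for comparison
\end{verbatim}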
In both cases, we use the complex scaling method to calculate matrix elements \cite{Michel2003} and the ``off-diagonal method'' to deal with the Coulomb potential~\cite{Michel2011}. The two-body recoil term is treated in GSM by expanding it in a truncated harmonic oscillator (HO) basis. The HO basis depends on the oscillator length $b$ and the number of states used in the expansion. As was demonstrated in Refs.~\cite{Hagen2006a,GSMisospin}, GSM eigenvalues and eigenfunctions converge for a sufficient number of HO states, and the dependence of the results on $b$ is very weak. Let us note in passing that one has to be careful when using arguments based on the variational principle when comparing the performance of GSM with GCC. Indeed, the treatment of the Pauli-forbidden states is slightly different in the two approaches. Moreover, the recoil effect in the GSM is not removed exactly. (There is no recoil term in GCC as the center-of-mass motion is eliminated through the use of Jacobi coordinates.) \subsection{Two-nucleon correlations} In order to study the correlations between the two valence nucleons, we utilize the two-nucleon density \cite{Bertsch91,Hagino05,George11} $\rho_{nn'}(r,r',\theta) = \langle \Psi|\delta(r_1-r)\delta(r_2-r^{\prime})\delta(\theta_{12} - \theta)|\Psi\rangle$, where $r_1$, $r_2$, and $\theta_{12}$ are defined in Fig.~\ref{Jacobi}(a). In the following, we apply the normalization convention of Ref.~\cite{George11} in which the Jacobian $8\pi^2 r^2 r'^2 \sin\theta$ is incorporated into the definition of $\rho_{nn'}$, i.e., it does not appear explicitly. The angular density of the two valence nucleons is obtained by integrating $\rho_{nn'}(r,r',\theta)$ over the radial coordinates: \begin{equation}\label{rhonn} \rho(\theta) = \int \rho_{nn'}(r,r^\prime,\theta) dr dr^\prime. \end{equation} The angular density is normalized to one: $\int\rho(\theta) d\theta = 1.$ While it is straightforward to calculate $\rho_{nn'}$ with COSM coordinates, the angular density cannot be calculated directly with the Jacobi T-type coordinates used to diagonalize the GCC Hamiltonian. Consequently, one can either calculate the density distribution $\rho_{\rm T}(x,y,\varphi)$ in T-type coordinates and then transform it to $\rho(r_1,r_2,\theta_{12})$ in COSM coordinates by using the geometric relations of Fig.~\ref{Jacobi}(a), or -- as we do in this study -- one can apply the T-type-to-COSM coordinate transformation. This transformation~\cite{Raynal1970} provides an analytical relation between the hyperspherical harmonics in COSM coordinates $\mathcal {Y} ^{JM}_{\gamma^\prime K^\prime} (\vec{r}_1^\prime, \vec{r}_2^\prime )$ and those in the T-type Jacobi coordinates $\mathcal {Y} ^{JM}_{\gamma K} (\vec{x}^\prime, \vec{y}^\prime )$, where $\vec{r}_1^\prime$, $\vec{r}_2^\prime$, $\vec{x}^\prime$ and $\vec{y}^\prime$ are: \begin{equation} \begin{aligned} \vec{r}_1^\prime &= \sqrt{A_i} \vec{r}_1,\\ \vec{r}_2^\prime &= \sqrt{A_j} \vec{r}_2,\\ \vec{x}^\prime &= \vec{x}= \sqrt{\mu_{ij}}(\vec{r}_1-\vec{r}_2),\\ \vec{y}^\prime &= \sqrt{\frac{A_i+A_j}{\mu_{(ij)k}}}\vec{y} = \frac{A_i\vec{r}_1+A_j\vec{r}_2}{\sqrt{A_i+A_j}}. \end{aligned} \end{equation} \subsection{Model space and parameters} In order to compare approaches formulated in Jacobi and COSM coordinates, we consider model spaces defined by the cutoff value $\ell_{\rm max}$, which is the maximum orbital angular momentum associated with ($\vec{r}_1$, $\vec{r}_2$) in GSM and ($\vec{x}$, $\vec{y}$) in GCC.
The remaining truncations come from the Berggren basis itself. The nuclear two-body interaction between valence nucleons has been approximated by the finite-range Minnesota force with the original parameters of Ref.~\cite{Thompson1977b}. For the core-valence Hamiltonian, we took a Woods-Saxon (WS) potential with parameters fitted to the resonances of the core+$n$ system. The one- and two-body Coulomb interactions have been included when valence protons are present. In the case of GSM, we use the Berggren basis for the $spd$ partial waves and an HO basis for the channels with higher orbital angular momenta. For $^6$He, $^6$Li and $^6$Be we assume the $^4$He core. For $^6$He and $^6$Be, in GSM we took a complex-momentum contour defined by the segments $k= 0 \rightarrow 0.17-0.17i \rightarrow 0.34 \rightarrow 3$ (all in fm$^{-1}$) for the $p_{3/2}$ partial wave, and $0 \rightarrow 0.5 \rightarrow 1 \rightarrow 3$\,fm$^{-1}$ for the remaining $spd$ partial waves. For $^6$Li, we took the contours $0 \rightarrow 0.18-0.17i \rightarrow 0.5 \rightarrow 3$\,fm$^{-1}$ for $p_{1/2}$; $0 \rightarrow 0.15-0.14i \rightarrow 0.5 \rightarrow 3$\,fm$^{-1}$ for $p_{3/2}$; and $0 \rightarrow 0.25 \rightarrow 0.5 \rightarrow 3$\,fm$^{-1}$ for the $sd$ partial waves. Each segment was discretized with 10 points. This is sufficient for the energies and most other physical quantities, but one may need more points to describe wave functions precisely, especially for the unbound resonant states that are affected by the Coulomb interaction. Hence, we chose 15 points for each segment to calculate the two-proton angular correlation of the unbound $^6$Be. The HO basis was defined through the oscillator length $b = 2$\,fm and the maximum radial quantum number $n_{\rm max}=10$. The WS parameters for the $A = 6$ nuclei are: the depth of the central term $V_{0}= 47$\,MeV; spin-orbit strength $V_{\rm s.o.} = 30$\,MeV; diffuseness $a=0.65$\,fm; and the WS (and charge) radius $R=2$\,fm. With these parameters we predict the $3/2^-$ ground state (g.s.) of $^5$He at $E=0.732$\,MeV ($\Gamma=0.622$\,MeV), and its first excited $1/2^-$ state at $E=2.126$\,MeV ($\Gamma=5.838$\,MeV). For $^{26}$O, we consider the $^{24}$O core~\cite{Kanungo09,Hoffman09,Hagino2016}. In the GSM variant, we used the contour $0 \rightarrow 0.2-0.15i \rightarrow 0.4 \rightarrow 3$\,fm$^{-1}$ for $d_{3/2}$, and $0 \rightarrow 0.5 \rightarrow 1 \rightarrow 3$\,fm$^{-1}$ for the remaining $spd$ partial waves. For the HO basis we took $b = 1.75$\,fm and $n_{\rm max}=10$. The WS potential for $^{26}$O was fitted in Ref.~\cite{Hagino2016} to the resonances of $^{25}$O. Its parameters are: $V_{0}= 44.1$\,MeV, $V_{\rm s.o.}= 45.87$\,MeV, $a= 0.73$\,fm, and $R = 3.6$\,fm. The GCC calculations have been carried out with the maximal hyperspherical quantum number $K_{\rm max}$ = 40, which is sufficient for all the physical quantities we study. We checked that the calculated energies change by only about 2\,keV when varying $K_{\rm max}$ from 30 to 40. As in GSM, in GCC we used the Berggren basis for the $K \leqslant$ 6 channels and the HO basis for the higher angular momentum channels. The complex-momentum contour of the Berggren basis is defined as: $k = 0 \rightarrow 0.3-0.2i \rightarrow 0.5 \rightarrow 0.8 \rightarrow 1.2 \rightarrow 4$ (all in fm$^{-1}$), with each segment discretized with 10 points. We took the HO basis with $b = 2$\,fm and $n_{\rm max} = 20$.
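As an illustration of how such a contour is turned into a finite basis, the sketch below discretizes the GCC contour just quoted with Gauss-Legendre quadrature, 10 points per segment; the resulting complex nodes and weights define the discretized scattering states entering the Berggren completeness relation. This is a sketch of the procedure, not the production code.

\begin{verbatim}
# Gauss-Legendre discretization of the Berggren contour
# 0 -> 0.3-0.2i -> 0.5 -> 0.8 -> 1.2 -> 4 (fm^-1), 10 points/segment.
import numpy as np

vertices = [0.0, 0.3 - 0.2j, 0.5, 0.8, 1.2, 4.0]
t, w = np.polynomial.legendre.leggauss(10)      # nodes/weights on [-1,1]

k_nodes, k_weights = [], []
for a, b in zip(vertices[:-1], vertices[1:]):
    k_nodes.append(0.5*(b - a)*t + 0.5*(b + a)) # map [-1,1] to segment
    k_weights.append(0.5*(b - a)*w)             # complex weights
k_nodes = np.concatenate(k_nodes)
k_weights = np.concatenate(k_weights)
print(len(k_nodes), k_nodes[0], k_weights[0])   # 50 complex points in all
\end{verbatim}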
As $k_\rho^2 = k_x^2 + k_y^2$, the energy range covered by the GCC basis is roughly doubled as compared to that of GSM. For the one-body Coulomb potential, we use the dilatation-analytic form \cite{Saito77,IdBetan08,GSMisospin}: \begin{equation} U^{(Z)}_{\rm c}(r) = e^2Z_{\rm c} \frac{{\rm erf}(r/\nu_{\rm c})}{r}, \label{coul} \end{equation} where $\nu_c=4R_0/(3\sqrt{\pi})$, $R_0$ is the radius of the WS potential, and $Z_{\rm c}$ is the number of core protons. We emphasize that the large continuum space, containing states of both parities, is essential for the formation of the dineutron structure in nuclei such as $^6$He or $^{26}$O \cite{Catara84,Pillet07,George11,Hagino14,Hagino16a,Fossez2017}. In the following, we shall study the effect of including positive- and negative-parity continuum shells on the stability of threshold configurations. \section{Results}\label{results} \subsection{Structure of $A$=6 systems} We begin with the GCC-GSM benchmarking for the $A=6$ systems. Figure~\ref{Convergence2} shows the convergence rate for the g.s. energies of $^6$He, $^6$Li, and $^6$Be with respect to $\ell_{\rm max}$. (See Ref.~\cite{Masui14} for a similar comparison between GSM and complex scaling results.) While the g.s. energies of $^6$He and $^6$Be are in reasonable agreement with experiment, $^6$Li is overbound. This is because the Minnesota interaction does not explicitly separate the $T$ = 0 and $T$ = 1 channels. The structure of $^6$He and $^6$Be is given by the $T=1$ force, while the $T=0$ channel that is crucial for $^6$Li has not been optimized. This is of minor importance for this study, as our goal is to benchmark GCC against GSM, not to provide quantitative predictions. As we use different coordinates in GCC and GSM, their model spaces are manifestly different. Still, for $\ell_{\rm max}=10$ both approaches provide very similar results, which is most encouraging. \begin{figure}[htb] \includegraphics[width=0.7\linewidth]{Convergence.pdf} \caption{\label{Convergence2} Comparison between GSM and GCC results for the two-nucleon separation energies of $^6$Be, $^6$Li, and $^6$He obtained in different model spaces defined by $\ell_{\rm max}$. The bars in panel (a) represent decay widths.} \end{figure} One can see in Fig.~\ref{Convergence2} that the calculations done with Jacobi coordinates converge faster than those with COSM coordinates. This comes from the attractive character of the nucleon-nucleon interaction, which results in the presence of a di-nucleon structure (see discussion below). Consequently, as T-type Jacobi coordinates describe the di-nucleon cluster well, they are able to capture correlations more efficiently than COSM coordinates. This is in agreement with the findings of Ref.~\cite{Kruppa2014}, based on the complex scaling method with COSM coordinates, where the g.s. energy of $^6$He was found to be slightly less bound than in the Jacobi-coordinate calculations of Ref.~\cite{Descouvemont2003}. In any case, our calculations have demonstrated that one obtains very similar results in GCC and GSM when sufficiently large model spaces are considered. As shown in Table~\ref{Convergence}, the energy difference between GCC and GSM predictions for $A=6$ systems is very small, around 20\,keV for the majority of states. The maximum deviation of $\sim$70\,keV is obtained for the 3$^+$ state of $^6$Li. However, because of the attractive character of the $T=0$ interaction, the GSM calculation for this state has not fully converged at $\ell_{\rm max} = 10$.
\begin{table}[!htb] \caption{Comparison between energies (in MeV) and widths (in keV) predicted for $^6$He, $^6$Li, and $^6$Be in GSM and GCC in the $\ell_{\rm max}= 10$ model space.} \begin{ruledtabular} \begin{tabular}{cccc} Nucleus & $J^\pi$ & {GSM} & {GCC} \\ \hline \\[-8pt] $^6$He & $ 0^+ $ & $-0.933$ & $-0.934$ \\ & $ 2^+ $ & ~0.800(98) & ~0.817(42) \\ $^6$Li & $ 1^+ $ & $-5.680$ & $-5.698$ \\ & $ 3^+ $ & $-2.097$ & $-2.167$ \\ & $ 0^+ $ & $-0.041$ & $-0.048$ \\ $^6$Be & $ 0^+ $ & ~1.314(25) & ~1.275(54) \end{tabular} \end{ruledtabular}\label{Convergence} \end{table} Motivated by the discussion in Ref.~\cite{Descouvemont2003}, we have also studied the effect of an $\ell$-dependent core-nucleus potential. To this end, we changed the WS strength $V_0$ from 47 MeV to 49 MeV for the $\ell=1$ partial waves while keeping the standard strength for the remaining $\ell$ values. As seen in Fig.~\ref{Convergence3}, the convergence behavior obtained with Jacobi and COSM coordinates is fairly similar to that shown in Fig.~\ref{Convergence2}, where the WS strength $V_0$ is the same for all partial waves. For $\ell_{\rm max}=12$, the difference between the GSM and GCC energies of $^6$He becomes very small. This result is consistent with the findings of Ref.~\cite{Zhukov1993} that the recoil effect can indeed be successfully eliminated using COSM coordinates, at the expense of a reduced convergence rate. \begin{figure}[htb] \includegraphics[width=0.7\linewidth]{Convergence2.pdf} \caption{\label{Convergence3} Same as in Fig.~\ref{Convergence2} but for the two-neutron separation energy of $^{6}$He obtained with the angular-momentum-dependent Hamiltonian; see text for details.} \end{figure} In order to see whether the difference between the model spaces of GCC and GSM can be compensated by renormalizing the effective Hamiltonian, we slightly readjusted the depth of the WS potential in the GCC calculations to reproduce the g.s. GSM energy of $^6$He in the $\ell_{\rm max}=7$ model space. As a result, the strength $V_0$ changed from 47 MeV to 46.9 MeV. Except for the 2$^+$ state of $^6$He, the GSM and GCC energies for the $A=6$ systems became significantly closer as a result of such a renormalization. This indicates that the differences between Jacobi coordinates and COSM coordinates can be partly accounted for by refitting interaction parameters, even though the model spaces and asymptotic behavior are different. GCC is also in rough agreement with GSM when comparing decay widths, considering that they are very sensitive to the asymptotic behavior of the wave function, which is treated differently with Jacobi and COSM coordinates. Also, the presence of the recoil term in GSM, which is dealt with by means of the HO expansion, is expected to impact the GSM results for decay widths. In order to check the precision of the decay widths calculated with GCC, we adopted the current expression \cite{Humblet}: \begin{equation} \label{width2} \Gamma =i \frac{\int( \Psi^{\dagger} ~\hat{\bf H}~ \Psi - \Psi ~\hat{\bf H} ~\Psi^{\dagger} )~ d{\vec{x}}d{\vec{y}}}{\int|\Psi|^2 d{\vec{x}}d{\vec{y}}}, \end{equation} which can be expressed in hyperspherical coordinates as \cite{Grigorenko2000,Grig07}: \begin{equation} \label{width3} \Gamma =i \frac{\hbar^2}{m} \frac{ \left.
\int d\Omega_5 {\rm Im}[\psi \frac{\partial}{\partial \rho} \psi^{\dagger}]\right|_{\rho=\rho_{\rm max}} }{\int^{\rho_{\rm max}}_0 |\psi|^2 d\rho d\Omega_5}, \end{equation} where $\rho_{\rm max}$ is larger than the nuclear radius (in general, the decay width should not depend on the choice of $\rho_{\rm max}$). By using the current expression, we obtain $\Gamma$=42\,keV for the 2$^+$ state of $^6$He and $\Gamma$=54\,keV for the 0$^+$ state of $^6$Be, which are practically the same as the GCC values of Table~\ref{Convergence} obtained from the direct diagonalization. \begin{figure}[htb] \includegraphics[width=0.8\linewidth]{2n_correlation_6He.pdf} \caption{\label{He6_cor}Comparison between GSM and GCC results for the two-neutron angular correlation in $^{6}$He for different model spaces defined by $\ell_{\rm max}$.} \end{figure} \begin{figure*}[htb] \includegraphics[width=0.8\textwidth]{2n_correlation.pdf} \caption{\label{A6_cor} Two-nucleon angular densities (total and in the $S=1$ channel) in the g.s. configurations of $^6$He (a), $^6$Li (b), and $^6$Be (c) obtained in GSM and GCC with $\ell_{\rm max}=10$.} \end{figure*} We now discuss the angular correlation of the two valence neutrons in the g.s. of $^{6}$He. Figure~\ref{He6_cor} shows GSM and GCC results for model spaces defined by different values of $\ell_{\rm max}$. The distribution $\rho(\theta)$ shows two maxima \cite{Zhukov1993,Hagino05,Horiuchi07,Kikuchi10,George11,Kruppa2014,Hagino16a}. The higher peak, at a small opening angle, can be associated with a dineutron configuration. The second maximum, found in the region of large angles, represents the cigarlike configuration. The GCC results for $\ell_{\rm max}=2$ and 10 are already very close. This is not the case for the GSM, which shows sensitivity to the cutoff value of $\ell$. This is because the large continuum space, including states of positive and negative parity, is needed in the COSM picture to describe dineutron correlations \cite{Catara84,Pillet07,George11,Hagino14,Fossez2017}. Indeed, as $\ell_{\rm max}$ increases, the angular correlations obtained in GSM and GCC become very similar. This indicates that the Jacobi and COSM descriptions of $\rho(\theta)$ are essentially equivalent provided that the model space is sufficiently large. In order to benchmark the GCC and GSM calculations for the valence-proton case, in Fig.~\ref{A6_cor} we compare the two-nucleon angular correlations for the $A = 6$ nuclei $^6$He, $^6$Li, and $^6$Be. Similar to Refs.~\cite{Hagino05,George11}, we find that the $T=1$ configurations have a dominant $S = 0$ component, in which the two neutrons in $^6$He or the two protons in $^6$Be are in the spin singlet state. The amplitude of the $S = 1$ density component is small. For all nuclei, the GCC and GSM angular correlations are close. Similar to $^6$He, the two peaks in $^6$Be indicate diproton and cigarlike configurations~\cite{Oishi14} (see also Refs.~\cite{Garrido07,Grigorenko09,Grig12,Egorova12,Alvarez12}). It is to be noted that the dineutron peak in $^6$He is slightly higher than the diproton maximum in $^6$Be. This is due to the repulsive character of the Coulomb interaction between the valence protons. The large maximum at small opening angles seen in $^6$Li corresponds to the deuteron-like structure. As discussed in Ref.~\cite{Horiuchi07}, this peak is larger than the dineutron correlation in $^6$He. Indeed, the valence proton-neutron pair in $^6$Li is very strongly correlated because the $T=0$ interaction is much stronger than the $T=1$ interaction.
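As an aside, the mechanics of Eq.~(\ref{rhonn}) can be made concrete with a small grid calculation. In the sketch below, the two-Gaussian form of $|\Psi|^2$ is invented purely to mimic a dineutron peak at a small opening angle and a cigarlike peak at a large one; only the integration and normalization steps reflect the procedure described in the text.

\begin{verbatim}
# Toy angular density rho(theta): integrate rho_nn'(r, r', theta) over
# the radii and normalize; the Jacobian 8 pi^2 r^2 r'^2 sin(theta) is
# absorbed into rho_nn' as in the text.  |Psi|^2 below is invented.
import numpy as np

r = np.linspace(0.01, 8.0, 120)                  # fm
theta = np.linspace(0.0, np.pi, 121)
R1, R2, TH = np.meshgrid(r, r, theta, indexing="ij")

psi2 = np.exp(-((R1 - 2)**2 + (R2 - 2)**2)/0.5) * (
       0.6*np.exp(-(TH - 0.5)**2/0.1) +          # "dineutron" lobe
       0.4*np.exp(-(TH - 2.4)**2/0.2))           # "cigarlike" lobe
rho_nn = 8*np.pi**2 * R1**2 * R2**2 * np.sin(TH) * psi2

dr, dth = r[1] - r[0], theta[1] - theta[0]
rho_theta = rho_nn.sum(axis=(0, 1)) * dr * dr    # integrate over r, r'
rho_theta /= rho_theta.sum() * dth               # int rho d(theta) = 1
print(theta[np.argmax(rho_theta)])               # dominant small-angle peak
\end{verbatim}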
The different features in the two-nucleon angular correlations in the three $A=6$ systems shown in Fig.~\ref{A6_cor} demonstrate that the angular correlations contain useful information on the effective interaction between the valence nucleons. \subsection{Structure of unbound $^{26}$O} After benchmarking GSM and GCC for the $A=6$ systems, we apply both models to $^{26}$O, which is believed to be a threshold dineutron structure \cite{Lunderberg2012,Kohley2013,Grigorenko2015,Kondo2016,Hagino2016,Hagino16a,Fossez2017}. \begin{figure}[htb] \includegraphics[width=0.7\linewidth]{O26_0.pdf} \caption{\label{O26_0} Two-neutron separation energy of the g.s. of $^{26}$O computed with GSM and GCC for different values of $\ell_{\rm max}$.} \end{figure} It is a theoretical challenge to reproduce the resonances in $^{26}$O, as both continuum effects and high partial waves must be considered. As $^{24}$O can be associated with the subshell closure in which the $0d_{5/2}$ and $1s_{1/2}$ neutron shells are occupied \cite{Tshoo12}, it can be used as a core in our three-body model. Figure~\ref{O26_0} illustrates the convergence of the g.s. of $^{26}$O with respect to $\ell_{\rm max}$ in the GSM and GCC calculations. It is seen that in the GCC approach the energy converges nearly exponentially and that the stable result is practically reached at $\ell_{\rm max}=7$. While slightly higher in energy, the GSM results are quite satisfactory, as they differ by only about 30\,keV from the GCC benchmark. Still, it is clear that $\ell_{\rm max}=12$ is not sufficient to reach full convergence in GSM. The calculated energies and widths of the g.s. and 2$^+$ state of $^{26}$O are displayed in Table~\ref{O26}; they are both consistent with the most recent experimental values~\cite{Kondo2016}. \begin{table}[!htb] \caption{Energies and widths (all in keV) predicted for $^{26}$O in GSM and GCC in the $\ell_{\rm max}= 12$ model space. Also shown are the dominant GSM ($\ell_1$, $\ell_2$) and GCC ($\ell_x$, $\ell_y$) configurations.} \begin{ruledtabular} \begin{tabular}{ccrcr} $J^\pi$ & \multicolumn{2}{c}{GSM} & \multicolumn{2}{c}{GCC} \\ \hline \\[-8pt] $ 0^+ $ & 101 & 81\% ($d,d$) & 69 & 46\% ($p,p$)\\ & & 11\% ($f,f$) & & 44\% ($s,s$)\\ & & 7\% ($p,p$) & & 3\% ($d,d$) \\ $ 2^+ $ & 1137(33) & 77\% ($d,d$) & 1150(14) & 28\% ($f,p$)\\ & & 7\% ($p,p$) & & 27\% ($p,f$)\\ & & 7\% ($d,s$) & & 10\% ($d,d$) \end{tabular} \end{ruledtabular}\label{O26} \end{table} The amplitudes of the dominant configurations listed in Table~\ref{O26} illustrate the importance of considering partial waves of different parity in the GSM description of the dineutron g.s. configuration in $^{26}$O \cite{Fossez2017}. \begin{figure}[htb] \includegraphics[width=\linewidth]{O26_WaveFunction.pdf} \caption{\label{O26_WF} GCC wave function of the g.s. of $^{26}$O in the Jacobi coordinates $nn$ and $^{24}$O$-2n$. } \end{figure} The g.s. wave function of $^{26}$O computed in GCC is shown in Fig.~\ref{O26_WF} in the Jacobi coordinates. The corresponding angular distribution is displayed in Fig.~\ref{O26_cor}. \begin{figure}[htb] \includegraphics[width=0.8\linewidth]{O26_cor2.pdf} \caption{\label{O26_cor} Two-neutron angular correlation for the 0$^+$ g.s. (a) and 2$^+_1$ state (b) configuration of $^{26}$O computed with GCC (solid line) and GSM (dashed line) with $\ell_{\rm max}=10$.
The dash-dotted curve labeled GCC' in panel (a) shows GCC results obtained with the strength of the neutron-neutron interaction reduced by 50\%.} \end{figure} Three pronounced peaks, associated with the dineutron, triangular, and cigarlike configurations~\cite{Hagino2016,Hove2017}, can be identified. In GCC, the ($\ell_x$, $\ell_y$) = ($s, s$), ($p, p$) components dominate the g.s. wave function of $^{26}$O; this is consistent with a sizable clusterization of the two neutrons. In COSM coordinates, it is the ($\ell_1$, $\ell_2$) = ($d,d$) configuration that dominates, but the negative-parity ($f, f$) and ($p, p$) channels contribute $\sim$20\%. Again, it is encouraging to see that with $\ell_{\rm max}=10$ both approaches predict very similar two-nucleon densities. In Table~\ref{O26} we also display the predicted structure of the excited 2$^+$ state of $^{26}$O. The predicted energy is close to experiment~\cite{Kondo2016} and to other theoretical studies; see, e.g., \cite{Hagino2016,Grigorenko2015,Volya2006,Tsukiyama2015,Bogner2014}. We obtain a small width for this state, which is consistent with the GSM+DMRG calculations of Ref.~\cite{Fossez2017}. The GCC occupations of Table~\ref{O26} indicate that the wave function of the 2$^+$ state is spread out in space, as the three main configurations, of cluster type, contribute only 65\% to the wave function. When considering the GSM wave function, the ($d, d$) configuration dominates. The corresponding two-neutron angular correlation shown in Fig.~\ref{O26_cor}(b) exhibits a broad distribution with a maximum around 90$^\circ$. This situation is fairly similar to what has been predicted for the 2$^+$ state of $^6$He \cite{George11,Kruppa2014}. Finally, it is interesting to study how the neutron-neutron interaction impacts the angular correlation. To this end, Fig.~\ref{O26_cor}(a) shows $\rho(\theta)$ obtained with the Minnesota neutron-neutron interaction whose strength has been reduced by 50\%. While there are still three peaks present, the distribution becomes more uniform and the dineutron component no longer dominates. We can thus conclude that the $nn$ angular correlation can be used as an indicator of the interaction between the valence nucleons. \section{Conclusions}\label{summary} We developed a Gamow coupled-channel approach in Jacobi coordinates with the Berggren basis to describe the structure and decays of three-body systems. We benchmarked the performance of the new approach against the Gamow Shell Model. Both methods are capable of considering large continuum spaces but differ in their treatment of the three-body asymptotics, the center-of-mass motion, and the Pauli operator. In spite of these differences, we demonstrated that the Jacobi-coordinate-based framework (GCC) and the COSM-based framework (GSM) can produce fairly similar results, provided that the continuum space is sufficiently large. For benchmarking and illustrative examples we chose $^6$He, $^6$Li, $^6$Be, and $^{26}$O -- all viewed as core-plus-two-nucleon systems. We discussed the spectra, decay widths, and nucleon-nucleon angular correlations in these nuclei. The Jacobi coordinates capture cluster correlations (such as dineutron- and deuteron-type) more efficiently; hence, the convergence rate of GCC is faster than that of GSM. For $^{26}$O, we demonstrated the sensitivity of the $nn$ angular correlation to the valence-neutron interaction.
It will be interesting to investigate this aspect further to provide guidance for future experimental investigations of di-nucleon correlations in bound and unbound states of dripline nuclei. In summary, we developed an efficient approach to the structure and decays of three-cluster systems. The GCC method is based on a Hamiltonian involving a two-body interaction between the valence nucleons and a one-body field representing the core-nucleon potential. The advantage of the model is its ability to describe the three-body asymptotic behavior correctly and to treat the continuum space efficiently, which is of particular importance for threshold states and narrow resonances. The model can be easily extended along the lines of the resonating group method by introducing a microscopic picture of the core \cite{Navr16,Jaganathen14}. Meanwhile, it can be used to elucidate experimental findings on dripline systems, and to provide fine-tuned predictions to guide $A$-body approaches. \begin{acknowledgments} We thank K{\'e}vin Fossez, Yannen Jaganathen, Georgios Papadimitriou, and Marek P{\l}oszajczak for useful discussions. This material is based upon work supported by the U.S.\ Department of Energy, Office of Science, Office of Nuclear Physics under award numbers DE-SC0013365 (Michigan State University), DE-SC0008511 (NUCLEI SciDAC-3 collaboration), DE-SC0009971 (CUSTIPEN: China-U.S. Theory Institute for Physics with Exotic Nuclei), and also supported in part by Michigan State University through computational resources provided by the Institute for Cyber-Enabled Research. \end{acknowledgments}
\section{Introduction} \baselineskip 17pt All multigraphs in this paper are finite and loopless, and all graphs are finite and without loops or multiple edges. Given a multigraph $G$, let $c: E(G)\rightarrow [k]$ be a proper edge-coloring of $G$, where $k\ge1$ is an integer and $[k]:=\{1,2, \dots, k\}$. We say that $c$ is a \dfn{star $k$-edge-coloring} of $G$ if no path or cycle of length four in $G$ is bi-colored under the coloring $c$; and $G$ is \dfn{star $k$-edge-colorable} if $G$ admits a star $k$-edge-coloring. The \dfn{star chromatic index} of $G$, denoted $\chi'_{s}(G)$, is the smallest integer $k$ such that $G$ is star $k$-edge-colorable. As pointed out in \cite{DMS2013}, the definition of star edge-coloring of a graph $G$ is equivalent to the star vertex-coloring of its line graph $L(G)$. Star edge-coloring of graphs was initiated by Liu and Deng \cite{DL2008}, motivated by the vertex version (see \cite{ACKKR2004, BCMRW2009, CRW2013, KKT2009, NM2003}). Given a multigraph $G$, we use $|G|$ to denote the number of vertices, $e(G)$ the number of edges, $\delta(G)$ the minimum degree, and $\Delta(G)$ the maximum degree of $G$, respectively. We use $K_n$ and $P_n$ to denote the complete graph and the path on $n$ vertices, respectively. A multigraph $G$ is \dfn{subcubic} if all its vertices have degree less than or equal to three. The \dfn{maximum average degree} of a multigraph $G$, denoted $\text{mad}(G)$, is defined as the maximum of $2 e(H)/|H|$ taken over all the subgraphs $H$ of $G$. The following upper bound is a result of Liu and Deng \cite{DL2008}. \medskip \begin{thm}[\cite{DL2008}] For any graph $G$ with $\Delta(G)\geq7$, $\chi'_{s}(G)\leq \lceil16(\Delta(G)-1)^{3/2}\rceil.$ \end{thm} Theorem~\ref{Kn} below is a result of Dvo\v{r}\'ak, Mohar and \v{S}\'amal~\cite{DMS2013}, which gives upper and lower bounds for complete graphs. \begin{thm} [\cite{DMS2013}]\label{Kn} The star chromatic index of the complete graph $K_n$ satisfies $$2n(1+o(1))\leq \chi'_{s}(K_n)\leq n\, \frac{2^{2\sqrt{2}(1+o(1))\sqrt{\log n}}}{(\log n)^{1/4}}.$$ In particular, for every $\epsilon>0$, there exists a constant $c$ such that $\chi'_{s}(K_n)\le cn^{1+\epsilon}$ for every integer $n\ge1$. \end{thm} The true order of magnitude of $\chi'_{s}(K_n)$ is still unknown. From Theorem~\ref{Kn}, an upper bound in terms of the maximum degree for general graphs is also derived in~\cite{DMS2013}, i.e., $\chi'_{s}(G)\leq \Delta\cdot 2^{O(1)\sqrt{\log \Delta}}$ for any graph $G$ with maximum degree $\Delta$. In the same paper, Dvo\v{r}\'ak, Mohar and \v{S}\'amal~\cite{DMS2013} also considered the star chromatic index of subcubic multigraphs. To state their result, we need one more definition. A graph $G$ \dfn{covers} a graph $H$ if there is a mapping $f: V(G)\rightarrow V(H)$ such that for any $uv\in E(G)$, $f(u)f(v)\in E(H)$, and for any $u\in V(G)$, $f$ is a bijection between $N_G(u)$ and $N_{H}(f(u))$. They proved the following. \begin{thm} [\cite{DMS2013}]\label{s=7} Let $G$ be a multigraph. \begin{enumerate}[(a)] \item If $G$ is subcubic, then $\chi'_s(G)\le7$.\vspace{-8pt} \item If $G$ is cubic and has no multiple edges, then $\chi'_s(G)\ge4$ and the equality holds if and only if $G$ covers the graph of the $3$-cube. \end{enumerate} \end{thm} As observed in~\cite{DMS2013}, $K_{3,3}$ is not star $5$-edge-colorable but star $6$-edge-colorable. No subcubic multigraphs with star chromatic index seven are known. Dvo\v{r}\'ak, Mohar and \v{S}\'amal~\cite{DMS2013} proposed the following conjecture.
\begin{conj} [\cite{DMS2013}]\label{cubic} Let $G$ be a subcubic multigraph. Then $\chi'_s(G)\leq 6$. \end{conj} It was shown in~\cite{BLM2016} that every subcubic outerplanar graph is star $5$-edge-colorable. Lei, Shi and Song~\cite{LSS2017} recently proved that every subcubic multigraph $G$ with $\text{mad}(G)<24/11$ is star $5$-edge-colorable, and every subcubic multigraph $G$ with $\text{mad}(G)<5/2$ is star $6$-edge-colorable. Kerdjoudj, Kostochka and Raspaud~\cite{KKP2017} considered the list version of star edge-colorings of simple graphs. They proved that every subcubic graph is star list-$8$-edge-colorable, and further proved the following stronger results. \medskip \begin{thm} [\cite{KKP2017}]\label{KKP} Let $G$ be a subcubic graph. \begin{enumerate}[(a)] \item If $\text{mad}(G)<7/3$, then $G$ is star list-$5$-edge-colorable.\vspace{-10pt} \item If $\text{mad}(G)<5/2$, then $G$ is star list-$6$-edge-colorable. \end{enumerate} \end{thm} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{k4Sub.eps} \caption{ A graph with maximum average degree $14/5$ and star chromatic index $6$.}\label{prism} \end{center} \end{figure} As mentioned above, $K_{3,3}$ has star chromatic index $6$, and is bipartite and non-planar. The graph depicted in Figure~\ref{prism} has star chromatic index $6$, and is planar and non-bipartite. We see that not every bipartite subcubic graph is star $5$-edge-colorable, and not every planar subcubic graph is star $5$-edge-colorable. It remains unknown whether every bipartite, planar subcubic multigraph is star $5$-edge-colorable. In this paper, we improve Theorem~\ref{KKP}(a) by showing the following main result. \begin{thm}\label{mainthm} Let $G$ be a subcubic multigraph with $\text{mad}(G)<12/5$. Then $\chi'_{s}(G)\leq 5$. \end{thm} We don't know if the bound $12/5$ in Theorem~\ref{mainthm} is best possible. The graph depicted in Figure~\ref{prism} has maximum average degree $14/5$ but is not star $5$-edge-colorable. \medskip The \dfn{girth} of a graph $G$ is the length of a shortest cycle in $G$. It was observed in~\cite{girth} that every planar graph with girth $g$ satisfies $\text{mad}(G)< \frac {2g}{g-2}$. This, together with Theorem~\ref{mainthm}, implies the following. \begin{cor} Let $G$ be a planar subcubic graph with girth $g$. If $g\geq12$, then $\chi'_{s}(G)\leq 5$.\medskip \end{cor} We need to introduce more notation. Given a multigraph $G$, a vertex of degree $k$ in $G$ is a \dfn{$k$-vertex}, and a \dfn{$k$-neighbor} of a vertex $v$ in $G$ is a $k$-vertex adjacent to $v$ in $G$. A \dfn{$3_k$-vertex} in $G$ is a $3$-vertex incident to exactly $k$ edges $e$ in $G$ such that the other end-vertex of $e$ is a $2$-vertex. For any proper edge-coloring $c$ of a multigraph $G$ and for any $u\in V(G)$, let $c(u)$ denote the set of colors used on the edges incident with $u$ under the coloring $c$. For any two sets $A, B$, let $A\backslash B := A-B$. If $B=\{b\}$, we simply write $A\backslash b$ instead of $A\backslash B$. \medskip \section{Properties of star $5$-critical subcubic multigraphs}\label{prop} A multigraph $G$ is \dfn{star $5$-critical} if $\chi'_s(G)>5$ and $\chi'_s(G-v)\le 5$ for any $v\in V(G)$. In this section, we establish some structure results on star $5$-critical subcubic multigraphs. Clearly, every star $5$-critical multigraph must be connected.
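Since the colorings in this section live on the fixed palette $[5]$, the star condition is easy to verify mechanically on small instances, which can serve as a sanity check when following the case analyses below. The brute-force tester in the sketch (for simple graphs; an illustration only, not part of any proof) checks properness and the absence of bi-colored paths and cycles with four edges; the $5$-cycle example exhibits a proper $3$-coloring that fails the star condition and a $4$-coloring that satisfies it.

\begin{verbatim}
# Brute-force star edge-coloring test: properness plus no bi-colored
# path or cycle with four edges (simple graphs; illustration only).
from collections import defaultdict

def is_star_edge_coloring(edges, color):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    col = lambda u, v: color[frozenset((u, v))]
    for u in adj:                         # properness at every vertex
        cs = [col(u, v) for v in adj[u]]
        if len(cs) != len(set(cs)):
            return False
    def extend(path):                     # grow vertex sequences
        for w in adj[path[-1]]:
            if w not in path:
                yield path + (w,)         # extend a simple path
            elif w == path[0] and len(path) == 4:
                yield path + (w,)         # close a 4-cycle
    seqs = [(u,) for u in adj]
    for _ in range(4):                    # four edges in every sequence
        seqs = [q for p in seqs for q in extend(p)]
    for s in seqs:
        if len({col(s[i], s[i+1]) for i in range(4)}) <= 2:
            return False
    return True

C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
bad  = {frozenset(e): c for e, c in zip(C5, (1, 2, 3, 1, 2))}
good = {frozenset(e): c for e, c in zip(C5, (1, 2, 3, 4, 2))}
print(is_star_edge_coloring(C5, bad))    # False: 3-4-0-1-2 is bi-colored
print(is_star_edge_coloring(C5, good))   # True
\end{verbatim}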
\medskip Throughout the remainder of this section, let $G$ be a star $5$-critical subcubic multigraph, and let $N(v)$ and $d(v)$ denote the neighborhood and degree of a vertex $v$ in $G$, respectively. Since every multigraph with maximum degree at most two or with at most four vertices is star $5$-edge-colorable, we see that $\Delta(G)=3$ and $|G|\ge5$. As observed in~\cite{LSS2017}, any $2$-vertex in $G$ must have two distinct neighbors. Lemma~\ref{deg=1} and Lemma~\ref{deg=2} below are proved in~\cite{LSS2017} and will be used in this paper. \begin{lem}[\cite{LSS2017}]\label{deg=1} For any $1$-vertex $x$ in $G$, let $N(x)=\{y\}$. The following are true. \begin{enumerate}[(a)] \item $|N(y)|=3$. \vspace{-10pt} \item $N(y)$ is an independent set in $G$, $d(y_1)=3$ and $d(y_2)\ge 2$, where $N(y)=\{x, y_1,y_2\}$ with $d(y_1)\ge d(y_2)$.\vspace{-10pt} \item If $d(y_2)=2$, then for any $i\in\{1,2\}$ and any $v\in N_G(y_i)\backslash y$, $|N(v)|\ge2$, $|N(y_1)|=3$, $|N(y_2)|=2$, and $N[y_1]\cap N[y_2]=\{y\}$.\vspace{-10pt} \item If $d(y_2)=2$, then $d(w_1)=3$, where $w_1$ is the other neighbor of $y_2$ in $G$.\vspace{-10pt} \item If $d(y_2)=3$, then either $d(v)\ge2$ for any $v\in N(y_1)$ or $d(v)\ge2$ for any $v \in N(y_2)$. \end{enumerate} \end{lem} \begin{lem}[\cite{LSS2017}]\label{deg=2} For any $2$-vertex $x$ in $G$, let $N(x)=\{z, w\}$ with $|N(z)|\le |N(w)|$. The following are true. \begin{enumerate}[(a)] \item If $zw\in E(G)$, then $|N(z)|=|N(w)|=3$ and $d(v)\ge2$ for any $v\in N(z)\cup N(w)$.\vspace{-10pt} \item If $zw\notin E(G)$, then $|N(w)|=3$ or $ |N(w)|=|N(z)|=2$, and $d(w)=d(z)=3$.\vspace{-10pt} \item If $d(z)=2$ and $z^*w\in E(G)$, then $|N(z^*)|=|N(w)|=3$, and $d(u)=3$ for any $u\in (N[w] \cup N[z^*])\backslash \{x,z\}$, where $z^*$ is the other neighbor of $z$ in $G$.\vspace{-10pt} \item If $d(z)=2$, then $|N(z^*)|=|N(w)|=3$, and $|N(v)|\geq2$ for any $v\in N(w)\cup N(z^*)$, where $N(z)=\{x, z^*\}$. \end{enumerate} \end{lem} \medskip Let $H$ be the graph obtained from $G$ by deleting all $1$-vertices. By Lemma~\ref{deg=1}(a,b), $H$ is connected and $\delta(H)\geq2$. Throughout the remainder of the proof, a $2$-vertex in $H$ is \dfn{bad} if it has a $2$-neighbor in $H$, and a $2$-vertex in $H$ is \dfn{good} if it is not bad. For any $2$-vertex $r$ in $H$, we use $r'$ to denote the unique $1$-neighbor of $r$ in $G$ if $d_G(r)=3$. By Lemma~\ref{deg=1}(a) and the fact that any $2$-vertex in $G$ has two distinct neighbors in $G$, we obtain the following two lemmas. \begin{lem}\label{2nbr} For any $2$-vertex $x$ in $H$, $|N_{H}(x)|=2$. \end{lem} \begin{lem}\label{3nbr} For any $3_k$-vertex $x$ in $H$ with $k\ge2$, $|N_{H}(x)|=3$. \end{lem} \medskip Proofs of Lemma~\ref{noC3} and Lemma~\ref{noC4} below can be obtained from the proofs of Claim 11 and Lemma 12 in~\cite{KKP2017}, respectively. Since a star $5$-critical multigraph is not necessarily the edge-minimal counterexample in the proof of Theorem 4.1 in~\cite{KKP2017}, we include new proofs of Lemma~\ref{noC3} and Lemma~\ref{noC4} here for completeness. \begin{lem}\label{noC3} $H$ has no $3$-cycle such that two of its vertices are bad. \end{lem} \noindent {\it Proof.} Suppose that $H$ does contain a $3$-cycle with vertices $x, y, z$ such that both $y$ and $z$ are bad. Then $x$ must be a $3$-vertex in $G$ because $G$ is $5$-critical. Let $w$ be the third neighbor of $x$ in $G$. Since $G$ is $5$-critical, let $c: E(G\backslash \{y,z\})\rightarrow [5]$ be any star $5$-edge-coloring of $G\backslash \{y,z\}$.
Let $\alpha$ and $\beta$ be two distinct numbers in $[5]\backslash c(w)$ and let $\gamma\in [5]\backslash \{\alpha, \beta, c(xw)\}$. Now coloring the edges $xy, xz, yz$ by colors $\alpha, \beta, \gamma$ in order, and further coloring the edges $yy'$ and $zz'$ (whenever $y'$ or $z'$ exists) by color $c(xw)$, we obtain a star $5$-edge-coloring of $G$, a contradiction. \hfill\vrule height3pt width6pt depth2pt\medskip \begin{lem}\label{noC4} $H$ has no $4$-cycle with vertices $x,u,v,w$ in order such that all of $u,v,w$ are bad. Furthermore, if $H$ contains a path with vertices $x,u,v,w,y$ in order such that all of $u, v,w$ are bad, then both $x$ and $y$ are $3_1$-vertices in $H$. \end{lem} \noindent {\it Proof.} Let $P$ be a path in $H$ with vertices $x,u,v,w,y$ in order such that all of $u, v,w$ are bad, where $x$ and $y$ may be the same. Since all of $u, v,w$ are bad, by the definition of $H$, $uw\notin E(G)$. By Lemma~\ref{deg=1}(b,c,e) applied to the vertex $v$, $d_G(v)=2$. By Lemma~\ref{deg=2}(b) applied to $v$, $d_G(u)=d_G(w)=3$. Thus both $w'$ and $u'$ exist. Now by Lemma~\ref{deg=1}(c) applied to $u'$ and $w'$, $d_{H}(x)=d_{H}(y)=3$, and $x\ne y$. This proves that $H$ has no $4$-cycle with vertices $x,u,v,w$ in order such that all of $u,v,w$ are bad. \medskip We next show that both $x$ and $y$ are $3_1$-vertices in $H$. Suppose that one of $x$ and $y$, say $y$, is not a $3_1$-vertex in $H$. Then $y$ is either a $3_2$-vertex or a $3_3$-vertex in $H$. By Lemma~\ref{3nbr}, $|N_{H}(y)|=3$. Let $N_{H}(y)=\{w,y_1,y_2\}$ with $d_{H}(y_1)=2$. Then $y_1\neq u$, otherwise $H$ would have a $4$-cycle with vertices $y,u,v,w$ in order such that all of $u,v,w$ are bad. Note that $y_2$ and $x$ are not necessarily distinct. By Lemma~\ref{2nbr}, let $r$ be the other neighbor of $y_1$ in $H$. Since $G$ is $5$-critical, let $c: E(G\backslash \{v,u',w'\})\rightarrow [5]$ be any star $5$-edge-coloring of $G\backslash \{v,u',w'\}$. We may assume that $c(wy)=3$, $c(yy_1)=1$ and $c(yy_2)=2$. We first color $uv$ by a color $\alpha$ in $[5]\backslash (c(x)\cup\{3\})$ and $uu'$ by a color $\beta$ in $[5]\backslash (c(x)\cup\{\alpha\})$. Then $3\in c(y_1)\cap c(y_2)$, otherwise, we may assume that $3\notin c(y_i)$ for some $i\in\{1,2\}$; now coloring $vw$ by a color $\gamma$ in $\{i,4,5\}\backslash\alpha$ and $ww'$ by a color in $\{i,4,5\}\backslash\{\alpha,\gamma\}$ yields a star $5$-edge-coloring of $G$, a contradiction. It follows that $4, 5\in c(y_1)\cup c(y_2)$, otherwise, say $\theta\in\{4,5\}$ is not in $c(y_1)\cup c(y_2)$; now recoloring $wy$ by color $\theta$, $uv$ by a color $\alpha'$ in $\{\alpha,\beta\}\backslash\theta$, $uu'$ by $\{\alpha,\beta\}\backslash\alpha'$, and then coloring $ww'$ by a color in $\{1,2\}\backslash\alpha'$ and $vw$ by a color in $\{3,9-\theta\}\backslash\alpha'$, we obtain a star $5$-edge-coloring of $G$, a contradiction. Thus $c(y_1)=\{1,3,\theta\}$ and $c(y_2)=\{2,3,9-\theta\}$, where $\theta\in\{4,5\}$. If $c(y_1y_1')\neq3$, or if $c(y_1r)=\theta$ and $1\notin c(r)$, then we obtain a star $5$-edge-coloring of $G$ by recoloring $wy$ by color $\theta$, $uv$ by a color $\alpha'$ in $\{\alpha,\beta\}\backslash\theta$, $uu'$ by $\{\alpha,\beta\}\backslash\alpha'$, and then coloring $ww'$ by a color $\gamma$ in $\{2,3,9-\theta\}\backslash\alpha'$, and $vw$ by a color in $\{2,3,9-\theta\}\backslash\{\alpha',\gamma\}$. Therefore, $c(y_1y_1')=3$ and $1\in c(r)$.
Now recoloring $y_1y_1'$ by a color in $\{2,9-\theta\}\backslash c(r)$, we obtain a star $5$-edge-coloring $c$ of $G\backslash \{v,u',w'\}$ satisfying $c(wy)=3$, $c(yy_1)=1$ and $c(yy_2)=2$ but $3\notin c(y_1)\cap c(y_2)$, a contradiction. Consequently, each of $x$ and $y$ must be a $3_1$-vertex in $H$. This completes the proof of Lemma \ref{noC4}. \hfill\vrule height3pt width6pt depth2pt\medskip \begin{lem}\label{noBad} For any $3_3$-vertex $u$ in $H$, no vertex in $N_{H}(u)$ is bad. \end{lem} \noindent {\it Proof.} Let $N_{H}(u)=\{ x, y, z\}$ with $d_{H}(x)=d_{H}(y)=d_{H}(z)=2 $. By Lemma~\ref{3nbr}, $u,x,y,z$ are all distinct. By Lemma~\ref{2nbr}, let $x_1$, $ y_1$ and $z_1$ be the other neighbors of $x, y, z$ in $H$, respectively. Suppose that some vertex, say $x$, in $N_{H}(u)$ is bad. Then $d_{H}(x_1)=2$. By Lemma~\ref{2nbr}, let $w$ be the other neighbor of $x_1$ in $H$. By Lemma~\ref{noC3} and Lemma~\ref{noC4}, $N_{H}(u)$ is an independent set and $x_1\notin \{y, z,y_1, z_1\}$. Notice that $y_1$, $z_1$ and $w$ are not necessarily distinct. Let $A:=\{x\}$ when $d_G(x_1)=2$ and $A:= \{x,x_1'\}$ when $d_G(x_1)=3$. Let $c: E(G\backslash A)\rightarrow [5]$ be any star $5$-edge-coloring of $G\backslash A$. We may assume that $ c(uy)=1$ and $c(uz)=2$. We next prove that \medskip \noindent ($*$) $1 \in c(y_{1})$ and $2\in c(z_1)$.\medskip Suppose that $1 \notin c(y_{1})$ or $2\notin c(z_1)$, say the former. If $ c(w) \cup \{1,2\}\ne [5]$, then we obtain a star $5$-edge-coloring of $G$ from $c$ by coloring the remaining edges of $G$ as follows (we only consider the worst scenario when both $x'$ and $x_1'$ exist): color the edge $xx_1$ by a color $\alpha$ in $[5] \backslash (c(w) \cup \{1,2\})$, $x_1x_1'$ by a color $\beta$ in $[5] \backslash (c(w) \cup \{\alpha\})$, $ux$ by a color $\gamma$ in $[5] \backslash \{1,2, \alpha, c(zz_{1})\}$ and $xx'$ by a color in $[5] \backslash \{1,2, \alpha, \gamma\}$, a contradiction. Thus $ c(w) \cup \{1,2\}= [5]$. Then $c(w)=\{3,4,5\}$. We may assume that $c(x_1w)=3$. If $c(z) \cup \{1,3\}\ne [5]$, then $\{4,5\}\backslash c(z)\ne\emptyset$ and we obtain a star $5$-edge-coloring of $G$ from $c$ by coloring the edge $xx_1$ by color $2$, $x_1x_1'$ by color $1$, $ux$ by a color $\alpha$ in $\{4,5\}\backslash c(z)$ and $xx'$ by a color in $\{4,5\} \backslash\alpha$, a contradiction. Thus $c(z) \cup \{1,3\} = [5]$ and so $c(z) =\{2,4,5\}$. In particular, $z'$ must exist. We again obtain a star $5$-edge-coloring of $G$ from $c$ by coloring $ux,xx', xx_1, x_1x_1'$ by colors $3, c(zz_{1}), 2,1$ in order and then recoloring $uz,zz'$ by colors $c(zz'),2$ in order, a contradiction. Thus $1 \in c(y_{1})$ and $2\in c(z_1)$. This proves ($*$). \medskip By ($*$), $1 \in c(y_{1})$ and $2\in c(z_1)$. Then $y_1\ne z_1$, and $c(yy_1), c(zz_1)\notin \{1,2\}$. We may further assume that $ c(zz_1)=3$. Let $\alpha, \beta\notin c(z_{1})$ and let $\gamma, \lambda \notin c(y_{1})$, where $\alpha, \beta, \gamma, \lambda\in [5]$. Since $\alpha, \beta\notin c(z_{1})$, we may assume that $ c(yy_1)\ne\alpha$. We may further assume that $\gamma\ne \alpha$. If $\lambda\ne\alpha$ or $\gamma\notin\{3,\beta\}$, then we obtain a star $5$-edge-coloring, say $c'$, of $G\backslash A$ from $c$ by recoloring the edges $uz, zz', uy, yy'$ by colors $\alpha, \beta, \gamma, \lambda$, respectively. Then $c'$ is a star $5$-edge-coloring of $G\backslash A$ with $c'(uz)\notin c'(z_1)$, contrary to ($*$). Thus $\lambda=\alpha$ and $\gamma\in\{3,\beta\}$. 
By ($*$), $1\in c(y_1)$ and so $\alpha=\lambda\ne1$ and $\gamma\ne1$. Let $c'$ be obtained from $c$ by recoloring the edges $uz, zz', yy'$ by colors $\alpha, \beta, \gamma$, respectively. Then $c'$ is a star $5$-edge-coloring of $G\backslash A$ with $c'(uz)\notin c'(z_1)$, which again contradicts ($*$). \medskip This completes the proof of Lemma~\ref{noBad}. \hfill\vrule height3pt width6pt depth2pt\bigskip \begin{lem}\label{main} For any $3$-vertex $u$ in $H$ with $N_{H}(u) = \{x, y, z\}$, if both $x$ and $y$ are bad, then $zx_1, zy_1\notin E(H)$, and $z$ must be a $3_0$-vertex in $H$, where $x_1$ and $ y_1$ are the other neighbors of $x$ and $y$ in $H$, respectively. \end{lem} \noindent {\it Proof.} Let $u, x, y, z, x_1, y_1$ be given as in the statement. Since $d_{H}(x)=d_{H}(y)=2$, by Lemma~\ref{3nbr}, $u, x, y, z$ are all distinct. By Lemma~\ref{noBad}, $d_{H}(z)=3$. Clearly, both $x_1$ and $y_1$ are bad and so $z\ne x_1, y_1$. By Lemma~\ref{noC3}, $xy\notin E(G)$ and so $N_{H}(u)$ is an independent set in $H$. By Lemma~\ref{noC4}, $x_1\ne y_1$. It follows that $u, x, y, z, x_1, y_1$ are all distinct. We first show that $zx_1, zy_1\notin E(H)$. Suppose that $zx_1\in E(H)$ or $zy_1\in E(H)$, say the latter. Then $zy_1$ is not a multiple edge because $d_H(y_1)=2$. Let $z_1$ be the third neighbor of $z$ in $H$. By Lemma~\ref{2nbr}, let $v$ be the other neighbor of $x_1$ in $H$. Then $v\ne y_1$. Notice that $x_1$ and $z_1$ are not necessarily distinct. Let $A=\{u, x, y, y_1, x_1'\}$. Since $G$ is $5$-critical, let $c: E(G\backslash A)\rightarrow [5]$ be any star $5$-edge-coloring of $G\backslash A$. We may assume that $1,2\notin c(z_1)$ and $c(zz_1)=3$. Let $\alpha \in [5]\backslash( c(v)\cup \{1\})$ and $\beta\in [5]\backslash(c(v)\cup\{\alpha\})$. Then we obtain a star $5$-edge-coloring of $G$ from $c$ by first coloring the edges $uz, zy_1, xx_1, x_1x_1'$ by colors $1, 2,\alpha, \beta$ in order, and then coloring $ux$ by a color $\gamma$ in $[5]\backslash \{1,\alpha,\beta,c(x_1v)\}$, $xx'$ by a color in $[5]\backslash \{1,\alpha,\gamma,c(x_1v)\}$, $uy$ by a color $\theta$ in $[5]\backslash \{1,2,3, \gamma\}$, $yy_1$ by a color $\mu$ in $[5]\backslash \{1,2,\gamma,\theta\}$, $yy'$ by a color in $[5]\backslash \{2,\gamma,\theta,\mu\}$, $y_1y_1'$ by a color in $[5]\backslash \{1,2, \mu\}$, a contradiction. This proves that $zx_1, zy_1\notin E(H)$. \medskip It remains to show that $z$ must be a $3_0$-vertex in $H$. Suppose that $z$ is not a $3_0$-vertex in $H$. Since $d_{H}(u)=3$, we see that $z$ is either a $3_1$-vertex or a $3_2$-vertex in $H$. Let $N_{H}(z)=\{u,s,t\}$ with $d_{H}(s)=2$. By Lemma~\ref{2nbr} applied to the vertex $s$, $s\ne t$. Since $zx_1, zy_1\notin E(H)$, we see that $x_1, y_1, s,t$ are all distinct. By Lemma~\ref{2nbr}, let $v, w, r$ be the other neighbor of $x_1, y_1, s$ in $H$, respectively. Note that $r$, $t$, $v$, $w$ are not necessarily distinct. By Lemma~\ref{noC4}, both $v$ and $w$ must be $3$-vertices in $H$. We next prove that \medskip \noindent (a) if $x'$ or $y'$ exists, then for any star $5$-edge-coloring $c^*$ of $G\backslash \{x', y'\}$, $c^*(xx_1)\in c^*(v)$ or $c^*(yy_1)\in c^*(w)$.\medskip To see why (a) is true, suppose that there exists a star $5$-edge-coloring $c^*: E(G\backslash \{x', y'\})\rightarrow [5]$ such that $c^*(xx_1)\notin c^*(v)$ and $c^*(yy_1)\notin c^*(w)$. 
Then we obtain a star $5$-edge-coloring of $G$ from $c^*$ by coloring $xx'$ by a color in $[5]\backslash(\{c^*(xx_1)\}\cup c^*(u))$ and $yy'$ by a color in $[5]\backslash(\{c^*(yy_1)\}\cup c^*(u))$, a contradiction. This proves (a). \medskip Let $A$ be the set containing $x, y$ and the $1$-neighbor of each of $ x_1, y_1$ in $G$ if it exists. Since $G$ is $5$-critical, let $c_1: E(G\backslash A)\rightarrow [5]$ be any star $5$-edge-coloring of $G\backslash A$. Let $c$ be a star $5$-edge-coloring of $G\backslash\{x, x', y', x_1'\}$ obtained from $c_1$ by coloring $yy_1$ by a color $\alpha$ in $[5]\backslash (c_1(w)\cup\{c_1(uz)\})$, $uy$ by a color in $[5]\backslash(c_1(z)\cup \{\alpha\})$, and $y_1y_1'$ by a color $\beta$ in $[5]\backslash (c_1(w)\cup \{\alpha\})$. We may assume that $c(uz)=1$, $c(zs)=2$ and $c(zt)=3$. By the choice of $c(uy)$, we may further assume that $c(uy)=4$. We next obtain a contradiction by extending $c$ to be a star $5$-edge-coloring of $G$ (when neither of $x'$ and $y'$ exists) or a star $5$-edge-coloring of $G\backslash\{x', y'\}$ (when $x'$ or $y'$ exists) which violates (a). We consider the worst scenario when $x'$ and $y'$ exist. We first prove two claims. \bigskip \noindent {\bf Claim 1}: $\beta=4$ or $c(y_1w)=4$.\medskip \noindent {\it Proof.} Suppose that $\beta\ne 4$ and $c(y_1w)\ne4$. We next show that $c(v)\cup\{1,4\}\ne [5]$. Suppose that $c(v)\cup\{1,4\}= [5]$. Then $c(v)=\{2,3,5\}$. Clearly, $c(x_1v)=5$, otherwise, coloring $ux$, $xx_1$, $x_1x'_1$ by colors $5,1,4$ in order, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a), a contradiction. We see that $1\in c(s)\cap c(t)$, otherwise, we may assume that $1\notin c(s)$, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a) as follows: when $\alpha\ne2$, color $ux, xx_1, x_1x_1'$ by colors $2, 4,1$ in order; when $\alpha=2$, first color $ux, xx_1, x_1x_1'$ by colors $2, 4,1$ in order and then recolor $yy_1, y_1y_1'$ by colors $\beta,2$ in order. It follows that $4, 5\in c(s)\cup c(t)$, otherwise, say $\theta\in\{4,5\}$ is not in $ c(s)\cup c(t)$, let $\alpha'\in\{2,3\}\backslash \alpha$, now either coloring $ux, xx_1, x_1x_1'$ by colors $\alpha', 4,1$ in order and then recoloring $uz$ by color $5$ when $\theta=5$; or coloring $ux, xx_1, x_1x_1'$ by colors $\alpha', 1, 4$ in order and then recoloring $uz, uy$ by colors $4, 1$ in order when $\theta=4$, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). Thus $c(s)=\{1,2,\theta\}$ and $c(t)=\{1,3,9-\theta\}$, where $\theta\in\{4,5\}$. If $c(ss')=\theta$ or $c(sr)= \theta$ and $2\notin c(r)$, then we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ (which violates (a)) as follows: when $\theta=5$, color $ux, xx_1, x_1x_1'$ by colors $3,1,4$ in order and then recolor $uz$ by color $5$; when $\theta=4$ and $\alpha\in\{2,5\}$, first color $ux, xx_1, x_1x'_1$ by colors $3, 1, 4$ in order, and then recolor $uz, uy$ by colors $4, 1$ in order; when $\theta=4$ and $\alpha=3$ and $\beta\ne 5$, color $ux, xx_1, x_1x'_1$ by colors $5, 1, 4$ in order and then recolor $uz, uy, yy_1, y_1y_1'$ by colors $4, 3, \beta, 3$ in order; when $\theta=4$ and $\alpha=3$ and $\beta=5$, color $ux, xx_1, x_1x'_1$ by colors $3, 1, 4$ in order and then recolor $uz, uy, yy_1, y_1y_1'$ by colors $4, 1, 5, 3$ in order. Thus $c(ss')=1$, $c(sr)=\theta$ and $2\in c(r)$. 
Now recoloring the edge $ss'$ by a color in $\{3,9-\theta\}\backslash c(r)$ yields a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ satisfying $\beta\ne 4$, $c(y_1w)\ne4$, $c(v)\cup\{1,4\}= [5]$ and $c(x_1v)=5$ but $1\notin c(s)\cap c(t)$, a contradiction. This proves that $c(v)\cup\{1,4\}\ne [5]$. \medskip Since $c(v)\cup\{1,4\}\ne [5]$, we see that $[5]\backslash (c(v)\cup\{1,4\})= \{5\}$, otherwise, coloring $ux$ by color $5$, $xx_1$ by a color $\gamma$ in $[5]\backslash(c(v)\cup\{1,4,5\})$, and $x_1x'_1$ by a color in $[5]\backslash (c(v)\cup \gamma)$, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). Clearly, $2,3\in c(v)$ and $\{1,4\}\backslash c(v)\ne\emptyset$. Let $\gamma\in \{1,4\}\backslash c(v)$ and $\alpha'\in\{2,3\}\backslash \alpha$. Then $1\in c(s)\cap c(t)$, otherwise, we may assume that $1\notin c(s)$, now coloring $ux, xx_1, x_1x_1'$ by colors $2, 5,\gamma$ in order yields a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). It follows that $4, 5\in c(s)\cup c(t)$, otherwise, say $\theta\in\{4,5\}$ is not in $c(s)\cup c(t)$, first recoloring $uz$ by color $\theta$ and then either coloring $ux, xx_1, x_1x_1'$ by colors $\alpha', 5, \gamma$ in order and then recoloring $uy$ by color $1$ when $\theta=4$; or coloring $ux, xx_1, x_1x_1'$ by colors $\alpha', 1, 5$ in order when $\theta=5$ and $\gamma=1$; or coloring $ux, xx_1, x_1x_1'$ by colors $1,4,5$ in order when $\theta=5$, $\gamma=4$ and $c(x_1v)\ne1$; or coloring $ux, xx_1, x_1x_1'$ by colors $\alpha',4,5$ in order when $\theta=5$, $\gamma=4$ and $c(x_1v)=1$, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). Thus $c(s)=\{1,2,\theta\}$ and $c(t)=\{1,3,9-\theta\}$, where $\theta\in\{4,5\}$. If $c(ss')=\theta$ or $c(sr)= \theta$ and $2\notin c(r)$, then we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ (which violates (a)) as follows: when $\theta=5$ and $\gamma=1$, color $ux, xx_1, x_1x_1'$ by colors $3, 1, 5$ in order and then recolor $uz$ by color $5$; when $\theta=5$, $\gamma=4$ and $c(x_1v)\ne1$, color $ux, xx_1, x_1x_1'$ by colors $1, 4, 5$ in order and then recolor $uz$ by color $5$; when $\theta=5$, $\gamma=4$ and $c(x_1v)=1$, color $ux, xx_1, x_1x_1'$ by colors $3, 4, 5$ in order and then recolor $uz$ by color $5$ (and further recolor $yy_1$ by $\beta$ and $y_1y_1'$ by $\alpha$ when $\alpha=3$); when $\theta=4$ and $\beta\ne1$, color $ux, xx_1, x_1x_1'$ by colors $3, 5, \gamma$ in order and then recolor $uz, uy$ by colors $4, 1$ in order, and finally recolor $yy_1$ by a color $\beta'\in \{\alpha,\beta\}\backslash 3$ and $y_1y'_1$ by a color in $\{\alpha,\beta\}\backslash \beta'$; when $\theta=4$, $\beta=1$ and $\gamma=1$, color $ux, xx_1, x_1x_1'$ by colors $5,1,5$ in order and then recolor $uz, uy, yy_1, y_1y_1'$ by colors $4, 3, 1, \alpha$ in order; when $\theta=4$, $\beta=1$, $\gamma=4$ and $\alpha\ne3$, color $ux, xx_1, x_1x_1'$ by colors $3, 5, 4$ in order and then recolor $uz, uy$ by colors $4, 1$ in order; when $\theta=4$, $\beta=1$, $\gamma=4$ and $\alpha=3$, let $\gamma'\in\{1,3\}\backslash c(x_1v)$, color $ux, xx_1, x_1x_1'$ by colors $\gamma', 5, 4$ in order and then recolor $uz$ by color $4$, $uy$ by color $5$, $yy_1$ by a color $\beta'$ in $\{1,3\}\backslash \gamma'$ and $y_1y_1'$ by a color in $\{1,3\}\backslash \beta'$. Thus $c(ss')=1$, $c(sr)=\theta$ and $2\in c(r)$.
Now recoloring the edge $ss'$ by a color in $\{3,9-\theta\}\backslash c(r)$ yields a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ satisfying $\beta\ne 4$, $c(y_1w)\ne4$ and $[5]\backslash (c(v)\cup\{1,4\})= \{5\}$ but $1\notin c(s)\cap c(t)$, a contradiction. This completes the proof of Claim 1. \hfill\vrule height3pt width6pt depth2pt\\ \noindent {\bf Claim 2}: $\beta=4$. \medskip \noindent {\it Proof.} Suppose that $\beta\ne 4$. By Claim 1, $c(y_1w)=4$. We first consider the case when $c(w)=\{2,3,4\}$. Then $\alpha=5$ and $\beta=1$. We claim that $c(v)\cup\{1,4\}\ne [5]$. Suppose that $c(v)\cup\{1,4\}= [5]$. Then $c(v)=\{2,3,5\}$. Clearly, $1\in c(s)\cap c(t)$, otherwise, we may assume that $1\notin c(s)$, now coloring $ux, x x_1, x_1x'_1$ by colors $5,4,1$ in order and then recoloring $uy$ by $2$, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). It follows that $4, 5\in c(s)\cup c(t)$, otherwise, say $\theta\in\{4,5\}$ is not in $c(s)\cup c(t)$, now coloring $ux, x x_1, x_1x'_1$ by colors $3,1,4$ in order and then recoloring $uz, uy, yy_1, y_1y_1'$ by colors $\theta, 2, 1,5$ in order, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). Thus $c(s)=\{1,2,\theta\}$ and $c(t)=\{1,3,9-\theta\}$, where $\theta\in\{4,5\}$. If $c(ss')=\theta$ or $c(sr)= \theta$ and $2\notin c(r)$, then coloring $ux, x x_1, x_1x'_1$ by colors $3,1,4$ in order and then recoloring $uz, uy, yy_1, y_1y_1'$ by colors $\theta, 9-\theta, 1,5$ in order yields a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). Thus $c(ss')=1$, $c(sr)= \theta$ and $2\in c(r)$. Now recoloring the edge $ss'$ by a color in $\{3,9-\theta\}\backslash c(r)$ yields a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ satisfying $\alpha=5$, $\beta=1$, $c(y_1w)=4$ and $c(v)\cup\{1,4\}= [5]$ but $1\notin c(s)\cap c(t)$, a contradiction. This proves that $c(v)\cup\{1,4\}\ne [5]$. Let $\eta=5$ when $5\notin c(v)$ or $\eta \in \{2,3\}\backslash c(v)$ when $5\in c(v)$. Let $\mu\in [5]\backslash( c(v)\cup\{\eta\})$. By Claim 1 and the symmetry between $x$ and $y$, either $4\notin c(v)$ or $5\notin c(v)$. We see that $\mu=4$ when $\eta\ne 5$. Then $1\in c(s)\cap c(t)$, otherwise, we may assume $1\notin c(s)$, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ (which violates (a)) as follows: when $\eta\neq 2$, color $ux, xx_1, x_1x_1'$ by colors $2, \eta, \mu$ in order; when $\eta=2$, then $\mu=4$, first recolor $uy$ by color $2$ and then color $ux, xx_1, x_1x'_1$ by colors $5,4, 2$ in order. It follows that $4, 5\in c(s)\cup c(t)$, otherwise, say $\theta\in\{4,5\}$ is not in $c(s)\cup c(t)$, now first recoloring $uz, yy_1, y_1y_1'$ by colors $\theta, 1,5$ in order, and then coloring $xx_1, x_1x'_1$ by colors $\eta,\mu$ in order, $ux$ by a color $\gamma$ in $[5]\backslash\{\mu,\eta,\theta, c(x_1v)\}$, and finally coloring $uy$ either by a color in $\{2,3\}\backslash\eta$ when $\gamma=1$ or by a color in $\{2,3\}\backslash\gamma$ when $\gamma\ne1$, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). Thus $c(s)=\{1,2,\theta\}$ and $c(t)=\{1,3,9-\theta\}$, where $\theta\in\{4,5\}$.
If $c(ss')=\theta$ or $c(sr)= \theta$ and $2\notin c(r)$, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ (which violates (a)) as follows: when $\theta=4$ and $\eta=5$, color $ux, xx_1, x_1x_1'$ by colors $3, 5, \mu$ in order and then recolor $uz, uy$ by colors $4,1$ in order; when $\theta=4$ and $\eta\in\{2,3\}$, then $\mu=4$, first recolor $uz, uy$ by colors $4,3$ in order and then color $xx_1, x_1x_1'$ by colors $\eta, 4$ in order and finally color $ux$ by a color $\gamma$ in $\{1,5\}\backslash c(x_1v)$, $yy_1$ by a color $\lambda$ in $\{1,5\}\backslash \gamma$, and $y_1y_1'$ by a color in $\{1,5\}\backslash \lambda$; when $\theta=5$ and $\eta\in\{2,3\}$, then $\mu=4$, color $ux, xx_1, x_1x_1'$ by colors $1, 4, \eta$ in order and then recolor $uz, uy, yy_1, y_1y_1'$ by colors $5,3, 1,5$ in order; when $\theta=5$, $\eta=5$ and $\mu\ne3$, color $ux, xx_1, x_1x_1'$ by colors $1, \mu, 5$ in order and then recolor $uz, uy, yy_1, y_1y_1'$ by colors $5,3, 1,5$ in order; when $\theta=5$, $\eta=5$ and $\mu=3$, first recolor $uz, uy, yy_1, y_1y_1'$ by colors $5,3, 1,5$ in order, then color $xx_1, x_1x_1'$ by colors $5, 3$ in order and finally color $ux$ by a color in $\{1,4\}\backslash c(x_1v)$. Thus $c(ss')=1$, $c(sr)= \theta$ and $2\in c(r)$. Now recoloring the edge $ss'$ by a color in $\{3,9-\theta\}\backslash c(r)$ yields a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ satisfying $\alpha=5$, $\beta=1$, $c(z)=\{1,2,3\}$, $c(uy)=c(y_1w)=4$ and $c(v)\cup\{1,4\}\ne[5]$ but $1\notin c(s)\cap c(t)$, a contradiction.\\ We next consider the case when $c(w)\neq\{2,3,4\}$. If $\alpha, \beta\ne5$, then recoloring $uy$ by color $5$ yields a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ with $c(uy)\ne c(y_1y_1'), c(y_1w)$, contrary to Claim 1. Thus either $\alpha=5$ or $\beta=5$. Then $1\in c(w)$ because $c(w)\neq\{2,3,4\}$ and $|c(w)|=3$. It follows that $\alpha, \beta\in\{2,3,5\}$ and $5\in\{\alpha, \beta\}$. We may assume that $\alpha\in\{2,3\}$ and $\beta=5$ by permuting the colors on $yy_1$ and $y_1y_1'$ if needed. Then $4,5\in c(s)\cup c(t)$, otherwise, say $\theta\in\{4,5\}$ is not in $c(s)\cup c(t)$, we obtain a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ which contradicts Claim 1 by recoloring $uz, uy$ by colors $\theta, 1$ in order. Let $\alpha'\in\{2,3\}\backslash \alpha$. We next show that $c(ss')=1$, $c(sr)= \theta$ and $2\in c(r)$.\medskip Suppose first that $c(v)\cup\{1,4\}= [5]$. Then $c(v)=\{2,3,5\}$. We see that $c(x_1v)=5$, otherwise, coloring $ux$, $xx_1$, $x_1x'_1$ by colors $5,1,4$ in order, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). Clearly, $1\in c(s)\cap c(t)$, otherwise, we may assume that $1\notin c(s)$, now coloring $ux, x x_1, x_1x'_1$ by colors $2,4,1$ in order and then recoloring $yy_1,y_1y_1'$ by colors $5,\alpha$ in order, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). Since $4,5\in c(s)\cup c(t)$, we see that $c(s)=\{1,2,\theta\}$ and $c(t)=\{1,3,9-\theta\}$, where $\theta\in\{4,5\}$. If $c(ss')=\theta$ or $c(sr)= \theta$ and $2\notin c(r)$, then recoloring $uz, uy$ by colors $\theta, 1$ in order yields a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ with $c(uy)\ne c(y_1y_1'), c(y_1w)$, contrary to Claim 1. Thus $c(ss')=1$, $c(sr)= \theta$ and $2\in c(r)$. Next suppose that $c(v)\cup\{1,4\}\ne [5]$. Let $\eta=5$ when $5\notin c(v)$ or $\eta \in \{2,3\}\backslash c(v)$ when $5\in c(v)$.
Let $\mu\in [5]\backslash( c(v)\cup\{\eta\})$. By Claim 1 and the symmetry between $x$ and $y$, either $4\notin c(v)$ or $5\notin c(v)$. We see that $\mu=4$ when $\eta\ne 5$. Then $1\in c(s)\cap c(t)$, otherwise, we may assume $1\notin c(s)$, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ (which violates (a)) as follows: when $\eta=5$, color $ux, xx_1, x_1x_1'$ by colors $4, 5, \mu$ in order and then recolor $uy, yy_1, y_1y_1'$ by colors $2,5,\alpha$ in order; when $\eta\in\{2,3\}$, then $\mu=4$, color $ux, xx_1, x_1x'_1$ by colors $5,\eta, 4$ in order. Since $4,5\in c(s)\cup c(t)$, we see that $c(s)=\{1,2,\theta\}$ and $c(t)=\{1,3,9-\theta\}$, where $\theta\in\{4,5\}$. If $c(ss')=\theta$ or $c(sr)= \theta$ and $2\notin c(r)$, then recoloring $uz, uy$ by colors $\theta, 1$ in order yields a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ with $c(uy)\ne c(y_1y_1'), c(y_1w)$, contrary to Claim 1. Thus $c(ss')=1$, $c(sr)= \theta$ and $2\in c(r)$. \medskip Now recoloring the edge $ss'$ by a color in $\{3,9-\theta\}\backslash c(r)$ yields a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ satisfying $\alpha\in\{2,3\}$, $\beta=5$, $c(y_1w)=4$ and $c(w)\neq\{2,3,4\}$ but $1\notin c(s)\cap c(t)$, a contradiction. This completes the proof of Claim 2. \hfill\vrule height3pt width6pt depth2pt\\ By Claim 2, $\beta=4$. Suppose that $\alpha\ne5$. Then $\alpha\in\{2,3\}$. Note that $\alpha\notin c(w)\cup\{1\}$. Now recoloring $uy$ by color $5$, we obtain a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ satisfying $c(uz)=1$, $c(zs)=2$ and $c(zt)=3$ but $\beta\ne c(uy)$, contrary to Claim 2. Thus $\alpha=5$ and so $c(w)=\{1,2,3\}$. By the symmetry of $x$ and $y$, $c(v)=\{1,2,3\}$. Then $1\in c(s)\cap c(t)$, otherwise, we may assume that $1\notin c(s)$, now coloring $ux, xx_1, x_1x_1'$ by colors $2,5,4$ in order yields a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). It follows that $4, 5\in c(s)\cup c(t)$, otherwise, say $\theta\in\{4,5\}$ is not in $c(s)\cup c(t)$, now first coloring $ux, xx_1, x_1x_1'$ by colors $2,9-\theta, \theta$ in order and then recoloring $uz, uy, yy_1, y_1y_1'$ by colors $\theta, 3, 9-\theta, \theta$ in order, we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ which violates (a). Thus $c(s)=\{1,2,\theta\}$ and $c(t)=\{1,3,9-\theta\}$, where $\theta\in\{4,5\}$. If $c(ss')=\theta$ or $c(sr)=\theta$ and $2\notin c(r)$, then we obtain a star $5$-edge-coloring of $G\backslash\{x', y'\}$ (which violates (a)) by coloring $ux, xx_1, x_1x_1'$ by colors $1, 9-\theta, \theta$ in order, and then recoloring $uz, uy, yy_1, y_1y_1'$ by colors $\theta, 3, 9-\theta, \theta$ in order. Thus $c(ss')=1$, $c(sr)=\theta$ and $2\in c(r)$. Now recoloring $ss'$ by a color in $\{3,9-\theta\}\backslash c(r)$, we obtain a star $5$-edge-coloring $c$ of $G\backslash\{x, x', y', x_1'\}$ satisfying $c(uz)=1$, $c(zs)=2$, $c(zt)=3$, $\beta= 4$ and $\alpha=5$ but $1\notin c(s)\cap c(t)$, a contradiction. \medskip This completes the proof of Lemma~\ref{lem:main}.\hfill\vrule height3pt width6pt depth2pt\\ \section{Proof of Theorem~\ref{main}} We are now ready to prove Theorem~\ref{main}. Suppose the assertion is false. Let $G$ be a subcubic multigraph with $\text{mad}(G)<12/5$ and $\chi'_s(G)>5$. Among all counterexamples we choose $G$ so that $|G|$ is minimum. By the choice of $G$, $G$ is connected, star $5$-critical, and $\text{mad}(G)<12/5$. For each $i\in[3]$, let $A_i=\{v\in V(G): \, d_G(v)=i\}$ and let $n_i=|A_i|$.
Since $\text{mad}(G)<12/5$, we see that $3n_3<2n_2+7n_1$ and so $A_1\cup A_2\ne\emptyset$. By Lemma~\ref{deg=1}(a), $A_1$ is an independent set in $G$ and $N_G(A_1)\subseteq A_3$. Let $H=G\backslash A_1$. Then $H$ is connected and $\text{mad}(H)<12/5$. By Lemma~\ref{deg=1}(b), $\delta(H)\ge2$. By Lemma~\ref{3nbr}, every $3_2$-vertex in $H$ has three distinct neighbors in $H$. We say that a $3_2$-vertex in $H$ is \dfn{bad} if both of its $2$-neighbors are bad. A vertex $u$ is a \dfn{good} (resp. \dfn{bad}) $2$-neighbor of a vertex $v$ in $H$ if $uv\in E(H)$ and $u$ is a good (resp. bad) $2$-vertex. By Lemma~\ref{lem:main}, every bad $3_2$-vertex in $H$ has a unique $3_0$-neighbor. We now apply the discharging method to obtain a contradiction. \medskip For each vertex $v\in V(H)$, let $\omega(v):= d_{H}(v)-\frac{12}{5}$ be the initial charge of $v$. Then $ \sum_{v\in V(H)} \omega(v) =2e(H)-\frac{12}{5}|H|=|H|(2e(H)/|H|-\frac{12}{5})<0$. Notice that for each $v\in V(H)$, $\omega(v)=2-\frac{12}{5}=-\frac{2}{5}$ if $d_{H}(v)=2$, and $\omega(v)=3-\frac{12}{5}=\frac{3}{5}$ if $d_{H}(v)=3$. We will redistribute the charges of vertices in $H$ as follows. \medskip \par\hangindent\parindent\fattextindent {(R1):} every bad $3_2$-vertex in $H$ takes $\frac1 {5}$ from its unique $3_0$-neighbor. \par\hangindent\parindent\fattextindent {(R2):} every $3_1$-vertex in $H$ gives $\frac3 {5}$ to its unique $2$-neighbor. \par\hangindent\parindent\fattextindent {(R3):} every $3_2$-vertex in $H$ gives $\frac1 {5}$ to each of its good $2$-neighbors (possibly none) and $\frac25$ to each of its bad $2$-neighbors (possibly none). \par\hangindent\parindent\fattextindent {(R4):} every $3_3$-vertex in $H$ gives $\frac1 {5}$ to each of its $2$-neighbors.\medskip Let $\omega^*$ be the new charge of $H$ after applying the above discharging rules in order. It suffices to show that $\sum_{v\in V(H)} \omega^*(v)\geq0$. For any $v\in V(H)$ with $d_{H}(v)=2$, by Lemma~\ref{2nbr}, $v$ has two distinct neighbors in $H$. If $v$ is a good $2$-vertex, then $v$ takes at least $\frac15$ from each of its $3$-neighbors under (R2), (R3) and (R4), and so $\omega^*(v)\geq0$. Next, if $v$ is a bad $2$-vertex, let $x$, $y$ be the two neighbors of $v$ in $H$. We may assume that $y$ is a bad $2$-vertex. By Lemma~\ref{2nbr}, let $z$ be the other neighbor of $y$ in $H$. By Lemma~\ref{noC4}, we may assume that $d_{H}(x)=3$. By Lemma~\ref{noBad}, $x$ is either a $3_1$-vertex or a $3_2$-vertex in $H$. Under (R2) and (R3), $v$ takes at least $\frac25$ from $x$. If $d_{H}(z)=3$, then by a similar argument, $y$ must take at least $\frac25$ from $z$. In this case, $\omega^*(v)+\omega^*(y)\geq0$. If $d_{H}(z)=2$, then $z$ is bad. By Lemma~\ref{2nbr}, let $w$ be the other neighbor of $z$. By Lemma~\ref{noC4}, each of $x$ and $w$ must be a $3_1$-vertex in $H$. Under (R2), $v$ takes $\frac35$ from $x$ and $z$ takes $\frac35$ from $w$. Hence, $\omega^*(v)+\omega^*(y)+\omega^*(z)\geq0$. \medskip For any $v\in V(H)$ with $d_{H}(v)=3$, if $v$ is a bad $3_2$-vertex, then $v$ has a unique $3_0$-neighbor by Lemma~\ref{lem:main}. Under (R1) and (R3), $v$ first takes $\frac15$ from its unique $3_0$-neighbor and then gives $\frac25$ to each of its bad $2$-neighbors, we see that $\omega^*(v)\geq0$. If $v$ is not a bad $3_2$-vertex, then $v$ gives either nothing or one of $\frac15$, $\frac25$, and $\frac35$ in total to its neighbors under (R1), (R2), (R3) and (R4). In either case, $\omega^*(v)\ge0$.
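The charge bookkeeping above can also be tabulated mechanically. The following short script is only a sanity check of the arithmetic in the preceding paragraphs (a sketch, not part of the proof); the vertex types and the worst-case transfers under (R1)--(R4) are exactly those listed above.
\begin{verbatim}
from fractions import Fraction as F

MAD = F(12, 5)
charge = lambda d: d - MAD      # initial charge of a vertex of degree d

# worst-case final charge for each vertex type under (R1)-(R4)
cases = {
    "3_0-vertex (at most three bad 3_2-neighbors)": charge(3) - 3 * F(1, 5),
    "3_1-vertex":                                   charge(3) - F(3, 5),
    "3_2-vertex, not bad":                          charge(3) - F(2, 5) - F(1, 5),
    "bad 3_2-vertex":                               charge(3) + F(1, 5) - 2 * F(2, 5),
    "3_3-vertex":                                   charge(3) - 3 * F(1, 5),
    "good 2-vertex":                                charge(2) + 2 * F(1, 5),
    "bad 2-vertices v, y (two 3-neighbors)":        2 * charge(2) + 2 * F(2, 5),
    "bad 2-vertices v, y, z (two 3_1-neighbors)":   3 * charge(2) + 2 * F(3, 5),
}
for name, final in cases.items():
    assert final >= 0, name
print("all worst-case final charges are nonnegative")
\end{verbatim}
Every case comes out nonnegative, matching the case analysis above.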
Consequently, $\sum_{v\in V(H)} \omega^*(v)\geq0$, contrary to the fact that $\sum_{v\in V(H)} \omega^*(v)=\sum_{v\in V(H)}\omega(v)<0$. \medskip This completes the proof of Theorem~\ref{main}. \hfill\vrule height3pt width6pt depth2pt\medskip \vspace{5mm} \noindent {\bf Acknowledgments.} Zi-Xia Song would like to thank Yongtang Shi and the Chern Institute of Mathematics at Nankai University for hospitality and support during her visit in May 2017. \medskip \noindent Hui Lei and Yongtang Shi are partially supported by the National Natural Science Foundation of China and the Natural Science Foundation of Tianjin (No.17JCQNJC00300). \medskip \noindent Tao Wang is partially supported by the National Natural Science Foundation of China (11101125) and the Fundamental Research Funds for Universities in Henan (YQPY20140051).
\section{INTRODUCTION} \label{sec:intro} T Tauri stars are widely known to be associated with protoplanetary disks or T Tauri disks, which often show Keplerian rotation, as has been shown by interferometric observations in the last two decades \citep{gu.du1998,gu1999,si2000,qi2004,hu2009,ro2010}. Class 0/I protostars have also been, on the other hand, considered to be associated with protostellar disks, which are supposed to be precursors of T Tauri disks. Details of protostellar disks around protostars, such as their structures and dynamics, however, are poorly characterized because protostellar disks are deeply embedded in protostellar envelopes. The question of whether disks and envelopes in protostellar systems can be distinguished from each other by detailed observations is therefore critical. One promising way to achieve this goal is to distinguish them kinematically by inspecting their rotational motion, i.e., envelopes that often have infall motion with conserved angular momentum are expected to have rotation proportional to $r^{-1}$, while disks are expected to have Keplerian rotation proportional to $r^{-0.5}$. Initial attempts with such an approach have been successfully made recently, identifying a couple of disks with Keplerian rotation around protostars \citep{cho2010,ta2012,to2012,ye2013,mu2013,le2014}. Further attempts have been made more recently with higher angular resolutions ($\sim 100$ AU) and sensitivity, enabling us to identify even smaller, fainter disks around younger protostars \citep{ta2014,ye2014,oh2014,aso2015,le2016,ye2017}. Once envelopes and disks are kinematically distinguished, disk radii can also be kinematically estimated. Another important aspect of identifying Keplerian rotation in disks around protostars is that it allows us to estimate the dynamical masses of the central protostars directly, with no assumptions. Such direct estimation of the dynamical mass enables us to examine whether or not infall motions around protostars are free fall, as has been assumed so far. L1527 IRS \citep{oh2014}, L1551 IRS 5 \citep{ch2014}, and TMC-1A \citep{aso2015} have infall motions that appear significantly slower than the free fall velocities yielded by their dynamical masses, while L1489 IRS shows inflow at the free fall velocity toward its Keplerian disk \citep{ye2014}. These previous studies indicate that protostellar disks around protostars are kinematically similar to T Tauri disks in the sense that both kinds of disks show Keplerian rotation. It is not, however, clear whether protostellar disks are structurally similar to T Tauri disks as well. Physical structures of T Tauri disks, such as the radial dependences of surface density and temperature, have been well investigated, suggesting that radial power-law profiles $\Sigma \propto r^{-p}$ with $p\sim 1$ (aside from exponential tails) and $T\propto r^{-q}$ with $q\sim 0.5$ describe them well. Their scale height, described as $H\propto r^{h}$, was also investigated and found to be consistent with hydrostatic equilibrium (HSEQ), with a power-law index of $h=1.25$.
On the other hand, similar studies to investigate the physical structures of protostellar disks have not yet been performed except for very limited cases: with radial and vertical structures for the Butterfly Star \citep{wo2008}, L1527 IRS \citep{to2013}, TMC-1A \citep{aso2015}, four protostars in the NGC 1333 star-forming region, and VLA 1623 \citep{pe2016}; and without vertical structures for TMC-1A, TMC1, TMR1, and L1536 \citep{ha2014}, and seven protostars in the Perseus molecular cloud \citep{se2016}. Understanding physical structures of protostellar disks would be important to see whether disks and their surrounding envelopes can be geometrically distinguished from each other. If this is the case, we can have two independent ways (kinematically and geometrically) to identify disks around protostars. In addition, investigating physical structures of protostellar disks can help us to understand the disk formation process and also the process of evolution into T Tauri disks. In particular, recent observations of T Tauri disks have revealed irregular structures such as spirals, gaps, and central holes, which might be related to planet formation. Tantalizing ring structures have been discovered even in the disk around the class I/${\rm I\hspace{-0.1em}I}$ young star HL Tau. It would be very important to understand when and how such structures are formed in these disks, and to answer this question, it is necessary to investigate the structures of protostellar disks, which would be the precursors of disks with such irregular structures. L1527 IRS (IRAS 04365+2557) is one of the youngest protostars whose disks have been studied. L1527 IRS, located in one of the closest star-forming regions, the Taurus Molecular Cloud ($d=140$ pc), has bolometric luminosity $L_{\rm bol}=2.0\ L_{\odot}$ and bolometric temperature $T_{\rm bol}=44$ K \citep{kr2012}, indicating that L1527 IRS is a relatively young protostar. The systemic velocity of L1527 IRS in the local standard of rest (LSR) frame was estimated to be $V_{\rm LSR}\sim 5.7\ {\rm km~s^{-1}}$ from C$^{18}$O $J=1-0$ observations with the Nobeyama 45 m single-dish telescope \citep{oh1997}, while N$_{2}$H$^{+}$ $J=1-0$ observations with the Five College Radio Astronomy Observatory (FCRAO) 14 m and Institut de Radioastronomie Millimetrique (IRAM) 30 m single-dish telescopes estimated it to be $V_{\rm LSR}\sim5.9\ {\rm km~s^{-1}}$ \citep{ca2002,to2011}. We adopt $V_{\rm LSR}=5.8\ {\rm km~s^{-1}}$ for the systemic velocity of L1527 IRS in this paper, which is reasonable as will be shown later. On a $\sim 30000$ AU scale, a bipolar outflow associated with this source was detected in the east-west direction by FCRAO single-dish observations in $^{12}$CO $J=1-0$ molecular line emission \citep{na2012} and by James Clerk Maxwell Telescope (JCMT) single-dish observations in $^{12}$CO $J=3-2$ molecular line emission \citep{ho1997}. Their results show that the blue and red lobes of the outflow are on the eastern and western sides, respectively. On the other hand, inner parts ($\sim 8000$ AU scale) of the outflow mapped with the Nobeyama Millimeter Array (NMA) in $^{12}$CO $J=1-0$ show the opposite distribution, i.e., stronger blueshifted emission on the western side and stronger redshifted emission on the eastern side \citep{ta1996}. Mid-infrared observations toward L1527 IRS with the Spitzer Space Telescope show bright bipolar scattered light nebulae along the outflow axis on the $\sim 20000$ AU scale \citep{to2008}.
They fitted a protostellar envelope model to near- and mid-infrared scattered light images and the spectral energy distribution (SED). As a result, the inclination angle of the envelope around L1527 IRS was estimated to be $i=85^{\circ}$, where $i=90^{\circ}$ means the edge-on configuration. In addition, \citet{oy2015} found that the western side is closer to the observer. This inclination angle is consistent with a disk-like structure of dust highly elongated along the north-south direction, which was spatially resolved for the first time in 7-mm continuum emission by the Very Large Array (VLA) \citep{lo2002}. By expanding the studies by \citet{to2008}, \citet{to2013} fitted a model composed of an envelope and a disk to (sub)millimeter continuum emissions and visibilities observed with SMA and CARMA as well as infrared images and the SED. Their best-fitting model suggests a highly flared disk structure ($H\propto R^{1.3}$, $H=48$ AU at $R=100$ AU) with a radius of 125 AU. This study has geometrically distinguished the protostellar disk and envelope around L1527 IRS, although they were not kinematically distinguished from one another. The first interferometric observations of the envelope surrounding L1527 IRS in molecular line emission were reported by \citet{oh1997}, identifying an edge-on flattened envelope elongated perpendicularly to the associated outflow, in their C$^{18}$O $J=1-0$ map obtained with NMA at an angular resolution of $\sim 6\arcsec$. It was found that the kinematics of the envelope can be explained with dynamical infall motion $(\sim 0.3\ {\rm km~s^{-1}})$ and slower rotation ($\sim 0.05\ {\rm km~s^{-1}}$) at 2000 AU. The mass infall rate was also estimated to be $\dot{M}\sim 1\times 10^{-6}\ M_{\sun} \, {\rm yr}^{-1}$. Higher-resolution ($\sim 1\arcsec$) observations using the Combined Array for Research in Millimeter Astronomy (CARMA) in the $^{13}$CO $J=2-1$ line were carried out toward this source by \citet{to2012}. They measured emission offsets from the central protostar at each channel and fitted the position-velocity data with a kinematical model using the LIne Modeling Engine (LIME) by assuming Keplerian rotation. According to their best-fit result, the mass of L1527 IRS was estimated to be $M_{*}=0.19\pm 0.04\ M_{\sun}$ and the disk radius was also estimated to be 150 AU. It should be noted, however, that no other kinds of rotation, such as the one conserving its angular momentum, were compared with the observations in that work. They attempted later to compare their rotation curve with rotation laws conserving angular momentum, which did not change their original conclusion \citep{to2013pro}. In order to investigate the rotational velocity around L1527 IRS without assuming Keplerian rotation, a radial profile of the rotational velocity $V_{\rm rot}$ was measured by \citet{ye2013} from their SMA observations in the C$^{18}$O $J=2-1$ line at an angular resolution of $4\farcs 2 \times 2\farcs 5$. In their analysis, the rotation profiles are derived from Position-Velocity (PV) diagrams cut along a line perpendicular to the outflow axis. The rotation profile of L1527 IRS was measured at $r\gtrsim 140$ AU to be $V_{\rm rot}\propto r^{-1.0\pm 0.2}$, which is clearly different from Keplerian rotation $V_{\rm rot}\propto r^{-1/2}$. Further investigation of the rotation profile around L1527 IRS was performed with much higher sensitivity as well as higher angular resolution provided by the Atacama Large Millimeter/submillimeter Array (ALMA) \citep{oh2014}.
The rotation profile obtained in C$^{18}$O $J=2-1$ at a resolution of $\sim 0\farcs9$ mostly shows a velocity inversely proportional to the radius, consistent with the results obtained by \citet{ye2013}, while it also suggests a possibility that the profile at $\lesssim 54$ AU can be interpreted as Keplerian rotation with a central stellar mass of $\sim 0.3\ M_{\sun}$. In addition, the infall velocity in the envelope is found to be slower than the free fall velocity yielded by the expected central stellar mass \citep{oh2014}. In this paper we report new ALMA Cycle 1 observations of L1527 IRS in C$^{18}$O $(J=2-1)$ and 220 GHz continuum, with a $\sim 2$ times higher angular resolution and a $\sim 4$ times higher sensitivity as compared with our previous ALMA Cycle 0 observations, which allow us to give a much better constraint on the rotation profile of the disk and the envelope, and also on their geometrical structures. Our observations and data reduction are described in Section \ref{sec:obs}. In Section \ref{sec:res}, we present the continuum and molecular-line results. In Section \ref{sec:ana}, we analyze the rotation velocity measured with the C$^{18}$O line and perform $\chi ^{2}$ fitting to explain the continuum visibility using a model. In Section \ref{sec:disc}, we investigate the validity and consistency of the model that best reproduces the observations. We present a summary of the results and our interpretation in Section \ref{sec:conc}. \section{ALMA OBSERVATIONS AND DATA REDUCTION} \label{sec:obs} We observed our target, L1527 IRS, during Cycle 1 using ALMA on 2014 July 20. The observations were composed of two tracks on the same day with a separation of $\sim 40$ minutes. Each track was $\sim 30$ minutes including overhead. J0510$+$1800 was observed as the passband, gain, and flux calibrator for the former track and J0423-013 was observed as the flux calibrator for the latter track. Thirty-four antennas were used in the first track, while one antenna was flagged in the latter track. The antenna configuration covers projected baseline lengths from 17 to 648 m (13-474 k$\lambda$ in $uv$-distance at the frequency of C$^{18}$O $J=2-1$). This minimum baseline resolves out more than 50\% of the flux when a structure is extended more than $7\farcs 1$ \citep{wi.we1994}, corresponding to $\sim990$ AU at the distance of L1527 IRS. The coordinates of the map center during the observations were $\alpha {\rm (J2000)}=04^{\rm h}39^{\rm m}53\fs90,\ \delta {\rm (J2000)}=26^{\circ}03\arcmin 10\farcs00$. The C$^{18}$O $J=2-1$ line and 220 GHz continuum emission in Band 6 were observed for 6.9$+$6.7$\approx$14 minutes (on source). To achieve high velocity resolution for the molecular line observations, we configured the correlator in Frequency Division Mode for two spectral windows. Each spectral window has 3840 channels covering a 234 MHz bandwidth. Emission-free channels in the lower side band are used to make the continuum map centered at 220 GHz. The total bandwidth of the continuum map is $\sim234$ MHz. We performed self-calibration for the continuum observations using tasks ($clean$, $gaincal$, and $applycal$) in Common Astronomy Software Applications (CASA), and the obtained calibration table for the continuum observations was applied to the C$^{18}$O observations. The self-calibration has improved the rms noise level of the continuum map by a factor of 2-3, while the noise level of the C$^{18}$O map has been improved by less than a few percent.
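The $uv$-distance range quoted above and the velocity resolution quoted below follow from the observing frequency alone. A minimal sketch of these standard conversions (using the C$^{18}$O $J=2-1$ rest frequency listed in Table \ref{tab:obs}; this is a cross-check of the quoted numbers, not part of the reduction):
\begin{verbatim}
C = 299792458.0           # speed of light [m/s]
NU = 219.560358e9         # C18O J=2-1 rest frequency [Hz]
LAM = C / NU              # wavelength [m], ~1.37 mm

# projected baselines [m] -> uv-distance [kilolambda]
for b_m in (17.4, 647.6):
    print(f"{b_m:6.1f} m -> {b_m / LAM / 1e3:5.1f} klambda")
# prints ~12.7 and ~474.3, i.e., the 13-474 klambda range quoted above

# two binned ~61 kHz channels -> velocity resolution (radio convention)
dnu = 122e3               # [Hz]
print(f"dv = {C * dnu / NU / 1e3:.2f} km/s")   # -> 0.17 km/s
\end{verbatim}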
The noise level of the C$^{18}$O map was measured in emission-free channels. All of the mapping process was carried out with CASA. Because the original map center during the observations was offset by $0\farcs54$ from the continuum peak position estimated from 2D Gaussian fitting in the $uv$-domain, the phase center of the observed visibilities was shifted from the original phase center with $fixvis$ in CASA, making the map center of the resultant maps in this paper the same as the continuum peak position. The visibilities were Fourier transformed and CLEANed. In this process we adopted superuniform weighting with $npixel=2$ and binned two frequency channels; the resultant frequency resolution in this paper is 122 kHz, corresponding to 0.17 ${\rm km~s^{-1}}$ in velocity resolution at the frequency of C$^{18}$O $J=2-1$. We set a $12\arcsec \times 12\arcsec$ area centered on the map center as a CLEAN box with a threshold of $3\sigma$. The synthesized beam sizes of the CLEANed maps are $0\farcs 50\times 0\farcs 40$ for the C$^{18}$O line, and $0\farcs47\times 0\farcs37$ for the continuum emission. The parameters of our observations mentioned above and others are summarized in Table \ref{tab:obs}. \begin{deluxetable}{c|cc} \tablecaption{Summary of the ALMA observational parameters \label{tab:obs}} \tablehead{ \colhead{Date} & \multicolumn{2}{c}{2014.Jul.20}\\ \colhead{Target} & \multicolumn{2}{c}{L1527 IRS}\\ \colhead{Coordinate center} & \multicolumn{2}{c}{R.A. (J2000)=4$^{\rm h}$39$^{\rm m}$53$^{\rm s}\!\!$.9}\\ \colhead{} & \multicolumn{2}{c}{Dec. (J2000)=26$^{\circ }03\arcmin 10\farcs 0$}\\ \colhead{Projected baseline length} & \multicolumn{2}{c}{17.4--647.6 m}\\ \colhead{Primary beam} & \multicolumn{2}{c}{28\farcs 6}\\ \colhead{Passband calibrator} & \multicolumn{2}{c}{J0510$+$1800}\\ \colhead{Flux calibrator} & \multicolumn{2}{c}{J0423$-$013, J0510$+$1800}\\ \colhead{Gain calibrator} & \multicolumn{2}{c}{J0510$+$1800}} \colnumbers \startdata & Continuum & C$^{18}$O $J=2-1$\\ \hline Frequency (GHz) & 219.564200 & 219.560358\\ Synthesized beam (P.A.) & $0\farcs 47 \times 0\farcs 37\ (-0.4^{\circ})$ & $0\farcs 50\times 0\farcs 40\ (3.1^{\circ})$\\ Bandwidth / velocity resolution & 234 MHz & 0.17 ${\rm km~s^{-1}}$\\ 1$\sigma$ & 0.2 ${\rm mJy~beam^{-1}}$ & 2.6 ${\rm mJy~beam^{-1}}$\\ \enddata \end{deluxetable} \section{RESULTS} \label{sec:res} \subsection{220 GHz Continuum} \label{sec:cont} Figure \ref{fig:ci} shows the 220 GHz continuum emission in L1527 IRS observed with ALMA. Strong compact continuum emission is detected. The emission is clearly elongated in the north-south direction and shows weak extensions to the northwest and the southeast. The $6\sigma$ contour in Figure \ref{fig:ci} shows a full width of $\sim 2\arcsec=280$ AU along the north-south direction. Its deconvolved size derived from a 2D Gaussian fitting is $531\pm 2\ {\rm mas}\times 150\pm2\ {\rm mas},\ {\rm P.A.}=1.5^{\circ}\pm 0.2^{\circ}$. This major-axis direction is almost perpendicular to the direction of the associated outflow, indicating that the continuum emission traces a dust disk and/or a flattened dust envelope around L1527 IRS. Compared with the synthesized beam size ($0\farcs 47\times 0\farcs 37,\ {\rm P.A.}=-0.4^{\circ}$), the major axis of the emission is longer than the beam size and thus spatially resolved, which is consistent with the previous observations at a higher angular resolution \citep[$\sim 0\farcs 35$;][]{se2015} \citep[$\sim 0\farcs3$;][]{to2013}. The aspect ratio, 0.28, is a half of (i.e.
thinner than) the ratio reported by \citet{oh2014} with lower angular resolutions than this work. The peak position is also measured from the Gaussian fitting to be $\alpha (2000)=04^{\rm h}39^{\rm m}53\fs 88,\ \delta (2000)=+26^{\circ}03\arcmin 09\farcs 55$, which is consistent with previous measurements \citep{ye2013,oh2014}. We define this peak position and the major-axis direction as the central protostellar position of L1527 IRS and the orientation angle of its dust disk/envelope, respectively, in this paper. The peak intensity and the total flux density of the emission derived from the Gaussian fitting are $101.4\pm 0.2\ {\rm mJy~beam^{-1}}$ and $164.6\pm 0.5\ {\rm mJy}$, respectively, while the total flux density is $176\ {\rm mJy}$ when measured in the whole region of Figure \ref{fig:ci}. By assuming that the dust continuum emission is optically thin and that the dust temperature is uniform, the total mass can be calculated from the total flux density as $M_{\rm gas}=D^{2}F_{\nu}/\left[\kappa_{\nu}B_{\nu}(T_{\rm dust})\right]$, where $D$ is the distance, $F_{\nu}$ is the total flux density, $\kappa_{\nu}$ is the opacity, and $B_{\nu}$ is the Planck function \citep{an2005}. The total fluxes derived above correspond to a mass of $M_{\rm gas}\sim 0.013\ M_{\sun}$ by assuming a dust opacity of $\kappa(220\ {\rm GHz})=0.031\ {\rm cm}^{2}\,{\rm g}^{-1}$ \citep{to2013}, a dust temperature of 30 K \citep{to2013}, and a standard gas to dust mass ratio, g/d, of 100. \begin{figure}[ht!] \figurenum{1} \epsscale{1} \plotone{continuum_image.eps} \caption{Continuum emission map of L1527 IRS. Contour levels are $-3,3,6,12,24,\dots \times \sigma$, where 1$\sigma$ corresponds to $0.2\ {\rm mJy~beam^{-1}}$. A blue-filled ellipse at the bottom right corner denotes the ALMA synthesized beam; $0\farcs 47\times 0\farcs 37,\ {\rm P.A.}=-0.4^{\circ}$. The elongation direction ($1.5^{\circ}$) is shown with a white dashed line. Blue and red arrows show the direction of the molecular outflow (east-west) from single-dish observations toward L1527 IRS in $^{12}$CO $J=1-0$ \citep{na2012}. \label{fig:ci}} \end{figure} \subsection{C$^{18}$O J=2-1} \label{sec:18} The C$^{18}$O $J=2-1$ emission was detected above the $3\sigma$ level in the relative velocity range from $-3.3$ to $3.2\ {\rm km~s^{-1}}$ in the LSR frame with respect to the systemic velocity $V_{\rm LSR}=5.8\ {\rm km~s^{-1}}$. Figure \ref{fig:18m} shows the total integrated intensity (moment 0) map in white contours and the intensity-weighted-mean velocity (moment 1) map in color; both are derived from the above velocity range with a $3\sigma$ cutoff. The moment 0 map overall shows an elongated structure perpendicular to the outflow axis, centered at the protostellar position. In more detail, lower contours ($\sim 3$-$6\sigma$) show extensions to north-northeast, north-northwest, south-southeast, and south-southwest. The moment 0 map also shows two local peaks on the northern and southern sides of the central protostar with a separation of $\sim 1\arcsec$. This double peak is due to a ``continuum subtraction artifact''; the continuum emission was subtracted even at channels where the C$^{18}$O emission has low contrast with respect to the continuum emission and is resolved out by the interferometer. Subtraction of the continuum thus results in negative intensity at the protostellar position. Regardless of the double peak, the map was fitted with a single 2D Gaussian to measure the overall structure of the C$^{18}$O emission; a deconvolved size of the C$^{18}$O emission is estimated to be $2\farcs 17\pm 0\farcs 04 \times 0\farcs 88\pm 0\farcs 02$, with ${\rm P.A.}=-1.8^{\circ}\pm 0.7^{\circ}$.
The peak integrated intensity and the total flux measured in the whole region of Figure \ref{fig:18m} are $0.20\ {\rm Jy~beam^{-1}} \, {\rm km~s^{-1}}$ and $2.2\ {\rm Jy} \, {\rm km~s^{-1}}$, respectively. The moment 1 map shows a velocity gradient in the north-south direction, which is perpendicular to the outflow axis. The morphology of the C$^{18}$O emission indicates that it traces a flattened gas envelope and/or a gas disk around L1527 IRS and thus the velocity gradient seen in the C$^{18}$O emission is mainly due to their rotation, as already suggested by \citet{oh2014} and \citet{ye2013}. Because the C$^{18}$O emission shows a more complicated structure than the continuum emission, we assume the orientation angle of the gas disk/envelope to be the same as that of the dust disk/envelope in this paper. \begin{figure}[ht!] \figurenum{2} \epsscale{1} \plotone{C18O_mom01.eps} \caption{Integrated intensity map (moment 0; white contours) and mean velocity map (moment 1; color) of the C$^{18}$O $J=2-1$ emission in L1527 IRS, where the velocity is relative LSR velocity with respect to the systemic velocity $V_{\rm LSR}=5.8\ {\rm km~s^{-1}}$. Contour levels of the integrated intensity map are from $5\sigma$ to $30\sigma$ in steps of $5\sigma$ and then in steps of $10\sigma$, where $1\sigma$ corresponds to $2.3\ {\rm mJy~beam^{-1}} \, {\rm km~s^{-1}}$. A central black plus sign shows the position of the central protostar (continuum emission peak). A blue-filled ellipse at the bottom right corner denotes the ALMA synthesized beam; $0\farcs 50\times 0\farcs 40,\ {\rm P.A.}=3.1^{\circ}$. Blue/red arrows and a white dashed line show the direction of the molecular outflow and the major-axis direction of the continuum emission, respectively, as shown in Figure \ref{fig:ci}. \label{fig:18m}} \end{figure} Figure \ref{fig:18c} shows channel maps of the C$^{18}$O emission, which enable us to investigate velocity structures in more detail. At higher blue- and redshifted velocities ($|V|\gtrsim 1.6\ {\rm km~s^{-1}}$), the emission shows overall circular shapes and its sizes at the $3\sigma$ level are smaller than $\sim 1\farcs 5$. The emission peaks are located on the southern side in the blueshifted range while on the northern side in the redshifted range, making a velocity gradient from the south to the north as seen in Figure \ref{fig:18m}. In the intermediate velocity range ($0.4\lesssim |V|\lesssim 1.5\ {\rm km~s^{-1}}$), more complicated structures can be seen. For example, at $-1.15,\ -0.98,\ 0.85$, and 1.02 ${\rm km~s^{-1}}$, the emissions appear to be composed of a strong compact ($\sim 1\arcsec$) structure close to the protostar and a more extended ($>2\arcsec$) structure, resulting in a plateau structure. The extended emissions are located mainly on the southern side in the blueshifted range while mainly on the northern side in the redshifted range. Furthermore, some blueshifted channels ($-0.65$ and $-0.48\ {\rm km~s^{-1}}$) show an extension from the protostar to the northwest. At the remaining, lower velocities, the emission is strongly resolved out and negative emission can be seen from 0.02 to $0.35\ {\rm km~s^{-1}}$. This redshifted negative emission is due to a continuum subtraction artifact with an extended infalling envelope around L1527 IRS, as \citet{oh2014} confirmed using an infalling envelope$+$Keplerian disk model with radiative transfer calculations. The angular resolution, higher than that of \citet{oh2014}, can make the negative emission deeper and result in the double peak in Figure \ref{fig:18m} that did not appear in \citet{oh2014}.
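For reference, the moment maps in Figure \ref{fig:18m} follow the standard definitions; the sketch below (array names are hypothetical) only makes the $3\sigma$ cutoff described above explicit. In practice the equivalent operation is provided by the CASA task $immoments$.
\begin{verbatim}
import numpy as np

def moments(cube, v, rms, clip=3.0):
    # cube: (nv, ny, nx) intensities [Jy/beam]; v: (nv,) velocities [km/s]
    dv = abs(v[1] - v[0])
    masked = np.where(cube > clip * rms, cube, 0.0)      # 3-sigma cutoff
    mom0 = masked.sum(axis=0) * dv                       # [Jy/beam km/s]
    with np.errstate(invalid="ignore", divide="ignore"):
        mom1 = (masked * v[:, None, None]).sum(axis=0) / masked.sum(axis=0)
    return mom0, mom1     # moment 1 is NaN where nothing exceeds the cutoff
\end{verbatim}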
Other recent observations reported that spatial thickness of the envelope decreases outward from a radius of $\sim 150$ AU by a factor of two in CCH ($N=4-3$, $J=9/2-5/2$, $F=5-4$ and $4-3$) line emission \citep{sa2017}. No such structure, however, can be confirmed in our results in C$^{18}$O emission, which is considered to be a better tracer of the overall column density and H$_{2}$ gas distribution. The CCH emission has a higher critical density than the C$^{18}$O emission by three orders of magnitude and thus traces only dense regions in the envelope. \begin{figure*}[ht!] \figurenum{3} \plotone{C18O_channel.eps} \caption{Channel maps of the C$^{18}$O $J=2-1$ emission in L1527 IRS. Contour levels are from $3\sigma$ to $15\sigma$ in steps of $3\sigma$ and then in $5\sigma$, where $1\sigma$ corresponds to $2.6\ {\rm mJy~beam^{-1}}$. A central red plus sign in each panel shows the position of the central protostar (continuum emission peak). A blue-filled ellipse in the top left panel denotes the ALMA synthesized beam; $0\farcs 50\times 0\farcs 40,\ {\rm P.A.}=3.1^{\circ}$. Relative LSR velocity with respect to the systemic velocity $V_{\rm LSR}=5.8\ {\rm km~s^{-1}}$ is shown at the top left corner of each panel. \label{fig:18c}} \end{figure*} In the moment 1 map and channel maps, L1527 IRS shows a velocity gradient from the south to the north. The PV diagrams along the major and minor axes are considered to represent a velocity gradient due to rotation and radial motion, respectively, toward disk-like structures as discussed in previous work. Figure \ref{fig:18pv}a and b show PV diagrams of the C$^{18}$O emission cutting along the major and minor axes, respectively. The overall velocity gradient from the south to the north can be confirmed in the PV diagram along the major axis. In more detail, the so-called ``spin up'' rotation can also be seen in $|V|\gtrsim 1.5\ {\rm km~s^{-1}}$, that is, an emission peak at a velocity channel is closer to the central position at higher velocity. We will analyze the dependence of rotational velocity on radial distance from the central position in Section \ref{sec:rp}. In $0.6\lesssim |V|\lesssim 1.4\ {\rm km~s^{-1}}$, a strong compact emission and a more extended emission appear to be superposed, which corresponds to the plateau structures seen in the channel maps (Figure \ref{fig:18c}). In the PV diagram along the minor axis, there are four strong peaks in the western redshifted and blueshifted components, and the eastern blueshifted and redshifted ones in $|V|\lesssim 1.5\ {\rm km~s^{-1}}$ while the emission is mainly concentrated on the central position in the higher velocity range. The western blueshifted component extends to higher velocities than the eastern blueshifted one, and also the eastern redshifted component extends to higher velocities than the western redshifted one. These extensions can be interpreted as a small velocity gradient from the west to the east, which was detected in observations in CS $J=5-4$ line emission and considered to be due to infalling motion in the protostellar envelope by \citet{oy2015}. \begin{figure*}[ht!] \figurenum{4} \gridline{ \fig{C18O_PV.eps}{0.5\textwidth}{(a)} \fig{C18O_PVmin.eps}{0.5\textwidth}{(b)} } \caption{Position Velocity diagrams of the C$^{18}$O $J=2-1$ emission in L1527 IRS along (a) the major axis and (b) the minor axis of the continuum emission (the major axis corresponds to the white dashed line in Figure \ref{fig:ci}, ${\rm P.A.}=1.5^{\circ}$). The width of cut is one pixel, $0\farcs02$. 
These PV diagrams have the same angular and velocity resolutions as those of the channel maps shown in Figure \ref{fig:18c}. Contour levels are $5\sigma$ spacing from $3\sigma$, where $1\sigma$ corresponds to $2.6\ {\rm mJy~beam^{-1}}$. Central vertical dashed lines show the systemic velocity and central horizontal dashed lines show the protostellar position. Blue and red points with error bars in panel (a) are mean positions derived along the position (vertical) direction at each velocity. \label{fig:18pv}} \end{figure*} \section{ANALYSIS} \label{sec:ana} \subsection{Rotation Profile} \label{sec:rp} In the previous section we identified rotation in the C$^{18}$O gaseous component tracing either a flattened envelope, disk, or both. In this section the radial profile of the rotation will be investigated with the PV diagram along the major axis (Figure \ref{fig:18pv}) so as to characterize the nature of the observed rotation. The method used in this section to obtain the radial profile of rotational velocity is based on the analyses presented by \citet{ye2013} and also explained by \citet{aso2015} in detail. The representative position at each velocity channel of the PV diagram along the major axis is measured as the intensity-weighted 1D mean position, $x_{m}(v)=\int xI(x,v)dx /\int I(x,v)dx$. Pixels having intensity more than $5\sigma$ are used to calculate the sum. The error bar of each representative position is also derived by considering propagation of errors. The derived representative positions are overlaid with error bars on the PV diagram along the major axis (Figure \ref{fig:18pv}) and also plotted in a $\log R-\log V$ plane (Figure \ref{fig:18log}). The abscissa of Figure \ref{fig:18log} is calculated from the offset position in the PV diagram by assuming that the distance of L1527 IRS is 140 pc. Figure \ref{fig:18log} shows a clear negative correlation that the rotational velocity is higher at the position closer to the central protostar, i.e., differential rotation. Furthermore the $\log R-\log V$ diagram in Figure \ref{fig:18log} exhibits two different linear regimes with a break radius of $\sim 60$ AU. The data points in the $\log R-\log V$ plane are, therefore, fitted by a double power-law function with four free parameters: inner and outer power-law indices $p_{\rm in}$ and $p_{\rm out}$, respectively, and a break point $(R_{b},V_{b})$ \citep[see Equation (1) in][]{aso2015}. The best-fit parameter set is $(R_{b},V_{b},p_{\rm in},p_{\rm out})=(56\pm 2\ {\rm AU}, 2.31\pm 0.07\ {\rm km~s^{-1}} ,0.50\pm 0.05, 1.22\pm 0.04)$, giving a reasonable reduced $\chi ^{2}$ of 1.6. For comparison, $\chi ^{2}$ fitting with a single power function is also performed, where $V_{b}$ is fixed at $2.31\ {\rm km~s^{-1}}$. The best-fit parameter set for this case is $(R_{b},p)=(49.5\pm 0.3\ {\rm AU},0.88\pm 0.01)$, giving reduced $\chi ^{2}=4.0$. These reduced $\chi^{2}$ suggest that the radial profile of rotational velocity is characterized by the double power function better than any single power function. In addition, to examine whether the LSR velocity of L1527 IRS we adopt, $5.8\ {\rm km~s^{-1}}$, is reasonable, we also carried out fitting including the systemic velocity as another free parameter, which was fixed to be $5.8\ {\rm km~s^{-1}}$ above, confirming that the adopted systemic velocity is the most reasonable to fit the $\log R-\log V$ diagram. \begin{figure}[ht!] 
\figurenum{5} \epsscale{1} \plotone{C18O_loglog.eps} \caption{Mean positions of the PV diagram along the major axis plotted on a $\log R-\log V$ plane. The ordinate is not deprojected. Blue and red points show blueshifted and redshifted mean positions in Figure \ref{fig:18pv}a, respectively. Dashed lines show the best-fit lines with a double power law. \label{fig:18log}} \end{figure} The best-fit inner power-law index is almost equal to that of the Keplerian rotation law ($p=1/2$), suggesting that the inner/higher-velocity component of the C$^{18}$O emission traces a Keplerian disk. In fact, Keplerian rotation was marginally detected in \citet{oh2014}. On the other hand, the best-fit outer power-law index is roughly equal to that of rotation conserving its angular momentum ($p=1$), which is steeper than the Keplerian rotation law, and thus suggests that the rotation in the envelope around the disk cannot support material against the gravity yielded by the central protostar. This is consistent with the fact that the envelope is in infalling motion, as was reported by \citet{oh2014}. The best-fit result is quite consistent with that obtained in our previous Cycle 0 observations of L1527 IRS \citep{oh2014}, while the higher angular resolution and sensitivity of the Cycle 1 observations have enabled us to sample twice as many data points within the break radius in the $\log R-\log V$ diagram as the previous work, making the break in the rotation profile more definite with more precise measurements of the radius of the break and the inner power-law index. The radius of the identified Keplerian disk can be estimated from the best-fit break radius. Note that a limited angular resolution along the minor axis of a disk causes emission from within the radius corresponding to a given velocity to be picked up in that velocity channel, and this emission moves the representative position inward at that channel, as pointed out by \citet{aso2015}. This underestimation is particularly strong for edge-on configurations, such as that of L1527 IRS, because such configurations make the effective angular resolution even lower along the minor axis. We thus take into account the underestimation. In their notation, the correction factor is a constant, 0.760, when the Keplerian disk radius $R_{o}$ is $\lesssim 480$ AU with the inclination angle $i=85^{\circ}$ and the angular resolution $\theta \sim 0\farcs 5=70$ AU. With this correction factor taken into account, the Keplerian disk radius is estimated to be $\sim 74$ AU. From the best-fit break velocity together with this disk radius, the central protostellar mass $M_{*}$ of L1527 IRS and the specific angular momentum $j$ at the outermost radius of the Keplerian disk can also be estimated to be $M_{*}\sim 0.45\ M_{\sun}$ and $j\sim 8.3\times 10^{-4}\ {\rm km~s^{-1}} \, {\rm pc}$, respectively, where the inclination angle is assumed to be $i=85^{\circ}$ \citep{to2013,oy2015}. \subsection{Structures of the Keplerian Disk} \label{sec:cv} As shown in the previous section, a Keplerian disk has been kinematically identified by the C$^{18}$O results. This disk around the protostar L1527 IRS seems to be kinematically quite similar to those around T Tauri stars \citep{gu.du1998,si2000,ro2010}. A tantalizing question is whether this disk is also geometrically similar to those around T Tauri stars. Because this disk is almost edge-on, it is also possible to investigate the vertical structures of the disk. In this subsection, the geometrical structures of the Keplerian disk are investigated.
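For reference in the fitting below, the kinematic quantities derived in Section \ref{sec:rp} follow from the fitted break point by straightforward arithmetic. The short sketch below reproduces the quoted values under the stated assumptions ($i=85^{\circ}$, the correction factor 0.760, $d=140$ pc; error propagation omitted):
\begin{verbatim}
import math

G, AU, MSUN, PC = 6.674e-8, 1.496e13, 1.989e33, 3.086e18   # cgs constants

Rb, Vb = 56.0, 2.31                 # best-fit break point [AU, km/s]
inc = math.radians(85.0)            # inclination angle

Ro = Rb / 0.760                     # corrected Keplerian disk radius [AU]
V = Vb / math.sin(inc) * 1e5        # deprojected rotation speed [cm/s]

Mstar = V**2 * (Ro * AU) / G / MSUN            # M* = V^2 R / G
j = V * (Ro * AU) / 1e5 / PC                   # j = V R, in [km/s pc]
print(f"R_disk ~ {Ro:.0f} AU")                 # ~74 AU
print(f"M_*    ~ {Mstar:.2f} Msun")            # ~0.45 Msun
print(f"j      ~ {j:.1e} km/s pc")             # ~8.3e-04 km/s pc
\end{verbatim}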
\subsubsection{Continuum Visibility distribution} \label{sec:cvd} To investigate the disk structures, we have performed direct model fitting to the observed continuum visibility data, which is free from any non-linear effects associated with interferometric imaging. Figure \ref{fig:cuvd} shows distributions of the continuum visibilities in three panels, where red and blue points denote the same groups in all panels; the red points are located near the major axis ($\pm 15^{\circ}$ on the $uv$-plane) while the blue points are located near the minor axis. Note that all the visibilities of each baseline in each track for $\sim30$ minutes are averaged in these plots. That is why no trajectory due to Earth's rotation is drawn on the $uv$-plane (Figure \ref{fig:cuvd}a). It should be stressed that although all the visibilities of each baseline are averaged, no further azimuthal average has been done in our analysis, as is obvious in Figure \ref{fig:cuvd}a. This is because information on structures that are not spherically symmetric, such as disks, is missed with azimuthally averaged visibilities unless the disks are face-on, as explained below in more detail. Figure \ref{fig:cuvd}b exhibits a trend that the visibility amplitudes are higher at shorter $uv$-distances. The total flux density of $176\ {\rm mJy}$ measured in the image domain (Figure \ref{fig:ci}) appears consistent with the amplitude at zero $uv$-distance, which can be derived from visual extrapolation of the amplitude distribution. Figure \ref{fig:cuvd}b also shows that the data points scatter more widely at longer $uv$-distances. This is clearly not due to the error of each data point, which is $\sim 2\ {\rm mJy}$, but due to the structures of the continuum emission. In more detail, blue and red points have the highest and lowest amplitudes at each $uv$-distance, respectively. Because visibility is derived by Fourier transforming an image, these distributions of the blue and red points indicate that structures of the continuum emission are largest along the major axis and smallest along the minor axis in the image domain, which is consistent with the image of the continuum emission shown in Figure \ref{fig:ci}. Similarly, the scatter of the green points between the blue and red points is due to structures along directions at different azimuthal angles. In addition, the data points near the major axis (red points) are also compared with two simple functions in Figure \ref{fig:cuvd}b: Gaussian and power-law profiles. The best-fit profiles are $0.16\ {\rm Jy}\ \exp (-4\ln 2 (\beta /234\ {\rm m})^{2})$ and $0.13\ {\rm Jy}\ (\beta /100\ {\rm m})^{-0.48}$, respectively, where $\beta$ denotes the $uv$-distance. The power-law profile cannot explain the observations at all. Although the Gaussian profile matches the observations better, this profile is not necessarily realistic for circumstellar disk structures \citep[][and references therein]{ha2014}, and the comparison also shows systematic deviations; the points are higher than the Gaussian profile at $<100$ m and $>300$ m, while they are lower at $\sim 200$ m. Figure \ref{fig:cuvd}c shows that most phases of the visibility are $\lesssim 5^{\circ}$, corresponding to $\lesssim 0\farcs 04$, where the $uv$-distance is larger than 100 m. This indicates that emission is centered at the protostellar position at most spatial frequencies.
Red and blue points appear to be well mixed in Figure \ref{fig:cuvd}c, which means that the distribution of the phases in the azimuthal direction is roughly uniform. As Figure \ref{fig:cuvd}b clearly demonstrates, the analysis of visibilities without azimuthal averaging in the $uv$-plane is quite powerful for investigating spatially resolved, non-spherically symmetric structures such as disks, except in face-on cases. No such analysis with sufficient signal-to-noise ratio has been done in previous studies. Note that a few studies attempted to use 2D distributions in model fitting \citep{pe2016}. Those studies, however, presented only azimuthally averaged visibilities deprojected using the inclination angles, and lacked the signal-to-noise ratio to perform a comparable exploration of the 2D visibilities, making it impossible to evaluate how good their fittings were in the 2D $uv$-space. The exceedingly high sensitivity, as well as the high angular resolution, of ALMA allows us to perform such data analyses. \begin{figure*}[ht!] \figurenum{6} \plottwo{continuum_uv.eps}{continuum_DAmpPhase2.eps} \caption{Continuum visibility averaged over scans. The two observational tracks are not averaged together. 1 m corresponds to $0.73\ {\rm k}\lambda$ at the observed frequency. (a) Data points on the $uv$-plane. Both members of each conjugate pair are plotted. (b) Distribution of the visibility amplitude. The error bar of the amplitude for each point is $\sim 2\ {\rm mJy}$. (c) Distribution of the visibility phase. Only the member of each conjugate pair with a positive phase is plotted. Red and blue circles denote data points near the major- and minor-axis directions, respectively; in other words, ${\rm Arctan}(U/V)=1.5^{\circ}\pm 15^{\circ}$ and $-88.5^{\circ}\pm 15^{\circ}$, respectively. Green crosses denote the other data points. Solid and dashed curves are the best-fit Gaussian and power-law profiles, respectively, to the data points near the major axis (red points). \label{fig:cuvd}} \end{figure*} \subsubsection{Model Fitting} \label{sec:fit} Our analysis of the disk structures is performed by $\chi^{2}$ fitting of models to the continuum visibilities shown in Figure \ref{fig:cuvd}b. It should be noted that the full size of the continuum emission at the $6\sigma$ level in Figure \ref{fig:ci}, $\sim 280$ AU (see Section \ref{sec:cont})\footnote{The $6\sigma$ size is referred to here to show the extent of the whole continuum emission, which is much larger than the FWHM derived from Gaussian fitting.}, is twice the disk size expected from the radius kinematically estimated in Section \ref{sec:rp}. This suggests that the continuum emission arises not only from the disk but also from the envelope. Hence our models should include envelope structures as well as disk structures. Because the envelope around L1527 IRS shows a flattened morphology, the model we use in this section is based on a standard disk model \citep[e.g.,][]{du1994} but modified to express a flattened dust envelope as well as a dust disk, as described below. We used the code described in \citet{oh2014}. The model includes 12 parameters, summarized in Table \ref{tab:mod}. The radial dependences of the temperature $T(R)$ and the scale height $H(R)$ are described as $T(R)=T_{1}(R/1\ {\rm AU})^{-q}$ and $H(R)=H_{1}(R/1\ {\rm AU})^{h}$, respectively. This means that the scale height in our model is not assumed to be in HSEQ.
To express the dust disk and dust envelope structures, the radial dependence of the surface density $\Sigma (R)$ is described by a combination of inner and outer profiles formulated as \begin{eqnarray} \Sigma (R)=\frac{(2-p)M_{\rm disk}}{2\pi \left( R_{\rm out}^{2-p} -R_{\rm in}^{2-p}\right) }R^{-p}\times \left\{ \begin{array}{c} 1\ \ (R\leq R_{\rm out})\\ S_{\rm damp}\ \ (R>R_{\rm out}) \end{array} \right. , \label{eq:sd} \end{eqnarray} where $M_{\rm disk}$ is the disk mass (the mass within $R_{\rm out}$) determined assuming g/d$=100$, and $S_{\rm damp}$ is a damping factor of the surface density for the outer dust envelope. The model has no envelope when $S_{\rm damp}$ is zero, while it has no density jump when $S_{\rm damp}$ is unity. $R_{\rm out}$ is the outer radius of the disk, defined as the boundary between the disk and the envelope. In our model, the mass density distribution $\rho(R,z)$ is determined from the scale height $H(R)$ and the surface density $\Sigma (R)$ as $\Sigma/(\sqrt{2\pi}H)\exp (-z^{2}/2H^{2})$ within the outermost radius of 1000 AU. Our model adopts the same power-law index of the density for both the disk and the envelope, which is consistent with the theoretical simulations by \citet{ma2010}. Some of the parameters, including the power-law index of the temperature distribution, are fixed as shown in Table \ref{tab:mod}. Radio observations are not very sensitive to the temperature distribution because most of the observed continuum emission is optically thin, which makes it hard to constrain the temperature distribution with mm-continuum observations. Therefore, we fixed the temperature distribution in our model, referring to the total luminosity derived at infrared wavelengths \citep{to2008,to2013}, which are more sensitive to the temperature than radio observations. In addition, because our angular resolution is not high enough to resolve the vertical structures of the disk, the radial temperature profile we adopted is vertically isothermal. We confirmed that our profile provides a representative temperature of the 2D temperature distribution derived by \citet{to2013} using a self-consistent radiative transfer model \citep{wh2003} at each radius on 10--100 AU scales. The inclination angle of the dust disk/envelope is fixed at $i=85^{\circ}$, and the eastern side is the near side for the observer \citep{oy2015}. The other six quantities are free parameters ($M_{\rm disk},R_{\rm out},p,S_{\rm damp},H_{1},h$). When the radiative transfer was solved in 3D space to produce a model image, the following conditions and quantities were assumed: local thermodynamic equilibrium (LTE), g/d$=100$, and a dust opacity of $0.031\ {\rm cm}^{2}\, {\rm g}^{-1}$ calculated from an opacity coefficient of $\kappa (850\ \mu {\rm m})=0.035\ {\rm cm}^{2}\, {\rm g}^{-1}$ and an opacity index of $\beta =0.25$ \citep{to2013}. After a model image was calculated from the radiative transfer, the model visibility was obtained by synthetic observations through the CASA tasks $simobserve$ and $listvis$. In the synthetic observations, the phase center was set at the center of the model image and the orientation (P.A.) was assumed to be the same as that of the observed continuum emission. Following the two observational tracks, we performed the synthetic observations without artificial noise, using the same antenna configurations as in the observations. Model visibilities were then derived at the same points on the $uv$-plane as in the observations.
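Before turning to the fit quality, the model parametrization described above can be made explicit in a short sketch. The following Python code (an illustration under our own conventions, not the fitting code itself) implements the damped surface-density profile of Equation \ref{eq:sd} together with the Gaussian vertical profile; the parameter values are placeholders of the same kind as the best-fit values in Table \ref{tab:mod}.
\begin{verbatim}
import numpy as np

AU, Msun = 1.496e13, 1.989e33                 # cgs units

# placeholder parameters of the same kind as in Table 2
Mdisk  = 1.3e-2 * Msun                        # disk mass within R_out
Rin, Rout = 0.1 * AU, 84.0 * AU
p, S_damp = 1.7, 0.19
H1, h  = 0.11, 1.2                            # H(R) = H1 (R/1AU)^h  [AU]

def surface_density(R):
    """Sigma(R) [g cm^-2], the damped power-law profile above; R in cm."""
    norm = (2.0 - p) * Mdisk / (2.0*np.pi*(Rout**(2.0-p) - Rin**(2.0-p)))
    return norm * R**(-p) * np.where(R <= Rout, 1.0, S_damp)

def density(R, z):
    """rho(R, z) [g cm^-3] with a Gaussian vertical profile; R, z in cm."""
    H = H1 * (R/AU)**h * AU                   # scale height in cm
    return surface_density(R)/(np.sqrt(2.0*np.pi)*H) * np.exp(-z**2/(2.0*H**2))

# surface density just inside and outside the disk-envelope boundary
print(surface_density(np.array([0.999*Rout, 1.001*Rout])))
\end{verbatim}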
Using the model visibility and the observed continuum visibility, the reduced $\chi ^{2}$ was calculated to evaluate the validity of each model. Without azimuthally averaging the visibilities, all 1089 data points were used to calculate the reduced $\chi ^{2}$, defined as \begin{eqnarray} \chi^{2}_{\nu}=&&\frac{1}{\sigma ^{2}(2N_{\rm data}-N_{\rm par}-1)}\sum _{i}\left[ \left( {\rm Re} V_{i}^{\rm obs}-{\rm Re} V_{i}^{\rm mod}\right)^{2}\right. \nonumber \\ &&+\left. \left( {\rm Im} V_{i}^{\rm obs}-{\rm Im} V_{i}^{\rm mod}\right)^{2}\right], \end{eqnarray} where $\sigma$, $V_{i}^{\rm obs}$, $V_{i}^{\rm mod}$, $N_{\rm data}$, and $N_{\rm par}$ are the standard deviation of the noise in the observed visibility amplitude, the observed visibility, the model visibility, the number of data points, and the number of free parameters, respectively; $N_{\rm data}=1089$ and $N_{\rm par}=6$ as mentioned above. $N_{\rm data}$ is multiplied by two because each visibility includes two independent values, its real and imaginary parts. Using the distribution of the reduced $\chi ^{2}$ in the parameter space, the uncertainty of each parameter is defined as the range of the parameter where the reduced $\chi ^{2}$ is below the minimum plus one $(=6.6)$ when all parameters are varied simultaneously. We also used the Markov chain Monte Carlo method to find the minimum $\chi ^{2}$ efficiently. \begin{figure*}[ht!] \figurenum{7} \gridline{ \fig{continuum_compDAmp.eps}{0.33\textwidth}{(a)} \fig{continuum_compDAmpSd0.eps}{0.33\textwidth}{(b)} \fig{continuum_compDAmpSd1.eps}{0.33\textwidth}{(c)} } \caption{The observed continuum visibility (black circles and crosses) and (a) the best-fit model, (b) the model with $S_{\rm damp}=0$, and (c) the model with $S_{\rm damp}=1$, denoted with red circles and crosses. For the models in panels (b) and (c), the parameters except for $S_{\rm damp}$ are fixed at those of the best-fit model. The observations are plotted as in Figure \ref{fig:cuvd}(b) except for the color; thus the circles denote the data points near the major- and minor-axis directions while the crosses denote the other data points, for both the observations and the models. \label{fig:ccv}} \end{figure*} Figure \ref{fig:ccv}a presents a comparison of the observed continuum visibility with our best-fit model, showing that the best-fit model overall reproduces the observations. The reduced $\chi ^{2}$ of the best-fit model is 5.6, which corresponds to a residual of $\sim 2.4\sigma$ on average in the $uv$-space. It is also confirmed that the best-fit model can reproduce the observations in the image space, as shown in Figure \ref{fig:cci}. Note that the model image in Figure \ref{fig:cci}a was not made through the synthetic observations using CASA but was simply made from the best-fit parameters in Table \ref{tab:mod}, with convolution using the synthesized beam of our observations. This is because the synthetic observations using CASA produce a beam that is slightly different from that of the actual observations. The synthetic observation is not crucial for this comparison, but convolution with exactly the same beam as the synthesized beam used in the actual observations is, because the model image is relatively compact compared to the synthesized beam. The residual shown in Figure \ref{fig:cci}b was obtained by subtracting the best-fit model, in the image space, from the observations, and it indicates that almost no significant residual is seen in the image space.
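The reduced $\chi^{2}$ defined above is straightforward to evaluate. The sketch below (our illustration, using randomly generated stand-in visibilities rather than the actual data) shows the computation for complex visibilities, counting the real and imaginary parts of each point as independent measurements.
\begin{verbatim}
import numpy as np

def reduced_chi2(V_obs, V_mod, sigma, n_par=6):
    """Reduced chi^2 for complex visibilities; real and imaginary
    parts of each point are treated as independent measurements."""
    n_data = len(V_obs)
    resid  = (V_obs.real - V_mod.real)**2 + (V_obs.imag - V_mod.imag)**2
    return resid.sum() / (sigma**2 * (2*n_data - n_par - 1))

# stand-in data: 1089 points with 2 mJy noise, as in the text
rng   = np.random.default_rng(1)
V_mod = 0.1 * np.exp(-((np.arange(1089) % 33) / 20.0)**2) + 0j
V_obs = V_mod + 0.002 * (rng.normal(size=1089) + 1j*rng.normal(size=1089))

print(reduced_chi2(V_obs, V_mod, sigma=0.002))   # ~1 for pure noise residuals
\end{verbatim}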
This comparison demonstrates that when the residual in the $uv$-space is small, the image-space residual derived from the analysis above is also small. \begin{figure}[ht!] \figurenum{8} \gridline{ \fig{continuum_comp.eps}{0.25\textwidth}{(a)} \fig{continuum_res.eps}{0.25\textwidth}{(b)} } \caption{(a) The observed continuum image (black contours) and the best-fit model (red contours). (b) The residual obtained by subtracting the best-fit model from the observations. Contour levels are $-3,3,6,12,24,\dots \times \sigma$ for panel (a) and $-3,3,6,9,\dots \times \sigma$ for panel (b), where $1\sigma$ corresponds to $0.2\ {\rm mJy~beam^{-1}}$. A blue-filled ellipse at the bottom right corner in panel (a) denotes the ALMA synthesized beam; $0\farcs 47\times 0\farcs 37,\ {\rm P.A.}=-0.4^{\circ}$. The spatial scale is different from that of Figure \ref{fig:ci}. \label{fig:cci}} \end{figure} The parameters of the best-fit model are summarized in Table \ref{tab:mod}. The damping factor $S_{\rm damp}=0.19^{+0.03}_{-0.09}$, which is not zero but significantly smaller than unity, suggests that the observed continuum emission arises from both a dust disk and a dust envelope with a significant jump of the column density between them. Values much larger or smaller than 0.19 cannot explain the observed visibilities, as shown in Figures \ref{fig:ccv}b and \ref{fig:ccv}c, which show models with $S_{\rm damp}=0$ and 1, respectively, with the other parameters the same as those of the best-fit model. In Figures \ref{fig:ccv}b and \ref{fig:ccv}c the visibility amplitudes of the models at longer $uv$-distances ($\gtrsim 300$ m) appear similar to the observations in both cases, whereas the visibility amplitudes of the models with $S_{\rm damp}=0$ and 1 are lower and higher than the observations, respectively, at shorter $uv$-distances ($\lesssim 300$ m). One might wonder whether a jump of the dust opacity, as well as of the surface density, could also explain the observations. It is observationally confirmed, however, that the opacity index $\beta$ does not change on scales of 100 to 1000 AU, based on the observed dependence of the spectral index on $uv$-distance \citep{to2013}, and the possible uncertainty of $\Delta \beta \sim 0.2$ adds only $\sim 8$\% to the relative uncertainty of $S_{\rm damp}$. Further discussion of the significant jump of the surface density between the dust disk and envelope will be presented in Section \ref{sec:disc}. The best-fit value of $R_{\rm out}=84^{+16}_{-24}$ AU is close to the Keplerian disk radius kinematically derived from the C$^{18}$O results; the discrepancy between the two values is within their errors. This result suggests that the dust disk identified geometrically as a density contrast by this model fitting is consistent with the kinematically identified gaseous Keplerian disk. The power-law index of the surface density, $p=1.7^{+0.1}_{-0.3}$, is a bit steeper than the typical value for T Tauri disks, $\sim 1.0$ \citep{an.wi2007,hu2008,an2010}, while a similarly steep $p$ has been found toward a few other protostars as well \citep[e.g.,][]{ye2014,aso2015}. The power-law index of the scale height, $h=1.2^{+0.1}_{-0.1}$, corresponds to that for HSEQ ($h=1.25$) within the error range, where the temperature distribution is assumed to be vertically isothermal. To examine whether the best-fit model is indeed in HSEQ, the scale height of the disk at a certain radius can be compared directly with that in HSEQ.
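To make this comparison concrete, the hydrostatic scale height $H_{\rm HSEQ}=c_{s}/\Omega_{\rm K}$ can be evaluated with the values quoted in the next paragraph ($M_{*}=0.45\ M_{\sun}$, $T=44$ K at $R=84$ AU, and a mean molecular weight of $2.37\ m_{\rm H}$); the short sketch below is only a numerical check of that estimate, not part of the modeling itself.
\begin{verbatim}
import numpy as np

G, kB, mH = 6.674e-8, 1.381e-16, 1.673e-24     # cgs units
AU, Msun  = 1.496e13, 1.989e33

M_star, T, mu, R = 0.45*Msun, 44.0, 2.37, 84.0*AU

c_s   = np.sqrt(kB*T/(mu*mH))                  # isothermal sound speed
Omega = np.sqrt(G*M_star/R**3)                 # Keplerian angular velocity
H_eq  = c_s / Omega                            # hydrostatic scale height

print(c_s/1e5, H_eq/AU)                        # ~0.39 km/s, ~15 AU
\end{verbatim}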
The scale height of the best-fit model at $R=R_{\rm out}$ is calculated to be $\sim 20$ AU, while the scale height of a disk in HSEQ is estimated to be $\sim 15$ AU at the same radius when the central stellar mass is $M_{*}=0.45\ M_{\sun}$, as kinematically estimated from the rotation profile (see Section \ref{sec:rp}), the temperature is 44 K, as estimated from the temperature profile we used in the model, and the mean molecular weight is $2.37\ m_{\rm H}$. This comparison suggests that the disk around L1527 IRS is most probably in HSEQ. Note that the scale height in HSEQ also depends on the vertical temperature distribution; it can be larger if the temperature is higher in the upper layers of the disk. \citet{to2013} suggested a highly flared disk around L1527 IRS, which has a scale height of 38 AU at $R_{\rm out}$. We consider that their estimated scale height is larger than ours because their observations included infrared wavelengths, which may trace scattered light that makes the disk geometrically thick in appearance, or because the smaller grains traced at infrared wavelengths lie in higher layers than the larger grains traced at mm wavelengths. In addition, the scale height and its power-law index derived from our best-fit model are relatively large and steep, respectively, when compared with protoplanetary disks in the Ophiuchus star-forming region \citep{an2010}. The best-fit model thus provides us with a comparison between the disk around L1527 IRS and T Tauri disks from a geometrical point of view. The disk around L1527 IRS has a mass and a radius similar to those of T Tauri disks. On the other hand, the power-law index of the disk surface density, the scale height, and the power-law index of the scale height are possibly steeper, larger, and steeper, respectively, than those of T Tauri disks. \begin{deluxetable*}{ccccccc} \tablecaption{Fixed and Free Parameters of the Model Fitting \label{tab:mod}} \tablehead{\colhead{Fixed} & \colhead{$i$} & \colhead{$R_{in}$} & \colhead{$T_{1}$} & \colhead{$q$} & \colhead{$\kappa (220\ {\rm GHz})$} & \colhead{g/d}\\ & \colhead{$85^{\circ}$} & \colhead{0.1 AU} & \colhead{403.5 K} & \colhead{0.5} & \colhead{$0.031\ {\rm cm}^{2}\, {\rm g}^{-1}$} & \colhead{100}} \startdata Free & $M_{\rm disk}$ & $R_{\rm out}$ & $p$ & $S_{\rm damp}$ & $H_{1}$ & $h$\\ Best & $1.3^{+0.3}_{-0.4}\times 10^{-2}\ M_{\sun}$ & $84^{+16}_{-24}$ AU & $1.7^{+0.1}_{-0.3}$ & $0.19^{+0.03}_{-0.09}$ & $0.11^{+0.02}_{-0.03}$ AU & $1.2^{+0.1}_{-0.1}$\\ \enddata \end{deluxetable*} \section{DISCUSSION} \label{sec:disc} \subsection{Possible Origin of the Surface Density Jump} \label{sec:jump} A jump of the surface density between the envelope and the Keplerian disk around L1527 IRS has been found by our model fitting to the continuum visibility. In fact, such a density jump can be qualitatively confirmed in numerical simulations of disk evolution \citep{ma2010}, and a similar density jump, by a factor of $\sim 8$, has been suggested to reproduce the continuum emission arising from the disk around HH 212 \citep{le2014}. Furthermore, it is important to note that the disk radius of L1527 IRS geometrically estimated from the density jump is fairly consistent with the kinematically estimated radius, suggesting that disks and envelopes around protostars may be not only kinematically but also geometrically distinguishable from the viewpoint of density contrast. The results also suggest that the density jump may be physically related to the kinematical transition from infalling motions to Keplerian rotation.
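Anticipating the quantitative discussion that follows, the jump condition can also be checked numerically. The sketch below (an illustration using the best-fit values $S_{\rm damp}=0.19$, $c_{s}=0.39\ {\rm km~s^{-1}}$, $M_{*}=0.45\ M_{\sun}$, and $R_{\rm out}=84$ AU derived in this paper) solves the isothermal shock condition for the required infall speed and compares it with the free-fall speed.
\begin{verbatim}
import numpy as np

G, AU, Msun = 6.674e-8, 1.496e13, 1.989e33     # cgs units

S_damp = 0.19
c_s    = 0.39e5                                # sound speed at the boundary [cm/s]
M_star = 0.45 * Msun
R_out  = 84.0 * AU

u_r   = c_s / np.sqrt(S_damp)                  # infall speed required by the jump
v_ff  = np.sqrt(2.0*G*M_star/R_out)            # free-fall speed
alpha = u_r / v_ff

print(u_r/1e5, v_ff/1e5, alpha)                # ~0.9 km/s, ~3.1 km/s, ~0.3
\end{verbatim}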
In this section, the possible origin of the surface density jump is quantitatively discussed. The surface density of the disk at the boundary $\sim 84$ AU can be calculated from our best-fit model to be $\Sigma _{\rm disk}= 0.42\ {\rm g}\, {\rm cm}^{-2}$ with Equation \ref{eq:sd} and the best-fit parameters shown in Table \ref{tab:mod}. This surface density is within the typical range derived for Class ${\rm I\hspace{-0.1em}I}$ YSO disks in Taurus \citep{an.wi2007} and Ophiuchus \citep{an2009,an2010}. On the other hand, the volume density of the envelope at the boundary can be calculated from the best-fit model to be $\rho_{\rm env}= 1.0\times 10^{-16}\ {\rm g}\, {\rm cm}^{-3}$, which corresponds to a number density of $n_{\rm env}=2.5\times 10^{7}\ {\rm cm}^{-3}$. For embedded young stars in the Taurus-Auriga molecular cloud, a typical density distribution of protostellar envelopes can be described as $\sim (0.3-10)\times 10^{-13}\ {\rm g}\,{\rm cm}^{-3}\ (R/1\ {\rm AU})^{-3/2}$ \citep{ke1993}, which gives $(0.4-13)\times 10^{-16}\ {\rm g}\, {\rm cm}^{-3}$, corresponding to $(0.1-3.5)\times 10^{7}\ {\rm cm}^{-3}$, at the boundary $R_{\rm out}$. This indicates that the envelope density in the best-fit model is also reasonable when compared with other observations. The envelope density is also consistent with that in \citet{to2013} at their boundary radius of 125 AU. Because the disk around L1527 IRS is considered to be still growing through mass accretion from the envelope, as discussed in \citet{oh2014}, a possible origin of the density jump may be the mass accretion from the envelope onto the disk. First, the density in the Keplerian disk is higher than that in the infalling envelope when the radial motion in the disk is not as fast as that in the envelope, which is reasonable because gravity and the centrifugal force are balanced in the disk while they are not in the envelope. This difference in radial mass flux makes mass build up in the disk, resulting in disk growth. Secondly, to explain the factor $S_{\rm damp}=0.19$ quantitatively, we consider an isothermal shock due to the mass accretion at the interface between the infalling envelope and the Keplerian disk. The gravity of the central protostar causes material in the envelope to infall dynamically, and thus the material has a radial infall velocity $u_{r} (R)$ as a function of radius. When $\rho _{\rm env}$, $\rho _{\rm disk}$, and $c_{s}$ denote the envelope density, the disk density, and the sound speed, respectively, the isothermal shock condition is $S_{\rm damp}=\rho _{\rm env}/\rho _{\rm disk}=c_{s}^{2}/u_{r}^{2}$. The sound speed at the boundary can be calculated from our best-fit model to be $c_{s}=0.39\ {\rm km~s^{-1}}$. Regarding the infall velocity, we assume $u_{r}$ to be the product of a constant coefficient $\alpha$ and the free-fall velocity, i.e., $u_{r}=\alpha \sqrt{2GM_{*}/R_{\rm out}}$, as was discussed by \citet{oh2014}. Using $M_{*}=0.45\ M_{\sun}$ and $R_{\rm out}=84$ AU, this infall velocity is calculated to be $3.1\alpha\ {\rm km~s^{-1}}$. In order to explain $S_{\rm damp}=0.19$, $\alpha$ should be $\sim 0.3$, which is consistent with the range of $\alpha$ (0.25-0.5) that \citet{oh2014} found. This quantitative discussion suggests that mass accretion from the envelope onto the disk is a possible origin of the density jump we found. \subsection{Structures of the C$^{18}$O gas disk} \label{sec:gas} The previous sections have discussed the structures of the disk and the envelope around L1527 IRS based on the continuum observations tracing the disk and the envelope.
C$^{18}$O emission, on the other hand, also traces them, as was discussed in \citet{oh2014}. Because it can reasonably be assumed that gas and dust are well coupled and mixed in the protostellar phase, in contrast to the T Tauri phase where gas and dust can be decoupled because of grain growth, it is important to examine whether the structures of the disk and the envelope revealed by the dust observations are also valid for those traced by the C$^{18}$O emission. In order to answer this question, in this section models for C$^{18}$O are constructed based on the best-fit dust model derived from the fitting to the continuum visibility shown in Section \ref{sec:cv}, and the models are compared with the C$^{18}$O observations. The C$^{18}$O models have the same profiles of the surface density, temperature, and scale height as the best-fit dust model. The surface density profile has a jump at a radius of 84 AU, which is the boundary between the disk and the envelope, as was suggested by the best-fit dust model. In addition to these constraints on the structures, the C$^{18}$O models also require velocity fields, which cannot be constrained by the continuum observations. For the velocity fields, we assume that the C$^{18}$O models follow the radial profile of the rotation derived from the C$^{18}$O observations in Section \ref{sec:rp}, and that of the infall introduced in Section \ref{sec:jump}. Note that we adopt 74 AU as the radius where the rotation profile has a break, even though the geometrical boundary between the disk and the envelope is set at 84 AU in radius, as mentioned above. These geometrical and kinematical structures of the C$^{18}$O models are fixed in the following discussion. Importantly, the C$^{18}$O models still depend on the fractional abundance of C$^{18}$O relative to H$_{2}$, $X({\rm C^{18}O})$, as discussed later. In order to compare the models and the observations, C$^{18}$O data cubes are calculated from the models described above. It should be noted that the C$^{18}$O emission obviously traces more extended structures arising from outer parts of the envelope, as compared with the dust emission. Because the structures of the disk and the envelope adopted for the C$^{18}$O models are based on the continuum emission detected within a radius of $\sim 1\arcsec$ (see Figure \ref{fig:ci}), comparisons between the C$^{18}$O models and observations should be made only within a radius of $1\arcsec$. According to the C$^{18}$O velocity channel maps shown in Figure \ref{fig:18c}, the C$^{18}$O emission arises within a radius of $\sim 1\arcsec$ when $|V_{\rm LSR}|>2.0\ {\rm km~s^{-1}}$, and only the C$^{18}$O emission having these LSR velocities is discussed in the comparisons between the models and the observations. When the C$^{18}$O data cubes are calculated from the models, the radiative transfer, including both dust and C$^{18}$O opacities, is also solved in 3D and velocity space, and then the dust continuum emission is subtracted to derive the final model cubes. The model data cubes calculated from the radiative transfer are convolved with a Gaussian beam having the same major and minor axes and orientation as the synthesized beam of our observations. A moment 0 map is made from this convolved data cube to compare with the observations.
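The construction of such a moment 0 map can be illustrated with a minimal sketch; the cube dimensions and channel width below are placeholders, not those of the actual data.
\begin{verbatim}
import numpy as np

def moment0(cube, v_axis, v_min=2.0):
    """Integrated intensity over channels with |v| > v_min [km/s].
    cube: (n_chan, ny, nx) in Jy/beam; v_axis: channel velocities [km/s].
    Returns a map in Jy/beam km/s."""
    sel = np.abs(v_axis) > v_min
    dv  = np.abs(v_axis[1] - v_axis[0])
    return cube[sel].sum(axis=0) * dv

# placeholder cube: 40 channels, 0.2 km/s spacing, 64x64 pixels
v_axis = (np.arange(40) - 19.5) * 0.2
cube   = np.zeros((40, 64, 64))
print(moment0(cube, v_axis).shape)             # (64, 64)
\end{verbatim}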
Even though visibilities are not compared between the models and the observations here, we can still judge how good each model is based on this comparison using moment 0 maps, as demonstrated in the model fitting for the continuum data in Section \ref{sec:fit}. Figure \ref{fig:cde}a shows a comparison of moment 0 maps between a model and the observations. In this model, a constant C$^{18}$O abundance of $4\times 10^{-8}$ is adopted as a nominal value. In this case, significant residuals at the 9$\sigma$ level remain, as shown in Figure \ref{fig:cde}b. Inner regions show negative residuals while outer regions show positive residuals, demonstrating that the model C$^{18}$O emission is too strong in inner regions while it is too weak in outer regions, as compared with the observations. Note that neither a higher nor a lower constant C$^{18}$O abundance than $4.0\times 10^{-8}$ improves the model. For instance, the value in the interstellar medium (ISM), $5.0\times 10^{-7}$ \citep{la1994,jo2005,wi.ro1994}, provides more negative residuals than Figure \ref{fig:cde}b. Although there are a couple of factors that can change the C$^{18}$O intensity in the model, the abundance of C$^{18}$O is the only one that changes the intensity if we retain the same physical structures of the disk in the model. That is, to make the C$^{18}$O intensity weaker in inner regions and stronger in outer regions in the model, the C$^{18}$O abundance must be lower in inner regions and higher in outer regions. For instance, if the C$^{18}$O abundance in the outer regions ($R>84$ AU) is the same as the one in the ISM, $5.0\times 10^{-7}$ \citep{la1994,jo2005,wi.ro1994}, and the abundance in the inner regions ($R<84$ AU) decreases by a factor of $\sim 20$ due to the freeze-out of C$^{18}$O molecules, the observations can be explained by the model. T Tauri disks are, however, usually considered to have temperature profiles with higher temperatures in inner regions \citep[e.g.,][]{an.wi2007}, and the disk around L1527 IRS is also considered to have such a temperature profile, as \citet{to2013} suggested. With such a temperature profile, it would be difficult for the disk to have a lower C$^{18}$O abundance in the inner regions due to molecular freeze-out. \begin{figure}[ht!] \figurenum{9} \epsscale{0.5} \gridline{ \fig{C18O_compde0404.eps}{0.25\textwidth}{(a)} \fig{C18O_resde0404.eps}{0.25\textwidth}{(b)} } \caption{Comparison of C$^{18}$O moment 0 maps integrated over $|V|>2.0\ {\rm km~s^{-1}}$. (a) Observations in black contours and a model with $X({\rm C}^{18}{\rm O})=4\times 10^{-8}$ in red contours. (b) The residual obtained by subtracting the model from the observations. Contour levels are $-3,3,6,12,24,\dots \times \sigma$ in panel (a) and $-3,3,6,9,12,\dots \times \sigma$ in panel (b), where $1\sigma$ corresponds to $1.8\ {\rm mJy~beam^{-1}} \, {\rm km~s^{-1}}$. A blue-filled ellipse at the bottom right corner denotes the ALMA synthesized beam; $0\farcs 50\times 0\farcs 40,\ {\rm P.A.}=3.1^{\circ}$. \label{fig:cde}} \end{figure} One possible C$^{18}$O abundance distribution that can explain the observations is one with a local enhancement of the C$^{18}$O molecule, as has been suggested for the SO molecular abundance around L1527 IRS by \citet{oh2014}; they suggested that the SO abundance is locally enhanced around L1527 IRS because accretion shocks make the dust temperature sufficiently high for SO molecules frozen out on dust grains to be desorbed.
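Such a locally enhanced profile is easy to parametrize. The sketch below encodes the specific example discussed in the next paragraph as a simple piecewise function; the radii and abundance values are those quoted in the text.
\begin{verbatim}
import numpy as np

def x_c18o(R, x_enh=5.0e-7, x_low=2.8e-8, r1=80.0, r2=88.0):
    """Piecewise C18O fractional abundance: an ISM-like value x_enh
    in the annulus r1 <= R <= r2 [AU], and a reduced constant value
    x_low elsewhere."""
    R = np.asarray(R, dtype=float)
    return np.where((R >= r1) & (R <= r2), x_enh, x_low)

print(x_c18o([50.0, 84.0, 200.0]))   # [2.8e-08 5.0e-07 2.8e-08]
\end{verbatim}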
An example of such a C$^{18}$O abundance distribution is the ISM abundance \citep[$5.0\times 10^{-7};$][]{la1994,jo2005,wi.ro1994} at $80\leq R\leq 88$ AU and a lower constant abundance of $2.8\times 10^{-8}$ elsewhere. Figure \ref{fig:cde2} shows the comparison between the model with this C$^{18}$O abundance distribution and the observations, suggesting that the model can reproduce the observations with reasonably small residuals. \begin{figure}[ht!] \figurenum{10} \fig{C18O_compde_is.eps}{0.23\textwidth}{(a)} \fig{C18O_resde_is.eps}{0.23\textwidth}{(b)} \caption{The same as Figure \ref{fig:cde} but for the model with a local enhancement of the C$^{18}$O abundance: $X({\rm C}^{18}{\rm O})=5.0\times 10^{-7}$ at 80-88 AU and $X({\rm C}^{18}{\rm O})=2.8\times 10^{-8}$ in the other regions. \label{fig:cde2}} \end{figure} The reason for the lower C$^{18}$O abundance in the inner disk region is not clear. According to the temperature distribution of L1527 IRS by \citet{to2013}, the midplane temperature becomes lower than 30 K at $r\gtrsim 100$ AU, the sublimation temperature of CO under dense conditions \citep[10$^{8-12}$ cm$^{-3}$;][]{fu.ai2014}. Thus CO freeze-out could be present only in the outer region. In general, CO molecules may indeed not freeze out as easily in protostellar disks as in T Tauri disks because surrounding envelopes heat up the embedded disks \citep{ha2015}, although the degree of this heating effect depends on a couple of factors, such as the density distribution. There are two other possibilities to explain the observed decrease of the C$^{18}$O abundance in the inner disk region. One is that the dust can be optically thick in the inner regions, hiding part of the C$^{18}$O emission, as indicated by our best-fit model shown in Section \ref{sec:fit}. The other possibility is chemistry under warm and dense conditions. Under such conditions CO can be converted into more complex molecules such as CO$_{2}$ and organic molecules. Indeed, recent CO observations of protoplanetary disks with ALMA have reported a similar decrease of the CO abundance, which is attributed to such a chemical effect \citep[e.g.,][]{sc2016}. The CO conversion has also been reported in protostellar phases, based on single-dish and interferometric observations \citep{an2016,fu2012,yi2012,al2010}. Although it is difficult to give a strong constraint on the width of the local enhancement and on the lower C$^{18}$O abundance in the frozen-out or converted region with the current observations, the density and temperature structures derived from the continuum observations can reproduce the C$^{18}$O observations with a radial C$^{18}$O abundance profile having a local enhancement like the one discussed above. Future observations at a higher angular resolution can give a better constraint on the C$^{18}$O radial abundance profile. \section{CONCLUSIONS} \label{sec:conc} We have observed the Class 0/I protostar L1527 IRS in the Taurus star-forming region with ALMA during its Cycle 1 in the 220 GHz continuum and C$^{18}$O $J=2-1$ line emissions to probe the detailed structures of the disk and the envelope around L1527 IRS. The 220 GHz continuum emission, spatially resolved with an angular resolution of $\sim 0\farcs5 \times 0\farcs 4$, shows an elongated structure in the north-south direction, similar to those previously reported.
Its deconvolved size is estimated from a 2D Gaussian fitting to be $\sim 0\farcs53 \times 0\farcs 15$, showing a significantly thinner structure than those previously reported for the same target. The C$^{18}$O $J=2-1$ emission overall shows an elongated structure in the north-south direction with its velocity gradient mainly along the same direction. The integrated intensity map shows a double peak with the central star located between the peaks, due to an artifact of the continuum subtraction. The elongation of the continuum as well as of the C$^{18}$O emission clearly indicates that these emissions trace the disk/envelope system around L1527 IRS, and the velocity gradient along the elongation is naturally considered to be due to the rotation of the system, as was previously suggested. The radial profile of the rotational velocity of the disk/envelope system, obtained from the position-velocity diagram of the C$^{18}$O emission cut along the major axis of the continuum emission, was fitted with a double power law, providing the best-fit result with a power-law index for the inner/higher-velocity component ($p_{\rm in}$) of 0.50 and that for the outer/lower-velocity component ($p_{\rm out}$) of 1.22. This analysis clearly suggests the existence of a Keplerian disk around L1527 IRS, with a radius kinematically estimated to be $\sim 74$ AU. The dynamical mass of the central protostar is estimated to be $\sim 0.45\ M_{\sun}$. In order to investigate the structures of the disk/envelope system, $\chi ^{2}$ model fitting to the continuum visibility without any annulus averaging has been performed, revealing a density jump between the disk and the envelope, with a factor of $\sim 5$ higher density on the disk side. The disk radius geometrically identified by the density jump is consistent with the Keplerian disk radius kinematically estimated, suggesting that the density jump may be related to the kinematical transformation from infalling motions to Keplerian motions. One possible way to form such a density jump is an isothermal shock due to mass accretion at the boundary between the envelope and the disk. If this is the case, forming the density jump with a factor of $\sim 5$ requires the infall velocity in the envelope to be $\sim 0.3$ times the free-fall velocity set by the central stellar mass. In addition to the density jump, it was found that the disk is roughly in hydrostatic equilibrium. The geometrical structures of the disk found from the $\chi^{2}$ model fitting to the continuum visibility can also reproduce the C$^{18}$O observations, if C$^{18}$O freeze-out, conversion, and localized desorption possibly occurring within $\sim 1\arcsec$ from the central star are taken into account. \acknowledgments This paper makes use of the following ALMA data: ADS/JAO.ALMA2012.1.00647.S (P.I. N. Ohashi). ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. We thank all the ALMA staff for making our observations successful. We also thank the anonymous referee, who gave us invaluable comments to improve the paper. Data analysis was in part carried out on the common-use data analysis computer system at the Astronomy Data Center (ADC) of the National Astronomical Observatory of Japan. S.T.
acknowledges a grant from the Ministry of Science and Technology (MOST) of Taiwan (MOST 102-2119-M-001-012-MY3), and JSPS KAKENHI Grant Number JP16H07086, in support of this work. Y.A. is supported by the Subaru Telescope Internship Program. \vspace{5mm} \facilities{ALMA} \software{CASA, MIRIAD, IDL} \bibliographystyle{aasjournal}
\section{Introduction} \subsection{Background and motivation} \label{sec:background} Let $F$ be a number field and $\mathbb{A}$ be the associated ring of adeles. Let ${\mathrm {G}}$ be a reductive algebraic group defined over $F$ and $\pi$ an (irreducible) automorphic representation of ${\mathrm {G}}(\mathbb{A})$ as defined in \cite{BJ79, FGKP18}. Fix a Borel subgroup $B$ and let $P \subset {\mathrm {G}}$ be a standard parabolic subgroup with Levi decomposition $P=LU$, and let $\psi : U(F)\backslash U(\mathbb{A})\to \mathbb{C}^\times$ be a global unitary character. Given any automorphic form $\varphi\in \pi$ one can consider the following function on ${\mathrm {G}}(\mathbb{A})$: \begin{equation} \mathcal{F}_{U}(\varphi, \psi; g)=\intl_{U(F)\backslash U(\mathbb{A})} \varphi(ug)\psi^{-1}(u) \, du\,. \end{equation} This can be viewed as a Fourier coefficient of the automorphic form $\varphi$ with respect to the unipotent subgroup $U$. Fourier coefficients of automorphic forms carry a wealth of arithmetic and representation-theoretic information. For example, in the case of classical modular forms on the upper half-plane, Fourier coefficients are well known to encode information about the count of rational points on elliptic curves. On the other hand, for higher-rank Lie groups their arithmetic content is not always transparent, but they always encode important representation-theoretic information. Langlands showed that the constant terms in the Fourier expansion of Eisenstein series provide a source for automorphic $L$-functions \cite{L67}, and Shahidi extended this method (now called the Langlands-Shahidi method) to include also the non-constant Fourier coefficients \cite{Sha78,Sha81}. Theta correspondences provide realizations of Langlands functorial transfer between automorphic representations $\pi$ and $\pi'$ of two different groups ${\mathrm {G}}$ and ${\mathrm {G}}'$. In this context, automorphic forms attached to \emph{minimal} automorphic representations play a key role \cite{G06}. The wave front set of a minimal representation $\pi_\text{min}$ of a group ${\mathrm {G}}$ is the closure of the smallest non-trivial nilpotent coadjoint orbit $\mathcal{O}_\text{min}$ of ${\mathrm {G}}$ \cite{J76,KS90}. The automorphic realizations of minimal representations are characterized by having very few non-vanishing Fourier coefficients \cite{GRS97}. Conversely, the method of descent \cite{GRS11} can be viewed as an inverse to the functorial lifting, in which an automorphic representation of a general linear group ${\mathrm{GL}}_n$ is transferred to a representation of a smaller classical group ${\mathrm {G}}$. In this case too, Fourier coefficients of small representations enter in a crucial way. In general it is a difficult problem to obtain explicit formulas for Fourier coefficients for higher-rank groups, let alone to settle the question of whether an automorphic form $\varphi$ can be reconstructed from only a subset of its Fourier coefficients. For cusp forms on ${\mathrm{GL}}_n$ this is possible due to the Piatetski-Shapiro--Shalika formula \cite{S74,PS79} that allows one to reconstruct $\varphi$ from its Whittaker coefficients; i.e., the Fourier coefficients with respect to the unipotent radical $N$ of the Borel subgroup $B \subset {\mathrm {G}}$. These coefficients are sums of Eulerian Whittaker coefficients on subgroups of ${\mathrm {G}}$, and their non-archimedean parts can be obtained from the Casselman--Shalika formula \cite{Sh76,CS80} as described in \cite{FGKP18}.
However, even if this gives us complete control of the Fourier expansion with respect to $N$, it does not automatically give us a way of calculating an arbitrary Fourier coefficient $\mathcal{F}_{U}(\varphi, \psi; g)$ with respect to some other unipotent subgroup $U$. Such coefficients play an important role in the construction of $L$-functions, and also carry information about non-perturbative effects in string theory as described in section~\ref{sec:string}. Expanding upon the classic results of \cite{GRS97}, Miller and Sahi proved in \cite{MS12} that for automorphic forms $\varphi$ attached to a minimal representation $\pi_\text{min}$ of $E_6$ and $E_7$, any Fourier coefficient $\mathcal{F}_{U}(\varphi, \psi; g)$ is completely determined by maximally degenerate Whittaker coefficients of the form \begin{align} \label{maxdeg} \intl_{N(F)\backslash N(\mathbb{A})} \varphi(ng)\psi_\alpha (n)^{-1} \,dn\, , \end{align} where $\psi_\alpha$ is non-trivial only on the one-parameter subgroup of $N$ corresponding to the simple root $\alpha$. This result may be viewed as a global version of the classic results of Moeglin--Waldspurger in the non-archimedean setting \cite{MW87}, and of Matumoto in the archimedean setting \cite{Mat87}. For the special cases of ${\mathrm{SL}}_3$ and ${\mathrm{SL}}_4$ the Miller--Sahi results were generalized in \cite{GKP16} (following related results in \cite{FKP14}) to automorphic forms attached to a next-to-minimal automorphic representation $\pi_\text{ntm}$. It was shown that any Fourier coefficient is completely determined by \eqref{maxdeg} and coefficients of the following form \begin{equation} \int\limits_{N(F)\backslash N(\mathbb{A})} \varphi(ng)\psi_{\alpha, \beta} (n)^{-1} \,dn\,, \label{ntmdeg} \end{equation} where $\psi_{\alpha, \beta}$ is only supported on strongly orthogonal pairs of simple roots $(\alpha, \beta)$, which here reduces to the condition $[E_\alpha, E_\beta]=0$~\cite{Knapp}. The main goal of the present paper is to use the techniques of \cite{JL13,JLS16,GGS15}, in particular the notion of \emph{Whittaker pair}, to extend the above results to all of ${\mathrm{SL}}_n$. \subsection{Summary of results} We now summarize our main results. In the rest of this paper we will consider ${\mathrm{SL}}_n$ for $n \geq 5$, where we have fixed a Borel subgroup with unipotent radical $N$. Let also $T$ be the subgroup of diagonal elements of ${\mathrm{SL}}_n(F)$ and, for a character $\psi_0$ on $N$, let $T_{\psi_0}$ be the stabilizer of $\psi_0$ under the action $[h.\psi_0](n) = \psi_0(h n h^{-1})$ for $h \in T$. \label{tpsi0} Define \begin{equation} \label{eq:Gamma-i} \Gamma_i(\psi_0) \coloneqq \begin{cases} ({\mathrm{SL}}_{n-i}(F))_{\hat Y} \backslash {\mathrm{SL}}_{n-i}(F) & 1 \leq i \leq n-2 \\ (T_{\psi_0} \cap T_{\psi_{\alpha_{n-1}}}) \backslash T_{\psi_0} & i = n-1\, , \end{cases} \end{equation} where $({\mathrm{SL}}_{n-i}(F))_{\hat Y}$ is the stabilizer of $\hat Y = {}^t(1, 0, 0, \ldots, 0) \in \Mat_{(n-i)\times 1}(F)$ and consists of elements $\begin{psmallmatrix} 1 & \xi \\ 0 & h \end{psmallmatrix}$, with $h \in {\mathrm{SL}}_{n-i-1}(F)$ and $\xi \in \Mat_{1 \times (n-i-1)}(F)$. When $\psi_0 = 1$ we write $\Gamma_i(1)$ as $\Gamma_i$.
Similarly, let $({\mathrm{SL}}_j(F))_{\hat X}$ be the stabilizer of $\hat X = (0, \ldots, 0, 1) \in \Mat_{1\times j}(F)$ with respect to multiplication on the right, $\psi_0$ a character on $N$, and define \begin{equation} \label{eq:Lambda-j} \Lambda_j(\psi_0) \coloneqq \begin{cases} ({\mathrm{SL}}_j(F))_{\hat X} \backslash {\mathrm{SL}}_j(F) & 2 < j \leq n \\ (T_{\psi_0} \cap T_{\psi_{\alpha_1}}) \backslash T_{\psi_0} & j = 2 \, , \end{cases} \end{equation} where, again, we denote $\Lambda_j(1) = \Lambda_j$. Define also the embeddings $\iota, \hat \iota : {\mathrm{SL}}_{n-i} \to {\mathrm{SL}}_n$ for any $0 \leq i \leq n-1$ as \begin{equation} \label{eq:iota} \iota(\gamma) = \begin{pmatrix} I_{i} & 0 \\ 0 & \gamma \end{pmatrix} \qquad \hat\iota(\gamma) = \begin{pmatrix} \gamma & 0 \\ 0 & I_{i} \end{pmatrix} \, , \end{equation} where we for brevity suppress their dependence on $i$. Note that for $i = 0$, they are just the identity maps for ${\mathrm{SL}}_n$. The following theorem expands an automorphic form $\varphi$ attached to a small automorphic representation of ${\mathrm{SL}}_n$ in terms of highly degenerate Whittaker coefficients, similar to how cusp forms on ${\mathrm{GL}}_n$ can be expanded in terms of Whittaker coefficients with the Piatetski-Shapiro--Shalika formula \cite{S74, PS79}. The expansion of non-cuspidal automorphic forms on ${\mathrm{GL}}_n$ in terms of Whittaker coefficients was discussed in~\cite{Yukie,JL13}. \begin{mainthm} \label{thm:varphi} Let $\pi$ be a minimal or next-to-minimal irreducible automorphic representation of ${\mathrm{SL}}_n({\mathbb {A}})$, and let $\varphi \in \pi$. \begin{enumerate}[label=\textnormal{(\roman*)}, leftmargin=0cm,itemindent=1.75\parindent,labelwidth=\itemindent,labelsep=0mm,align=left] \item \label{itm:varphi-min} If $\pi=\pi_{min}$, then $\varphi$ has the expansion \begin{equation} \varphi(g) = \intl_{N(F)\backslash N(\mathbb{A})} \varphi(ng) \, dn + \sum_{i=1}^{n-1} \sum_{\gamma \in \Gamma_{i}} \, \intl_{N(F)\backslash N(\mathbb{A})} \varphi(n \iota(\gamma) g) \psi^{-1}_{\alpha_i}(n) \, dn \, . \end{equation} \item \label{itm:varphi-ntm} If $\pi=\pi_{ntm}$, then $\varphi$ has the expansion \begin{multline} \varphi(g) = \intl_{N(F) \backslash N(\mathbb{A})} \varphi(vg) \, dv + \sum_{i=1}^{n-1} \sum_{\gamma \in \Gamma_i} \intl_{N(F)\backslash N(\mathbb{A})} \varphi(v \iota(\gamma) g) \psi^{-1}_{\alpha_i}(v) \, dv +{} \\ + \sum_{j=1}^{n-3}\sum_{i=j+2}^{n-1} \sum_{\substack{\gamma_i \in \Gamma_i(\psi_{\alpha_j}\!) \\ \gamma_j \in \Gamma_j}} \intl_{N(F)\backslash N(\mathbb{A})} \varphi(v \iota(\gamma_i) \iota(\gamma_j) g) \psi^{-1}_{\alpha_j, \alpha_i} (v) \, dv \,. \end{multline} \end{enumerate} \end{mainthm} Note that the Whittaker coefficients in the last sum of case \ref{itm:varphi-ntm} have characters supported on two strongly orthogonal (or commuting) simple roots. As mentioned in section~\ref{sec:background} and further described in \cite{FGKP18}, the Whittaker coefficients are sums of Eulerian Whittaker coefficients on smaller subgroups of ${\mathrm{SL}}_n$, whose non-archimedean parts can be computed by the Casselman--Shalika formula \cite{S74, CS80}. The more degenerate a Whittaker coefficient is, the smaller the subgroup we need to consider (on which the character becomes generic). Thus, maximally degenerate Whittaker coefficients and the ones with characters supported on two commuting simple roots become particularly simple and are, in principle, one, or a product of two, known ${\mathrm{SL}}_2$ Whittaker coefficients, respectively.
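As an illustration of such a character (our example; the general definition is the one above), take $n=5$ and the pair $(\alpha_1, \alpha_3)$, which is strongly orthogonal since $[E_{\alpha_1}, E_{\alpha_3}]=0$. Writing an element of $N$ as
\begin{equation*}
n=\begin{pmatrix}
1 & x_{12} & x_{13} & x_{14} & x_{15}\\
 & 1 & x_{23} & x_{24} & x_{25}\\
 & & 1 & x_{34} & x_{35}\\
 & & & 1 & x_{45}\\
 & & & & 1
\end{pmatrix},
\end{equation*}
the character in the last sum reads $\psi_{\alpha_1, \alpha_3}(n)=\psi(x_{12}+x_{34})$, so the corresponding Whittaker coefficient only probes the two commuting one-parameter subgroups attached to $\alpha_1$ and $\alpha_3$.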
Next, we consider Fourier coefficients on maximal parabolic subgroups. Let $P_m$ be the maximal parabolic subgroup of ${\mathrm{SL}}_n$ with respect to the simple root $\alpha_m$, let $U = U_m$ be its unipotent radical, and let $L_m$ be the corresponding Levi subgroup, which stabilizes $U_m$ under conjugation. For an element $l\in L_m(F)$ and a character $\psi_U$ on $U_m$ we obtain another character $\psi_U^l$ by conjugation as \begin{equation} \psi_U^l(u) = \psi_U(lul^{-1}) \, . \end{equation} Fourier coefficients $\mathcal{F}_U$ with conjugated characters are related by $l$-translates of their arguments \begin{align} \label{eq:Fourier-L-conjugation} \mathcal{F}_U(\varphi, \psi_U^l; g) &= \intl_{U(F) \backslash U(\mathbb{A})} \varphi(ug) \psi_U^{-1}(lul^{-1}) \, du \\ &= \intl_{U(F) \backslash U(\mathbb{A})} \varphi(l^{-1}u'lg) \psi_U^{-1}(u') \, du' = \mathcal{F}_U(\varphi, \psi_U; lg) \, , \end{align} where we have first made the variable substitution $u' = lul^{-1}$ and then used the automorphic invariance since $l \in L_m(F)$. This means that we only need to compute the Fourier coefficients of one character per $L_m(F)$-orbit. We show in section~\ref{sec:fourier} that a character can be parametrized by an element $y \in \lie g$ via \eqref{eq:character} and is denoted by $\psi_y$; under conjugation it satisfies $\psi_y^l = \psi_{l^{-1} y l}$ according to \eqref{eq:character}. In section~\ref{sec:thmB} and appendix~\ref{app:levi-orbits}, we describe these orbits following~\cite{N11} and construct standard characters $\psi_{y(Y_r(d))}$ on $U_m$ based on anti-diagonal $(n-m) \times m$ rank $r$ matrices $Y_r(d)$, where $d \in F^\times/(F^\times)^2$ for $n = 2r = 2m$ and $d = 1$ otherwise (in which case we suppress the $d$), and $y(Y_r(d))$ is defined as \begin{equation} y(Y_r(d)) = \begin{pmatrix} 0_m & 0\\ Y_r(d) & 0_{n-m} \end{pmatrix}\,. \end{equation} Let $\pi$ be a minimal or next-to-minimal automorphic representation, and \begin{equation} r_\pi = \begin{cases} 1 & \text{if $\pi$ is a minimal automorphic representation} \\ 2 & \text{if $\pi$ is a next-to-minimal automorphic representation.} \end{cases} \end{equation} We will show that only the characters with rank $r \leq r_\pi \leq 2$ give non-vanishing Fourier coefficients. Let us briefly define the characters with rank $r \leq 2$ which will be used in the next theorem, postponing a more general definition to section~\ref{sec:thmB}. The rank zero character is the trivial character $\psi_{y(Y_0)} = 1$, and the corresponding Fourier coefficient has been computed in \cite{MW95}, as reviewed in \cite{FGKP18}. The rank one character is $\psi_{y(Y_1)} = \psi_{\alpha_m}$, and the rank two character can be defined as follows \begin{equation} \label{eq:psi-Y2} \psi_{y(Y_2)}(u) = \psi(u_{m,m+1} + u_{m-1, m+2}) \qquad u \in U_m(\mathbb{A}) \, . \end{equation} The following theorem, together with the known constant term, then allows us to compute any Fourier coefficient with respect to the unipotent radical of a maximal parabolic subgroup for automorphic forms attached to minimal and next-to-minimal automorphic representations in terms of Whittaker coefficients. \begin{mainthm} \label{thm:max-parabolic} Let $\pi$ be a minimal or next-to-minimal irreducible automorphic representation of ${\mathrm{SL}}_n(\mathbb{A})$, and let $r_\pi$ be $1$ or $2$, respectively (denoting the maximal rank of the character matrix $Y_r$).
Let also $\varphi \in \pi$, let $P_m$ be the maximal parabolic subgroup described above with its associated subgroups $U\equiv U_m$ and $L_m$, and let $\psi_U$ be a non-trivial character on $U_m$ with Fourier coefficient $$\mathcal{F}_U(\varphi, \psi_U; g) = \int_{U_m(F)\backslash U_m(\mathbb{A})} \varphi(ug) \psi_U^{-1}(u) \, du\, .$$ Then, there exists an element $l \in L_m(F)$ such that $$\mathcal{F}_U(\varphi, \psi_U; g) = \mathcal{F}_U(\varphi, \psi_{y(Y_r(d))}; lg)$$ for some standard character $\psi_{y(Y_r(d))}$ described above and in the proof. Additionally, all $\mathcal{F}_U(\varphi, \psi_{y(Y_r(d))}; lg)$ for $r > r_\pi$ vanish identically. The remaining (non-constant) coefficients can be expressed in terms of Whittaker coefficients on $N$ as follows. \begin{enumerate}[label=\textnormal{(\roman*)}, leftmargin=0cm,itemindent=1.75\parindent,labelwidth=\itemindent,labelsep=0mm,align=left] \item \label{itm:max-parabolic-min} If $\pi = \pi_\text{min}$: \begin{flalign} \qquad \mathcal{F}_U(\varphi, \psi_{y(Y_1)}; g) &= \intl_{N(F)\backslash N(\mathbb{A})} \varphi(ng) \psi_{\alpha_m}^{-1}(n) \, dn \, . & \end{flalign} \item \label{itm:max-parabolic-ntm-rank1} If $\pi = \pi_\text{ntm}$: \begin{flalign} \qquad \mathcal{F}_U(\varphi, \psi_{y(Y_1)}; g) &= \begin{multlined}[t] \intl_{[N]} \varphi(ng) \psi_{\alpha_m}^{-1}(n) \, dn +{} \\ + \sum_{j=1}^{m-2} \sum_{\gamma \in \Lambda_j(\psi_{\alpha_m}\!)}\, \intl_{[N]} \varphi(n\hat\iota(\gamma) g) \psi_{\alpha_j, \alpha_m}^{-1}(n) \, dn +{} \\ + \sum_{i=m+2}^{n-1} \sum_{\gamma \in \Gamma_i(\psi_{\alpha_m}\!)} \, \intl_{[N]} \varphi(n \iota(\gamma) g) \psi_{\alpha_m, \alpha_i}^{-1}(n) \, dn \, . \end{multlined}& \end{flalign} \item \label{itm:max-parabolic-ntm-rank2} If $\pi = \pi_\text{ntm}$: \begin{flalign} \qquad \mathcal{F}_U(\varphi, \psi_{y(Y_2)}; g) &= \intl_{C({\mathbb {A}})}\intl_{N(F)\backslash N(\mathbb{A})} \varphi (n\omega cg)\psi_{\alpha_1,\alpha_3}^{-1}(n) \,dn\,dc\, , & \end{flalign} where $\omega$ is the Weyl element mapping the torus elements $$(t_1, t_2, \ldots, t_n) \mapsto (t_{m-1}, t_{m+2}, t_m, t_{m+1}, t_1, t_2, \ldots, t_{m-2}, t_{m+3}, t_{m+4}, \ldots, t_n)\,,$$ and the subgroup $C$ of $U_m$ will be detailed in the proof in section~\ref{sec:thmB}. \end{enumerate} \end{mainthm} As described in detail in section~\ref{sec:fourier}, $F$-rational nilpotent orbits of ${\mathrm{SL}}_n$ are characterized by $(\underline p, d)$, where $\underline p$ is a partition of $n$ and $d \in F^\times/(F^\times)^k$ with $k = \gcd(\underline p)$. If $k = 1$ we will often suppress the extra $d = 1$ and only write out the partition. There we will also see that, for each orbit, there are natural choices of unipotent subgroups and characters related by conjugations with elements $\gamma \in {\mathrm{SL}}_n(F)$, and the corresponding Fourier coefficients \eqref{eq:orbit-coefficient} are related by $\gamma$-translates of their arguments. The orbits may be partially ordered, and the minimal and next-to-minimal orbits are described by the partitions $[21^{n-2}]$ and $[2^21^{n-4}]$, respectively. Besides the trivial partition, these are the only partitions whose associated Fourier coefficients are non-vanishing for $\varphi$ in a minimal or next-to-minimal irreducible automorphic representation. In section~\ref{sec:orbit-coefficients} we choose standard representatives for these orbits and specify the associated standard Fourier coefficients, which we denote by $\mathcal{F}^{[211\ldots]}$ and $\mathcal{F}^{[221\ldots]}$.
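For concreteness (our illustration, not part of the theorems), take $n=5$ and $m=2$. Then the unipotent radical $U_{2}$ consists of the elements
\begin{equation*}
u=\begin{pmatrix}
1 & 0 & u_{13} & u_{14} & u_{15}\\
0 & 1 & u_{23} & u_{24} & u_{25}\\
 & & 1 & 0 & 0\\
 & & & 1 & 0\\
 & & & & 1
\end{pmatrix},
\end{equation*}
and the rank two character \eqref{eq:psi-Y2} reduces to $\psi_{y(Y_2)}(u)=\psi(u_{23}+u_{14})$, which is also the character appearing in theorem~\ref{thm:ntm-coeff} below.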
For $n\geq 5$, we have that the trivial, minimal and next-to-minimal orbits all have $k=1$. The following theorems express these standard Fourier coefficients associated with the two partitions above in terms of Fourier coefficients on maximal parabolic subgroups that, in turn, were written in terms of Whittaker coefficients in theorem~\ref{thm:max-parabolic}. \begin{mainthm} \label{thm:min-coeff} Let $\pi$ be an irreducible automorphic representation of ${\mathrm{SL}}_n(\mathbb{A})$, $\varphi \in \pi$ and $Y = \prod_{i=3}^n X_{e_i - e_2}$. Then, \begin{equation} \mathcal{F}^{[211\ldots]}(\varphi; g) = \sum_{y \in Y(F)} \, \intl_{U_1(F)\backslash U_1(\mathbb{A})} \varphi(u y^{-1} g) \psi_{\alpha_1}^{-1}(u) \, du \, , \end{equation} where $U_1$ is the unipotent radical of $P_1$ consisting of the first row of $N$. The Fourier coefficient $\mathcal{F}^{[211\ldots]}$ is for a particular standard choice of orbit representative detailed in the proof; all other choices are related simply by ${\mathrm{SL}}_n(F)$ translation. \end{mainthm} \begin{mainthm} \label{thm:ntm-coeff} Let $\pi$ be an irreducible automorphic representation of ${\mathrm{SL}}_n(\mathbb{A})$, $\varphi \in \pi$, $Y' = \prod_{i=5}^{n} X_{e_i-e_4} \prod_{i=5}^{n} X_{e_i-e_3}$ and $\omega$ be the Weyl element mapping the torus elements $$(t_1, t_2, \ldots, t_n) \mapsto (t_1, t_3, t_4, t_2, t_5, t_6, \ldots, t_n)\, .$$ Then, \begin{equation} \mathcal{F}^{[221\ldots]}(\varphi; g) = \sum_{y \in Y'(F)} \, \intl_{U_2(F)\backslash U_2(\mathbb{A})} \varphi(u y^{-1} \omega g) \psi_{y(Y_2)}^{-1}(u) \, du \, , \end{equation} where $U_2$ is the unipotent radical of $P_2$ consisting of the first two rows of $N$ and $\psi_{y(Y_2)}$ is defined in \eqref{eq:psi-Y2} with $m = 2$. The Fourier coefficient $\mathcal{F}^{[221\ldots]}$ is for a particular standard choice of orbit representative detailed in the proof; all other choices are related simply by ${\mathrm{SL}}_n(F)$ translation. \end{mainthm} \subsection{Applications in string theory} \label{sec:string} String theory is a quantum theory of gravity describing maps $X: \Sigma \to M$, where $\Sigma$ is a Riemann surface (the string worldsheet) and $M$ is a ten-dimensional pseudo-Riemannian manifold (spacetime). Its low-energy limit is a supersymmetric extension of Einstein's theory of gravity in 10 dimensions, coupled to additional matter in the form of scalar fields $\Phi : M\to \mathbb{C}$ and differential forms on spacetime $M$. Our main focus here will be the scalar fields. The scalar fields parametrize the space of string theory vacua, i.e. the moduli space $\mathcal{M}$. To make contact with a lower-dimensional world, one choice is to decompose spacetime into $$M=\mathbb{R}^{1,9-n}\times T^n\,,$$ where $\mathbb{R}^{1,9-n}$ is the flat Minkowski space in $10-n$ dimensions and $T^n$ is an $n$-dimensional torus. In the limit when the size of the torus is small, the physics looks effectively $(10-n)$-dimensional and one says that the theory has been \emph{compactified}. As the size of the torus is increased the moduli space $\mathcal{M}$ gets larger and larger due to an increased number of scalar fields $\Phi$. The moduli space for this toroidal compactification is always of the form \begin{equation} \mathcal{M}=\mathrm{G}(\mathbb{Z})\backslash \mathrm{G}(\mathbb{R})/K\,, \end{equation} where $\mathrm{G}(\mathbb{R})$ is a semi-simple Lie group in its split real form, $K$ its maximal compact subgroup and $\mathrm{G}(\mathbb{Z})$ an arithmetic subgroup.
The group $\mathrm{G}(\mathbb{Z})$ is known as the \emph{U-duality group} and is a symmetry of the full quantum string theory. In the extreme case $n=0$, i.e. no compactification, the moduli space is given by $$\mathcal{M}={\mathrm{SL}}_2(\mathbb{Z})\backslash {\mathrm{SL}}_2(\mathbb{R})/\mathrm{SO}_2\, .$$ Another extreme case is $n=6$, corresponding to four space-time dimensions, for which the moduli space is given by \cite{HT95} \begin{equation} \mathcal{M}= \mathrm{E}_7(\mathbb{Z})\backslash \mathrm{E}_7(\mathbb{R})/(\mathrm{SU}_8/\mathbb{Z}_2)\,. \end{equation} Here $\mathrm{E}_7(\mathbb{R})$ is the split real form and $\mathrm{E}_7(\mathbb{Z})$ its Chevalley group of integer points. The sequence of groups in between is obtained by successively removing nodes from the $\mathrm{E}_7$ Dynkin diagram; see table \ref{tab:duality} for the complete list. Constraints from $U$-duality and supersymmetry ensure that certain quantum corrections to Einstein's gravitational theory involve functions $f : \mathcal{M} \to \mathbb{C}$ that must be eigenfunctions of the ring of $\mathrm{G}(\mathbb{R})$-invariant differential operators. In particular they are eigenfunctions of the Laplacian on $\mathrm{G}(\mathbb{R})/K$ with specific eigenvalues. In addition, they must have well-behaved growth properties in certain limits corresponding to `cusps' of $\mathcal{M}$. Such quantum corrections are therefore controlled by automorphic forms on $\mathcal{M}$. It turns out that the relevant automorphic forms are very special and are precisely those attached to a minimal and next-to-minimal automorphic representation of the groups $\mathrm{G}$ \cite{GMRV10,P10,GMV15}. The Fourier coefficients of such automorphic forms therefore have a direct physical interpretation: the constant terms encode perturbative quantum corrections, while the non-constant terms correspond to non-perturbative, instanton, effects \cite{FK12,FKP14,BV14,BV15a,BV15b,BCP17a,BP17,BCP17b}. For a recent book on automorphic representations and the connection with string theory, see \cite{FGKP18}. Fourier coefficients with respect to different choices of parabolic subgroups $P\subset \mathrm{G}$ correspond to different limits in string theory, and reveal different types of effects. The ones of main interest are certain maximal parabolic subgroups. Let $P_\alpha= L_\alpha U_\alpha$ denote the maximal parabolic whose Levi subgroup is $L_\alpha = M_\alpha\times {\mathrm{GL}}_1$, where $M_\alpha$ is obtained by removing the node in the Dynkin diagram of $\mathrm{G}$ corresponding to the simple root $\alpha$. There are three types of maximal parabolics of main interest in string theory (the numbering of nodes are according to the Bourbaki convention of the exceptional Lie algebras): \begin{itemize} \item $P_{\alpha_1}$: this is the \emph{perturbative, or string theory, limit} where the Levi is of orthogonal type $M_{\alpha_1}=\mathrm{D}_{n}$; \item $P_{\alpha_2}$: this is the \emph{M-theory limit} where the Levi is of type $M_{\alpha_2}=\mathrm{A}_{n}$; \item $P_{\alpha_{n+1}}$: this is the \emph{decompactification limit} where the Levi is of exceptional type $M_{\alpha_{n+1}}=\mathrm{E}_n$ (for $n<6$ these are strictly speaking not exceptional, but given by table \ref{tab:duality}).
\end{itemize} Theorem \ref{thm:max-parabolic}, together with its counterpart in \cite{GKP16}, then provides explicit results for the Fourier coefficients of automorphic forms in all these parabolics for the cases $n=2$ or $n=3$ when the symmetry groups are ${\mathrm{SL}}_2\times {\mathrm{SL}}_3$ or ${\mathrm{SL}}_5$, respectively. The case of ${\mathrm{SL}}_5$ will be treated in detail in section \ref{sec:sl5}. \begin{table}[t!hb] \centering \caption{\label{tab:duality} List of U-duality groups in compactifications of (type IIB) string theory on $T^n$.} \begin{tabular}{|c|c|c|c|} \hline $n$ & $\mathrm{G}(\mathbb{R})$ & $K(\mathbb{R})$ & $\mathrm{G}(\mathbb{Z})$ \\ \hline $0$ & ${\mathrm{SL}}_2(\mathbb{R})$ & ${\mathrm{SO}}_2$ & ${\mathrm{SL}}_2(\mathbb{Z})$ \\ $1$ & ${\mathrm{GL}}_2(\mathbb{R})$ & ${\mathrm{SO}}_2$ & ${\mathrm{SL}}_2(\mathbb{Z})$ \\ $2$ & ${\mathrm{SL}}_2(\mathbb{R})\times {\mathrm{SL}}_3(\mathbb{R})$ & ${\mathrm{SO}}_2\times {\mathrm{SO}}_2$ & ${\mathrm{SL}}_3(\mathbb{Z})\times {\mathrm{SL}}_2(\mathbb{Z})$\\ $3$ & ${\mathrm{SL}}_5(\mathbb{R})$ & ${\mathrm{SO}}_5$ & ${\mathrm{SL}}_5(\mathbb{Z})$ \\ $4$ & ${\mathrm{Spin}}_{5,5}(\mathbb{R})$ & $({\mathrm{Spin}}_5\times {\mathrm{Spin}}_5)/\mathbb{Z}_2$ & ${\mathrm{Spin}}_{5,5}(\mathbb{Z})$\\ $5$ & $\mathrm{E}_6(\mathbb{R})$ & $\mathrm{USp}_8/\mathbb{Z}_2$ & $\mathrm{E}_6(\mathbb{Z})$ \\ $6$ & $\mathrm{E}_7(\mathbb{R})$ & ${\mathrm{SU}}_8/\mathbb{Z}_2$ & $\mathrm{E}_7(\mathbb{Z})$\\ $7$ & $\mathrm{E}_8(\mathbb{R})$ & ${\mathrm{Spin}}_{16}/\mathbb{Z}_2$ & $\mathrm{E}_8(\mathbb{Z})$\\ \hline \end{tabular} \end{table} \subsection*{Acknowledgements} We have greatly benefitted from many discussions with Dmitry Gourevitch, Joseph Hundley, Stephen D. Miller and Siddhartha Sahi. We gratefully acknowledge support from the Simons Center for Geometry and Physics, Stony Brook University, during the program on ``Automorphic forms, mock modular forms and string theory'' in the fall of 2016, during which part of the research for this paper was performed. The fourth named author is partially supported by NSF grant DMS-1702218 and by a start-up fund from the Department of Mathematics at Purdue University. \section{Nilpotent orbits and Fourier coefficients} \label{sec:fourier} In this section, we first introduce Whittaker pairs and nilpotent orbits with their associated Fourier coefficients following~\cite{GGS15}, whose definition is slightly more general and easier to use than the one given in~\cite{G06}. Then we recall the parametrization of $F$-rational nilpotent orbits of ${\mathrm{SL}}_n$ in terms of partitions of $n$ from \cite{N11} and a lemma for exchanging roots in Fourier integrals from \cite{GRS11}. As before, let $F$ be a number field, $\mathbb{A}$ be the adele ring of $F$ and fix a non-trivial additive character $\psi$ on $F \backslash {\mathbb {A}}$. Let also ${\mathrm {G}}$ be a reductive group defined over $F$, or a central extension of finite degree, and let $\lie g$ be the Lie algebra of ${\mathrm {G}}(F)$. For a semi-simple element $s \in \lie g$, let $\lie g^s_i$ denote the eigenspace of $\operatorname{ad}(s)$ in $\lie g$ with eigenvalue $i$; the adjoint action of $s$ thus decomposes $\lie g$ into a direct sum of eigenspaces over the different eigenvalues. For any $r \in {\mathbb {Q}}$, we further define $\lie g^s_{\geq r} = \oplus_{r' \geq r} \lie g^s_{r'}$ and similarly for other inequality relations.
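The following elementary example, included only to fix the notation, illustrates these eigenspaces in the smallest case.
\begin{examp}
Let $\lie g = \lie{sl}_2$ with standard basis $e$, $h$, $f$ and take the semi-simple element $s = h = \begin{psmallmatrix} 1 & 0 \\ 0 & -1 \end{psmallmatrix}$. Then $[s,e] = 2e$ and $[s,f] = -2f$, so that
\begin{equation*}
\lie g^s_2 = Fe \qquad \lie g^s_0 = Fh \qquad \lie g^s_{-2} = Ff\,,
\end{equation*}
and, for example, $\lie g^s_{\geq 1} = \lie g^s_{\geq 2} = Fe$.
\end{examp}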
For an element $X \in \lie g$, we will also denote the centralizer of $X$ in $\lie g$ as \begin{equation} \lie g_X = \left\{ x \in \lie g \,\middle|\, \left[x,X\right]=0\right\}\,. \end{equation} Furthermore, a semi-simple element $s$ is called \emph{rational semi-simple} if all of its eigenvalues under the adjoint action on $\lie g$ are in ${\mathbb {Q}}$. For such a rational semi-simple element $s$ and a non-trivial nilpotent element $u \in \lie g^s_{-2}$ we call the ordered pair $(s, u)$ a \emph{Whittaker pair}. For such a pair, $s$ is called a \emph{neutral element} for $u$, and $(s, u)$ a \emph{neutral pair}, if the map $\lie g^s_0 \to \lie g^s_{-2} : X \mapsto [X,u]$ is surjective or, equivalently \cite[Lemma 2.2.1]{GGS15}, if $s \in \Im(\operatorname{ad}(u))$. An \emph{$\lie{sl}_2$-triple} is an ordered triple $(u, s, v)$ of elements in $\lie g$ that satisfy the standard commutation relations for $\lie{sl}_2$, \begin{equation} [s, v] = 2v \qquad [s, u] = - 2u \qquad [v, u] = s\,, \end{equation} where $u$ is called the nil-negative element, $v$ is called the nil-positive element and $s$ is a neutral element for $u$. We have, from \cite[Lemma 2.2.1]{GGS15}, that a Whittaker pair $(s,u)$ comes from an $\lie{sl}_2$-triple $(u,s,v)$ if and only if $s$ is a neutral element for $u$. By the Jacobson--Morozov theorem, there exists an $\lie{sl}_2$-triple for any nilpotent element $u \in \lie g$. Moreover, the ${\mathrm {G}}$-conjugacy classes of $\lie{sl}_2$-triples are in one-to-one correspondence with the nilpotent orbits $\mathcal{O}_X = \{gXg^{-1} \mid g \in {\mathrm {G}}(F)\}$ in $\lie g$ \cite{CollingwoodMcGovern}. We will now construct the Fourier coefficient that is associated to a Whittaker pair $(s, u)$. The pair defines a unipotent subgroup $N_s$ and a character $\psi_u$ on $N_s$ as follows. Following \cite[Lemma 3.2.6]{GGS15}, let \begin{equation} \lie{n}_s = \lie{g}^s_{>1} \oplus \left( \lie{g}^s_1 \cap \lie{g}_u \right) \, , \end{equation} which is a nilpotent subalgebra of $\lie g$, and define $N_s = \exp(\lie n_s)$ as the corresponding unipotent subgroup of ${\mathrm {G}}$. Then $\psi_u$, defined by \begin{equation} \psi_u(n) = \psi(\langle u, \log(n)\rangle) \qquad n \in N_s(\mathbb{A})\,, \end{equation} is a character on $N_s(\mathbb{A})$, where $\langle \cdot, \cdot \rangle$ is the Killing form. Note that if the Whittaker pair $(s,u)$ comes from an $\mathfrak{sl}_2$-triple $(u,s,v)$, then, by $\lie{sl}_2$ representation theory, $\operatorname{ad}(s)$ has integer eigenvalues with a graded decomposition of the Lie algebra $\lie g = \bigoplus_{i \in \mathbb{Z}} \lie{g}^s_i$ and $\lie g_u \subset \bigoplus_{i \leq 0} \lie{g}^s_i$ \cite{CollingwoodMcGovern}, and thus, \begin{equation} \label{eq:neutral-ns} \mathfrak{n}_s=\mathfrak{g}^s_{\geq 2} \qquad (\text{for neutral } s). \end{equation} Let $\pi$ be an automorphic representation of ${\mathrm {G}}({\mathbb {A}})$ and $\varphi$ an automorphic form attached to $\pi$. The Fourier coefficient associated with a Whittaker pair $(s, u)$ is \begin{equation} \label{fc} {\mathcal {F}}_{s,u}(\varphi)(g) = \intl_{N_{s}(F) \backslash N_{s}({\mathbb {A}})} \varphi(ng){\psi}^{-1}_u(n)dn, \quad g \in {\mathrm {G}}({\mathbb {A}}) \, , \end{equation} and let ${\mathcal {F}}_{s,u}(\pi)=\{{\mathcal {F}}_{s,u}(\varphi) \mid \varphi\in \pi\}$. For convenience, we introduce the following notation for a unipotent subgroup $U$ \begin{equation} [U] = U(F) \backslash U(\mathbb{A}) \, .
\end{equation} Consider the Fourier coefficient associated with a neutral Whittaker pair $(s, u)$, and let $(s', u') = (\gamma s\gamma^{-1}, \gamma u \gamma^{-1})$, which is also neutral, for any $\gamma\in {\mathrm {G}}(F)$. Because of the invariance of the Killing form we have that $\psi_{u'}(n') = \psi_u(\gamma^{-1} n' \gamma)$ where $n' \in [N_{s'}]$, and because of \eqref{eq:neutral-ns} we have that $N_{s'} = \gamma N_s \gamma^{-1}$. Thus, with a variable substitution $n' = \gamma n \gamma^{-1}$, \begin{equation} \label{eq:orbit-coefficient} \begin{split} {\mathcal {F}}_{s',u'}(\varphi)(g) &= \intl_{[\gamma N_{s} \gamma^{-1}]} \varphi(n'g) \psi^{-1}_u(\gamma^{-1} n' \gamma) \, dn' \\ &= \intl_{[N_s]} \varphi(\gamma n \gamma^{-1} g) \psi_u^{-1}(n) \, dn = {\mathcal {F}}_{s,u}(\varphi)(\gamma^{-1}g)\,, \end{split} \end{equation} using the automorphic invariance of $\varphi$. Note the resemblance with \eqref{eq:Fourier-L-conjugation} where we made a conjugation keeping $N_s$ invariant. In particular, \eqref{eq:orbit-coefficient} means that if ${\mathcal {F}}_{s,u}$ vanishes identically then so do all Fourier coefficients associated to neutral Whittaker pairs $(s',u')$ where $u' \in \mathcal{O}_u$. For an $F$-rational nilpotent orbit $\mathcal{O}$, we say that the coefficients ${\mathcal {F}}_{s,u}$ with neutral $s$ and $u \in \mathcal{O}$ are Fourier coefficients attached to the nilpotent orbit $\mathcal{O}$. We define the \emph{(global) wave-front set} $\mathcal{WF}(\pi)$ of an automorphic representation $\pi$ of ${\mathrm {G}}(\mathbb{A})$ as the set of nilpotent orbits $\mathcal{O}$ such that $\mathcal{F}_{s,u}(\pi)$ is non-zero, for some (and therefore all) neutral Whittaker pairs $(s,u)$ with $u \in \mathcal{O}$. Note that nilpotent orbits can be partially ordered with respect to the inclusion of Zariski closures: $\mathcal{O}' \leq \mathcal{O}$ if $\overline{\mathcal{O}'} \subseteq \overline{\mathcal{O}}$. We recall \cite[Theorem C]{GGS15} as follows. \begin{thm}[Theorem C, \cite{GGS15}]\label{thm:ggsglobal} Let $\pi$ be an automorphic representation of ${\mathrm {G}}({\mathbb {A}})$, let $(s,u)$ be a Whittaker pair, and $(h, u)$ a neutral Whittaker pair such that ${\mathcal {F}}_{h,u}(\pi)$ is zero. Then, ${\mathcal {F}}_{s,u}(\pi)$ is zero. \end{thm} This means that if $u \in \mathcal{O}$ where $\mathcal{O} \notin \mathcal{WF}(\pi)$ then, for any Whittaker pair $(s, u)$, not necessarily neutral, the associated Fourier coefficient $\mathcal{F}_{s, u}(\varphi)$ vanishes identically for $\varphi \in \pi$. In this paper, we focus on the group ${\mathrm{SL}}_n$ where we parametrize a character on $N_s$ by $u \in \lie g^{s}_{-2}$ as \begin{equation} \label{eq:character} \psi_u(n)=\psi({\mathrm{tr}} (u\log(n))) \qquad n \in N_s(\mathbb{A}). \end{equation} Then, for any $l$ in the normalizer of $N_s(\mathbb{A})$ in ${\mathrm {G}}(\mathbb{A})$ \begin{equation} \label{eq:character-conjugation} \begin{split} \psi_y^l(x) &= \psi_y(l x l^{-1}) = \psi\big({\mathrm{tr}}( y \log(l x l^{-1}))\big) = \psi\big({\mathrm{tr}}( y l \log(x) l^{-1})\big) \\ &= \psi\big({\mathrm{tr}}(l^{-1} y l \log(x))\big) = \psi_{l^{-1} y l}(x) \, . \end{split} \end{equation} The nilpotent orbits of ${\mathrm{SL}}_n$ can be described by partitions $\underline{p}$ of $n$. Let us characterize the $F$-rational orbits of ${\mathrm{SL}}_n$ following \cite{N11}.
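Before turning to the classification, we illustrate the preceding definitions in the smallest case; the example is included only for orientation and uses nothing beyond standard $\lie{sl}_2$ facts.
\begin{examp}
Let ${\mathrm {G}} = {\mathrm{SL}}_2$, $s = \operatorname{diag}(1,-1)$ and $u = \begin{psmallmatrix} 0 & 0 \\ 1 & 0 \end{psmallmatrix} \in \lie g^s_{-2}$. Together with $v = \begin{psmallmatrix} 0 & 1 \\ 0 & 0 \end{psmallmatrix}$, the triple $(u,s,v)$ is an $\lie{sl}_2$-triple, so $(s,u)$ is a neutral Whittaker pair and $N_s = \exp(\lie g^s_{\geq 2})$ is the group of upper unitriangular matrices. For $n = \begin{psmallmatrix} 1 & x \\ 0 & 1 \end{psmallmatrix}$ we have ${\mathrm{tr}}(u \log(n)) = x$, so that \eqref{eq:character} gives $\psi_u(n) = \psi(x)$ and \eqref{fc} becomes the classical Whittaker coefficient
\begin{equation*}
{\mathcal {F}}_{s,u}(\varphi)(g) = \intl_{F \backslash {\mathbb {A}}} \varphi\Big( \begin{psmallmatrix} 1 & x \\ 0 & 1 \end{psmallmatrix} g \Big) \psi^{-1}(x) \, dx \, ,
\end{equation*}
attached to the orbit with partition $[2]$.
\end{examp}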
\begin{prop}[Proposition 4, \cite{N11}]\label{orbits} Let $\underline{p}=[p_1 p_2 \cdots p_r]$ be an ordered partition of $n$, with $p_1 \geq p_2 \geq \ldots \geq p_r$ and let $m = \gcd(\underline{p})=\gcd(p_1, p_2, \ldots, p_r)$. For $d \in F^\times$, define $D(d) = \operatorname{diag}(1, 1, \ldots, 1, d)$ and let also $J_{\underline{p}}$ be the standard (lower triangular) Jordan matrix corresponding to $\underline{p}$: $J_{\underline{p}} = \operatorname{diag}(J_{[p_1]}, J_{[p_2]}, \ldots, J_{[p_r]})$, where $J_{[p]}$ is a $p\times p$ matrix whose only non-zero elements are ones on the subdiagonal. \begin{enumerate} \item For each $d \in F^\times$, the matrix $D(d)J_{\underline{p}}$ is a representative of an $F$-rational nilpotent orbit of ${\mathrm{SL}}_n$ parametrized by $\underline{p}$, and conversely, every orbit parametrized by $\underline{p}$ has a representative of this form. We say that the $F$-rational orbit represented by $D(d)J_{\underline{p}}$ is parametrized by $(\underline{p},d)$. \item The ${\mathrm{SL}}_n(F)$-orbits represented by $D(d)J_{\underline{p}}$ and $D(d')J_{\underline{p}'}$ coincide if and only if $\underline{p}=\underline{p}'$ and $d \equiv d'$ in $F^\times/(F^\times)^m$. \end{enumerate} \end{prop} \begin{examp} The $F$-rational orbit $([322], 1)$ of ${\mathrm{SL}}_7$ is represented by \begin{equation} J_{[322]}=\operatorname{diag}(J_{[3]}, J_{[2]}, J_{[2]})=\begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}\,. \end{equation} \end{examp} \begin{rmk} Over $\overline{F}$ the $F$-rational orbits for different $d$ become the same, meaning that they are completely characterized by partitions of $n$. There is a partial ordering for partitions that agrees with the partial ordering of the $\overline{F}$-orbits, where $[p_1p_2\ldots p_r] \leq [q_1q_2\ldots q_r]$ (possibly padded by zeroes) if \cite{CollingwoodMcGovern} \begin{equation} \sum_{1\leq j \leq k} p_j \leq \sum_{1\leq j \leq k} q_j \quad \text{for } 1 \leq k \leq r \, . \end{equation} The Zariski topology over $F$ is induced from that of $\overline F$, which means that we can use this partial ordering of partitions for the $F$-rational orbits as well. Thus, when discussing the partial ordering of orbits or the closure of orbits we will sometimes not specify the $F$-rational orbit, but only the partition, that is, the ${\mathrm{SL}}_n(\overline{F})$-orbit. \end{rmk} An automorphic representation $\pi$ of ${\mathrm{SL}}_n(\mathbb{A})$ is called {\it minimal} if $\mathcal{WF}(\pi)$ is the set of orbits in the closure of the minimal (non-trivial) orbit which is represented by the partition $[21^{n-2}]$, and it is called \emph{next-to-minimal} if it is instead the set of orbits in the closure of the next-to-minimal orbit $[2^21^{n-4}]$. We will now recall a general lemma for exchanging roots in Fourier coefficients from \cite{GRS11}. In \cite{GRS11}, the groups considered are quasi-split classical groups, but the lemma holds for any connected reductive group with exactly the same proof. Let $\mathrm{G}$ be a connected reductive group defined over $F$ and let $C$ be an $F$-subgroup of a maximal unipotent subgroup of $\mathrm{G}$.
Let also $\psi_C$ be a non-trivial character on $[C] = C(F) \backslash C({\mathbb {A}})$, and $X, Y$ two unipotent $F$-subgroups satisfying the following conditions: \begin{enumerate}[label=(\arabic*)] \item $X$ and $Y$ normalize $C$; \item $X \cap C$ and $Y \cap C$ are normal in $X$ and $Y$, respectively, and $(X \cap C) \backslash X$ and $(Y \cap C) \backslash Y$ are abelian; \item $X({\mathbb {A}})$ and $Y({\mathbb {A}})$ preserve $\psi_C$ under conjugation; \item $\psi_C$ is trivial on $(X \cap C)({\mathbb {A}})$ and $(Y \cap C)({\mathbb {A}})$; \item $[X, Y] \subset C$; \item there is a non-degenerate pairing \begin{align} (X \cap C)({\mathbb {A}}) &\times (Y \cap C)({\mathbb {A}}) \rightarrow {\mathbb {C}}^\times \\ (x,y) &\mapsto \psi_C([x,y]) \end{align} which is multiplicative in each coordinate, and identifies \linebreak[4]$(Y \cap C)(F) \backslash Y(F)$ with the dual of $ X(F)(X \cap C)({\mathbb {A}}) \backslash X({\mathbb {A}}), $ and $(X \cap C)(F) \backslash X(F)$ with the dual of $ Y(F)(Y \cap C)({\mathbb {A}}) \backslash Y({\mathbb {A}}). $ \end{enumerate} Let $B =CY$ and $D=CX$, and extend $\psi_C$ trivially to characters of $[B]=B(F)\backslash B({\mathbb {A}})$ and $[D]=D(F)\backslash D({\mathbb {A}})$, which will be denoted by $\psi_B$ and $\psi_D$ respectively. \begin{lem}[Lemma 7.1 of \cite{GRS11}]\label{exchangeroots} Assume that $(C, \psi_C, X, Y)$ satisfies all the above conditions. Let $f$ be an automorphic form on $\mathrm{G}({\mathbb {A}})$. Then for any $g \in \mathrm{G}({\mathbb {A}})$, $$\int_{[B]} f(vg) \psi_B^{-1}(v) dv = \int_{(Y \cap C) ({\mathbb {A}}) \backslash Y({\mathbb {A}})} \int_{[D]} f(vyg) \psi_D^{-1}(v) \,dv\,dy\,.$$ \end{lem} For simplicity, we will use $\psi_C$ to denote its extensions $\psi_B$ and $\psi_D$ when using the lemma. \section{Proof of theorem \ref{thm:varphi}} \label{sec:thmA} Before we prove Theorem \ref{thm:varphi} in this section, let us first introduce a few definitions and useful lemmas. Let $V_i$ be the unipotent radical of the parabolic subgroup of type $(1^i,n-i)$, that is, the parabolic subgroup with Levi subgroup $({\mathrm{GL}}_1)^i \times {\mathrm{GL}}_{n-i}$ together with a determinant-one condition. Then, $N= V_n = V_{n-1}$ is the unipotent radical of the Borel subgroup and $V_i$ can be seen as the first $i$ rows of $N$. For $1 \leq i \leq n-1$, let $\alpha_i = e_i-e_{i+1}$ be the $i$-th simple root of ${\mathrm{SL}}_n$, and let $\psi_{\alpha_i}$ be the character of $N$ defined by \begin{equation} \psi_{\alpha_i}(n)= \psi(n_{i,i+1}), \quad \forall n \in N({\mathbb {A}}) \, . \end{equation} For a list of simple roots, we let $\psi_{\alpha_{i_1}, \ldots, \alpha_{i_m}} = \psi_{\alpha_{i_1}} \cdots \psi_{\alpha_{i_m}}$ and we also regard $\psi_{\alpha_j}$ for $j \leq i$ as a character of $V_i$ via restriction. Also, let $R_{i+1}$ be the subgroup of $V_{i+1}$ consisting of the elements $v$ such that $v_{p,q}=0$ for all $1 \leq p \leq i$ and $p < q \leq n$; that is, $R_{i+1}$ consists of row $i+1$ of $V_{i+1}$. It is clear that $R_{i+1} \cong V_i \backslash V_{i+1}$ is an abelian subgroup of $V_{i+1}$. For a character $\psi_N$ on $N$, we say that $\psi_N$ is trivial along a simple root $\alpha_i$ if the restriction of $\psi_N$ to $R_i$ is trivial.
\begin{examp} For ${\mathrm{SL}}_5$ we have that \begin{equation*} V_3 = \Big\{ \begin{psmallmatrix} 1 & * & * & * & * \\ & 1 & * & * & * \\ & & 1 & * & * \\ & & & 1 & \\ & & & & 1 \\ \end{psmallmatrix} \Big\} \qquad R_3 = \Big\{ \begin{psmallmatrix} 1 & & & & \\ & 1 & & & \\ & & 1 & * & * \\ & & & 1 & \\ & & & & 1 \end{psmallmatrix} \Big\} \, . \end{equation*} \end{examp} Thus, we have that $[R_i] \cong (F \backslash {\mathbb {A}})^{n-i}$ and the dual of $[R_i]$ is $F^{n-i}$, which can be identified with the nilpotent subalgebra ${}^t \lie r_i(F) = \log({}^t{R}_{i}(F))$, where ${}^t{R}_{i}(F)$ is the transpose of $R_{i}(F)$. Given $y \in {}^t{\lie r}_{i}(F)$, the corresponding character $\psi_y$ on $[R_{i}]$ is given by \eqref{eq:character} as \begin{equation} \psi_y(x) = \psi({\mathrm{tr}} (y \log x)), \quad \forall x \in [R_{i}] \, . \end{equation} \begin{examp} For ${\mathrm{SL}}_5$ with $R_3$ above, let \begin{equation*} y = \begin{psmallmatrix} 0 & & & & \\ & 0 & & & \\ & & 0 & & \\ & & y_1 & 0 & \\ & & y_2 & & 0 \end{psmallmatrix} \in {}^t\lie r_3(F) \qquad x = \begin{psmallmatrix} 1 & & & & \\ & 1 & & & \\ & & 1 & x_1 & x_2 \\ & & & 1 & \\ & & & & 1 \end{psmallmatrix} \in [R_3] \, . \end{equation*} Then, $\psi_y(x) = \psi({\mathrm{tr}}(y \log x)) = \psi(y_1 x_1 + y_2 x_2)$. \end{examp} Define \begin{equation} \label{eq:trdiag} \operatorname{trdiag}(\cdot) = \operatorname{diag}(\cdot) -\frac1n {\mathrm{tr}}(\operatorname{diag}(\cdot)) \, I_n \end{equation} and let $s = s_{V_i}$ be given by \begin{equation} \label{eq:s-Vi} s_{V_i} = \operatorname{trdiag}(2(i-1), 2(i-2), \ldots, 0, -2, \ldots, -2) \end{equation} for which $\lie g^s_{1} = 0$ and $\lie n_s = \lie g^s_{\geq 2}$ with the corresponding $N_s = V_{i}$. In particular, we have $s_N = s_{V_{n-1}}= \operatorname{trdiag}(2(n-2), \ldots, 0, -2)$. \begin{lem} \label{lem:gamma} Let $\varphi$ be an automorphic form on ${\mathrm{SL}}_n({\mathbb {A}})$. Then, for $1 \leq i \leq n-2$, \begin{equation} \label{eq:row-expansion} \sum_{\substack{y \in {}^t \lie{r}_i(F) \\ y \neq 0}} \, \intl_{[R_i]} \varphi(xg) \psi^{-1}_y(x) \, dx = \sum_{\gamma \in \Gamma_i} \, \intl_{[R_i]} \varphi(x \iota(\gamma) g) \psi^{-1}_{\alpha_i}(x) \, dx\,, \end{equation} where $\Gamma_i$ is defined in \eqref{eq:Gamma-i} and $\iota(\gamma)$ in \eqref{eq:iota}. \end{lem} We note that the left-hand side of the equation in this lemma equals $\varphi(g)$ up to the constant term corresponding to $y=0$. \begin{proof} With $Y \in \Mat_{(n-i)\times 1}(F)$, we parametrize $y \in {}^t \lie{r}_i(F)$ as \begin{equation} y(Y) = \begin{psmallmatrix} 0_{i-1} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & Y & 0_{n-i} \end{psmallmatrix} \, . \end{equation} Let $\hat Y = {}^t(1, 0, \ldots, 0) \in \Mat_{(n-i)\times 1}(F)$. Then the surjective map ${\mathrm{SL}}_{n-i}(F) \to \Mat_{(n-i) \times 1}(F)^\times$ defined by $\gamma \mapsto \gamma^{-1} \hat Y$ gives that \begin{equation} \Mat_{(n-i) \times 1}(F)^\times \cong ({\mathrm{SL}}_{n-i}(F))_{\hat Y}\backslash{\mathrm{SL}}_{n-i}(F) = \Gamma_i \end{equation} from \eqref{eq:Gamma-i}. We then have that, \begin{equation} \label{eq:std-row-char-step} \sum_{y \neq 0} \, \intl_{[R_i]} \varphi(xg) \psi^{-1}_y(x) \, dx = \sum_{\gamma \in \Gamma_i} \intl_{[R_i]} \varphi(xg) \psi^{-1}_{y(\gamma^{-1} \hat Y)}(x) \, dx \, .
\end{equation} We now rewrite the character using that for any $Y \in \Mat_{(n-i)\times 1}(F)$ \begin{equation} y(\gamma^{-1}Y) = \begin{psmallmatrix} 0_{i-1} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & \gamma^{-1} Y & 0_{n-i} \end{psmallmatrix} = \begin{psmallmatrix} I_{i-1} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \gamma^{-1} \end{psmallmatrix} \begin{psmallmatrix} 0_{i-1} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & Y & 0_{n-i} \end{psmallmatrix} \begin{psmallmatrix} I_{i-1} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \gamma \end{psmallmatrix} = l^{-1} y l\,, \end{equation} where we have introduced $l = \iota(\gamma)$ and denoted $y(Y)$ simply as $y$, which according to \eqref{eq:character-conjugation} gives, for any $x \in [R_i]$, that \begin{equation} \psi_{ y(\gamma^{-1} Y)}(x) = \psi_{l^{-1} y l}(x) = \psi_y(l x l^{-1}) \, . \end{equation} The element $l$ is in the Levi subgroup of the parabolic subgroup corresponding to $V_i$, meaning that it preserves $V_i$ under conjugation. In particular, it also normalizes $R_i$ since for $x \in R_i$ parametrized by $X \in \Mat_{1\times (n-i)}$ \begin{equation} l x(X) l^{-1} = \begin{psmallmatrix} I_{i-1} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \gamma \end{psmallmatrix} \begin{psmallmatrix} I_{i-1} & 0 & 0 \\ 0 & 1 & X \\ 0 & 0 & I_{n-i} \end{psmallmatrix} \begin{psmallmatrix} I_{i-1} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \gamma^{-1} \end{psmallmatrix} = \begin{psmallmatrix} I_{i-1} & 0 & 0 \\ 0 & 1 & X \gamma^{-1} \\ 0 & 0 & I_{n-i} \end{psmallmatrix} = x(X \gamma^{-1})\, . \end{equation} We can thus make the variable substitution $lxl^{-1} \to x$ in \eqref{eq:std-row-char-step} to obtain \begin{equation} \label{eq:Yhat} \sum_{\gamma \in \Gamma_i} \intl_{[R_i]} \varphi(x l g) \psi^{-1}_{y(\hat Y)}(x) \, dx \, , \end{equation} where we have used the fact that $\varphi$ is left-invariant under $l^{-1}$. Noting that $\psi_{y(\hat Y)} = \psi_{\alpha_i}$, this proves the lemma. \end{proof} We will now state a similar lemma for the last row $R_{n-1}$, which needs to be treated separately. The freedom in choosing a character $\psi_0$ in this lemma will be of importance later. \begin{lem} \label{lem:gamma-last-row} Let $\varphi$ be an automorphic form on ${\mathrm{SL}}_n({\mathbb {A}})$. Then, for any character $\psi_0$ on $N$ trivial on $R_{n-1}$ and along (at least) two adjacent simple roots not including $\alpha_{n-1}$, \begin{equation} \sum_{\substack{y \in {}^t \lie{r}_{n-1}(F) \\ y \neq 0}} \, \intl_{[R_{n-1}]} \varphi(xg) \psi^{-1}_y(x) \, dx = \sum_{\gamma \in \Gamma_{n-1}(\psi_0)} \, \intl_{[R_{n-1}]} \varphi(x \iota(\gamma) g) \psi^{-1}_{\alpha_{n-1}}(x) \, dx\,, \end{equation} where $\Gamma_{n-1}(\psi_0)$ is defined in \eqref{eq:Gamma-i}. \end{lem} \begin{proof} With $Y \in F$, we parametrize $y \in {}^t \lie{r}_{n-1}(F)$ as \begin{equation} y(Y) = \begin{psmallmatrix} 0_{n-2} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & Y & 0 \end{psmallmatrix} \, . \end{equation} We recall from page~\pageref{tpsi0} that $T_{\psi_0}$ is the subgroup of diagonal elements in ${\mathrm{SL}}_n(F)$ stabilizing $\psi_0$ under conjugation of its argument and that $y \in {}^t \lie r_{n-1}(F) \cong F$. The map $T_{\psi_0} \to {}^t \lie r_{n-1}(F)^\times : h \mapsto h^{-1} y(1) h$ is surjective, which can be shown as follows. The character $\psi_0$ is, by assumption, trivial along at least two adjacent simple roots not including $\alpha_{n-1}$.
Pick such a pair $\alpha_{j-1}$ and $\alpha_j$ where $2 \leq j \leq n-2$ and, for an arbitrary $m \in F^\times$, let $h = \operatorname{diag}(1, \ldots, 1, m, 1, \ldots, 1, 1/m)$ where the first non-trivial element is at the $j$th position. Then $h \in T_{\psi_0}$, since the element $y_0 \in {}^t \lie n$ corresponding to $\psi_0$ is zero in both rows and columns $j$ and $n$, and $h^{-1} y(1) h = y(m)$, which proves surjectivity. Because of \eqref{eq:character-conjugation} we have that the centralizer of $y(1)$ in $T$ is $T_{\psi_{\alpha_{n-1}}}$, and thus, \begin{equation} {}^t \lie r_{n-1}(F)^\times \cong (T_{\psi_0} \cap T_{\psi_{\alpha_{n-1}}}) \backslash T_{\psi_0} = \Gamma_{n-1}(\psi_0) \, . \end{equation} We then have that \begin{align} & \sum_{\substack{y \in {}^t \lie{r}_{n-1}(F) \\ y \neq 0}} \, \intl_{[R_{n-1}]} \varphi(xg) \psi^{-1}_y(x) \, dx = \sum_{\gamma \in \Gamma_{n-1}(\psi_0)} \, \intl_{[R_{n-1}]} \varphi(xg) \psi^{-1}_{\gamma^{-1} y(1) \gamma}(x) \, dx \\ &\quad= \sum_{\gamma \in \Gamma_{n-1}(\psi_0)} \, \intl_{[R_{n-1}]} \varphi(xg) \psi^{-1}_{y(1)}(\gamma x \gamma^{-1}) \, dx = \sum_{\gamma \in \Gamma_{n-1}(\psi_0)} \, \intl_{[R_{n-1}]} \varphi(x \gamma g) \psi^{-1}_{\alpha_{n-1}}(x) \, dx \, , \end{align} after making the variable change $\gamma x \gamma^{-1} \to x$, which concludes the proof. \end{proof} \begin{rmk} \label{rem:character-condition} For $n \geq 5$ any character $\psi_0$ on $N$ that is non-trivial along at most a single simple root which is not $\alpha_{n-1}$ satisfies the character condition in lemma~\ref{lem:gamma-last-row}. \end{rmk} The following lemma will be used to iteratively expand in rows. The lemma, which is valid for any automorphic representation, will be followed by two corollaries that specialize to the minimal and next-to-minimal representations respectively. \begin{lem} \label{lem:Vi-to-Vi+1} Let $\varphi$ be an automorphic form on ${\mathrm{SL}}_n({\mathbb {A}})$, $1 \leq i \leq n-2$, and $\psi_0$ be a character on $N$ trivial on the complement of $V_i$ in $N$. For $i = n-2$ we also require that $\psi_0$ is trivial along (at least) two adjacent simple roots not including $\alpha_{n-1}$. Then, \begin{equation} \label{eq:Vi-expansion} \begin{multlined} \intl_{[V_i]} \varphi(vg) \psi^{-1}_0(v) dv = \intl_{[V_{i+1}]} \varphi(vg) \psi^{-1}_0(v) \, dv +{} \\ \quad + \sum_{\gamma \in \Gamma_{i+1}(\psi_0)} \, \intl_{[V_{i+1}]} \varphi(v \iota(\gamma) g) \psi^{-1}_0(v) \psi^{-1}_{\alpha_{i+1}}(v) \, dv \, . \end{multlined} \end{equation} \end{lem} \begin{proof} For $x \in R_{i+1}(F)$ and $v \in V_i(\mathbb{A})$ we have that $\varphi(xvg) = \varphi(vg)$ and can thus Fourier expand along the abelian unipotent $R_{i+1}$ as \begin{equation} \label{eq:further-expansion} \varphi(vg) = \sum_{y \in {}^t \lie r_{i+1}(F)} \intl_{[R_{i+1}]} \varphi(xvg) \psi^{-1}_y(x) \, dx \, . \end{equation} Then, using lemma \ref{lem:gamma} (for $i+1 \leq n-2$) or lemma~\ref{lem:gamma-last-row} (for $i+1 = n-1$) \begin{equation} \varphi(vg) = \intl_{[R_{i+1}]} \varphi(xvg) \, dx + \sum_{\gamma \in \Gamma_{i+1}(\psi_0)} \, \intl_{[R_{i+1}]} \varphi(x\iota(\gamma)vg) \psi^{-1}_{\alpha_{i+1}}(x) \, dx\, . \end{equation} Let $v \in V_{i}$ be parametrized as \begin{equation} v = \begin{pmatrix} A & B \\ 0 & I_{n-i-1} \end{pmatrix}\,, \end{equation} where $A \in \Mat_{(i+1) \times (i+1)}$ is upper unitriangular and $B \in \Mat_{(i+1) \times (n-i-1)}$ with the elements in the last row being zero.
Since $B$ does not intersect the abelianization $[N,N]\backslash N$ (that is, the Lie algebra of $B$ does not contain any generator of a simple root), we have, by assumption, that $\psi_0$ only depends on $A$. Similarly, we parametrize $x \in R_{i+1}$ as \begin{equation} x = \begin{pmatrix} I_{i+1} & B' \\ 0 & I_{n-i-1} \end{pmatrix}\,, \end{equation} where $B' \in \Mat_{(i+1) \times (n-i-1)}$ with non-zero elements only in the last row. Then, \begin{equation} xv = \begin{pmatrix} A & B + B' \\ 0 & I_{n-i-1} \end{pmatrix} \, , \end{equation} which means that $\psi_0(v) = \psi_0(xv)$, and since $\psi_{\alpha_{i+1}}$ only depends on the first column in $B'$ which is the same as for $B + B'$, we also have that $\psi_{\alpha_{i+1}}(x) = \psi_{\alpha_{i+1}}(xv)$. \begin{itemize-small} \item For $1 \leq i \leq n-3$ with $\gamma \in \Gamma_{i+1}$, $l = \iota(\gamma)$ is in the Levi subgroup corresponding to $V_{i}$ and we will now show that $\psi_0(l^{-1} v l) = \psi_0(v)$ for $v \in [V_{i}]$. We have that \begin{equation} l^{-1}vl = \begin{pmatrix} I_{i+1} & 0 \\ 0 & \gamma^{-1} \end{pmatrix} \begin{pmatrix} A & B \\ 0 & I_{n-i-1} \end{pmatrix} \begin{pmatrix} I_{i+1} & 0 \\ 0 & \gamma \end{pmatrix} = \begin{pmatrix} A & B \gamma \\ 0 & I_{n-i-1} \end{pmatrix} \end{equation} and $\psi_0(v)$ only depends on $A$. \item For $i = n-2$ with $\gamma \in \Gamma_{n-1}(\psi_0)$, $l = \iota(\gamma) = \gamma$ is in the stabilizer $T_{\psi_0}$, which normalizes $V_i$, and this, by definition, means that $\psi_0(v) = \psi_0(lvl^{-1})$. \end{itemize-small} Thus, for $1 \leq i \leq n-2$, \begin{equation} \begin{multlined} \intl_{[V_i]} \varphi(vg) \psi^{-1}_0(v) dv = \intl_{[V_i]} \intl_{[R_{i+1}]} \varphi(xvg) \psi^{-1}_0(v) \, dx \,dv +{} \\ + \sum_{\gamma \in \Gamma_{i+1}(\psi_0)} \, \intl_{[V_i]} \intl_{[R_{i+1}]} \varphi(x v l g) \psi^{-1}_{\alpha_{i+1}}(x) \psi^{-1}_0(v) \, dx \,dv\,, \end{multlined} \end{equation} where we have made the variable change $l v l^{-1} \to v$. Using that $R_{i+1} V_i = V_{i+1}$ the above expression simplifies to \begin{equation} \intl_{[V_{i+1}]} \varphi(vg) \psi^{-1}_0(v) \, dv + \sum_{\gamma \in \Gamma_{i+1}(\psi_0)} \, \intl_{[V_{i+1}]} \varphi(v \iota(\gamma) g) \psi^{-1}_0(v) \psi^{-1}_{\alpha_{i+1}}(v) \, dv \, . \end{equation} \end{proof} \begin{cor} \label{cor:min-row} Let $\pi$ be an irreducible minimal automorphic representation of ${\mathrm{SL}}_n({\mathbb {A}})$, $\varphi \in \pi$, and $\psi_0$ be a character on $N$ trivial on the complement of $V_i$ in $N$, $1 \leq i \leq n-2$. Then, $\mathcal{F}_{\psi_0} \coloneqq \int_{[V_i]} \varphi(vg) \psi^{-1}_0(v) \, dv$ can be further expanded as follows. \begin{enumerate}[label=\textnormal{(\roman*)}, leftmargin=0cm,itemindent=1.75\parindent,labelwidth=\itemindent,labelsep=0mm,align=left] \item \label{itm:min-trivial} If $\psi_0 = 1$, then \begin{flalign} \qquad \mathcal{F}_{\psi_0} &= \intl_{[V_{i+1}]} \varphi(vg) dv + \sum_{\gamma \in \Gamma_{i+1}} \intl_{[V_{i+1}]} \varphi(v \iota(\gamma) g) \psi^{-1}_{\alpha_{i+1}}(v) \,dv, & \end{flalign} where $\Gamma_{i+1} = \Gamma_{i+1}(1)$ and $\Gamma_{i+1}(\psi_0)$ is defined in \eqref{eq:Gamma-i}.
\item \label{itm:min-single} If $\psi_0 = \psi_{\alpha_j}$ $(1 \leq j \leq i)$, then \begin{flalign} \qquad \mathcal{F}_{\psi_0} &= \intl_{[V_{i+1}]} \varphi(vg) \psi^{-1}_0(v) \,dv.& \end{flalign} \end{enumerate} \end{cor} \begin{proof} We will use lemma~\ref{lem:Vi-to-Vi+1} where all the considered $\psi_0$ satisfy the character condition for the last row according to remark~\ref{rem:character-condition}. For $\psi_0 = 1$, the expression is already in the form of lemma \ref{lem:Vi-to-Vi+1}. This proves case \ref{itm:min-trivial}. For $\psi_0 = \psi_{\alpha_j}$ with $1 \leq j \leq i$ we have that $\psi_0(v) \psi_{\alpha_{i+1}}(v) = \psi_{\alpha_j, \alpha_{i+1}}(v) = \psi_u(v)$ for some $u \in \lie g$ which lies in the next-to-minimal orbit when $j < i$ and in the larger orbit $[31^{n-3}]$ when $j = i$; in either case $u$ is not contained in the closure of the minimal orbit. Theorem \ref{thm:ggsglobal} with the Whittaker pair $(s_{V_{i+1}}, u)$ gives that $\mathcal{F}_{s_{V_{i+1}}, u}(\varphi)$ vanishes for $\varphi$ in the minimal representation, which leaves only the constant (or trivial) mode in lemma~\ref{lem:Vi-to-Vi+1}. This proves case \ref{itm:min-single}. \end{proof} \pagebreak[2] \begin{cor} \label{cor:ntm-row} Let $\pi$ be an irreducible next-to-minimal automorphic representation of ${\mathrm{SL}}_n({\mathbb {A}})$, $\varphi \in \pi$, and $\psi_0$ be a character on $N$ trivial on the complement of $V_i$ in $N$, $1 \leq i \leq n-2$. Then, $\mathcal{F}_{\psi_0} \coloneqq \int_{[V_i]} \varphi(vg) \psi^{-1}_0(v) \, dv$ can be further expanded as follows. \begin{enumerate}[label=\textnormal{(\roman*)}, leftmargin=0cm,itemindent=1.75\parindent,labelwidth=\itemindent,labelsep=0mm,align=left] \item \label{itm:ntm-trivial} If $\psi_0 = 1$, then \begin{flalign} \qquad \mathcal{F}_{\psi_0} &= \intl_{[V_{i+1}]} \varphi(vg) dv + \sum_{\gamma \in \Gamma_{i+1}} \intl_{[V_{i+1}]} \varphi(v\iota(\gamma) g) \psi^{-1}_{\alpha_{i+1}}(v) \,dv\,. & \end{flalign} \item \label{itm:ntm-single-j} If $\psi_0 = \psi_{\alpha_j}$ $(1 \leq j < i)$, then \begin{flalign} \qquad \mathcal{F}_{\psi_0} &= \intl_{[V_{i+1}]} \varphi(vg) \psi^{-1}_{\alpha_j}(v) dv + \hspace{-1.5em} \sum_{\gamma \in \Gamma_{i+1}(\psi_{\alpha_j}\!)} \, \intl_{[V_{i+1}]} \varphi(v\iota(\gamma) g) \psi^{-1}_{\alpha_j, \alpha_{i+1}}(v) \,dv\,. & \end{flalign} \item \label{itm:ntm-single-i} If $\psi_0 = \psi_{\alpha_i}$, then \begin{flalign} \qquad \mathcal{F}_{\psi_0} &= \intl_{[V_{i+1}]} \varphi(vg) \psi^{-1}_{\alpha_i}(v) \,dv\,. & \end{flalign} \item \label{itm:ntm-double} If $\psi_0 = \psi_{\alpha_j, \alpha_k}$ $(1 < j+1 < k \leq i)$, then \begin{flalign} \qquad \mathcal{F}_{\psi_0} &= \intl_{[V_{i+1}]} \varphi(vg) \psi^{-1}_{\alpha_j, \alpha_k}(v) \,dv\,. & \end{flalign} \end{enumerate} Here $\Gamma_{i+1}(\psi_0)$, with $\Gamma_{i+1} = \Gamma_{i+1}(1)$, is defined in \eqref{eq:Gamma-i}. \end{cor} \begin{proof} We will use lemma~\ref{lem:Vi-to-Vi+1} where the considered $\psi_0$ in cases \ref{itm:ntm-trivial}--\ref{itm:ntm-single-i} satisfy the character condition for the last row according to remark~\ref{rem:character-condition}. \begin{itemize-small} \item For $\psi_0 = 1$, the expression is already in the form of lemma~\ref{lem:Vi-to-Vi+1}. This proves case \ref{itm:ntm-trivial}. \item For $\psi_0 = \psi_{\alpha_j}$ with $1 \leq j < i$ we get that $\psi_0(v) \psi_{\alpha_{i+1}}(v) = \psi_{\alpha_j, \alpha_{i+1}}(v)$. This proves case \ref{itm:ntm-single-j}. \item For $\psi_0 = \psi_{\alpha_i}$ we get that $\psi_0(v) \psi_{\alpha_{i+1}}(v) = \psi_{\alpha_i, \alpha_{i+1}}(v) = \psi_u(v)$ for some $u \in \lie g$ belonging to an orbit higher than the next-to-minimal.
Theorem \ref{thm:ggsglobal} with the Whittaker pair $(s_{V_{i+1}}, u)$ gives that $\mathcal{F}_{s_{V_{i+1}}, u}(\varphi)$ vanishes both for $\varphi$ in the minimal and next-to-minimal representations which leaves only the constant mode in lemma~\ref{lem:Vi-to-Vi+1}. This proves case \ref{itm:ntm-single-i}. \item Lastly, for $\psi_0 = \psi_{\alpha_j,\alpha_k}$ with $2 \leq j+1 < k \leq i$ we first consider $i \leq n-3$ with lemma~\ref{lem:Vi-to-Vi+1}. We get that $\psi_0(v) \psi_{\alpha_{i+1}}(v) = \psi_{\alpha_j, \alpha_k, \alpha_{i+1}}(v) = \psi_u(v)$ for some $u \in \lie g$ belonging to an orbit higher than the next-to-minimal. Theorem \ref{thm:ggsglobal} with the Whittaker pair $(s_{V_{i+1}}, u)$ gives that $\mathcal{F}_{s_{V_{i+1}}, u}(\varphi)$ vanishes for $\varphi$ in the next-to-minimal representation which leaves only the first term in \eqref{eq:Vi-expansion}. \end{itemize-small} For $i = n-2$, we expand along the last row and obtain a sum over characters $\psi_u = \psi_0 \psi_y$ on $N$ for all $y \in {}^t \lie r_{n-1}(F)$ where only $y = 0$ gives a $u \in \lie g$ belonging to an orbit in the closure of the next-to-minimal orbit. Again, using theorem~\ref{thm:ggsglobal}, only the constant mode remains. This proves case \ref{itm:ntm-double} and completes the proof. \end{proof} \begin{proof}[\bf Proof of theorem \ref{thm:varphi}] Since $\varphi(x_1g) = \varphi(g)$ for $x_1 \in V_1(F)$ we can make a Fourier expansion on $V_1$ and then use lemma \ref{lem:gamma} to obtain \begin{equation} \label{eq:ThmA-first-row} \varphi(g) = \intl_{[V_1]} \varphi(v g) \, dv + \sum_{\gamma_1 \in \Gamma_1} \intl_{[V_1]} \varphi(v \iota(\gamma_1) g) \psi^{-1}_{\alpha_1}(v) \, dv \, . \end{equation} We will now make an iteration in the rows of $N$, starting with the row $i = 1$ and continue until we reach the last row $i = n - 1$. \begin{itemize-small} \item For case \ref{itm:varphi-min}, that is, with $\varphi$ in the minimal representation, the first step, using corollary~\ref{cor:min-row}, is \begin{multline*} \varphi(g) = \intl_{[V_2]} \varphi(vg) \, dv + \sum_{\gamma_2 \in \Gamma_2} \, \intl_{[V_2]} \varphi(v \iota(\gamma_2) g)\psi^{-1}_{\alpha_2}(v) \, dv +{} \\[-1em] + \sum_{\gamma_1 \in \Gamma_1} \, \intl_{[V_2]} \varphi(v \iota(\gamma_1) g) \psi^{-1}_{\alpha_1}(v) \, dv \, , \end{multline*} where we note that the extra second term comes from the constant term on $V_1$. We will, after the iteration, end up with \begin{equation} \varphi(g) = \intl_{[N]} \varphi(ng) \, dn + \sum_{i=1}^{n-1} \sum_{\gamma \in \Gamma_{i}} \, \intl_{[N]} \varphi(n \iota(\gamma) g) \psi^{-1}_{\alpha_i}(n) \, dn \, . \end{equation} This completes the proof for the minimal representation. \item For case \ref{itm:varphi-ntm}, where $\varphi$ is in the next-to-minimal representation, we start again from \eqref{eq:ThmA-first-row} and expand using corollary~\ref{cor:ntm-row}. We get, for the first step, that \begin{multline} \varphi(g) = \Big( \intl_{[V_2]} \varphi(v g) \, dv + \sum_{\gamma_2\in\Gamma_2}\intl_{[V_2]} \varphi(v \iota(\gamma_2) g) \psi^{-1}_{\alpha_2}(v) \, dv \Big) +{} \\[-1em] +\sum_{\gamma_1 \in \Gamma_1} \intl_{[V_2]} \varphi(v \iota(\gamma_1) g) \psi^{-1}_{\alpha_1}(v) \, dv \, , \end{multline} where the terms in parentheses come from the expansion of the constant term in \eqref{eq:ThmA-first-row}. Expanding in the next row as well, this becomes \begin{multline} \Big(\! \intl_{[V_3]} \!\!\! \varphi(vg) \, dv \, + \hspace{-0.4em} \sum_{\gamma_3\in\Gamma_3} \, \intl_{[V_3]} \!\!\!
\varphi(v \iota(\gamma_3) g) \psi^{-1}_{\alpha_3}(v) \, dv \, + \hspace{-0.4em} \sum_{\gamma_2\in\Gamma_2} \, \intl_{[V_3]} \!\!\! \varphi(v \iota(\gamma_2) g) \psi^{-1}_{\alpha_2}(v) \, dv \Big) \,+{} \\ + \hspace{-0.4em}\sum_{\gamma_1 \in \Gamma_1} \!\! \Big( \! \intl_{[V_3]} \!\!\! \varphi(v \iota(\gamma_1) g) \psi^{-1}_{\alpha_1}(v) \, dv + \hspace{-1.4em} \sum_{\gamma_3 \in \Gamma_3(\psi_{\alpha_1}\!)} \, \intl_{[V_3]} \!\!\! \varphi(v \iota(\gamma_3) \iota(\gamma_1) g) \psi^{-1}_{\alpha_1, \alpha_3}(v) \, dv \Big)\,. \end{multline} For each expansion adding a row $i$, the constant term gives an extra sum over $\Gamma_{i}$ of a Fourier integral with character $\psi_{\alpha_i}$, and from all terms with characters $\psi_{\alpha_j}$ with $j < i - 1$ we get an extra sum over $\Gamma_{i}(\psi_{\alpha_j})$ together with a character $\psi_{\alpha_j, \alpha_{i}}$. Corollary~\ref{cor:ntm-row} \ref{itm:ntm-double} implies that these terms with characters non-trivial along two simple roots do not receive any further contributions. Thus, after repeatedly applying corollary~\ref{cor:ntm-row} down to the last row, we get that \begin{multline} \label{eq:ThmA-ntm} \varphi(g) = \intl_{[N]} \varphi(ng) \, dn + \sum_{i=1}^{n-1} \sum_{\gamma \in \Gamma_i} \, \intl_{[N]} \varphi(n \iota(\gamma) g) \psi^{-1}_{\alpha_i}(n) \, dn +{} \\ + \sum_{j=1}^{n-3} \sum_{i=j+2}^{n-1} \sum_{\substack{\gamma_i \in \Gamma_i(\psi_{\alpha_j}\!) \\ \gamma_j \in \Gamma_j}} \, \intl_{[N]} \varphi(n \iota(\gamma_i) \iota(\gamma_j) g) \psi^{-1}_{\alpha_j, \alpha_i} (n) \, dn \, , \end{multline} which completes the proof of Theorem \ref{thm:varphi}. \end{itemize-small} \end{proof} \section{Proof of theorem \ref{thm:max-parabolic}} \label{sec:thmB} In this section, we prove Theorem \ref{thm:max-parabolic}, which relates Fourier coefficients on a maximal parabolic subgroup with Whittaker coefficients on the Borel subgroup. Recalling that the constant terms are known from \cite{MW95}, we only focus on non-trivial characters, but first we need to introduce some notation and lemmas. For $1 \leq m \leq n-1$, let $U_m$ be the unipotent radical of the maximal parabolic subgroup $P_m$ with Levi subgroup $L_m$ isomorphic to the subgroup of ${\mathrm{GL}}_m \times {\mathrm{GL}}_{n-m}$ defined by $\{(g,g')\in {\mathrm{GL}}_m \times {\mathrm{GL}}_{n-m}: \det(g) \det(g')=1\}$. $U_m$ is abelian and is isomorphic to the additive group of all $m \times (n-m)$ matrices. Write $U_m$ as \begin{equation} U_m = \left\{ \begin{pmatrix} I_m & X\\ 0 & I_{n-m} \end{pmatrix} : X \in \Mat_{m \times (n-m)}\right\} \, . \end{equation} Let $\overline{U}_m = {}^t U_m$ be the unipotent radical of the opposite parabolic $\overline{P}_m$. Then the Lie algebra of $\overline{U}_m$ can be written as \begin{equation} \label{eq:um-param} \overline{\mathfrak{u}}_m = {}^t \lie u_m = \left\{y(Y) = \begin{pmatrix} 0_m & 0\\ Y & 0_{n-m} \end{pmatrix} : Y \in \Mat_{(n-m) \times m}\right\}. \end{equation} It is clear that the character group of $U_m$ can be identified with ${}^t \lie u_m$. $L_m$ acts on ${}^t \lie u_m$ via conjugation and with \eqref{eq:character-conjugation} this becomes a conjugation of the corresponding character's argument. Because of \eqref{eq:Fourier-L-conjugation}, the Fourier coefficients for characters in the same $L_m(F)$-orbit are related by translates of their arguments, which means that we only need to compute one Fourier coefficient for each orbit.
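To make this action explicit (an elementary block computation recorded here for convenience), take $l = \begin{psmallmatrix} A & 0 \\ 0 & D \end{psmallmatrix} \in L_m$ with $A \in {\mathrm{GL}}_m$ and $D \in {\mathrm{GL}}_{n-m}$. Then
\begin{equation*}
l^{-1} y(Y) l = \begin{psmallmatrix} A^{-1} & 0 \\ 0 & D^{-1} \end{psmallmatrix} \begin{psmallmatrix} 0 & 0 \\ Y & 0 \end{psmallmatrix} \begin{psmallmatrix} A & 0 \\ 0 & D \end{psmallmatrix} = y(D^{-1} Y A)\,,
\end{equation*}
so in particular the rank of $Y$ is invariant under the action, consistent with the classification described next.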
We will therefore now describe the $L_m(F)$-orbits of elements $y(Y) \in {}^t \lie u_m$ but leave the details to be proven in appendix~\ref{app:levi-orbits}. Starting first with $\overline F$, the number of $L_m(\overline F)$-orbits is $\min(m,n-m)+1$ and the orbits are classified by the rank of the $(n-m) \times m$ matrix $Y$. A representative of an $L_m(\overline F)$-orbit corresponding to rank $r$ can be chosen as $y(Y_r)$ where $Y_r$ is an $(n-m) \times m$ matrix, zero everywhere except for the upper right $r \times r$ submatrix which is anti-diagonal with all anti-diagonal elements equal to one. For each rank $r$, $0 \leq r \leq \min(m,n-m)$, the corresponding $G(\overline{F})$-orbit is parametrized by the partition $[2^r 1^{n-2r}]$. As shown in appendix~\ref{app:levi-orbits}, the $L_m(F)$-orbits are characterized by the same data as the $G(F)$-orbits with $([2^r 1^{n-2r}], d)$, $0 \leq r \leq \min(m,n-m)$, $d \in F^\times/(F^\times)^k$ and $k = \gcd([2^r 1^{n-2r}])$, with representatives $y(Y_r(d))$ where $Y_r(d)$ is of the same form as $Y_r$ above, but with the lower-left element of the $r \times r$ submatrix equal to $d$: \begin{equation} Y_r(d) = \begin{pmatrix} \quad 0 & \begin{bsmallmatrix} & & & 1 \\ & & \reflectbox{$\ddots$} & \\ & 1 & & \\ d & & & \end{bsmallmatrix} \\[1.5em] \quad 0 & 0 \end{pmatrix} \in \Mat_{(n-m) \times m}(F)\, . \end{equation} We will continue to write $Y_r(1) = Y_r$. Note that for $0 \leq r \leq 2$ and $n \geq 5$, $k$ is equal to $1$. Each such $L_m(F)$-orbit is also part of the $G(F)$-orbit of the same data. From \eqref{eq:character} the corresponding character on $U_m$ is \begin{equation} \psi_{y(Y_r)}(u)=\psi({\mathrm{tr}} (y(Y_r) \log (u))), \quad u \in U_m(\mathbb{A}). \end{equation} Let $s_m$ be the semi-simple element $\operatorname{trdiag}(1,1,\ldots, 1, -1,-1, \ldots, -1)$ with $m$ copies of $1$ and $(n-m)$ copies of $-1$. Then, for any automorphic form $\varphi$ on ${\mathrm{SL}}_n({\mathbb {A}})$, the following Fourier coefficient \begin{equation} \label{eq:Yr-coefficient} \int_{[U_m]} \varphi(ug) \psi_{y(Y_r(d))}^{-1}(u) \,du \end{equation} is exactly the degenerate Fourier coefficient ${\mathcal {F}}_{s_m,y(Y_r(d))}(\varphi)$. Note that in this paper, we focus on minimal and next-to-minimal representations, hence we only need to consider the cases $0 \leq r \leq 2$. Indeed, for $3 \leq r \leq \min(m,n-m)$, by definition, the generalized Fourier coefficient attached to the partition $[2^r1^{n-2r}]$ is identically zero for minimal and next-to-minimal representations. By Theorem \ref{thm:ggsglobal} and since $y(Y_r(d))$ is in the $G(\overline{F})$-orbit $[2^r1^{n-2r}]$, all the Fourier coefficients ${\mathcal {F}}_{s_m,y(Y_r)}(\varphi)$ are also identically zero. This leaves $r \in \{1, 2\}$, and with our assumption that $n \geq 5$ we thus only need to consider the representatives $y(Y_1)$ and $y(Y_2)$ with $d = 1$ since $\gcd([2^r1^{n-2r}])=1$. The above arguments prove the first part of theorem~\ref{thm:max-parabolic}, that there exists an element $l \in L_m(F)$ such that $\mathcal{F}_U(\varphi, \psi_U; g) = \mathcal{F}_U(\varphi, \psi_{y(Y_r)}; lg)$ (note the slight difference in notation $\psi_{y(Y_r)}$ instead of $\psi_{Y_r}$), and that all $\mathcal{F}_U(\varphi, \psi_{y(Y_r)}; lg)$ for $r > r_\pi$ vanish identically where $r_{\pi_\text{min}} = 1$ and $r_{\pi_\text{ntm}} = 2$. We will now determine the remaining Fourier coefficients $\mathcal{F}_U(\varphi, \psi_{y(Y_r)}; g)$ in terms of Whittaker coefficients.
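As an illustration of this parametrization, the following small case spells out the representative $y(Y_2)$ (with $d=1$) and its character.
\begin{examp}
For ${\mathrm{SL}}_5$ and $m = 3$ we have $Y_2 \in \Mat_{2 \times 3}(F)$ and, writing out only the relevant entries of $u \in U_3(\mathbb{A})$,
\begin{equation*}
Y_2 = \begin{psmallmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \end{psmallmatrix} \qquad u = \begin{psmallmatrix} 1 & & & u_{14} & u_{15} \\ & 1 & & u_{24} & u_{25} \\ & & 1 & u_{34} & u_{35} \\ & & & 1 & \\ & & & & 1 \end{psmallmatrix}\,,
\end{equation*}
for which ${\mathrm{tr}}(y(Y_2) \log(u)) = u_{34} + u_{25}$ and hence $\psi_{y(Y_2)}(u) = \psi(u_{34} + u_{25})$. Since $y(Y_2)$ has rank $2$ and squares to zero, it indeed lies in the orbit $[2^21]$.
\end{examp}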
For $1 \leq m \leq n-1$, $0 \leq i \leq m-1$, let $U_m^i$ be the unipotent radical of the parabolic of type $(m-i,1^i,n-m)$. Note that $U_m^0=U_m$. Note that the character $\psi_{y(Y_1)}$ can be extended to a character of any subgroup of $N$ containing $U_m$, still denoted by $\psi_{y(Y_1)}$. Let $C_{m-i}$ be the subgroup of $U_m^{i+1}$ consisting of the elements $u$ whose off-diagonal entries $u_{p,q}$ vanish except when $q = m-i$. Note that $C_{m-i}$ is an abelian subgroup and its character group can be identified with ${}^t \mathfrak{c}_{m-i}$, the Lie algebra of ${}^t C_{m-i}$. Write $C_{m-i}$ as \begin{equation} C_{m-i} = \left\{ c(X) = \begin{psmallmatrix} I_{m-i-1} & X & 0\\ 0 & 1 & 0\\ 0 & 0 & I_{n-m+i} \end{psmallmatrix}\right\} \end{equation} and ${}^t \mathfrak{c}_{m-i}$ as \begin{equation} {}^t \mathfrak{c}_{m-i} = \left\{ y(Y)= \begin{psmallmatrix} 0_{m-i-1} & 0 & 0\\ Y & 0 & 0\\ 0 & 0 & 0_{n-m+i} \end{psmallmatrix}\right\} \, . \end{equation} For each $y \in {}^t \mathfrak{c}_{m-i}$, the corresponding character $\psi_{y}$ of $C_{m-i}$ is defined by $\psi_{y}(c)=\psi({\mathrm{tr}}(y \log(c)))$. For any $g \in {\mathrm{GL}}_{m-i-1}$, let \begin{equation} \hat\iota(g)= \begin{psmallmatrix} g & 0 & 0\\ 0 & I_{n-m+i} & 0\\ 0 & 0 & \det(g)^{-1} \end{psmallmatrix} \in {\mathrm{SL}}_n \, . \end{equation} \begin{examp} For ${\mathrm{SL}}_5$ we have that \begin{equation} U_3 = \left\{ \begin{psmallmatrix} 1 & & & * & * \\ & 1 & & * & * \\ & & 1 & * & * \\ & & & 1 & \\ & & & & 1 \end{psmallmatrix} \right\} \qquad U_3^1 = \left\{ \begin{psmallmatrix} 1 & & * & * & * \\ & 1 & * & * & * \\ & & 1 & * & * \\ & & & 1 & \\ & & & & 1 \end{psmallmatrix} \right\} \qquad C_3 = \left\{ \begin{psmallmatrix} 1 & & * & & \\ & 1 & * & & \\ & & 1 & & \\ & & & 1 & \\ & & & & 1 \end{psmallmatrix} \right\} \, . \end{equation} \end{examp} Note that $U_m^{m-1} = V_m$ and $U_m^{i+1} = C_{m-i} U_m^i$. We will sometimes instead use $j = m-i$ to denote the column, so that $U_m^{m-j+1} = C_j U_m^{m-j}$. We will now construct a semi-simple element $s = s_{U_m^i}$ for which $\lie g^s_1 = 0$ and such that $\lie n_s = \lie g^s_{\geq 2}$ corresponds to $N_s = U_m^i$. These conditions are satisfied by \begin{equation} \label{eq:s-Umi} \operatorname{trdiag}(2i, \ldots, 2i, 2(i-1), \ldots, 2, 0, -2, \ldots, -2) \end{equation} with $m-i$ copies of $2i$ and $n-m$ copies of $-2$. Note that any character $\psi$ on $N$ trivial on the complement of $U_m^i$ in $N$ is also a character on $U_m^i$ by restriction and can be expressed as $\psi_y$ with $y \in \lie g^s_{-2}$ where $s = s_{U_m^i}$ such that $(s,y)$ forms a Whittaker pair. Indeed, we have that $y \in \lie g^{s_N}_{-2}$ where $s_N = \operatorname{trdiag}(2(n-2), 2(n-3), \ldots, 0, -2)$ from \eqref{eq:s-Vi}, and the complement of $U_m^i$ in $N$ is described by $s-s_N$: triviality of $\psi$ on the complement amounts to $[y, s-s_N] = 0$, and thus $[y, s] = [y, s_N] = -2y$. \begin{lem} \label{lem:col-conjugation} Let $\varphi$ be an automorphic form on ${\mathrm{SL}}_n(\mathbb{A})$ and $2 \leq j \leq n$. Let also $\psi_0$ be a character on $N$ which, if $j = 2$, should be trivial along $\alpha_1$ and (at least) two adjacent other simple roots. Then, \begin{equation} \sum_{\substack{y \in {}^t \lie c_j(F) \\ y \neq 0}} \intl_{[C_j]} \varphi(xg) \psi_y^{-1}(x) \, dx = \sum_{\gamma \in \Lambda_{j-1}(\psi_0)} \intl_{[C_j]} \varphi(x \hat\iota(\gamma) g) \psi^{-1}_{\alpha_{j-1}}(x) \, dx \, ,
\end{equation} where $\Lambda_{j-1}(\psi_0)$ is defined in \eqref{eq:Lambda-j} and only depends on $\psi_0$ for $j = 2$. \begin{proof} The proof is similar to those of lemmas~\ref{lem:gamma} and \ref{lem:gamma-last-row}. \begin{itemize-small} \item For $2 < j \leq n$, we parametrize $y \in {}^t \lie c_j(F)$ by row vectors $X \in \Mat_{1\times(j-1)}(F)$, with representative $\hat X = (0, \ldots, 0, 1)$ such that $\psi_{y(\hat X)} = \psi_{\alpha_{j-1}}$. The surjective map ${\mathrm{SL}}_{j-1}(F) \to {}^t \lie c_j(F)^\times : \gamma \mapsto \hat X \gamma$ gives that ${}^t \lie c_j(F)^\times \cong ({\mathrm{SL}}_{j-1}(F))_{\hat X} \backslash {\mathrm{SL}}_{j-1}(F) = \Lambda_{j-1}$. As in lemma~\ref{lem:gamma}, we can write the action as a conjugation $y(\hat X \gamma) = \hat\iota(\gamma)^{-1} y(\hat X) \hat\iota(\gamma)$ and, using \eqref{eq:character-conjugation}, $\psi_{y(\hat X \gamma)}(x) = \psi_{y(\hat X)}(\hat\iota(\gamma) x \hat\iota(\gamma)^{-1})$. Since $\hat \iota(\gamma)$ normalizes $C_j$, a change of variables gives the desired expression. \item For $j = 2$, with $y \in {}^t \lie c_2(F) \cong F$ we instead consider the map $T_{\psi_0} \to {}^t \lie c_2(F)^\times : h \mapsto h^{-1} y(1) h$ which is surjective by similar arguments as in lemma~\ref{lem:gamma-last-row} and thus gives ${}^t \lie c_2(F)^\times \cong (T_{\psi_0} \cap T_{\psi_{\alpha_1}}) \backslash T_{\psi_0} = \Lambda_1(\psi_0)$. Writing the conjugation of $y$ as a conjugation of the character's argument and then substituting variables in the Fourier integral thus proves the lemma. \end{itemize-small} \end{proof} \begin{lem} \label{lem:col-expansion} Let $\varphi$ be an automorphic form on ${\mathrm{SL}}_n(\mathbb{A})$, $1 \leq m \leq n-1$, $2 \leq j \leq m$ and $\psi_0$ a character on $N$ trivial on the complement of $U_m^{m-j}$ in $N$. For $j = 2$, $\psi_0$ should also be trivial along (at least) two adjacent simple roots other than $\alpha_1$. Then, \begin{multline} \intl_{[U_m^{m-j}]} \hspace{-0.6em} \varphi(ug) \psi_0^{-1}(u) \, du = \\[-0.5em] = \hspace{-1em} \intl_{[U_m^{m-j+1}]} \hspace{-1em} \varphi(ug) \psi_0^{-1}(u) \, du + \hspace{-1.4em} \sum_{\gamma \in \Lambda_{j-1}(\psi_0)} \, \intl_{[U_m^{m-j+1}]} \hspace{-1em} \varphi(u \hat\iota(\gamma) g) \psi_0^{-1}(u) \psi_{\alpha_{j-1}}^{-1}(u) \, du\,. \end{multline} \end{lem} \begin{proof} For $2 \leq j \leq m$ we have that $\varphi(xug) = \varphi(ug)$ for $x \in C_j(F)$ and $u \in U_m^{m-j}(\mathbb{A})$ and since $C_j$ is abelian \begin{equation} \label{eq:integrand-col-exp} \varphi(ug) = \sum_{y \in {}^t \lie c_j(F)} \intl_{[C_j]} \varphi(xug) \psi_y^{-1}(x) \, dx \, . \end{equation} Using lemma~\ref{lem:col-conjugation}, we get that \begin{equation} \varphi(ug) = \intl_{[C_j]} \varphi(xug) \, dx + \sum_{\gamma \in \Lambda_{j-1}(\psi_0)} \intl_{[C_j]} \varphi(x \hat \iota(\gamma) ug) \psi_{\alpha_{j-1}}^{-1}(x) \, dx \, . \end{equation} Let $u \in U_m^{m-j}$ be parametrized as \begin{equation} u = \begin{pmatrix} I_{j-1} & B \\ 0 & A \end{pmatrix} \end{equation} where $A \in \Mat_{(n-j+1)\times(n-j+1)}$ is upper unitriangular (with several upper triangular elements being zero) and $B \in \Mat_{(j-1)\times(n-j+1)}$ with elements in the first column being zero. Since $B$ does not intersect the abelianization $[N,N]\backslash N$ (that is, the Lie algebra of $B$ does not contain any generator of a simple root), we have, by assumption, that $\psi_0$ only depends on $A$.
We also have that $x \in C_j$ can be parametrized as \begin{equation} x = \begin{pmatrix} I_{j-1} & B' \\ 0 & I_{n-j+1} \end{pmatrix} \end{equation} where $B' \in \Mat_{(j-1)\times(n-j+1)}$ with only the first column non-zero. Thus, \begin{equation} xu = \begin{pmatrix} I_{j-1} & B + B' A \\ 0 & A \end{pmatrix} \end{equation} which means that $\psi_0(u) = \psi_0(xu)$. The first column of $B$ is zero and $A$ is upper unitriangular which means that the first column of $B+B'A$ is the same as the first column of $B'$ and since $\psi_{\alpha_{j-1}}$ only depends on the first column of $B'$ this implies that $\psi_{\alpha_{j-1}}(x) = \psi_{\alpha_{j-1}}(xu)$. \begin{itemize-small} \item For $3 \leq j \leq m$ with $\gamma \in \Lambda_{j-1}$ and $l = \hat\iota(\gamma)$, \begin{equation} l u l^{-1} = \begin{pmatrix} \gamma & 0 \\ 0 & I_{n-j+1} \end{pmatrix} \begin{pmatrix} I_{j-1} & B \\ 0 & A \end{pmatrix} \begin{pmatrix} \gamma^{-1} & 0 \\ 0 & I_{n-j+1} \end{pmatrix} = \begin{pmatrix} I_{j-1} & \gamma B \\ 0 & A \end{pmatrix} \end{equation} and since $\psi_0$, by assumption, only depends on $A$ we have that $\psi_0(u) = \psi_0(l u l^{-1})$. \item For $j = 2$ with $\gamma \in \Lambda_1$, $l = \hat\iota(\gamma) = \gamma$ is in the stabilizer $T_{\psi_0}$, which, by definition, means that $\psi_0(u) = \psi_0(lul^{-1})$. \end{itemize-small} Hence, for $2 \leq j \leq m$, and after making a variable change $lul^{-1} \to u$, we get that \begin{equation} \begin{split} \MoveEqLeft \intl_{[U_m^{m-j}]} \intl_{[C_j]} \varphi(x l ug) \psi_0^{-1}(u) \psi_{\alpha_{j-1}}^{-1}(x)\, dx \, du = \\ &= \intl_{[U_m^{m-j}]} \intl_{[C_j]} \varphi(xulg) \psi_0^{-1}(u) \psi_{\alpha_{j-1}}^{-1}(x)\, dx \, du \\ &= \intl_{[U_m^{m-j}]} \intl_{[C_j]} \varphi(xulg) \psi_0^{-1}(xu) \psi_{\alpha_{j-1}}^{-1}(xu)\, dx \, du \\ &= \intl_{[U_m^{m-j+1}]} \varphi(ulg) \psi_0^{-1}(u) \psi_{\alpha_{j-1}}^{-1}(u)\, du \, . \end{split} \end{equation} After similar manipulations for the constant term we obtain \begin{multline} \intl_{[U_m^{m-j}]} \hspace{-0.6em} \varphi(ug) \psi_0^{-1}(u) \, du = \\[-0.5em] = \hspace{-1em} \intl_{[U_m^{m-j+1}]} \hspace{-1em} \varphi(ug) \psi_0^{-1}(u) \, du + \hspace{-1.4em} \sum_{\gamma \in \Lambda_{j-1}(\psi_0)} \, \intl_{[U_m^{m-j+1}]} \hspace{-1em} \varphi(ulg) \psi_0^{-1}(u) \psi_{\alpha_{j-1}}^{-1}(u)\, du \, . \end{multline} \end{proof} \begin{rmk} \label{rem:col-expansion} We note that if $\psi_0$ is trivial along $\alpha_1$ but not along at least two adjacent other simple roots we cannot use lemma~\ref{lem:col-conjugation}, but we could still make an expansion over $C_2$ and keep the sum over $y \in {}^t \lie c_2(F) \cong F$ in the proof above. Since the character $\psi_y$ has the same support as $\psi_{\alpha_1}$ on $N$ we still have that $\psi_y(x) = \psi_y(xu)$ for $x \in C_j(\mathbb{A})$ and $u \in U_m^{m-j}(\mathbb{A})$ and since $\psi_0$ is still a character on $N$ trivial on the complement of $U_m^{m-j}$ it is still true that $\psi_0(u) = \psi_0(xu)$. Thus, using~\eqref{eq:integrand-col-exp} \begin{equation} \begin{split} \intl_{[U_m^{m-2}]} \hspace{-0.6em} \varphi(ug) \psi_0^{-1}(u) \, du &= \sum_{y \in {}^t \lie c_2(F)} \intl_{[U_m^{m-2}]} \intl_{[C_2]} \varphi(xug) \psi_0^{-1}(u) \psi_y^{-1}(x) \,dx \, du \\ &= \sum_{y \in {}^t \lie c_2(F)} \intl_{[U_m^{m-2}]} \intl_{[C_2]} \varphi(xug) \psi_0^{-1}(xu) \psi_y^{-1}(xu) \,dx \, du \\ &= \sum_{y \in {}^t \lie c_2(F)} \intl_{[V_m]} \varphi(vg) \psi_0^{-1}(v) \psi_y^{-1}(v) \, dv \, .
\end{split} \end{equation} \end{rmk} \begin{lem}\label{TheoremB:Lemma1} Assume that $\pi$ is an irreducible minimal automorphic representation of ${\mathrm{SL}}_n({\mathbb {A}})$, $\varphi \in \pi$. For $1 \leq m \leq n-1$, $0 \leq i \leq m-2$, and $g \in {\mathrm{SL}}_n({\mathbb {A}})$, \begin{equation} \int_{[U_m^i]} \varphi(ug) \psi^{-1}_{y(Y_1)}(u) \,du = \int_{[U_m^{i+1}]} \varphi(ug) \psi^{-1}_{y(Y_1)}(u) \,du\,. \end{equation} \end{lem} \begin{proof} Using lemma~\ref{lem:col-expansion} with $\psi_0 = \psi_{y(Y_1)} = \psi_{\alpha_m}$ we get that \begin{equation} \label{eq:col-min-rep} \intl_{[U_m^i]} \varphi(ug) \psi_0^{-1}(u) \, du = \intl_{[U_m^{i+1}]} \varphi(ug) \psi_0^{-1}(u) \, du + \sum_{\gamma \in \Lambda_{m-i-1}(\psi_0)} \hspace{-1em} \mathcal{F}(\varphi; m,i,\gamma,g) \, , \end{equation} where we have introduced \begin{equation} \mathcal{F}(\varphi; m,i,\gamma,g) = \intl_{[U_m^{i+1}]} \hspace{-0.7em} \varphi(u \hat\iota(\gamma) g) \psi_0^{-1}(u) \psi_{\alpha_{m-i-1}}^{-1}(u)\,du \, . \end{equation} Let $s = s_{U_m^{i+1}}$ from \eqref{eq:s-Umi}, and let $u \in \lie{sl}_n$ be the element with two non-zero entries, both equal to $1$, at positions $(m-i, m-i-1)$ and $(m+1, m)$. Then, $\mathcal{F}(\varphi; m,i,\gamma,g) = \mathcal{F}_{s,u}(\varphi)(\hat\iota(\gamma)g)$ and since $u$ is not in the closure of the minimal orbit, theorem~\ref{thm:ggsglobal} gives that $\mathcal{F}_{s,u}(\varphi)$ is identically zero, leaving only the constant mode in \eqref{eq:col-min-rep}. \end{proof} \pagebreak[3] \noindent\textbf{Proof of Theorem \ref{thm:max-parabolic}.} \begin{itemize-small} \item{\bf Minimal representation.} Let $\pi$ be an irreducible minimal automorphic representation of ${\mathrm{SL}}_n({\mathbb {A}})$ and $\varphi \in \pi$. Applying Lemma \ref{TheoremB:Lemma1} repeatedly, we get that for each $1 \leq m \leq n-1$, $$\int_{[U_m]}\varphi(ug)\psi_{y(Y_1)}^{-1}(u)\,du = \int_{[U_m^{m-1}]}\varphi(ug)\psi_{y(Y_1)}^{-1}(u)\,du\,.$$ Note that $U_m^{m-1}=V_m$ and $\psi_{y(Y_1)} = \psi_{\alpha_m}$. Applying corollary~\ref{cor:min-row} repeatedly, we get that for each $1 \leq m \leq n-1$, $$\int_{[U_m^{m-1}]}\varphi(ug)\psi_{y(Y_1)}^{-1}(u)\,du= \int_{[N]}\varphi(ng)\psi_{y(Y_1)}^{-1}(n)\,dn\,,$$ which is exactly $$\int_{[N]}\varphi(ng)\psi_{\alpha_m}^{-1}(n)\,dn\,.$$ \item{\bf Next-to-minimal representation -- rank 1.} Let $\pi$ be an irreducible next-to-minimal automorphic representation of ${\mathrm{SL}}_n(\mathbb{A})$ and let $\varphi \in \pi$. Recalling that $U_m = U_m^0$ and applying lemma~\ref{lem:col-expansion} with $\psi_0 = \psi_{y(Y_1)} = \psi_{\alpha_m}$ we get \begin{equation} \intl_{[U_m]} \varphi(ug) \psi_{y(Y_1)}^{-1}(u) \, du = \intl_{[U_m^1]} \varphi(ug) \psi_0^{-1}(u) \, du \end{equation} since $\psi_0 \psi_{\alpha_{m-1}} = \psi_{\alpha_m,\alpha_{m-1}} = \psi_u$ for some $u$ that is not in the closure of the next-to-minimal orbit and thus the non-constant modes in lemma~\ref{lem:col-expansion} can be expressed as Fourier coefficients $\mathcal{F}_{s,u}$ with $s = s_{U_m^1}$ from \eqref{eq:s-Umi} which vanish according to theorem~\ref{thm:ggsglobal}. Let us make an iteration in $1 \leq i \leq m-2$.
Using lemma~\ref{TheoremB:Lemma1} we have that \begin{multline} \label{eq:ntm-rank1-induction} \intl_{[U_m^i]} \varphi(ug) \psi_0^{-1}(u) \, du = \\ = \intl_{[U_m^{i+1}]} \hspace{-0.7em} \varphi(ug) \psi_0^{-1}(u) \, du + \hspace{-1.6em} \sum_{\gamma \in \Lambda_{m-i-1}(\psi_{\alpha_m}\!)} \, \intl_{[U_m^{i+1}]} \hspace{-0.7em} \varphi(u\hat\iota(\gamma) g) \psi_{\alpha_{m-i-1}, \alpha_m}^{-1}(u) \, du \, . \end{multline} Since $\psi_{\alpha_m, \alpha_{m-i-1}}$ is a character on $N$ trivial on the complement of $U_m^{i+1}$ we can expand the second term further with lemma~\ref{lem:col-expansion} (or remark~\ref{rem:col-expansion} if $m-i-1 = 2$ and $\psi_{\alpha_m, \alpha_{m-i-1}}$ is not trivial along at least two adjacent roots other than $\alpha_1$). This would lead to characters $\psi_u = \psi_{\alpha_m, \alpha_{m-i-1}, \alpha_{m-i-2}}$ \linebreak[3](or $\psi_u = \psi_{\alpha_m, \alpha_{m-i-1}}\psi_y$ with $y \in {}^t \lie c_2(F)$ respectively) where $u$ is not in the closure of the next-to-minimal orbit. Then, $\mathcal{F}_{s,u}$ with $s = s_{U_m^{i+2}}$ from \eqref{eq:s-Umi} vanishes according to theorem~\ref{thm:ggsglobal} and the second term only receives the constant mode contribution. Repeating these arguments for the second term in \eqref{eq:ntm-rank1-induction}, it becomes \begin{equation} \sum_{\gamma \in \Lambda_{m-i-1}(\psi_{\alpha_m}\!)} \, \intl_{[V_m]} \varphi(u\hat\iota(\gamma) g) \psi_{\alpha_{m-i-1}, \alpha_m}^{-1}(u) \, du\,. \end{equation} Iterating over $i$, starting from $i = 1$ above, we get that \begin{multline} \label{eq:ntm-rank-1-cols-done} \intl_{[U_m]} \varphi(ug) \psi_{y(Y_1)}^{-1}(u) \, du = \\ \intl_{[V_m]} \varphi(ug) \psi_{\alpha_m}^{-1}(u) \, du + \sum_{j=1}^{m-2} \sum_{\gamma \in \Lambda_j(\psi_{\alpha_m}\!)}\, \intl_{[V_m]} \varphi(u\hat\iota(\gamma) g) \psi_{\alpha_j, \alpha_m}^{-1}(u) \, du \, . \end{multline} For $m = 1$, $U_1 = V_1$ and for $m = 2$ we only get the first term in \eqref{eq:ntm-rank-1-cols-done}. We will now use the methods of section~\ref{sec:thmA} to expand along rows. Using corollary~\ref{cor:ntm-row} case~\ref{itm:ntm-double}, we see that the second term in \eqref{eq:ntm-rank-1-cols-done} does not get any further contributions when expanding to $N$. Starting with the first term in \eqref{eq:ntm-rank-1-cols-done} and using corollary~\ref{cor:ntm-row} first with case~\ref{itm:ntm-single-i} to $V_{m+1}$ and then repeatedly with cases~\ref{itm:ntm-single-j} and \ref{itm:ntm-double} it becomes \begin{multline} \intl_{[V_{m+1}]} \varphi(ug) \psi_{\alpha_m}^{-1}(u) \, du = \\ = \intl_{[N]} \varphi(ng) \psi_{\alpha_m}^{-1}(n) \, dn + \sum_{i=m+2}^{n-1} \sum_{\gamma \in \Gamma_i(\psi_{\alpha_m}\!)} \, \intl_{[N]} \varphi(n \iota(\gamma) g) \psi_{\alpha_m, \alpha_i}^{-1}(n) \, dn\,. \end{multline} Lastly, \begin{multline} \intl_{[U_m]} \varphi(ug) \psi_{y(Y_1)}^{-1}(u) \, du = \intl_{[N]} \varphi(ng) \psi_{\alpha_m}^{-1}(n) \, dn +{} \\ + \sum_{j=1}^{m-2} \sum_{\gamma \in \Lambda_j(\psi_{\alpha_m}\!)}\, \intl_{[N]} \varphi(n\hat\iota(\gamma) g) \psi_{\alpha_j, \alpha_m}^{-1}(n) \, dn +{} \\ + \sum_{i=m+2}^{n-1} \sum_{\gamma \in \Gamma_i(\psi_{\alpha_m}\!)} \, \intl_{[N]} \varphi(n \iota(\gamma) g) \psi_{\alpha_m, \alpha_i}^{-1}(n) \, dn \, . \end{multline} \item{\bf Next-to-minimal representation - rank 2.} Let $\pi$ be an irreducible next-to-minimal automorphic representation of ${\mathrm{SL}}_n({\mathbb {A}})$ and let $\varphi \in \pi$. We start from the integral \begin{equation*} \int_{[U_m]} \varphi (ug) \psi_{y(Y_2)}^{-1}(u) \,du\,. 
\end{equation*} For each root $\alpha$, let $X_{\alpha}$ be the corresponding one-dimensional root subgroup in ${\mathrm{SL}}_n$. Let $$C_1 = X_{e_m-e_{m+2}} \prod_{i=1}^{m-2} X_{e_i - e_{m+2}}\,,$$ and $$R_1 = X_{e_{m-1}-e_{m}} \prod_{i=1}^{m-2} X_{e_{m-1} - e_{i}}\,.$$ Then $C_1$ is a subgroup of $U_m$. Let $U_m'$ be the subgroup of $U_m$ with $C_1$-part identically zero. Then one can see that the quadruple $$(U_m', C_1, R_1, \psi_{y(Y_2)})$$ satisfies all the conditions of Lemma \ref{exchangeroots}. By this lemma, \begin{align*} \begin{split} & \ \int_{[U_m]} \varphi (ug) \psi_{y(Y_2)}^{-1}(u) \,du\\ = & \ \int_{C_1({\mathbb {A}})}\int_{[R_1U_m']} \varphi (ucg) \psi_{y(Y_2)}^{-1}(u)\, du\,dc\,. \end{split} \end{align*} Let $$C_2 = \prod_{i=1}^{m-2} X_{e_i - e_{m+1}}\,,$$ and $$R_2 = \prod_{i=1}^{m-2} X_{e_{m} - e_{i}}\,.$$ Then $C_2$ is a subgroup of $R_1U_m'$. Let $U_m''$ be the subgroup of $R_1U_m'$ with $C_2$-part identically zero. Then one can see that the quadruple $$(U_m'', C_2, R_2, \psi_{y(Y_2)})$$ satisfies all the conditions of Lemma \ref{exchangeroots}. Applying this lemma and changing variables, \begin{align}\label{theoremB:part2-equ1} \begin{split} & \ \int_{C_1({\mathbb {A}})}\int_{[R_1U_m']} \varphi (ucg) \psi_{y(Y_2)}^{-1}(u) \,du\,dc\\ = & \ \int_{C_1({\mathbb {A}})}\int_{C_2({\mathbb {A}})}\int_{[R_2U_m'']} \varphi (uc_2c_1g) \psi_{y(Y_2)}^{-1}(u) \,du\,dc_2\,dc_1\\ = & \ \int_{(C_1C_2)({\mathbb {A}})}\int_{[R_2U_m'']} \varphi (ucg) \psi_{y(Y_2)}^{-1}(u) \,du\,dc\,. \end{split} \end{align} Let $\omega$ be the Weyl element sending torus elements $$(t_1, t_2, \ldots, t_n)$$ to torus elements $$(t_{m-1}, t_{m+2}, t_m, t_{m+1}, t_1, t_2, \ldots, t_{m-2}, t_{m+3}, t_{m+4}, \ldots, t_n)\,.$$ Conjugating $\omega$ across from the left, the integral in \eqref{theoremB:part2-equ1} becomes \begin{equation}\label{theoremB:part2-equ2} \int_{C({\mathbb {A}})}\int_{[U_m^{\omega}]} \varphi (u\omega cg) \psi_{y(Y_2)}^{\omega, -1}(u) \,du\,dc\,, \end{equation} where $U_m^{\omega} = \omega R_2U_m'' \omega^{-1}$, $C=C_1C_2$, and, for $u \in U_m^{\omega}$, $\psi_{y(Y_2)}^{\omega}(u) = \psi_{y(Y_2)}(\omega^{-1} u \omega)$. We can write $$U_m^{\omega} = U_m^{\omega,1}V_1\,,$$ where elements $u \in U_m^{\omega,1}$ have the following form $$\begin{pmatrix} I_2 & 0 \\ 0 & u' \end{pmatrix}\,,$$ and $U_m^{\omega,1}$ normalizes $V_1$. Recall that $V_i$ is the unipotent radical of the parabolic subgroup of type $(1^i,n-i)$. Note that $\psi_{y(Y_2)}^{\omega}|_{V_1}=\psi_{\alpha_1}$, $\psi_{y(Y_2)}^{\omega}|_{U_m^{\omega,1}}=\psi_{\alpha_3}$. Recall that $\alpha_1=e_1-e_2$, $\alpha_3=e_3-e_4$. Hence, the integral in \eqref{theoremB:part2-equ2} becomes \begin{equation}\label{theoremB:part2-equ3} \int_{C({\mathbb {A}})}\int_{[U_m^{\omega,1}]} \int_{[V_1]} \varphi (vu\omega cg)\psi_{\alpha_1}^{-1}(v) \psi_{\alpha_3}^{-1}(u) \,dv\,du\,dc\,. \end{equation} Since $\pi$ is an irreducible next-to-minimal automorphic representation of ${\mathrm{SL}}_n({\mathbb {A}})$, by corollary~\ref{cor:ntm-row}, case~\ref{itm:ntm-single-i}, the integral in \eqref{theoremB:part2-equ3} becomes \begin{equation}\label{theoremB:part2-equ4} \int_{C({\mathbb {A}})}\int_{[U_m^{\omega,1}]} \int_{[V_2]} \varphi (vu\omega cg)\psi_{\alpha_1}^{-1}(v) \psi_{\alpha_3}^{-1}(u) \,dv\,du\,dc\,.
\end{equation} $U_m^{\omega,1}$ still normalizes $V_2$, and $$U_m^{\omega,1}V_2 = U_m^{\omega,2}V_3\,,$$ where elements $u \in U_m^{\omega,2}$ have the following form $$\begin{pmatrix} I_4 & 0 \\ 0 & u'' \end{pmatrix}\,,$$ with $u''$ in the unipotent radical of the parabolic subgroup of type $(m-2,n-m-2)$ in ${\mathrm{SL}}_{n-4}$, and $U_m^{\omega,2}$ normalizes $V_3$. Note that $\psi_{y(Y_2)}^{\omega}|_{V_3}=\psi_{\alpha_1,\alpha_3}$ and $\psi_{y(Y_2)}^{\omega}|_{U_m^{\omega,2}}$ is the trivial character. By corollary~\ref{cor:ntm-row}, case~\ref{itm:ntm-double}, the integral in \eqref{theoremB:part2-equ4} becomes \begin{equation}\label{theoremB:part2-equ5} \int_{C({\mathbb {A}})}\int_{[U_m^{\omega,2}]} \int_{[V_4]} \varphi (vu\omega cg)\psi_{\alpha_1,\alpha_3}^{-1}(v) \,dv\,du\,dc\,. \end{equation} Applying corollary~\ref{cor:ntm-row}, case~\ref{itm:ntm-double}, repeatedly, the integral in \eqref{theoremB:part2-equ5} becomes \begin{equation*} \int_{C({\mathbb {A}})}\int_{[U_m^{\omega,2}]} \int_{[N]} \varphi (nu\omega cg)\psi_{\alpha_1,\alpha_3}^{-1}(n) \,dn\,du\,dc\,, \end{equation*} which becomes \begin{equation}\label{theoremB:part2-equ6} \int_{C({\mathbb {A}})}\int_{[U_m^{\omega,2}]} \int_{[N]} \varphi (n\omega cg)\psi_{\alpha_1,\alpha_3}^{-1}(n) \,dn\,du\,dc\,, \end{equation} by a change of variables. Since $\int_{[U_m^{\omega,2}]}du=1$, we have obtained that \begin{equation*} \int_{[U_m]} \varphi (ug) \psi_{y(Y_2)}^{-1}(u) \,du = \int_{C({\mathbb {A}})}\int_{[N]} \varphi (n\omega cg)\psi_{\alpha_1,\alpha_3}^{-1}(n) \,dn\,dc\,. \end{equation*} \end{itemize-small} This completes the proof of Theorem \ref{thm:max-parabolic}. \qed \section{Proof of theorems \ref{thm:min-coeff} and \ref{thm:ntm-coeff}} \label{sec:orbit-coefficients} \textbf{Proof of Theorem \ref{thm:min-coeff}.} Let $\pi$ be any irreducible automorphic representation of ${\mathrm{SL}}_n({\mathbb {A}})$ and let $\varphi \in \pi$. The generalized Fourier coefficient of $\varphi$ attached to the partition $[21^{n-2}]$ has been defined in Section~\ref{sec:fourier}. We recall it as follows. Let $s=(1, -1, 0, \ldots, 0)$, and let $u = J_{[21^{n-2}]}$, which is the matrix that is zero everywhere except for the $(2,1)$ entry, which is $1$. Then the generalized Fourier coefficient of $\varphi$ attached to the partition $[21^{n-2}]$ is as follows: \begin{equation*} \mathcal{F}^{[211\ldots]} (\varphi;g) = {\mathcal {F}}_{s,u} (\varphi;g)= \int_{[N_s]} \varphi(ng)\psi_u^{-1}(n) \,dn\,, \end{equation*} where elements of the one-dimensional unipotent subgroup $N_s$ have the form $$\begin{pmatrix} 1&* & 0\\ 0&1&0\\ 0 &0& I_{n-2} \end{pmatrix}\,.$$ Let $X=\prod_{i=3}^{n} X_{e_1-e_i}$ and $Y=\prod_{i=3}^{n} X_{e_i-e_2}$. Then one can see that $Y(F)$ can be identified with the character space of $[X]$ as follows: given $y \in Y(F)$, $\psi_y(x)=\psi_u([x,y])$, for any $x \in [X]$. Note that both $X$ and $Y$ normalize $N_s$.
Taking the Fourier expansion of ${\mathcal {F}}_{s,u} (\varphi;g)$ along $[X]$, we obtain that $${\mathcal {F}}_{s,u} (\varphi;g)=\sum_{y\in Y(F)} \int_{[X]}\int_{[N_s]} \varphi(xng)\psi_u^{-1}(n) \psi_y^{-1}(x)\,dn\,dx\,.$$ Since $y^{-1}\in Y(F)$ and $\varphi$ is automorphic, the above integral becomes \begin{align*} & \ \sum_{y\in Y(F)} \int_{[X]}\int_{[N_s]} \varphi(xng)\psi_u^{-1}(n) \psi_y^{-1}(x)\,dn\,dx\\ = & \ \sum_{y\in Y(F)} \int_{[X]}\int_{[N_s]} \varphi(y^{-1}xng)\psi_u^{-1}(n) \psi_y^{-1}(x)\,dn\,dx\\ = & \ \sum_{y\in Y(F)} \int_{[X]}\int_{[N_s]} \varphi(y^{-1}xnyy^{-1}g)\psi_u^{-1}(n) \psi_y^{-1}(x)\,dn\,dx\\ = & \ \sum_{y\in Y(F)} \int_{[X]}\int_{[N_s]} \varphi(xn' y^{-1}g)\psi_u^{-1}(n) \psi_y^{-1}(x)\,dn\,dx\,, \end{align*} where $n'=n+[x,y]$. By changing variables, we obtain that \begin{align*} & \ \sum_{y\in Y(F)} \int_{[X]}\int_{[N_s]} \varphi(xn' y^{-1}g)\psi_u^{-1}(n) \psi_y^{-1}(x)\,dn\,dx\\ = & \ \sum_{y\in Y(F)} \int_{[X]}\int_{[N_s]} \varphi(xn y^{-1}g)\psi_u^{-1}(n) \psi_u^{-1}(-[x,y]) \psi_y^{-1}(x)\,dn\,dx\,. \end{align*} Note that \begin{align*} & \ \psi_u^{-1}(-[x,y]) \psi_y^{-1}(x)\\ = & \ \psi_u([x,y]) \psi_u(-[x,y]) \\ = & \ 1\,. \end{align*} Hence, we have that $${\mathcal {F}}_{s,u} (\varphi;g) = \sum_{y\in Y(F)} \int_{[X]}\int_{[N_s]} \varphi(xn y^{-1}g)\psi_u^{-1}(n) \,dn\,dx\,.$$ Note that $XN_s=U_1$ and $\psi_u=\psi_{\alpha_1}$. Therefore, we have that $${\mathcal {F}}_{s,u} (\varphi;g) = \sum_{y\in Y(F)} \int_{[U_1]} \varphi(u y^{-1}g)\psi_{\alpha_1}^{-1}(u) \,du\,.$$ This completes the proof of Theorem \ref{thm:min-coeff}. \qed \textbf{Proof of Theorem \ref{thm:ntm-coeff}.} Let $\pi$ be any irreducible automorphic representation of ${\mathrm{SL}}_n({\mathbb {A}})$ and let $\varphi \in \pi$. The generalized Fourier coefficient of $\varphi$ attached to the partition $[2^21^{n-4}]$ has also been defined in Section~\ref{sec:fourier}. We recall it as follows. Let $s=(1, -1, 1, -1, 0, \ldots, 0)$, and let $u = J_{[2^21^{n-4}]}$, which is the matrix that is zero everywhere except for the $(2,1)$ and $(4,3)$ entries, which are $1$. Then the generalized Fourier coefficient of $\varphi$ attached to the partition $[2^21^{n-4}]$ is as follows: \begin{equation*} \mathcal{F}^{[221\ldots]}(\varphi;g) = {\mathcal {F}}_{s,u} (\varphi;g)= \int_{[N_s]} \varphi(ng)\psi_u^{-1}(n) \,dn\,, \end{equation*} where elements in $N_s$ have the form $$\begin{pmatrix} 1&* & 0 & * & 0\\ 0&1&0&0&0\\ 0&*&1&*&0\\ 0&0&0&1&0\\ 0 &0& 0&0&I_{n-4} \end{pmatrix}\,.$$ Let $\omega$ be the Weyl element sending the torus element $$(t_1, t_2, \ldots, t_n)$$ to the torus element $$(t_1, t_3, t_4, t_2, t_5, t_6, \ldots, t_n)\,.$$ Conjugating $\omega$ across from the left, we obtain that \begin{equation*} {\mathcal {F}}_{s,u} (\varphi;g)= \int_{[N_s^{\omega}]} \varphi(n\omega g)\psi_u^{\omega,-1}(n) \,dn\,, \end{equation*} where $N_s^{\omega} = \omega N_s \omega^{-1}$, and for $n \in N_s^{\omega}$, $\psi_u^{\omega}(n) =\psi_u(\omega^{-1} n \omega)$. Elements $n \in N_s^{\omega}$ have the following form $$n=n(z)=\begin{pmatrix} I_2&z & 0\\ 0&I_2&0\\ 0 &0& I_{n-4} \end{pmatrix}\,,$$ and $\psi_u^{\omega}(n)=\psi(z_{1,2}+z_{2,1})$. Let $$X'=\prod_{i=5}^{n} X_{e_1-e_i}\prod_{i=5}^{n} X_{e_2-e_i}$$ and $$Y'=\prod_{i=5}^{n} X_{e_i-e_4}\prod_{i=5}^{n} X_{e_i-e_3}\,.$$ Then one can see that $Y'(F)$ can be identified with the character space of $[X']$ as follows: given $y \in Y'(F)$, $\psi_y(x)=\psi_u^{\omega}([x,y])$, for any $x \in [X']$. Note that both $X'$ and $Y'$ normalize $N_s^{\omega}$.
Taking the Fourier expansion of ${\mathcal {F}}_{s,u} (\varphi;g)$ along $[X']$, we obtain that $${\mathcal {F}}_{s,u} (\varphi;g)=\sum_{y\in Y'(F)} \int_{[X']}\int_{[N_s^{\omega}]} \varphi(xn\omega g)\psi_u^{\omega,-1}(n) \psi_y^{-1}(x)\,dn\,dx\,.$$ Since $y^{-1}\in Y'(F)$ and $\varphi$ is automorphic, the above integral becomes \begin{align*} & \ \sum_{y\in Y'(F)} \int_{[X']}\int_{[N_s^{\omega}]} \varphi(xn\omega g)\psi_u^{\omega,-1}(n) \psi_y^{-1}(x)\,dn\,dx\\ = & \ \sum_{y\in Y'(F)} \int_{[X']}\int_{[N_s^{\omega}]} \varphi(y^{-1}xn\omega g)\psi_u^{\omega,-1}(n) \psi_y^{-1}(x)\,dn\,dx\\ = & \ \sum_{y\in Y'(F)} \int_{[X']}\int_{[N_s^{\omega}]} \varphi(y^{-1}xnyy^{-1}\omega g)\psi_u^{\omega,-1}(n) \psi_y^{-1}(x)\,dn\,dx\\ = & \ \sum_{y\in Y'(F)} \int_{[X']}\int_{[N_s^{\omega}]} \varphi(xn' y^{-1}\omega g)\psi_u^{\omega,-1}(n) \psi_y^{-1}(x)\,dn\,dx\,, \end{align*} where $n'=n+[x,y]$. By changing variables, we obtain that \begin{align*} & \ \sum_{y\in Y'(F)} \int_{[X']}\int_{[N_s^{\omega}]} \varphi(xn' y^{-1}\omega g)\psi_u^{\omega,-1}(n) \psi_y^{-1}(x)\,dn\,dx\\ = & \ \sum_{y\in Y'(F)} \int_{[X']}\int_{[N_s^{\omega}]} \varphi(xn y^{-1}\omega g)\psi_u^{\omega,-1}(n) \psi_u^{\omega,-1}(-[x,y]) \psi_y^{-1}(x)\,dn\,dx\,. \end{align*} Note that \begin{align*} & \ \psi_u^{\omega,-1}(-[x,y]) \psi_y^{-1}(x)\\ = & \ \psi_u^{\omega}([x,y]) \psi_u^{\omega}(-[x,y]) \\ = & \ 1\,. \end{align*} Hence, we have that $${\mathcal {F}}_{s,u} (\varphi;g) = \sum_{y\in Y'(F)} \int_{[X']}\int_{[N_s^{\omega}]} \varphi(xn y^{-1} \omega g)\psi_u^{\omega,-1}(n) \,dn\,dx\,.$$ Note that $X'N_s^{\omega}=U_2$ and $\psi_u^{\omega}=\psi_{y(Y_2)}$, using the notation from section~\ref{sec:thmB}. Therefore, we have that $${\mathcal {F}}_{s,u} (\varphi;g) = \sum_{y\in Y'(F)} \int_{[U_2]} \varphi(u y^{-1}\omega g)\psi_{y(Y_2)}^{-1}(u) \,du\,.$$ This completes the proof of Theorem \ref{thm:ntm-coeff}. \qed \section{Applications} \label{sec:sl5} As is evident from table~\ref{tab:duality}, the case ${\mathrm{SL}}_5$ appears in the list of symmetry and duality groups in string theory. It is related to compactification of type II string theory on a three-torus $T^3$ from ten to seven spacetime dimensions. Fourier coefficients of automorphic forms on ${\mathrm{SL}}_5$ are related to non-perturbative effects as discussed in the introduction. Therefore, we analyse here in some detail the structure of Fourier coefficients for automorphic forms attached to a minimal or next-to-minimal automorphic representation of ${\mathrm{SL}}_5$ that are relevant to the first two higher-derivative corrections in four-graviton scattering amplitudes. We will give a detailed description of how the formalism developed above can be used to calculate explicit expressions for Fourier coefficients on maximal parabolic subgroups for automorphic forms attached to a minimal or next-to-minimal automorphic representation. Following a general discussion, we will treat two explicit examples for $n = 5$. \subsection{Generalities} With applications to string theory in mind, throughout this section we restrict to $F = {\mathbb {Q}}$ and let ${\mathbb {A}} \equiv {\mathbb {A}}_{{\mathbb {Q}}}$.
The types of expressions that are of interest are of the form: \begin{equation} \mathcal{F}^{{\mathbb {R}}}(\varphi, \psi; g) = \intl_{U({\mathbb {Z}}) \backslash U({\mathbb {R}})}\varphi(ug) \psi^{-1}(u) \, du\,, \label{eqn:realunipotentcoeff} \end{equation} where $U({\mathbb {R}})$ is the unipotent radical of a parabolic subgroup of $\mathrm{G}({\mathbb {R}})$, $\psi$ is some rank-1 or rank-2 character on $U({\mathbb {R}})$ and $\varphi$ is an automorphic form in the minimal or next-to-minimal automorphic representations of $\mathrm{G}({\mathbb {R}})$. Any such coefficient can be brought to a standard form using the action of the arithmetic Levi subgroup $L({\mathbb {Z}})$. For rank-1 this form is $\psi = \psi_{y(kY_1)}$ for some integer $k\neq 0$ and for rank-2 one has $\psi(k_1 u_{m,m+1}+k_2 u_{m-1,m+2})$ for integers $k_1$ and $k_2$, cf.~\eqref{eq:psi-Y2}. For simplicity, we will restrict ourselves to the case $\psi_{y(Y_2)}$ corresponding to $k_1=k_2=1$ and demonstrate how to apply theorem \ref{thm:max-parabolic}. The techniques demonstrated here allow for the calculation of all such Fourier coefficients for automorphic forms in the minimal and next-to-minimal representations on ${\mathrm{SL}}_n$. In order to apply theorem \ref{thm:max-parabolic}, we first perform an adelic lift~\cite{FGKP18} \begin{equation} \mathcal{F}^{{\mathbb {R}}}(\varphi, \psi; g_\infty) = \mathcal{F}^{{\mathbb {A}}}(\varphi, \psi; (g_\infty, I_n, I_n, \cdots)) = \intl_{[U]}\varphi(u(g_\infty, I_n, I_n, \dots)) \psi^{-1}(u) \,du\, . \end{equation} The theorem now gives $\mathcal{F}^{{\mathbb {A}}}$ in terms of adelic Whittaker functions. These Whittaker functions will then be evaluated using the adelic reduction formula \begin{equation} W_\psi(\lambda, a) = \sum_{w_c w_0' \in \mathcal{C}_\psi}a^{(w_c w_0')^{-1}\lambda + \rho} M(w_c^{-1}, \lambda) W_{\psi^a}'(w_c^{-1}\lambda, 1) \label{eqn:reduction} \end{equation} of \cite{FKP14}. The power of this formula lies in the fact that it expresses a degenerate Whittaker function evaluated on the Cartan torus of a group $\mathrm{G}({\mathbb {A}})$ as a sum of generic Whittaker functions on a subgroup $\mathrm{G}'({\mathbb {A}})$. This subgroup $\mathrm{G}'({\mathbb {A}})$ is determined by deleting all nodes in the Dynkin diagram of $\mathrm{G}({\mathbb {A}})$ on which $\psi$ is not supported. $\lambda$ denotes the weight of the Eisenstein series, $w_0'$ denotes the longest Weyl word on $\mathrm{G}'$, $\mathcal{C}_\psi$ denotes the set \begin{equation} \mathcal{C}_\psi = \{ w \in \mathcal{W} \;| \;w \Pi' < 0 \} \end{equation} where $\Pi'$ is the set of simple roots of $\mathrm{G}'$, and $w_c$ is hence the summation variable and corresponds to a specific representative of the quotient Weyl group $\mathcal{W}/\mathcal{W}'$ described in~\cite{FKP14}. $\rho$ denotes the Weyl vector, and $M$ denotes the intertwiner \begin{equation} M(w, \lambda) = \prod_{\substack{\alpha > 0 \\ w\alpha < 0}} \frac{\xi(\langle \lambda | \alpha \rangle)}{\xi(\langle \lambda | \alpha \rangle + 1)} \end{equation} as featured in the Langlands constant term formula, where $\xi$ is the completed Riemann zeta function and $\psi^a$ denotes the ``twisted character'', both defined in appendix \ref{app:euler}.
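As a concrete illustration of how such intertwining coefficients are evaluated in practice, the following short numerical sketch (our own illustration, not part of~\cite{FKP14}) computes $M(w,\lambda)$ from the list of inner products $\langle \lambda | \alpha \rangle$ over the positive roots $\alpha$ flipped by $w$; this list must here be supplied by hand. It uses the \texttt{mpmath} Python library.
\begin{verbatim}
import mpmath as mp

def xi(s):
    # completed Riemann zeta: xi(s) = pi^(-s/2) Gamma(s/2) zeta(s)
    return mp.pi**(-s / 2) * mp.gamma(s / 2) * mp.zeta(s)

def intertwiner(inner_products):
    # M(w, lam) = product over positive roots alpha with w.alpha < 0
    # of xi(<lam|alpha>) / xi(<lam|alpha> + 1)
    return mp.fprod(xi(a) / xi(a + 1) for a in inner_products)

# For lam = 2s*Lambda_1 - rho on SL5 and w = (w_123)^{-1}, the flipped
# inner products are 2s-1, 2s-2, 2s-3 and the product telescopes:
s = mp.mpf("2.3")
print(intertwiner([2*s - 1, 2*s - 2, 2*s - 3]))  # equals xi(2s-3)/xi(2s)
print(xi(2*s - 3) / xi(2*s))
\end{verbatim}
The telescoping visible in this example is what produces the compact ratios of completed zeta functions appearing in the tables below.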
The evaluation of a real Fourier coefficient $\mathcal{F}^{{\mathbb {R}}}$ over a unipotent radical schematically looks like \begin{equation} \begin{aligned} & \mathcal{F}^{{\mathbb {R}}}(\varphi, \psi; g_\infty) = \mathcal{F}^{{\mathbb {A}}}(\varphi, \psi; (g_\infty, I_n, I_n, \cdots)) && \text{Adelic lift} \\ ={}& \sum_\psi \sum_{l\in \Lambda \text{ or } l\in\Gamma} W_{\psi}(l(g_\infty, I_n, I_n, \cdots)) && \text{Theorem \ref{thm:max-parabolic}} \\ ={}& \sum_\psi \sum_{l\in \Lambda \text{ or } l\in\Gamma} W_{\psi}((n_\infty a_\infty k_\infty, n_2 a_2 k_2, n_3 a_3 k_3, \cdots)) && \text{Iwasawa-decomposition} \\ ={}& \sum_\psi \left( \prod_{p\leq \infty} \psi_p(n_p) \right) \sum_{l\in \Lambda \text{ or } l\in\Gamma} W_{\psi}((a_\infty, a_2,a_3, \cdots)) && W_\psi(nak) = \psi(n) W_\psi(a) \\ ={}& \sum_\psi \psi_{\infty}(n_{\infty}) \sum_{l\in \Lambda \text{ or } l\in\Gamma} \sum_{w} a^{\ldots} M(\cdots) W_{\psi^a}'(\cdots, 1) && \text{Reduction formula \eqref{eqn:reduction}} . \end{aligned} \end{equation} The fourth line extracts the unipotent $n_p$-dependence at each of the local places $p \leq \infty$. In the fifth line we have used that only the archimedean unipotent $n_\infty$ contributes. The reason that the $p$-adic unipotent matrices $n_p$ of the $p$-adic Iwasawa-decomposition of $l \in \mathrm{G}(F) \subset \mathrm{G}({\mathbb {Q}}_p)$ above drop out is as follows. In using theorem \ref{thm:max-parabolic}, we will be faced with evaluating Whittaker functions such as \begin{equation} \begin{aligned} W_{\alpha_j, \alpha_m}(\hat{\iota}(\lambda_{j}) g) &\quad{}\text{for}\quad{} j \leq m-2 \quad{}\text{where}\quad{} \lambda_{j} \in \Lambda_{j} \quad{}\text{and}\quad{} \\ W_{\alpha_m, \alpha_i}(\iota(\gamma_i) g) &\quad{}\text{for}\quad{} i \geq m+2 \quad{}\text{where}\quad{} \gamma_i \in \Gamma_i\, . \end{aligned} \end{equation} We have that $\gamma_i$ and $\lambda_{j}$ are embedded in ${\mathrm{SL}}_n$ as (cf.~\eqref{eq:iota}) \begin{equation} \hat{\iota}(\lambda_{j}) = \left( \begin{smallmatrix} \lambda_{j} \\ & I_{n-j} \end{smallmatrix} \right) \quad{}\text{and}\quad{} \iota(\gamma_i) = \left( \begin{smallmatrix} I_i \\ & \gamma_i \end{smallmatrix} \right)\, . \end{equation} It is clear from their block-diagonal form that the unipotent $n_p$ in the $p$-adic Iwasawa-decomposition of $\hat{\iota}(\lambda_j)$ (and $\iota(\gamma_i)$) will feature the same block-diagonal form. Since $W_{\alpha_j, \alpha_m}$ (and $W_{\alpha_m, \alpha_i}$) is only sensitive to the unipotent on rows $j$ and $m \geq j+2 > j$ (on rows $m$ and $i \geq m+2 > m$, respectively), the block-diagonal structure of $n_p$ implies $\psi_{\alpha_j, \alpha_m; p}(n_p) = 1$ (and $\psi_{\alpha_m, \alpha_i; p}(n_p) = 1$). For a real matrix $g \in {\mathrm{SL}}_n({\mathbb {R}})$, we will denote its Iwasawa-decomposition \begin{equation} \label{eq:realIwa} g = n_\infty a_\infty k_\infty = \left( \begin{smallmatrix} 1 & x_{12} & \cdots & \cdots & x_{1n} \\ & 1 & \ddots & \ddots & \vdots \\ & & \ddots & \ddots & \vdots \\ & & & 1 & x_{n-1, n} \\ & & & & 1 \end{smallmatrix} \right) \left( \begin{smallmatrix} y_1 & & & & \\ & y_2/y_1 & & & \\ & & \ddots & & \\ & & & y_{n-1}/y_{n-2} & \\ & & & & 1/y_{n-1} \end{smallmatrix} \right) k_{\infty}\, .
\end{equation} Similarly, for a $p$-adic matrix $g \in {\mathrm{SL}}_n({\mathbb {Q}}_p)$ we denote it as \begin{equation} g = n_p a_p k_p = n_p \left( \begin{smallmatrix} \eta_{1, p} & & & & \\ & \eta_{2, p}/\eta_{1, p} & & & \\ & & \ddots & & \\ & & & \eta_{n-1, p}/\eta_{n-2, p} & \\ & & & & 1/\eta_{n-1, p} \end{smallmatrix} \right) k_p\, . \end{equation} Appendix \ref{sec:iwasawa} contains closed formulae for the $x$'s and the $y$'s, as well as a closed formula for the $p$-adic norm $|\eta_{i, p}|_p$ of the $\eta$'s. In what follows, we will make use of all formulae that are derived or stated in appendices \ref{app:euler}, \ref{sec:iwasawa} and \ref{sec:cosets}, along with the following notation: \begin{itemize}\setlength{\itemsep}{1mm} \item A prime on a variable, e.g.\ $x'$, generally denotes $x' \neq 0$. \item For sums we write $\displaystyle \sum_x \equiv \sum_{x \in {\mathbb {Q}}}$. \item We write $\displaystyle \sum_{x'} f(x) \equiv \sum_{x \in {\mathbb {Q}} \backslash \{0\}} f(x)$ and $\displaystyle \sum_{x' \in {\mathbb {Z}}} f(x) \equiv \sum_{x \in {\mathbb {Z}} \backslash \{0\}} f(x)$. Note that the prime is used to indicate whether or not zero is included in the sum, but the prime is omitted in the summand. \item For products we write $\displaystyle \prod_p \equiv \prod_{p \text{ prime}}$. Writing $\displaystyle \prod_{p \leq \infty}$ denotes the product over all primes $p$ (the non-archimedean places) as well as the element $p = \infty$ (the archimedean place). \item For $x \in {\mathbb {R}}$ we denote $\e{x} \equiv e^{2\pi i x}$. \end{itemize} \subsection{Example: Rank-1 coefficient of \texorpdfstring{$\pi_{\text{min}}$}{pimin} on \texorpdfstring{$P_{\alpha_4}\subset {\mathrm{SL}}_5$}{Palpha4 in SL5}} Here, we will calculate the real rank-1 Fourier coefficient \eqref{eqn:realunipotentcoeff} for the minimal Eisenstein series $E(\lambda; g)$ with $\lambda = 2s\Lambda_1 - \rho$ in the maximal parabolic \begin{equation} P_{\alpha_4} = {\mathrm{GL}}(4) \times {\mathrm{GL}}(1) \times U_{\alpha_4} \subset {\mathrm{SL}}(5) \quad{}\text{subject to } \quad \det({\mathrm{GL}}(4) \times {\mathrm{GL}}(1)) = 1\,, \end{equation} associated with removing the ``last'' node in the Dynkin diagram of ${\mathrm{SL}}(5)$. The unipotent radical is \begin{equation} U({\mathbb {R}}) = U_{\alpha_4}({\mathbb {R}}) = \left\{ \left( \begin{smallmatrix} 1 & & & & * \\ &1& & & * \\ & &1& & * \\ & & &1& * \\ & & & & 1 \end{smallmatrix} \right) \right\}\, . \end{equation} Theorem \ref{thm:max-parabolic} gives for the unramified character $\psi_{y(Y_1)}$ that \begin{align} \mathcal{F}^{{\mathbb {A}}}(E(2s\Lambda_1 - \rho), \psi_{y(Y_1)}; g) ={}& W_{\alpha_4}(g) \, . \end{align} \begin{table}[t] \begin{align*} \begin{tabu}{cl|c|c|c} &w_c & \left\langle w_c^{-1} \lambda + \rho | \alpha_4 \right\rangle & M\left( w_c^{-1}, \lambda \right) & \left( w_c w_0' \right)^{-1} \lambda + \rho \\ \hline &\operatorname{Id} & 0 & 1 & \dots \\ &w_{1} & 0 & \dots & \dots \\ &w_{12} & 0 & \dots & \dots \\ *&w_{123} & 2\left( s - \frac{3}{2} \right) & \frac{\xi(2s-3)}{\xi(2s)} & [0, 0, 0, 5 - 2s] \\ \end{tabu} \end{align*} \caption{\label{tab:2}Data for the reduction formula \eqref{eqn:reduction} to evaluate $W_{\alpha_4}(a)$ on ${\mathrm{SL}}_5$ with $\lambda = 2s\Lambda_1 - \rho$. The star indicates the one and only row that contributes in the sum over Weyl words.} \end{table} The Whittaker function is found by the reduction formula with data given in table~\ref{tab:2}.
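The entries of table~\ref{tab:2} (and of the analogous tables below) can be checked mechanically. The following sketch, our own illustration in Python using \texttt{sympy}, applies simple reflections to a weight written in fundamental-weight coordinates; we assume that the subscript in $w_{123}$ denotes the reduced word $s_1 s_2 s_3$, with conventions fixed by requiring agreement with the table.
\begin{verbatim}
import sympy as sp

s = sp.symbols('s')
# Cartan matrix of A4 (SL5); the simple reflection s_i acts on
# fundamental-weight coordinates c by c -> c - c[i] * (i-th row of A)
A = sp.Matrix([[2, -1, 0, 0], [-1, 2, -1, 0],
               [0, -1, 2, -1], [0, 0, -1, 2]])
rho = sp.Matrix([1, 1, 1, 1])
lam = sp.Matrix([2*s - 1, -1, -1, -1])  # lam = 2s*Lambda_1 - rho

def reflect(c, i):
    return c - c[i - 1] * A.row(i - 1).T

def apply_word(c, word):
    # word [i1, i2, ...] stands for s_{i1} s_{i2} ...,
    # acting with the rightmost factor first
    for i in reversed(word):
        c = reflect(c, i)
    return c

# w_c = w_123 = s1 s2 s3, hence w_c^{-1} = s3 s2 s1:
v = apply_word(lam, [3, 2, 1]) + rho
print(sp.expand(v[3]))  # <w_c^{-1} lam + rho | alpha_4> = 2*s - 3
# (w_c w_0')^{-1} = s4 s3 s2 s1, with w_0' = s4 on G' = SL2:
w = apply_word(lam, [4, 3, 2, 1]) + rho
print(w.applyfunc(sp.expand).T)  # -> [0, 0, 0, 5 - 2*s]
\end{verbatim}
Both printed values reproduce the starred row of table~\ref{tab:2}.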
In this case, there is no diagonally embedded rational matrix $l$, or equivalently $l = I_5$, in the general procedure, and hence we have $|\eta_{1, p}|_p = |\eta_{2, p}|_p = |\eta_{3, p}|_p = |\eta_{4, p}|_p = 1$. We get \begin{align} \begin{aligned} & W_{\alpha_4}(\lambda; (g_\infty, I_5, I_5, \cdots)) = \\ ={}& \e{x_{45}} \left( y_4^{5-2s} \frac{\xi\left( 2s-3 \right)}{\xi\left( 2s \right)} \prod_{p < \infty} |\eta_{4, p}|_p^{5-2s} \right) B_{s-3/2}\left( \frac{y_4^2}{y_3}, 1 \right) \\ & \times \prod_{p < \infty} \gamma_{p}\left( \frac{\eta_{4, p}^2}{\eta_{3, p}} \right) \left( 1 - p^{-2(s-3/2)} \right) \frac{1-p^{-2(s-3/2)+1} \left|\frac{\eta_{4, p}^2}{\eta_{3, p}} \right|_{p}^{2(s-3/2)-1}}{1-p^{-2(s-3/2)+1}} \\ ={}& \e{x_{45}} y_4^{5-2s} \frac{1}{\xi\left( 2s \right)} 2 \left| \frac{y_4^2}{y_3}\right|_\infty^{s-2} K_{s-2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_\infty \right) \\ ={}& 2 \e{x_{45}} y_3^{2-s} y_4 \frac{1}{\xi\left( 2s \right)} K_{s-2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_\infty \right) = \mathcal{F}^{{\mathbb {R}}}(E(2s\Lambda_1 - \rho), \psi_{y(Y_1)}; g_\infty)\,. \label{eqn:unramifiedminimal} \end{aligned} \end{align} The $x$'s and $y$'s are the Iwasawa coordinates for the matrix $g_\infty$ as in~\eqref{eq:realIwa}. The function $B_s$ that appears is a more compact way of writing the ${\mathrm{SL}}_2$ Whittaker vector defined explicitly in~\eqref{eq:SL2Whitt}. Parameterizing $g_\infty$ as \begin{equation} g_\infty = ue = \begin{psmallmatrix} I_4 & Q \\ 0 & 1 \end{psmallmatrix} \begin{psmallmatrix} r^{-1/4}e_4 & 0 \\ 0 & r \end{psmallmatrix} \quad{}\text{where}\quad{} e_4 \in {\mathrm{SL}}_4({\mathbb {R}})\,, \end{equation} we get in particular that \begin{equation} y_3 = r^{-3/4}||N e_4|| \quad{}\text{and}\quad{} y_4 = r^{-1}\,, \end{equation} where $N = \displaystyle \begin{psmallmatrix} 0 & 0 & 0 & 1 \end{psmallmatrix} $ so that $N e_4$ is equal to the last row in $e_4$. This is obtained using the formula \eqref{eqn:realdilatons}. We get in particular that \begin{equation} y_3^{2-s} y_4 = r^{2s-5}\left( r^{-5/4} ||N e_4 || \right)^{s-2} \quad{}\text{and}\quad{} \frac{y_4^2}{y_3} = r^{-5/4} ||N e_4 || \, . \end{equation} The more general (real) ramified Fourier coefficient has the expression \begin{equation} \begin{aligned} &\int E\left( 2s\Lambda_1 - \rho; \begin{psmallmatrix} 1 & & & & u_1 \\ & 1 & & & u_2 \\ & & 1 & & u_3 \\ & & & 1 & u_4 \\ & & & & 1 \end{psmallmatrix} g_\infty \right) \overline{\e{m_1 u_1 + m_2 u_2 + m_3 u_3 + m'_4 u_4}} d^4u = \\ ={}& \e{x_{45}} r^{2s-5} \frac{2}{\xi(2s)} \sigma_{4-2s}(k) \left( r^{-5/4} ||N e_4 || \right)^{s-2} K_{s-2}\left( 2\pi r^{-5/4}||N e_4|| \right)\\ =&\e{x_{45}} r^{\frac{3s}{4}-\frac{5}{2}} \frac{2}{\xi(2s)} \frac{\sigma_{2s-4}(k)}{|k|^{s-2}} ||\tilde{N} e_4 ||^{s-2} K_{s-2}\left( 2\pi|k| r^{-5/4}||\tilde{N} e_4|| \right)\label{eq:minFC} \end{aligned} \end{equation} for integer $m$'s, while for non-integer rational $m$'s it vanishes. Here $g_\infty$ has been parametrized as above, $N = \displaystyle \begin{psmallmatrix} m_1 & m_2 & m_3 & m'_4 \end{psmallmatrix}=k \tilde{N} $, $k = \gcd(N)$ and $m'_4 \neq 0$. This expression can also be found by starting from $\psi_{y(kY_1)}$ for the standard Fourier coefficient instead. This corresponds to $N = \displaystyle \begin{psmallmatrix} 0 & 0 & 0 &k \end{psmallmatrix}$ and its $L({\mathbb {Z}})$ orbit gives the general expression~\eqref{eq:minFC}.
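For completeness, the closed expression~\eqref{eq:minFC} is easy to evaluate numerically. The sketch below is our own illustration: the function and variable names are hypothetical, and the norm $||\tilde{N} e_4||$ together with the Iwasawa coordinate $x_{45}$ are assumed to be precomputed from $g_\infty$.
\begin{verbatim}
import mpmath as mp

def xi(z):
    return mp.pi**(-z / 2) * mp.gamma(z / 2) * mp.zeta(z)

def sigma(nu, k):
    # divisor sum sigma_nu(k) = sum_{d | k, d > 0} d^nu (naive loop,
    # fine for small k)
    k = abs(int(k))
    return mp.fsum(mp.mpf(d)**nu for d in range(1, k + 1) if k % d == 0)

def min_rank1_coeff(s, k, r, norm_Ne4, x45=0):
    # last line of eq. (minFC) for the minimal Eisenstein series
    phase = mp.e**(2j * mp.pi * x45)
    return (phase * mp.mpf(r)**(3*s/4 - mp.mpf(5)/2) * 2 / xi(2*s)
            * sigma(2*s - 4, k) / mp.mpf(abs(k))**(s - 2)
            * mp.mpf(norm_Ne4)**(s - 2)
            * mp.besselk(s - 2,
                         2*mp.pi*abs(k)*mp.mpf(r)**(-mp.mpf(5)/4)*norm_Ne4))
\end{verbatim}
Note the exponential suppression coming from the Bessel function, as expected for these non-zero modes.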
Formula~\eqref{eq:minFC} agrees with~\cite[Eq.~(H.37)]{GMV15}, where the Fourier coefficients were computed by a Poisson resummation technique, after a translation of conventions. \subsection{Example: Rank-1 coefficient of \texorpdfstring{$\pi_{\text{ntm}}$}{pintm} on \texorpdfstring{$P_{\alpha_4} \subset {\mathrm{SL}}_5$}{Palpha4 in SL5}} Here, we will calculate the real rank-1\footnote{There is no rank-2 character for this parabolic.} Fourier coefficient \eqref{eqn:realunipotentcoeff} for the next-to-minimal Eisenstein series $E(\lambda; g)$ with $\lambda = 2s\Lambda_2 - \rho$ in the maximal parabolic \begin{equation} P_{\alpha_4} = {\mathrm{GL}}(4) \times {\mathrm{GL}}(1) \times U_{\alpha_4} \subset {\mathrm{SL}}(5) \quad{}\text{subject to } \quad \det({\mathrm{GL}}(4) \times {\mathrm{GL}}(1)) = 1 \end{equation} associated with removing the ``last'' node in the Dynkin diagram of ${\mathrm{SL}}(5)$. The unipotent radical is \begin{equation} U({\mathbb {R}}) = U_{\alpha_4}({\mathbb {R}}) = \left\{ \left( \begin{smallmatrix} 1 & & & & * \\ &1& & & * \\ & &1& & * \\ & & &1& * \\ & & & & 1 \end{smallmatrix} \right) \right\}\, . \end{equation} Theorem \ref{thm:max-parabolic} gives \begin{align} \begin{aligned} & \mathcal{F}^{{\mathbb {A}}}(E(2s\Lambda_2 - \rho), \psi_{y(Y_1)}; g) = \\ ={}& W_{\alpha_4}(g) + \sum_{\lambda_1 \in \Lambda_1} W_{\alpha_1, \alpha_4}(\lambda_1 g) + \sum_{\lambda_2 \in \Lambda_2} W_{\alpha_2, \alpha_4}(\lambda_2 g) \\ ={}& W_{\alpha_4}(g) \\ &+{} \sum_{z'} W_{\alpha_1, \alpha_4} \Bigg( \underbrace{ \left( \begin{smallmatrix} z \\ & 1 \\ & & 1 \\ & & & 1 \\ & & & & 1/z \end{smallmatrix} \right) }_{l_z} g \Bigg) \\ &+{} \sum_{x', y} W_{\alpha_2, \alpha_4} \bigg( \underbrace{ \left( \begin{smallmatrix} x^{-1} \\ y & x \\ & & I_3 \\ \end{smallmatrix} \right) }_{l_{xy}} g \bigg) \\ &+{} \sum_{x'} W_{\alpha_2, \alpha_4} \bigg( \underbrace{ \left( \begin{smallmatrix} 0 & -x^{-1} \\ x & 0 \\ & & I_3 \\ \end{smallmatrix} \right) }_{l_x} g \bigg)\,, \end{aligned} \end{align} using the representatives derived in appendix \ref{sec:cosets}. \begin{table}[t] \begin{align*} \begin{tabu}{cl|c|c|c} &w_c & \left\langle w_c^{-1} \lambda + \rho | \alpha_4 \right\rangle & M\left( w_c^{-1}, \lambda \right) & \left( w_c w_0' \right)^{-1} \lambda + \rho \\ \hline &\operatorname{Id} & 0 & 1 & \dots \\ &w_{2} & 0 & \dots & \dots \\ &w_{21} & 0 & \dots & \dots \\ *&w_{23} & 2\left( s - 1 \right) & \frac{\xi(2s-2)}{\xi(2s)} & [2s-1, 0, 0, 4 - 2s] \\ *&w_{213} & 2\left( s - 1 \right) & \frac{\xi(2s-2)^2}{\xi(2s) \xi(2s-1)} & [3-2s, 2s-2, 0, 4 - 2s] \\ *&w_{2132} & 2\left( s - 1 \right) & \frac{\xi(2s-3)\xi(2s-2)}{\xi(2s)\xi(2s-1)} & [0, 4-2s, 2s-3, 4 - 2s] \\ &w_{213243} & 0 & \dots & \dots \\ \end{tabu} \end{align*} \caption{\label{tab:3}Data for the reduction formula \eqref{eqn:reduction} to evaluate $W_{\alpha_4}(a)$ on ${\mathrm{SL}}_5$ with $\lambda = 2s\Lambda_2 - \rho$. The stars indicate which rows contribute in the sum over Weyl words.} \end{table} The first Whittaker function is found by the reduction formula with the data of table~\ref{tab:3}. In this case, there is no diagonally embedded rational matrix $l$, or equivalently $l = I_5$, and hence we have $|\eta_{1, p}|_p = |\eta_{2, p}|_p = |\eta_{3, p}|_p = |\eta_{4, p}|_p = 1$.
We get{\allowdisplaybreaks \begin{align} \label{ex1part1}\\ & W_{\alpha_4}(\lambda; (g_\infty, I_n, I_n, \cdots)) = \\ ={}& \e{x_{45}} B_{s-1}\left( \frac{y_4^2}{y_3}, 1 \right) \left( y_1^{2s-1} y_4^{4-2s} \frac{\xi\left( 2s-2 \right)}{\xi\left( 2s \right)} \prod_{p < \infty} |\eta_{1, p}|_p^{2s-1}|\eta_{4, p}|_p^{4-2s} \right. \\ &{}+ \left. y_1^{3-2s} y_2^{2s-2} y_4^{4-2s} \frac{\xi(2s-2)^2}{\xi(2s) \xi(2s-1)} \prod_{p < \infty} |\eta_{1, p}|_p^{3-2s} |\eta_{2, p}|_p^{2s-2} |\eta_{4, p}|_p^{4-2s} \right. \\ &{}+ \left. y_2^{4-2s} y_3^{2s-3} y_4^{4-2s} \frac{\xi(2s-3)\xi(2s-2)}{\xi(2s)\xi(2s-1)} \prod_{p < \infty} |\eta_{2, p}|_p^{4-2s} |\eta_{3, p}|_p^{2s-3} |\eta_{4, p}|_p^{4-2s} \right) \\ & \prod_{p < \infty} \gamma_{p}\left( \frac{\eta_{4, p}^2}{\eta_{3, p}} \right) \left( 1 - p^{-2(s-1)} \right) \frac{1-p^{-2(s-1)+1} \left|\frac{\eta_{4, p}^2}{\eta_{3, p}} \right|_{p}^{2(s-1)-1}}{1-p^{-2(s-1)+1}} \\ ={}& \e{x_{45}} \left( y_1^{2s-1} y_4^{4-2s} \frac{1}{\xi\left( 2s \right)} + y_1^{3-2s} y_2^{2s-2} y_4^{4-2s} \frac{\xi(2s-2)}{\xi(2s) \xi(2s-1)}\right. \\ &{}+ \left. y_2^{4-2s} y_3^{2s-3} y_4^{4-2s} \frac{\xi(2s-3)}{\xi(2s)\xi(2s-1)} \right) 2 \left| \frac{y_4^2}{y_3}\right|_\infty^{s-3/2} K_{s-3/2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_\infty \right) \\ ={}& 2 \e{x_{45}} \left( y_1^{2s-1} y_3^{3/2-s} y_4 \frac{1}{\xi\left( 2s \right)} + y_1^{3-2s} y_2^{2s-2} y_3^{3/2-s} y_4 \frac{\xi(2s-2)}{\xi(2s) \xi(2s-1)}\right. \\ &{}+ \left. y_2^{4-2s} y_3^{s-3/2} y_4 \frac{\xi(2s-3)}{\xi(2s)\xi(2s-1)} \right) K_{s-3/2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_\infty \right)\,. \end{align} The $x$'s and $y$'s are the Iwasawa coordinates for the matrix $g_\infty$ as in~\eqref{eq:realIwa}.} \begin{table}[t] \begin{align*} \begin{tabu}{cl|c|c|c|c} &w_c & \left\langle w_c^{-1} \lambda + \rho | \alpha_1 \right\rangle & \left\langle w_c^{-1} \lambda + \rho | \alpha_4 \right\rangle & M\left( w_c^{-1}, \lambda \right) & \left( w_c w_0' \right)^{-1} \lambda + \rho \\ \hline &\operatorname{Id} & 0 & 0 & 1 & \dots \\ &w_{2} & 2\left( s - \frac{1}{2} \right) & 0 & \dots & \dots \\ *&w_{23} & 2\left( s-\frac{1}{2} \right) & 2(s-1) & \frac{\xi(2s-2)}{\xi(2s)} & v \\ &w_{2132} & 0 & 2(s-1) & \dots & \dots \\ &w_{213243} & 0 & 0 & \dots & \dots \\ \end{tabu} \end{align*} \caption{\label{tab:4}Data for the reduction formula \eqref{eqn:reduction} to evaluate $W_{\alpha_1, \alpha_4}(a)$ on ${\mathrm{SL}}_5$ with $\lambda = 2s\Lambda_2 - \rho$. The star indicates the Weyl word that contributes to the reduction formula. We wrote $v = [3-2s, 2s-2, 0, 4-2s]$ here to conserve space.} \end{table} The second Whittaker function is found by the reduction formula with the data given in table~\ref{tab:4}. The $p$-adic Iwasawa-decomposition of $l_z$ has \begin{equation} |\eta_{1, p}|_p = |\eta_{2, p}|_p = |\eta_{3, p}|_p = |\eta_{4, p}|_p = |z|_p\, .
\end{equation} {\allowdisplaybreaks We get \begin{align} \label{ex1part2}\\ \sum_{z'} {}& W_{\alpha_1, \alpha_4}\left( \lambda; l_z (g_\infty, I_n, I_n, \cdots) \right) = \\ = \sum_{z'} {}& \e{x_{12} + x_{45}} y_1^{3-2s} y_2^{2s-2} y_4^{4-2s} \frac{\xi(2s-2)}{\xi(2s)} \\ & B_{s-1/2}\left( \frac{y_1^2}{y_2}, 1 \right) B_{s-1}\left( \frac{y_4^2}{y_3}, 1 \right) \prod_{p < \infty} |\eta_{1, p}|_p^{3-2s} |\eta_{2, p}|_p^{2s-2} |\eta_{4, p}|_p^{4-2s}\\ & \prod_{p < \infty} \gamma_{p}\left( \frac{\eta_{1, p}^2}{\eta_{2, p}} \right) \left( 1 - p^{-2(s-1/2)} \right) \frac{1-p^{-2(s-1/2)+1} \left| \frac{\eta_{1, p}^2}{\eta_{2, p}} \right|_{p}^{2(s-1/2)-1}}{1-p^{-2(s-1/2)+1}} \times \\ & \times \prod_{p < \infty} \gamma_{p}\left( \frac{\eta_{4, p}^2}{\eta_{3, p}} \right) \left( 1 - p^{-2(s-1)} \right) \frac{1-p^{-2(s-1)+1} \left| \frac{\eta_{4, p}^2}{\eta_{3, p}} \right|_{p}^{2(s-1)-1}}{1-p^{-2(s-1)+1}} \\ =\sum_{z' \in {\mathbb {Z}}} {}& \e{x_{12} + x_{45}} y_1^{3-2s} y_2^{2s-2}y_4^{4-2s} \frac{\xi(2s-1)}{\xi(2s)} \prod_{p < \infty} |z|_p^{5-2s} \\ & 4\left| \frac{y_1^2}{y_2} \right|_{\infty}^{s-3/2}\left| \frac{y_4^2}{y_3} \right|_{\infty}^{s-2} K_{s-3/2}\left( 2\pi \left| \frac{y_1^2}{y_2} \right|_{\infty} \right) K_{s-2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_{\infty} \right) \\ & \sigma_{-2(s-1/2)+1}(|z|_\infty) \sigma_{-2(s-1)+1}(|z|_\infty) \\ =\sum_{z' \in {\mathbb {Z}}} {}& 4 \e{x_{12} + x_{45}} y_2^{s-1/2}y_3^{2-s} \frac{\xi(2s-1)}{\xi(2s)} |z|_\infty^{2s-5} \\ & K_{s-3/2}\left( 2\pi \left| \frac{y_1^2}{y_2} \right|_{\infty} \right) K_{s-2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_{\infty} \right) \sigma_{2-2s}(|z|_\infty) \sigma_{3-2s}(|z|_\infty)\,. \end{align} The $x$'s and $y$'s are the Iwasawa coordinates for the matrix $l_z g_\infty$.} \begin{table}[t] \begin{align*} \begin{tabu}{cl|c|c|c|c} &w_c & \left\langle w_c^{-1} \lambda + \rho | \alpha_2 \right\rangle & \left\langle w_c^{-1} \lambda + \rho | \alpha_4 \right\rangle & M\left( w_c^{-1}, \lambda \right) & \left( w_c w_0' \right)^{-1} \lambda + \rho \\ \hline &\operatorname{Id} & 2s & 0 & 1 & \dots \\ &w_{21} & 0 & 0 & \dots & \dots \\ &w_{23} & 0 & 2(s-1) & \dots & \dots \\ *&w_{213} & 2(s-1) & 2(s-1) & \frac{\xi(2s-2)^2}{\xi(2s)\xi(2s-1)} & v \\ &w_{213243} & 0 & 0 & \dots & \dots \\ \end{tabu} \end{align*} \caption{\label{tab:5}Data for the reduction formula \eqref{eqn:reduction} to evaluate $W_{\alpha_2, \alpha_4}(a)$ on ${\mathrm{SL}}_5$ with $\lambda = 2s\Lambda_2 - \rho$. The star indicates the Weyl word that contributes to the reduction formula. We wrote $v = [0, 4-2s, 2s-3, 4-2s]$ to save space.} \end{table} The third and fourth Whittaker functions are found by the reduction formula with the data from table~\ref{tab:5}. The $p$-adic Iwasawa-decomposition of $l_{xy}$ has \begin{equation} |\eta_{1, p}|_p^{-1} = \max\{|y|_p, |x|_p\} \qtextq{and} |\eta_{2, p}|_p = |\eta_{3, p}|_p = |\eta_{4, p}|_p = 1\, . 
\end{equation} {\allowdisplaybreaks We get \begin{align} \label{ex1part3}\\ \sum_{x', y} {}& W_{\alpha_2, \alpha_4}\left( \lambda; l_{xy} (g_\infty, I_n, I_n, \cdots) \right) = \\ = \sum_{x', y} {}& \e{x_{23} + x_{45}} y_2^{4-2s} y_3^{2s-3} y_4^{4-2s} \frac{\xi(2s-2)^2}{\xi(2s)\xi(2s-1)} \\ & B_{s-1}\left( \frac{y_2^2}{y_1 y_3}, 1 \right) B_{s-1}\left( \frac{y_4^2}{y_3}, 1 \right)\prod_{p < \infty} |\eta_{2, p}|_p^{4-2s} |\eta_{3, p}|_p^{2s-3} |\eta_{4, p}|_p^{4-2s} \\ & \prod_{p < \infty} \gamma_{p}\left( \frac{\eta_{2, p}^2}{\eta_{1, p} \eta_{3, p}} \right) \left( 1 - p^{-2(s-1)} \right) \frac{1-p^{-2(s-1)+1} \left| \frac{\eta_{2, p}^2}{\eta_{1, p} \eta_{3, p}} \right|_{p}^{2(s-1)-1}}{1-p^{-2(s-1)+1}} \times \\ & \times \prod_{p < \infty} \gamma_{p}\left( \frac{\eta_{4, p}^2}{\eta_{3, p}} \right) \left( 1 - p^{-2(s-1)} \right) \frac{1-p^{-2(s-1)+1} \left| \frac{\eta_{4, p}^2}{\eta_{3, p}} \right|_{p}^{2(s-1)-1}}{1-p^{-2(s-1)+1}} \\ = \sum_{x', y \in {\mathbb {Z}}} {}& \e{x_{23} + x_{45}} y_2^{4-2s} y_3^{2s-3} y_4^{4-2s} \frac{1}{\xi(2s)\xi(2s-1)} 4 \left| \frac{y_2^2}{y_1 y_3} \right|_\infty^{s-3/2} \left| \frac{y_4^2}{y_3} \right|_\infty^{s-3/2} \\ & K_{s-3/2}\left( 2\pi \left|\frac{y_2^2}{y_1 y_3} \right|_\infty \right) K_{s-3/2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_\infty \right) \sigma_{-2(s-1)+1}(k) \\ = \sum_{x', y \in {\mathbb {Z}}} {}& 4 \e{x_{23} + x_{45}} y_1^{3/2-s} y_2^{1} y_4^{1} \frac{1}{\xi(2s)\xi(2s-1)} \\ & K_{s-3/2}\left( 2\pi \left|\frac{y_2^2}{y_1 y_3} \right|_\infty \right) K_{s-3/2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_\infty \right) \sigma_{3-2s}(k)\,, \end{align} where $k = \gcd(|y|, |x|)$. Here, the $x$'s and $y$'s are the Iwasawa coordinates for the matrix $l_{xy} g_\infty$.} The $p$-adic Iwasawa-decomposition of $l_{x}$ has \begin{equation} |\eta_{1, p}|_p^{-1} = \max\{|0|_p, |x|_p\} = |x|_p \qtextq{and} |\eta_{2, p}|_p = |\eta_{3, p}|_p = |\eta_{4, p}|_p = 1\, . \end{equation} We get \begin{equation} \begin{aligned} \sum_{x'} {}& W_{\alpha_2, \alpha_4}\left( \lambda; l_x (g_\infty, I_n, I_n, \cdots) \right) = \\ = \sum_{x'} {}& \e{x_{23} + x_{45}} y_2^{4-2s} y_3^{2s-3} y_4^{4-2s} \frac{\xi(2s-2)^2}{\xi(2s)\xi(2s-1)} \\ & B_{s-1}\left( \frac{y_2^2}{y_1 y_3}, 1 \right) B_{s-1}\left( \frac{y_4^2}{y_3}, 1 \right)\prod_{p < \infty} |\eta_{2, p}|_p^{4-2s} |\eta_{3, p}|_p^{2s-3} |\eta_{4, p}|_p^{4-2s}\\ & \prod_{p < \infty} \gamma_{p}\left( \frac{\eta_{2, p}^2}{\eta_{1, p} \eta_{3, p}} \right) \left( 1 - p^{-2(s-1)} \right) \frac{1-p^{-2(s-1)+1} \left| \frac{\eta_{2, p}^2}{\eta_{1, p} \eta_{3, p}} \right|_{p}^{2(s-1)-1}}{1-p^{-2(s-1)+1}} \\ & \prod_{p < \infty} \gamma_{p}\left( \frac{\eta_{4, p}^2}{\eta_{3, p}} \right) \left( 1 - p^{-2(s-1)} \right) \frac{1-p^{-2(s-1)+1} \left| \frac{\eta_{4, p}^2}{\eta_{3, p}} \right|_{p}^{2(s-1)-1}}{1-p^{-2(s-1)+1}} \\ = \sum_{x' \in {\mathbb {Z}}} {}& \e{x_{23} + x_{45}} y_2^{4-2s} y_3^{2s-3} y_4^{4-2s} \frac{1}{\xi(2s)\xi(2s-1)} 4 \left| \frac{y_2^2}{y_1 y_3} \right|_\infty^{s-3/2} \left| \frac{y_4^2}{y_3} \right|_\infty^{s-3/2} \\ & K_{s-3/2}\left( 2\pi \left|\frac{y_2^2}{y_1 y_3} \right|_\infty \right) K_{s-3/2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_\infty \right) \sigma_{-2(s-1)+1}(|x|_\infty) \\ = \sum_{x' \in {\mathbb {Z}}} {}& 4 \e{x_{23} + x_{45}} y_1^{3/2-s} y_2^{1} y_4^{1} \frac{1}{\xi(2s)\xi(2s-1)} \\ & K_{s-3/2}\left( 2\pi \left|\frac{y_2^2}{y_1 y_3} \right|_\infty \right) K_{s-3/2}\left( 2\pi \left| \frac{y_4^2}{y_3} \right|_\infty \right) \sigma_{3-2s}(|x|_\infty) \, .
\label{ex1part4} \end{aligned} \end{equation} The $x$'s and $y$'s are the Iwasawa coordinates for the matrix $l_x g_\infty$. The complete Fourier coefficient $\mathcal{F}^{{\mathbb {R}}}(E(2s\Lambda_2 - \rho), \psi_{y(Y_1)}; g_\infty)$ is then given by the combination of~\eqref{ex1part1}, \eqref{ex1part2}, \eqref{ex1part3} and \eqref{ex1part4}. We note that our final result differs formally from the one given in~\cite[Eq.~(H.52)]{GMV15}, where the result is given as a convolution integral over two Bessel functions, whereas we do not have any remaining integral. The two results need not be in actual disagreement, as there are many non-trivial relations involving infinite sums or integrals of Bessel functions. The automorphic form \begin{align} \lim_{s\to 1/2} \frac{2\zeta(3)\xi(2s-3)}{\xi(2s)} E(2s\Lambda_2-\rho;g) = 2\zeta(3) E(3\Lambda_4-\rho) \end{align} lies in a minimal automorphic representation and controls the first non-trivial corrections that string theory predicts to the four-graviton scattering amplitude beyond standard general relativity~\cite{GMRV10,P10}. The Fourier coefficients that we computed above can then be used to extract so-called 1/2 BPS instanton contributions in the string perturbation limit of the amplitude. More precisely, they represent non-perturbative corrections to the scattering amplitude that, albeit smooth, are not analytic in the string coupling constant around vanishing coupling. They are therefore not visible in standard perturbation theory for small coupling, but represent important corrections nonetheless. Their interpretation is in terms of specific D$p$-branes ($p\le 2$), extended $(p+1)$-dimensional objects that can wrap non-trivial cycles of the torus $T^3$ that is present when ${\mathrm{SL}}_5$ is the duality group. The detailed structure of the Fourier coefficient, in particular the arithmetic divisor sums appearing, can shed some light on the combinatorics of these D-branes, similar to what happens in the ${\mathrm{SL}}_2$ case~\cite{Yi:1997eg,Sethi:1997pa,Moore:1998et}. For the next non-trivial correction to the four-graviton scattering amplitude one requires an automorphic form in the next-to-minimal automorphic representation~\cite{GMRV10,P10,GMV15}. This function is not a single Eisenstein series of the type we have analysed above but a very special combination of two formally divergent Eisenstein series, with some Fourier coefficients computed using the Mellin transform of a theta lift in~\cite{GMV15}.
\section{Introduction} \label{sec:introduction} People's awareness about their nutrition habits is increasing, either because they suffer from some kind of food intolerance, because they have mild or severe weight problems, or because they are simply interested in keeping a healthy diet. This increasing awareness is also being reflected in the technological world. Several applications exist for manually keeping track of what we eat, but they rarely offer any automatic mechanism for easing the tracking of nutrition habits \cite{aizawa2015foodlog}. Tools for automatic food and ingredient recognition could heavily alleviate the problem. Since the rebirth of Convolutional Neural Networks (CNNs), several works have been proposed to ease the creation of nutrition diaries. The most widespread approach is food recognition \cite{martinel2016wide}. These proposals allow recognizing the type of food present in an image and, consequently, approximately guessing the ingredients contained and the overall nutritional composition. The main problem of these approaches is that no dataset covers the huge number of existing types of dishes worldwide (more than 8,000 according to Wikipedia). On the other hand, a clear solution to this problem can be achieved if we formulate the task as an ingredients recognition problem instead \cite{chen2016deep}. Although tens of thousands of types of dishes exist, they are in fact composed of a much smaller number of ingredients, which at the same time define the nutritional composition of the food. If we formulate the problem from the ingredients recognition perspective, we must consider the difficulty of distinguishing the presence of certain ingredients in cooked dishes. Their visual appearance can greatly vary from one dish to another (e.g. the appearance of the ingredient 'apple' in an 'apple pie', an 'apple juice' or a 'fresh apple'), and in some cases they can even be invisible at sight without proper knowledge of the true composition of the dish. An additional benefit of approaching the problem from the ingredients recognition perspective is that, unlike food recognition, it has the potential to predict valid outputs on data that has never been seen by the system. In this paper, we explore the problem of food ingredients recognition from a multi-label perspective by proposing a model based on CNNs that allows discovering the ingredients present in an image even if they are not visible to the naked eye. We present two new datasets for tackling the problem and prove that our method is capable of generalizing to new data that has never been seen by the system. Our contributions are four-fold: 1) we propose a model for food ingredients recognition; 2) we prove that by using a varied dataset of images and their associated ingredients, the generalization capabilities of the model on never seen data can be greatly boosted; 3) we delve into the inner layers of the model for analysing the ingredients specialization of the neurons; and 4) we release two datasets for ingredients recognition. This paper is organized as follows: in Section \ref{sec:related_work}, we review the state of the art; in Section \ref{sec:methodology}, we explain our methodology; in Section \ref{sec:results}, we present our proposed datasets, show and analyse the results of the experiments performed, and interpret the predictions; and in Section \ref{sec:conclusions}, we draw some conclusions.
\section{Related work} \label{sec:related_work} \vspace{-1em} \vspace{.5em} \textbf{Food analysis.} Several works have been published on applications related to automatic food analysis. Some of them proposed food detection models \cite{aguilar2017exploring} in order to distinguish when there is food present in a given image. Others focused on developing food recognition algorithms, either using conventional hand-crafted features or powerful deep learning models \cite{martinel2016wide}. Others have applied food segmentation \cite{shimoda2015cnn}; used multi-modal data (i.e. images and recipe texts) for recipe recognition \cite{wang2015recipe}; used tags from social networks for food characteristics perception \cite{ofli2017saki}; or performed food localization and recognition in the wild for egocentric vision analysis \cite{bolanos2016simultaneous}. \vspace{.5em} \textbf{Multi-Label learning.} Multi-label learning \cite{tsoumakas2006multi} consists in predicting more than one output category for each input sample. Thus, the problem of food ingredients recognition can be treated as a multi-label learning problem. Several works \cite{wang2016cnn} argued that, when working with CNNs, they have to be reformulated for dealing with multi-label learning problems. Some multi-label learning works have already been proposed for restaurant classification. So far, only one paper \cite{chen2016deep} has been proposed related to ingredients recognition. Their dataset, composed of 172 food types, was manually labelled considering visible ingredients only, which limits it to finding 3 ingredients on average. Furthermore, they propose a double-output model for simultaneous food type recognition and multi-label ingredients recognition. However, the use of the food type for optimizing the model limits its capability of generalization to seen recipes and food types only. This fact becomes an important handicap in a real-world scenario when dealing with new recipes. As we demonstrate in Sections \ref{subsec:results} and \ref{subsec:visualization}, unlike \cite{chen2016deep}, our model is able to: 1) recognize the ingredients appearing in unseen recipes (see Fig.\ref{fig:ingredients_recipes5k}); 2) learn abstract representations of the ingredients directly from food appearance (see Fig.\ref{fig:neuron_activations}); and 3) infer invisible ingredients. \vspace{.5em} \textbf{Interpreting learning through visualization.} Applying visualization techniques is an important aspect in order to interpret what has been learned by our model. The authors in \cite{yosinski2015understanding} have focused on proposing new ways of performing this visualization. At the same time, they have proven that CNNs have the ability to learn high-level representations of the data and even hidden interrelated information, which can help us when dealing with ingredients that are apparently invisible in the image. \section{Methodology} \label{sec:methodology} \textbf{Deep multi-ingredients recognition.} Most of the top-performing CNN architectures have been originally proposed and intended for the problem of object recognition. At the same time, they have been proven to be directly applicable to other related classification tasks and have served as powerful pre-trained models for achieving state-of-the-art results. In our case, we compared using either the InceptionV3 \cite{szegedy2016rethinking} or the ResNet50 \cite{he2016deep} as the base architecture for our model.
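As an illustration of this adaptation, the following minimal sketch builds such a multi-label model. It is a hedged example of ours: it assumes the current \texttt{tf.keras} interface (our experiments used Keras with a Theano backend), and the function name, optimizer and default output size (446, the size of the Ingredients101 vocabulary introduced below) are illustrative rather than the exact training configuration. The sigmoid output and binary cross-entropy loss it uses are motivated formally in the remainder of this section.
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

def build_ingredient_model(num_ingredients=446, backbone="resnet50"):
    # ImageNet-pretrained backbone with the classifier removed
    if backbone == "resnet50":
        base = keras.applications.ResNet50(weights="imagenet",
                                           include_top=False, pooling="avg")
    else:
        base = keras.applications.InceptionV3(weights="imagenet",
                                              include_top=False, pooling="avg")
    # sigmoid (not softmax) so several ingredients can be active at once
    outputs = layers.Dense(num_ingredients, activation="sigmoid")(base.output)
    model = keras.Model(base.input, outputs)
    # binary cross-entropy treats each ingredient as an independent
    # yes/no label
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
\end{verbatim}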
We pre-trained this network on the data from the ILSVRC challenge \cite{russakovsky2015imagenet} and modified its last layer to apply a multi-label classification over the $N$ possible output ingredients. When dealing with classification problems, CNNs typically use the softmax activation in the last layer. The softmax function allows obtaining a probability distribution for the input sample $x$ over all possible outputs and thus predicts the most probable outcome, $\hat{y}_x = \argmax_{y_i} P(y_i|x)$. The softmax activation is usually combined with the categorical cross-entropy loss function $L_c$ during model optimization, which penalizes the model when the optimal output value is far away from 1: \begin{equation} \vspace{-1em} L_c = - \sum_x \log(P(\hat{y}_x|x)). \end{equation} In our model, we are dealing with ingredients recognition in a multi-label framework. Therefore, the model must predict for each sample $x$ a set of outputs represented as a binary vector $\hat{Y}_x = \{\hat{y}_x^1, ..., \hat{y}_x^N\}$, where $N$ is the number of output labels and each $\hat{y}_x^i$ is either 1 or 0 depending on whether or not label $i$ is present in sample $x$. For this reason, instead of softmax, we use a sigmoid activation function: \begin{equation} P(y_i|x) = \frac{1}{1+\exp(-f(x)_i)}\,, \end{equation} where $f(x)_i$ denotes the $i$-th output activation of the network, which allows multiple highly activated outputs. To account for the binary representation of $\hat{Y}_x$, we chose the binary cross-entropy loss function $L_b$ \cite{buja2005loss}: \begin{equation} \vspace{-1em} L_b = - \sum_x \sum_{i=1}^N (\hat{y}_x^i\cdot \log(P(y_i|x)) + (1 - \hat{y}_x^i) \cdot \log(1 - P(y_i|x))) \end{equation} which during backpropagation rewards the model when the output values are close to the target vector $\hat{Y}_x$ (i.e. either close to 1 for positive labels or close to 0 for negative labels). \section{Results} \label{sec:results} \vspace{-1em} In this section, we describe the two datasets proposed for the problem of food ingredients recognition. Then, we describe our experimental setup and, at the end, we present the final results obtained both for ingredients recognition on known classes and for generalization on samples never seen by the model. \vspace{-1em} \subsection{Datasets} In this section we describe the datasets proposed for food ingredients recognition and the already public datasets used. \vspace{.5em} \textbf{Food101 \cite{bossard2014food}} is one of the most widely used datasets for food recognition. It consists of 101,000 images equally divided into 101 food types. \vspace{.5em} \textbf{Ingredients101}\footnote{\url{http://www.ub.edu/cvub/ingredients101/}} is a dataset for ingredients recognition that we constructed and make public in this article.
It consists of the list of most common ingredients for each of the 101 types of food contained in the Food101 dataset, making a total of 446 unique ingredients (9 per recipe on average). The dataset was divided into training, validation and test splits, making sure that the 101 food types were balanced. We make public the lists of ingredients together with the train/val/test split applied to the images from the Food101 dataset. \vspace{.5em} \textbf{Recipes5k}\footnote{\url{http://www.ub.edu/cvub/recipes5k/}} is a dataset for ingredients recognition with 4,826 unique recipes, each composed of an image and the corresponding list of ingredients. It contains a total of 3,213 unique ingredients (10 per recipe on average). Each recipe is an alternative way to prepare one of the 101 food types in Food101. Hence, it captures at the same time the intra-class variability and inter-class similarity of cooking recipes. The nearly 50 alternative recipes belonging to each of the 101 classes were divided into train, val and test splits in a balanced way. We also make this dataset public together with the split division. A problem when dealing with the 3,213 raw ingredients is that many of them are sub-classes (e.g. 'sliced tomato' or 'tomato sauce') of more general versions of themselves (e.g. 'tomato'). Thus, we propose a simplified version by applying a simple removal of overly-descriptive particles\footnote{\url{https://github.com/altosaar/food2vec}} (e.g. 'sliced' or 'sauce'), resulting in 1,013 ingredients used for additional evaluation (see Section \ref{subsec:results}). We must note the difference between our proposed datasets and the one from \cite{chen2016deep}. While we consider any ingredient present in a recipe, either visible or not, the work in \cite{chen2016deep} only manually labelled the visible ingredients in certain foods. Hence, a comparison between both works is infeasible. \begin{figure}[!ht] \vspace{-2em} \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[trim={9cm 0.5cm 0 0.2cm},clip,width=0.8\linewidth]{fig/ingredients_101_results.jpg} \caption{\label{fig:ingredients_101_results} Ingredients101 samples.} \end{subfigure}\hfill% \begin{subfigure}{.48\textwidth} \centering \includegraphics[trim={0 23cm 0 0},clip,width=0.9\linewidth]{fig/recipes5k_results.jpg} \caption{\label{fig:ingredients_recipes5k} Recipes5k using the fine-grained 3,213 ingredients (left), and using the 1,013 simplified ingredients (right).} \end{subfigure} \caption{Our method's results. TPs in green, FPs in red and FNs in orange.} \vspace{-1em} \end{figure} \begin{table*}[ht] \begin{center} \begin{tabular}{l c c c c c c} & \multicolumn{3}{c}{Validation} & \multicolumn{3}{c}{Test} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & Prec & Rec & $F_1$ & Prec & Rec & $F_1$ \\ \hline Random prediction & 2.05 & 2.01 & 2.03 & 2.06 & 2.01 & 2.04 \\ InceptionV3 + Ingredients101 & 80.86 & 72.12 & 76.24 & 83.51 & \textbf{76.87} & 80.06\\ ResNet50 + Ingredients101& 84.80 & 67.62 & 75.24 & \textbf{88.11} & 73.45 & \textbf{80.11}\\ \hline \end{tabular} \end{center} \caption{\label{tab:results_ingredients101}Ingredients recognition results obtained on the dataset Ingredients101. Prec stands for \textit{Precision}, Rec for \textit{Recall} and $F_1$ for \textit{$F_1$ score}. All measures reported in \%.
The best test results are highlighted in boldface.}
\vspace{-1em}
\end{table*}

\begin{table*}[ht]
\vspace{-1em}
\begin{center}
\begin{tabular}{l c c c c c c}
& \multicolumn{3}{c}{Validation} & \multicolumn{3}{c}{Test} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
& Prec & Rec & $F_1$ & Prec & Rec & $F_1$ \\
\hline
Random prediction & 0.33 & 0.32 & 0.33 & 0.54 & 0.53 & 0.53 \\
InceptionV3 + Ingredients101 & & & & 23.80 & 18.24 & 20.66 \\
ResNet50 + Ingredients101 & & & & 26.28 & 16.85 & 20.54 \\
InceptionV3 + Recipes5k & 36.18 & 20.69 & 26.32 & 35.47 & \textbf{21.00} & \textbf{26.38}\\
ResNet50 + Recipes5k & 38.41 & 19.67 & 26.02 & \textbf{38.93} & 19.57 & 26.05 \\
\hline
Random prediction & 6.27 & 6.29 & 6.28 & 6.14 & 6.24 & 6.19 \\
InceptionV3 + Ingredients101 & & & & 44.01 & 34.04 & 38.39 \\
ResNet50 + Ingredients101 & & & & 47.53 & 30.91 & 37.46 \\
InceptionV3 + Recipes5k & 56.77 & 31.40 & 40.44 & 55.37 & 31.52 & 40.18\\
ResNet50 + Recipes5k & 56.73 & 28.07 & 37.56 & \textbf{58.55} & 28.49 & 38.33 \\
InceptionV3 + Recipes5k simplified & 53.91 & 42.13 & 47.30 & 53.43 & \textbf{42.77} & \textbf{47.51}\\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:results_recipes5k}Ingredients recognition results on Recipes5k (top) and on Recipes5k simplified (bottom). Prec stands for \textit{Precision}, Rec for \textit{Recall} and $F_1$ for \textit{$F_1$ score}. All measures reported in \%. Best test results are highlighted in boldface.}
\vspace{-2.5em}
\end{table*}

\begin{figure*}[!ht]
\centering
\includegraphics[trim={0 18cm 0 0},clip,width=0.65\textwidth]{fig/neuron_activations_v2.jpg}
\caption{\label{fig:neuron_activations} Visualization of neuron activations. Each row is associated with a specific neuron from the network. The images with top activation are shown, as well as the top activated ingredient they have in common. The name of each image's food class is shown only for visualization purposes; it is displayed in green if the recipe contains the top ingredient, and in red otherwise.}
\vspace{-2em}
\end{figure*}

\vspace{-1em}
\subsection{Experimental setup}
Our model was implemented in Keras\footnote{\url{www.keras.io}}, using Theano as backend. Next, we detail the different configurations and tests performed.

\textbf{Random prediction}: (baseline) a set of $K$ labels is generated uniformly at random among all possible outputs. $K$ depends on the average number of labels per recipe in the corresponding dataset.

\textbf{InceptionV3 + Ingredients101}: InceptionV3 model pre-trained on ImageNet and adapted for multi-label learning.

\textbf{ResNet50 + Ingredients101}: ResNet50 model pre-trained on ImageNet and adapted for multi-label learning.

\textbf{InceptionV3 + Recipes5k}: InceptionV3 model pre-trained on InceptionV3 + Ingredients101.

\textbf{ResNet50 + Recipes5k}: ResNet50 model pre-trained on ResNet50 + Ingredients101.

\vspace{-1.5em}
\subsection{Experimental results}\label{subsec:results}
\vspace{-0.5em}
In Table \ref{tab:results_ingredients101}, we show the ingredient recognition results on the Ingredients101 dataset, and Fig.\ref{fig:ingredients_101_results} shows some qualitative results. Both the numerical results and the qualitative examples demonstrate the high performance of the models in most cases. Note that, although multi-label classification is applied, since all the samples from a food class share the same set of ingredients, the model indirectly learns the inherent food classes.
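To make the configurations above concrete, the following is a minimal sketch of how a pre-trained CNN can be adapted for multi-label ingredients recognition in Keras, the framework used in this work. The pooling head, optimizer and variable names are our own illustrative assumptions, not a reproduction of the authors' code:
\begin{verbatim}
from keras.applications import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

N = 446  # number of ingredient labels in Ingredients101

# ImageNet-pre-trained backbone without its softmax head
base = InceptionV3(weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base.output)
# sigmoid units allow several ingredients to be active at once
outputs = Dense(N, activation='sigmoid')(x)

model = Model(base.input, outputs)
# binary cross-entropy matches the multi-label target vectors
model.compile(optimizer='adam', loss='binary_crossentropy')
\end{verbatim}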
Furthermore, looking at the results on the Recipes5k dataset in Table \ref{tab:results_recipes5k} (top), we can see that the very same model obtains reasonable results even though it was not specifically trained on that dataset. Note that only test results are reported for the models trained on Ingredients101 because we only intend to show their generalization capabilities on new data. Comparing these results with those of the models specifically trained on Recipes5k, it appears that, as expected, a model trained on a set of samples with high variability of output labels obtains better results on never seen recipes and is thus more capable of generalizing to unseen data.

Table \ref{tab:results_recipes5k} (bottom) shows the results on the Recipes5k dataset with a simplified list of ingredients. Note that for all tests the list was simplified only during the evaluation procedure, in order to maintain the fine-grained recognition capabilities of the model, with the exception of \textit{Inception V3 + Recipes5k simplified}, where the simplified set was also used for training. Comparing the results, the simplification of the ingredients list enhances the performance of the model, which reaches more than 40\% in the $F_1$ metric, and 47.5\% when also trained on the simplified set. Fig.\ref{fig:ingredients_recipes5k} shows a comparison of the output of the model using either the fine-grained or the simplified list of ingredients. Overall, although usually only a single type of semantically related fine-grained ingredients (e.g. 'large eggs', 'beaten eggs' or 'eggs') appears at the same time in the ground truth, it seems that the model is inherently learning an embedding of the ingredients. Therefore, it is able to understand that some fine-grained ingredients are related and predicts them together in the fine-grained version (see the waffles example).

\vspace{-1em}
\subsection{Neuron representation of ingredients}\label{subsec:visualization}
\vspace{-0.5em}
When training a CNN model, it is important to understand what it is able to learn and interpret from the data. To this purpose, we visualized the activations of certain neurons of the network in order to interpret what they are able to learn. Fig.\ref{fig:neuron_activations} shows the results of this visualization. As we can see, certain neurons of the network appear to be specialized in distinguishing specific ingredients. For example, most images of the 1st and 2nd rows illustrate that the characteristic shape of a hamburger implies that it will probably contain the ingredients 'lettuce' and 'ketchup'. Also, looking at the 'granulated sugar' row, we can see that the model learns to interpret the characteristic appearance of \textit{creme brulee} and \textit{macarons} as containing sugar, even though the sugar itself is not visible in the image.

\section{Conclusions and future work}
\label{sec:conclusions}
\vspace{-0.5em}
Analysing both the quantitative and qualitative results, we can conclude that the proposed model and the two published datasets offer very promising results for the multi-label problem of food ingredients recognition. Our proposal achieves strong generalization on unseen recipes and sets the basis for applying further, more detailed food analysis methods. As future work, we will create a hierarchical structure \cite{wu2016learning} of relationships among the existing ingredients and extend the model to utilize this information.
\vspace{-1em}
\section{Introduction}
\label{sec:introduction}

The {\it tangent numbers} \footnote{Some mathematical literature uses a slightly different notation where $\tan x$ is written $T_1 x + T_2 x^3/3! + T_3 x^5/5! + \cdots$ (See \cite{KnuthBuckholtz1967})} $(T_{2n+1})_{n\geq 0}$ appear in the Taylor expansion of $\tan(x)$:
\begin{equation}
\tan x = \sum_{n\geq 0} T_{2n+1} \frac {x^{2n+1}}{(2n+1)!}.
\end{equation}
It is known that the tangent number $T_{2n+1}$ is equal to the number of all {\it alternating permutations} of length $2n+1$ (see \cite{Andre1879, Euler1755, KnuthBuckholtz1967, Nielsen1923}). Also, $T_{2n+1}$ counts the number of {\it increasing labelled complete binary trees} with $2n+1$ vertices. This combinatorial interpretation immediately implies that $T_{2n+1}$ is divisible by $2^n$. However, a stronger divisibility property, related to the study of Bernoulli and Genocchi numbers \cite{Carlitz1960,Carlitz1971, RiordanStein1973}, is known, as stated in the following theorem.
\begin{thm}\label{th:tan}
The number $(n+1)T_{2n+1}$ is divisible by $2^{2n}$, and the quotient is an odd number.
\end{thm}
The quotient is called the {\it Genocchi number} and denoted by
\begin{equation}\label{eq:genocchi}
G_{2n+2}:=(n+1)T_{2n+1}/2^{2n}.
\end{equation}
Let $$g(x):=\displaystyle\sum_{n\ge 0}G_{2n+2}\frac{x^{2n+2}}{(2n+2)!}$$ be the exponential generating function for the Genocchi numbers. Then, \eqref{eq:genocchi} is equivalent to
\begin{equation}\label{eq:gx}
g(x)=x\tan{\frac x2}.
\end{equation}
The initial values of the tangent and Genocchi numbers are listed below:
$$
\begin{tabular}{c | c c c c c c c }
$n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\
\hline
$T_{2n+1}$ & 1 & 2 & 16 & 272 & 7936 & 353792 & 22368256\\
$G_{2n+2}$ & 1 & 1 & 3 & 17 & 155 & 2073 & 38227\\
\hline
\end{tabular}
$$

\medskip
The fact that the Genocchi numbers are odd integers is traditionally proved by using the von Staudt-Clausen theorem on Bernoulli numbers and Fermat's little theorem \cite{Carlitz1960,Carlitz1971, RiordanStein1973}. Barsky \cite{Barsky1980, FoataHan2008} gave a different proof by using the Laplace transform. To the best of the authors' knowledge, no simple combinatorial proof has been derived yet, and it is the purpose of this paper to provide one. Our approach is based on the geometry of the so-called {\it leaf-labelled tree} and the fact that the hook length $h_v$ of such a tree is always an odd integer (see Sections \ref{sec:BT} and~\ref{sec:tan}).

In Section \ref{sec:kary} we consider $k$-ary trees instead of binary trees and obtain a new generalization of the Genocchi numbers. For each integer $k\geq 2$, let $L_{kn+1}^{(k)}$ be the number of increasing labelled complete $k$-ary trees with $kn+1$ vertices. Thus, $L^{(k)}_{kn+1}$ appears as a natural generalization of the tangent number. The general result is stated next.
\begin{thm}\label{th:kary}
(a) For each integer $k\geq 2$, the integer
$$\frac{(k^2 n-kn+k)!\,L^{(k)}_{kn+1}}{ (kn+1)!}$$
is divisible by $(k!)^{kn+1}$.

(b) Moreover, the quotient
\begin{align*}
M^{(k)}_{k^2 n-kn+k}:= \frac{(k^2 n-kn+k)!\, L^{(k)}_{kn+1}}{(k!)^{kn+1}(kn+1)!}\equiv
\begin{cases}
1\pmod {k}, &k=p, \\
1\pmod {p^2}, &k=p^t,\ t\ge 2, \\
0\pmod {k}, &\text{otherwise},
\end{cases}
\end{align*}
where $n\ge 1$ and $p$ is a prime number.
\end{thm}
Theorem \ref{th:kary} can be seen as a direct generalization of Theorem \ref{th:tan} if we restate the problem in terms of generating functions.
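As a quick numerical check of the table of initial values above, the tangent numbers can be computed from the Taylor recurrence implied by $\tan' x = 1+\tan^2 x$, and the Genocchi numbers then follow from \eqref{eq:genocchi}. A short sketch (the function name is ours):
\begin{verbatim}
from fractions import Fraction
from math import factorial

def tangent_numbers(m):
    # a[n] = T_{2n+1}/(2n+1)!; tan' = 1 + tan^2 gives
    # (2n+1) a[n] = sum_{i+j=n-1} a[i] a[j] for n >= 1
    a = [Fraction(1)]
    for n in range(1, m):
        a.append(sum(a[i] * a[n - 1 - i] for i in range(n)) / (2 * n + 1))
    return [int(a[n] * factorial(2 * n + 1)) for n in range(m)]

T = tangent_numbers(7)
G = [(n + 1) * T[n] // 4**n for n in range(7)]
assert T == [1, 2, 16, 272, 7936, 353792, 22368256]
assert G == [1, 1, 3, 17, 155, 2073, 38227]
\end{verbatim}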
Let $\phi^{(k)}(x)$ and $\psi^{(k)}(x)$ denote the exponential generating functions for $L^{(k)}_{kn+1}$ and $M^{(k)}_{k^2n-kn+k}$, respectively, that is,
\begin{align*}
\phi^{(k)}(x)&=\sum_{n\ge 0}L^{(k)}_{kn+1}\frac{x^{kn+1}}{(kn+1)!}; \\
\psi^{(k)}(x)&=\sum_{n\ge 0}M^{(k)}_{k^2n-kn+k}\frac{x^{k^2n-kn+k}}{(k^2n-kn+k)!}.
\end{align*}
If $k$ is clear from the context, the superscript $(k)$ will be omitted. Thus, we will write $L_{kn+1}:=L^{(k)}_{kn+1},\, M_{k^2 n-kn+k} := M^{(k)}_{k^2 n-kn+k},\, \phi(x):=\phi^{(k)}(x),\, \psi(x):=\psi^{(k)}(x)$. From Theorem \ref{th:kary} we have
\begin{align*}
\phi'(x)=1+\phi^k(x);
\end{align*}
\begin{align*}
{\psi(x)}=x \cdot \phi\left(\displaystyle \frac{x^{k-1}}{k!}\right).
\end{align*}
The last relation becomes the well-known formula \eqref{eq:gx} when $k=2$.

Several generalizations of the Genocchi numbers have been studied in recent decades. They are based on the Gandhi polynomials \cite{Domaratzki2004,Carlitz1971, RiordanStein1973}, Seidel triangles \cite{DumontRand1994, ZengZhou2006}, continued fractions \cite{Viennot1982, HanZeng1999den}, combinatorial models \cite{HanZeng1999den}, etc. Our generalization seems to be the first extension dealing with the divisibility of $(n+1)T_{2n+1}$ by $2^{2n}$. It also raises the following open problems.

\smallskip
{\bf Problem 1}. Find a proof of Theorem \ref{th:kary} \`a la Carlitz, or \`a la Barsky.

\smallskip
{\bf Problem 2}. Find the Gandhi polynomials, Seidel triangles, continued fractions and a combinatorial model for the new generalization of Genocchi numbers $M_{k^2n-kn+k}$ \`a la Dumont.

\smallskip
{\bf Problem 3}. Evaluate $m_n:=M_{k^2n-kn+k} \pmod k$ for $k=p^t$, where $p$ is a prime number and $t\geq 3$. It seems that the sequence $(m_n)_{n\geq 0}$ is always periodic for any $p$ and $t$. Computer calculation has provided the initial values:
\begin{align*}
(m_n)_{n\geq 0} &= (1,1,5,5,1,1,5,5,\cdots) \qquad \text{for } k=2^3,\\
(m_n)_{n\geq 0} &= (1,1,10,1,1,10,1,1,10,\cdots) \qquad \text{for } k=3^3,\\
(m_n)_{n\geq 0} &= (1,1,126,376,126,1,1,126,376,126,\cdots) \qquad \text{for } k=5^4, \\
(m_n)_{n\geq 0} &= (1,1,13,5,9,9,5,13,1,1,13,5,9,9,5,13,\cdots) \qquad \text{for } k=2^4.
\end{align*}

\section{Increasing labelled binary trees}\label{sec:BT}
In this section we recall some basic notions on increasing labelled binary trees. Consider the set $\mathcal{T}(n)$ of all (unlabelled) binary trees with $n$ vertices. For each $t\in \mathcal{T}(n)$ let $\mathcal{L}(t)$ denote the set of all {\it increasing labelled binary trees} of shape $t$, obtained from $t$ by labeling its $n$ vertices with $\{1,2,\ldots,n\}$ in such a way that the label of each vertex is less than that of its descendants. For each vertex~$v$ of~$t$, the {\it hook length} of~$v$, denoted by $h_v(t)$ or $h_v$, is the number of descendants of~$v$ (including $v$). The {\it hook length formula} (\cite[\S5.1.4. Ex. 20]{Knuth1998Vol3}) claims that the number of increasing labelled binary trees of shape $t$ is equal to $n!$ divided by the product of the $h_v$'s ($v\in t$):
\begin{equation}\label{eq:hooklength}
\#\mathcal{L}(t)=\frac{n!}{\prod_{v\in t} h_v}.
\end{equation}
Let $\mathcal{S}(2n+1)$ denote the set of all {\it complete binary trees} $s$ with $2n+1$ vertices, which are defined to be the binary trees such that the two subtrees of each vertex are either both empty or both non-empty. For example, there are five complete binary trees with $2n+1=7$ vertices, labelled by their hook lengths in Fig.~1.
\medskip
\pythonbegin
beginfig(1, "1.5mm");
setLegoUnit([3,3])
#showgrid([0,0], [20,14]) # show grid
r=0.15
rtext=r+0.5 # distance between text and the point
def ShowPoint(ptL, labelL, dir, fill=True):
    [circle(p[z-1], r, fill=fill) for z in ptL]
    [label(p[ptL[z]-1], labelL[z], dist=[rtext, rtext], dist_direction=dir) for z in range(len(ptL))]
dist=[1,1,1,1]
p=btree([6,4,7,2,5,1,3], [4,8], dist=dist, dot="fill", dotradius=r)
ShowPoint([6,7,5,3], [1,1,1,1], 270)
ShowPoint([4,2,1], [3,5,7],135)
label(addpt(p[0],[0,1.6]), "$s_1$")
p=btree([4,2,6,5,7,1,3], [8.2,8], dist=dist, dot="fill", dotradius=r)
ShowPoint([4,6,7], [1,1,1], 270)
ShowPoint([2,1], [5,7],135)
ShowPoint([5,3], [3,1],45)
label(addpt(p[0],[0,1.6]), "$s_2$")
p=btree([2,1,6,4,7,3,5], [12.4,8], dist=dist, dot="fill", dotradius=r)
ShowPoint([5,6,7], [1,1,1], 270)
ShowPoint([4,2,1], [3,1,7],135)
ShowPoint([3], [5],45)
label(addpt(p[0],[0,1.6]), "$s_3$")
p=btree([2,1,4,3,6,5,7], [16.6,8], dist=dist, dot="fill", dotradius=r)
ShowPoint([2,4,6,7], [1,1,1,1], 270)
ShowPoint([5,3,1], [3,5,7],45)
label(addpt(p[0],[0,1.6]), "$s_4$")
p=btree([4,2,5,1,6,3,7], [22.8,8], dist=[1.2, 0.8], dot="fill", dotradius=r)
ShowPoint([5,4,6,7], [1,1,1,1], 270)
ShowPoint([1,2], [7,3],135)
ShowPoint([3], [3],45)
label(addpt(p[0],[0,1.6]), "$s_5$")
endfig();
\pythonend

\begin{center}{\includegraphics[width=0.8\textwidth]{tan1.eps}}\end{center}
\begin{center}{Fig.~1.~Complete binary trees with 7 vertices}\end{center}

\medskip
We now define an equivalence relation on $\mathcal{S}(2n+1)$, called {\it pivoting}. A {\it basic pivoting} is an exchange of the two subtrees of a non-leaf vertex $v$. For $s_1, s_2\in \mathcal{S}(2n+1)$, if $s_1$ can be changed to $s_2$ by a finite sequence of basic pivotings, we write $s_1\sim s_2$. It is routine to check that $\sim$ is an equivalence relation. Let $\mathcal{\bar S}(2n+1) = \mathcal{S}(2n+1)/\!\!\sim$. Since $s_1\sim s_2$ implies that $\#\mathcal{L}(s_1)=\#\mathcal{L}(s_2)$, we define $\#\mathcal{L}(\bar s)=\#\mathcal{L}(s)$ for $s\in \bar s$. Then
\begin{equation}\label{eq:normaltree}
T_{2n+1} = \sum_{\bar s\in\mathcal{\bar S}(2n+1)} T(\bar s),
\end{equation}
where
\begin{equation}\label{eq:Ts}
T(\bar s)=\sum_{s \in \bar s} \# \mathcal{L}(s) = \#\bar s \times \#\mathcal{L}(\bar s).
\end{equation}
For example, considering $\mathcal{S}(7)$ (see Fig.~1), we have
\[\begin{array}{cccccc}
\text{shape} & s_1 & s_2 & s_3 & s_4 & s_5 \\
\prod_v h_v & 3\cdot 5\cdot 7 & 3\cdot 5\cdot 7 & 3\cdot 5\cdot 7 & 3\cdot 5\cdot 7 & 3\cdot 3\cdot 7 \\
n!/\prod_v h_v & 48 & 48 & 48 & 48 & 80
\end{array}\]
Trees $s_1, s_2, s_3$ and $s_4$ belong to the same equivalence class $\overline {s_1}$, while $s_5$ is in another equivalence class $\overline {s_5}$. Thus $T(\overline{s_1})=4\times 48=192$, $T(\overline{s_5})=80$ and $T_7=T(\overline{s_1})+T(\overline{s_5})=272$.

\medskip
The pivoting can also be viewed as an equivalence relation on the set $\cup_{s\in \bar s} \mathcal{L}(s)$, that is, on all increasing labelled trees of shape $s$ with $s\in \bar s$. Since the number of non-leaf vertices in $s$ is $n$, there are exactly $2^n$ labelled trees in each equivalence class. Hence, $T(\bar s)$ is divisible by $2^n$. Taking again the example above, $T(\overline{s_1})/2^3=24$, $T(\overline{s_5})/2^3=10$, and $T_7/2^3 = 24+10=34$.

\medskip
This is not enough to derive that $2^{2n}\mid (n+1)T_{2n+1}$. However, the above process leads us to reconsider the question in each equivalence class.
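Incidentally, the numbers in this example are easy to reproduce by machine: enumerating the shapes in $\mathcal{S}(2n+1)$ and applying the hook length formula \eqref{eq:hooklength} yields $T_{2n+1}$ directly. A possible sketch (our own encoding of shapes as nested pairs):
\begin{verbatim}
from math import factorial

def shapes(m):
    # all complete binary tree shapes with m vertices (m odd);
    # a leaf is (), an inner vertex is a pair (left, right)
    if m == 1:
        return [()]
    out = []
    for ml in range(1, m - 1, 2):
        for left in shapes(ml):
            for right in shapes(m - 1 - ml):
                out.append((left, right))
    return out

def size(t):
    return 1 if t == () else 1 + size(t[0]) + size(t[1])

def hook_product(t):  # H(s), the product of all hook lengths
    return 1 if t == () else size(t) * hook_product(t[0]) * hook_product(t[1])

def tangent_number(m):  # T_m = sum over shapes of m!/H(s)
    return sum(factorial(m) // hook_product(s) for s in shapes(m))

assert [tangent_number(m) for m in (1, 3, 5, 7)] == [1, 2, 16, 272]
\end{verbatim}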
We can show that the divisibility actually holds in each $\bar s$, as stated below.

\medskip
\begin{prop}\label{th:divisibilitybar}
For each $\bar s\in \mathcal{\bar S}(2n+1)$, the integer $(n+1)T(\bar s)$ is divisible by $2^{2n}$.
\end{prop}

\medskip
Let $G(\bar s):= (n+1)T(\bar s)/2^{2n}$. Proposition \ref{th:divisibilitybar} implies that $G(\bar s)$ is an integer. By \eqref{eq:genocchi} and \eqref{eq:normaltree},
\begin{equation}\label{eq:Genocchi}
G_{2n+2} = \sum_{\bar s\in\mathcal{\bar S}(2n+1)} G(\bar s).
\end{equation}
We give an example here and present the proof in the next section. For $n=4$, there are three equivalence classes.

\medskip
\pythonbegin
beginfig(2, "1.6mm");
setLegoUnit([3,3])
#showgrid([0,0], [20,14]) # show grid
r=0.15
rtext=r+0.5 # distance between text and the point
def ShowPoint(ptL, labelL, dir, fill=True):
    [circle(p[z-1], r, fill=fill) for z in ptL]
    [label(p[ptL[z]-1], labelL[z], dist=[rtext, rtext], dist_direction=dir) for z in range(len(ptL))]
dist=[1,1,1,1]
p=btree([8,6,9,4,7,2,5,1,3], [4,8], dist=dist, dot="fill", dotradius=r)
ShowPoint([8,9,7,5,3], [1,1,1,1,1], 270)
ShowPoint([6,4,2,1], [3,5,7,9],135)
label(addpt(p[0],[0,1.6]), "$s_1\in\overline{s_1}$")
dist=[1.6,1.6,1,1]
p=btree([6,4,7,2,8,5,9,1,3], [11,8], dist=dist, dot="fill", dotradius=r)
ShowPoint([8,9,7,6,3], [1,1,1,1,1], 270)
ShowPoint([4,2,1], [3,7,9],135)
ShowPoint([5], [3],45)
label(addpt(p[0],[0,1.6]), "$s_2\in\overline{s_2}$")
dist=[1.6,1,1,1]
p=btree([6,4,7,2,5,1, 8,3,9], [18,8], dist=dist, dot="fill", dotradius=r)
ShowPoint([8,9,7,6,5], [1,1,1,1,1], 270)
ShowPoint([4,2,1], [3,5,9],135)
ShowPoint([3], [3],45)
label(addpt(p[0],[0,1.6]), "$s_3\in\overline{s_3}$")
endfig();
\pythonend

\begin{center}{\includegraphics[width=0.8\textwidth]{tan2.eps}}\end{center}
\begin{center}{Fig.~2.~Three equivalence classes for $n=4$}\end{center}
\goodbreak
In this case, Proposition \ref{th:divisibilitybar} and relation \eqref{eq:Genocchi} can be verified by the following table.
\nobreak
\begin{center}\begin{tabular}{cccccc}
$\bar s$ & $\#\bar s$ & $\prod h_v$ & $\#\mathcal{L}(\bar s)$ & $T(\bar s)$ & $G(\bar s)$ \\
\hline
$\overline{s_1}$ & 8 & $3\cdot 5\cdot 7\cdot 9$ & 384 & 3072 & 60 \\
$\overline{s_2}$ & 2 & $3\cdot 3\cdot 7\cdot 9$ & 640 & 1280 & 25 \\
$\overline{s_3}$ & 4 & $3\cdot 3\cdot 5\cdot 9$ & 896 & 3584 & 70 \\
\hline
sum & 14 &\quad & \quad & 7936 & 155
\end{tabular}\end{center}

\section{Combinatorial proof of Theorem \ref{th:tan}}
\label{sec:tan}
Let $n$ be a nonnegative integer and let $\bar s\in \mathcal{\bar S}(2n+1)$ be an equivalence class of complete binary trees. The key to the proof is the fact that the hook length $h_v$ is always an odd integer. For each complete binary tree $s$, we denote the product of all hook lengths by $H(s)=\prod_{v\in s} h_v$. Also, let $H(\bar s)=H(s)$ for $s\in \bar s$, since all trees in the equivalence class $\bar s$ share the same product of all hook lengths.
\begin{lem}\label{lem:hooklength}
For each complete binary tree $s$, the product of all hook lengths $H(s)$ is an odd integer.
\end{lem}
By Lemma \ref{lem:hooklength}, Proposition \ref{th:divisibilitybar} has the following equivalent form.
\begin{prop}\label{th:divisibilitybarh}
For each $\bar s\in \mathcal{\bar S}(2n+1)$, the integer $(2n+2)H(\bar s)T(\bar s)$ is divisible by $2^{2n+1}$.
\end{prop}
\begin{proof}
By identities \eqref{eq:Ts} and \eqref{eq:hooklength} we have
\begin{align}
(2n+2)H(\bar s)T(\bar s)&=(2n+2)H(\bar s)\times \#\bar s \times \#\mathcal{L}(\bar s) \nonumber\\
&=(2n+2)\times \#\bar s\times (2n+1)! \nonumber \\
&=(2n+2)!\times \#\bar s.\label{eq:combinatorialinterpretation}
\end{align}
Suppose that $s$ is a complete binary tree with $2n+1$ vertices; then $s$ has $n+1$ leaves. Let $s^+$ be the complete binary tree with $4n+3$ vertices obtained from $s$ by replacing each leaf of $s$ by the complete binary tree with 3 vertices. So $s^+$ has $2n+2$ leaves. Let $\mathcal{L}^+(s^+)$ be the set of all leaf-labelled trees of shape $s^+$, obtained from $s^+$ by labeling its $2n+2$ leaves with $\{1,2,\ldots, 2n+2\}$. It is clear that $\#\mathcal{L}^+(s^+)=(2n+2)!$. By (\ref{eq:combinatorialinterpretation}) we have the following combinatorial interpretation:

\medskip
{\it For each $\bar s\in \mathcal{\bar S}(2n+1)$, the number of all leaf-labelled trees of shape $s^+$ such that $s\in \bar s$ is equal to $(2n+2)H(\bar s)T(\bar s)$. }

\medskip
This time we take the pivoting as an equivalence relation on the set of leaf-labelled trees $\cup_{s\in \bar s}\mathcal{L}^+(s^+)$. Since a leaf-labelled tree $s^+$ has $2n+1$ non-leaf vertices, and each non-trivial sequence of pivotings changes the assignment of labels to the leaves, every equivalence class contains $2^{2n+1}$ elements. Hence, we can conclude that $(2n+2)H(\bar s)T(\bar s)$ is divisible by $2^{2n+1}$.
\end{proof}
For example, in Fig.~3, we reproduce a labelled tree with $9$ vertices and a leaf-labelled tree with $19$ vertices. There are $4$ non-leaf vertices in the labelled tree and $9$ non-leaf vertices in the leaf-labelled tree, as indicated by the fat dot symbol ``$\bullet$''. Compared with the traditional combinatorial model, our method increases the number of non-leaf vertices. Consequently, we establish a stronger divisibility property.

\medskip
\pythonbegin
beginfig(3, "1.6mm");
setLegoUnit([3,3])
#showgrid([0,0], [20,14]) # show grid
dist=[1.6, 1.6, 1]
r=0.15
rtext=r+0.5 # distance between text and the point
def ShowPoint(ptL, labelL, dir, fill=True):
    [circle(p[z-1], r, fill=fill) for z in ptL]
    [label(p[ptL[z]-1], labelL[z], dist=[rtext, rtext], dist_direction=dir) for z in range(len(ptL))]
p=btree([6,4,7,2,8,5,9,1,3],pt=[7,0], dist=dist, dot="frame", dotradius=r, labeled=False)
ShowPoint([1,2,4], [1,2,4], 135, fill=True)
ShowPoint([6,7,8,9,3], [8,5,7,9,3], 270, fill=False)
ShowPoint([5], [6], 60, fill=True)
label(addpt(p[0],[0,2.2]), "Labeled tree")
label(addpt(p[0],[0,1.4]), "$n=4$ non-leaf vertices")
# third tree is composed by 2 trees, because dist is not equal: (9--4) small
dist=[2.2, 2, 1, 0.5]
pa=[19,0]
p=btree([10,6,11,4,12,7,13,2,14,8,15,5,16,9,17,1,3],pt=pa, dist=dist, dot="frame", dotradius=r)
[circle(p[z-1], r, fill=True) for z in [1,2,3,4,5,6,7,8,9]]
ShowPoint([10,11,12,13,14,15,16,17], [5,8,2,6,1,7,10,3], 270, fill=False)
p=btree([2,1,3], pt=p[2], dist=[dist[3]], dot="frame", dotradius=r)
ShowPoint([2,3], [9,4], 270, fill=False)
label(addpt(pa,[-0.8,2.2]), "Leaf-labelled tree")
label(addpt(pa,[-0.8,1.4]), "$2n+1=9$ non-leaf vertices")
endfig();
\pythonend

\begin{center}{\includegraphics[width=0.8\textwidth]{tan3.eps}}\end{center}
\begin{center}{Fig.~3.~Trees, non-leaf vertices and divisibilities}\end{center}

\medskip
For proving Theorem \ref{th:tan}, it remains to show that $G_{2n+2}=\sum G(\bar s)$ is an odd number.
Since $H(\bar s)$ is odd, we need only to prove that the {\it weighted Genocchi number}
\begin{equation}\label{def:fn}
f(n)=\sum_{\bar s\in \mathcal{\bar S}(2n+1)} H(\bar s)G(\bar s)
\end{equation}
is odd. For example, in Fig.~2, $G_{10}=G(\overline{s_1})+G(\overline{s_2})+G(\overline{s_3})=60+25+70=155$, and
\begin{align*}
f(4)&= H(\overline{s_1})G(\overline{s_1})+H(\overline{s_2})G(\overline{s_2})+H(\overline{s_3})G(\overline{s_3}) \cr
&=3\cdot5\cdot7\cdot9\cdot60 +3\cdot3\cdot7\cdot9\cdot25+3\cdot3\cdot5\cdot9\cdot70\cr
&=(3\cdot5\cdot7)^2\cdot 9.
\end{align*}
The weighted Genocchi number $f(n)$ is more convenient for us to study, since it has an explicit simple expression.

\medskip
\begin{thm}\label{th:fn}
Let $f(n)$ be the weighted Genocchi number defined in \eqref{def:fn}. Then,
\begin{equation}
f(n)=(1\cdot 3 \cdot 5 \cdot 7 \cdots (2n-1))^2 \cdot (2n+1)=(2n-1)!!\cdot (2n+1)!!.
\end{equation}
\end{thm}
\begin{proof}
We successively have
\begin{align*}
f(n)&= \displaystyle\sum_{\bar s} H(\bar s)G(\bar s) \\
&= \displaystyle\sum_{\bar s} \displaystyle\frac{H(\bar s) (n+1) T(\bar s)}{2^{2n}} \\
&= \displaystyle\sum_{\bar s} \displaystyle\frac{(2n+2)! \times \#\bar s}{2^{2n+1}} \\
&= \displaystyle\frac{(2n+2)!}{2^{2n+1}} \sum_{\bar s} \#\bar s \\
&= \displaystyle\frac{(2n+2)!}{2^{2n+1}}\cdot \#\mathcal{S}(2n+1).
\end{align*}
Since $\#\mathcal{S}(2n+1)$ equals the Catalan number $C_n$, we can calculate that
\begin{align*}
f(n)&= \frac{(2n+2)!}{2^{2n+1}}\cdot C_n \\
&= \frac{(2n+2)!}{2^{2n+1}}\cdot \frac{1}{n+1} \binom{2n}{n} \\
&= (2n-1)!!\cdot (2n+1)!!.\qedhere
\end{align*}
\end{proof}
From Theorem \ref{th:fn}, the weighted Genocchi number $f(n)$ is an odd number. Therefore, the ordinary Genocchi number $G_{2n+2}$ is also odd. This achieves the proof of Theorem~\ref{th:tan}.

\section{Generalizations to $k$-ary trees}
\label{sec:kary}

\medskip
In this section we assume that $k\geq 2$ is an integer. Recall the {\it hook length formula} for binary trees described in Section 2. For general rooted trees $t$ (see \cite[\S5.1.4, Ex. 20]{Knuth1998Vol3}), we also have
\begin{equation}
\#\mathcal{L}(t)=\frac{n!}{\prod_{v\in t} h_v},
\end{equation}
where $\mathcal{L}(t)$ denotes the set of all {\it increasing labelled trees} of shape $t$. Let $L_{kn+1}$ be the number of increasing labelled complete $k$-ary trees with $kn+1$ vertices. Then,
\begin{align}
L_{kn+1}=\sum_{n_1+\cdots+n_k=n-1}\binom{kn}{kn_1+1, \cdots, kn_k+1}L_{kn_1+1}\cdots L_{kn_k+1}.
\end{align}
Equivalently, the exponential generating function $\phi(x)$ for $L_{kn+1}$,
\begin{align*}
\phi(x)=\sum_{n\ge 0}L_{kn+1}\frac{x^{kn+1}}{(kn+1)!},
\end{align*}
is the solution of the differential equation
\begin{equation}\label{eq:phi}
\phi'(x)=1+\phi^k(x)
\end{equation}
such that $\phi(0)=0$. Let $\psi(x)$ be the exponential generating function for $M_{k^2 n-kn+k}$, which is defined in Theorem \ref{th:kary}:
$$\psi(x):= \sum_{n\ge 0}M_{k^2 n-kn+k}\frac{x^{k^2n-kn+k}}{(k^2n-kn+k)!}.$$
Then
\begin{equation}\label{eq:psi}
{\psi(x)}=x \cdot \phi\left(\displaystyle \frac{x^{k-1}}{k!}\right).
\end{equation}
From identities \eqref{eq:phi} and \eqref{eq:psi}, Theorem \ref{th:kary} can be restated in the form of power series and differential equations:
\begin{cor}
Let $\psi(x)$ be a power series satisfying the following differential equation
$$
x\psi'(x)-\psi(x)=\frac{k-1}{k!}\Bigl(x^k+\psi^k(x)\Bigr),
$$
with $\psi(0)=0$. Then, for each $n\geq 1$, the coefficient of $\displaystyle\frac{x^{k^2n-kn+k}}{(k^2n-kn+k)!}$ in $\psi(x)$ is an integer.
Moreover, it is congruent to $(i)$ $1 \pmod k$, if $k=p$; $(ii)$ $1 \pmod {p^2}$, if $k=p^t$ with $t\geq 2$; $(iii)$ $0 \pmod k$, otherwise.
\end{cor}

\medskip
When $k=2$, $L_{2n+1}$ is just the tangent number $T_{2n+1}$ and $M_{2n+2}$ is the Genocchi number $G_{2n+2}$. For $k=3$ and $4$, the initial values of $L_{kn+1}$ and $M_{k^2 n-kn+k}$ are reproduced below:

\medskip
\[\begin{tabular}{c|c|c}
$n$ & $L_{3n+1}$ & $M_{6n+3}$ \\
\hline
0 & 1 & 1 \\
1 & 6 & 70 \\
2 & 540 & 500500 \\
3 & 184680 & 43001959000\\
4 & 157600080 & 21100495466050000 \\
5 & 270419925600 & 39781831724228093500000
\end{tabular}\]

\smallskip
\centerline{Table for $k=3$}

\medskip
\[\begin{tabular}{c|c|c}
$n$ & $L_{4n+1}$ & $M_{12n+4}$ \\
\hline
0 & 1 & 1 \\
1 & 24 & 525525 \\
2 & 32256 & 10258577044340625 \\
3 & 285272064 & 42645955937142729593062265625 \\
4 & 8967114326016 & 6992644904557760596067178252404694486328125 \\
\end{tabular}\]
\centerline{Table for $k=4$}

\medskip
Now we define an equivalence relation ({\it $k$-pivoting}) on the set of all (unlabelled) complete $k$-ary trees $\mathcal{R}(kn+1)$. A {\it basic $k$-pivoting} is a rearrangement of the $k$ subtrees of a non-leaf vertex $v$. Let $r_1$, $r_2$ be two complete $k$-ary trees; if $r_1$ can be changed to $r_2$ by a finite sequence of basic $k$-pivotings, we write $r_1\sim r_2$. Hence the set of all complete $k$-ary trees can be partitioned into several equivalence classes. Let $\mathcal{\bar R}(kn+1) = \mathcal{R}(kn+1)/\!\!\sim$ and define $\#\mathcal{L}(\bar r) = \#{\mathcal{L}}(r)$ for $r \in\bar r$; then we have
\begin{equation}\label{eq:knormaltree}
\sum_{\bar r\in\mathcal{\bar R}(kn+1)} L(\bar r)= L_{kn+1},
\end{equation}
where
\begin{equation}
L(\bar r)=\sum_{r\in \bar r} \# \mathcal{L}(r) = \#\bar r\times \#\mathcal{L}(\bar r).
\end{equation}

\medskip
Similar to the case of the tangent numbers, this equivalence relation implies that $L(\bar r)$ is divisible by $(k!)^n$. There is a still stronger divisibility, stated below:
\begin{lem}\label{lem:kdivisibility}
For each $\bar r\in \mathcal{\bar R}(kn+1)$, the number $(k^2 n-kn+k)!L(\bar r)/(kn+1)!$ is divisible by $(k!)^{kn+1}$.
\end{lem}
\begin{proof}
First, we show that the coefficient $(k^2 n-kn+k)!/(kn+1)!$ is divisible by $(k-1)!^{kn+1}$. In fact,
\begin{equation}\label{eq:divk-1}
\displaystyle\frac{(k^2 n-kn+k)!}{(kn+1)!\cdot (k-1)!^{kn+1}} = (k^2n-kn+k)\cdot \displaystyle\prod_{i=1}^{kn+1}\binom{i(k-1)-1}{k-2}.
\end{equation}
It remains to prove
\begin{equation}\label{eq:kdivisibility}
k^{kn+1}\mid \frac{(k^2 n-kn+k)!\ L(\bar r)}{(kn+1)!\cdot (k-1)!^{kn+1}}.
\end{equation}
For each vertex $v$ in a complete $k$-ary tree $r$, we observe that the hook length $h_v$ satisfies $h_v\equiv 1\pmod k$. Thus,
\begin{align*}
H(\bar r)=\prod_{v\in r}h_v\equiv 1\pmod k.
\end{align*}
Consequently, relation (\ref{eq:kdivisibility}) is equivalent to
\begin{align*}
k^{kn+1}\mid \frac {(k^2 n-kn+k)!\ L(\bar r)H(\bar r)}{(kn+1)!\cdot (k-1)!^{kn+1}},
\end{align*}
which can be rewritten as
\begin{align}\label{eq:divk!}
(k!)^{kn+1}\mid (k^2 n-kn+k)! \times \frac{L(\bar r)H(\bar r)} {(kn+1)!}.
\end{align}
We will prove this divisibility using the following combinatorial model. Let $r$ be a complete $k$-ary tree with $kn+1$ vertices. It is easy to show that $r$ has $(k-1)n+1$ leaves. Replacing all leaves of $r$ by the complete $k$-ary tree with $k+1$ vertices, we get a new tree with $k^2 n-kn+k$ leaves, denoted by $r^+$.
Let $\mathcal{L}^+(r^+)$ be the set of all leaf-labelled trees of shape $r^+$, obtained from $r^+$ by labeling all the leaves with $\{1,2,\ldots, k^2 n-kn+k\}$. It is clear that $\#\mathcal{L}^+(r^+)=(k^2 n-kn+k)!$. On the other hand, by the hook length formula we have
\begin{equation*}
\displaystyle\frac{L(\bar r)H(\bar r)}{(kn+1)!} = \displaystyle\frac{H(\bar r)\times\#\bar r\times \#\mathcal{L}(r)}{(kn+1)!} = \#\bar r.
\end{equation*}
Thus, the right-hand side of \eqref{eq:divk!} is equal to $(k^2 n-kn+k)! \times \#\bar r $, that is, the number of all leaf-labelled trees of shape $r^+$ such that $r\in\bar r$.

\medskip
We translate the $k$-pivoting to the set of all leaf-labelled trees of shape $r^+$ such that $r\in\bar r$. It is easy to check that the $k$-pivoting is still an equivalence relation. Since a leaf-labelled tree has $kn+1$ non-leaf vertices, there are $(k!)^{kn+1}$ leaf-labelled trees in each equivalence class, which implies that the right-hand side of \eqref{eq:divk!} is divisible by $(k!)^{kn+1}$.
\end{proof}
The following two lemmas will be used for proving Theorem \ref{th:kary}.
\begin{lem}[Legendre's formula]\label{th:Legendre}
Suppose that $p$ is a prime number. For each positive integer $k$, let $\alpha(k)$ be the highest power of $p$ dividing $k!$ and $\beta(k)$ be the sum of all digits of $k$ in base $p$. Then,
\begin{align}\label{eq:Legendre}
\alpha(k)=\sum_{i\ge 1}\left\lfloor\frac{k}{p^i}\right\rfloor =\frac {k-\beta(k)}{p-1}.
\end{align}
\end{lem}
For the proof of Lemma \ref{th:Legendre}, see \cite[p. 263]{Dickson1919}.
\begin{lem}\label{th:p2}
Let $p\ge 3$ be a prime number; then
\begin{align}\label{eq:p2}
(pk+1)(pk+2)\cdots(pk+p-1)\equiv (p-1)! \pmod{p^2}.
\end{align}
\end{lem}
\begin{proof}
The left-hand side of \eqref{eq:p2} is equal to
\begin{align*}
(pk)^{p-1}e_0+\cdots+(pk)^2e_{p-3}+(pk)e_{p-2}+e_{p-1}\equiv (pk)e_{p-2}+(p-1)!\pmod{p^2},
\end{align*}
where $e_j:=e_j(1, 2, \cdots, p-1)$ are the elementary symmetric functions \cite{Macdonald1995}. Since
\begin{align*}
e_{p-2}=(p-1)!\displaystyle\sum_{i}i^{-1}\equiv (p-1)!\sum_{i}i\equiv(p-1)!\frac{p(p-1)}2\equiv 0\pmod p,
\end{align*}
equality \eqref{eq:p2} is true.
\end{proof}
\goodbreak
We are ready to prove Theorem \ref{th:kary}.
\begin{proof}[Proof of Theorem \ref{th:kary}]
The first part (a) is an immediate consequence of Lemma \ref{lem:kdivisibility} and (\ref{eq:knormaltree}). For $n\ge 1$, we construct the following weighted function
\begin{equation*}
f(n)=\sum_{\bar r\in\mathcal{\bar R}(kn+1)}H(\bar r)M(\bar r),
\end{equation*}
where
\begin{align*}
M(\bar r)=\displaystyle\frac{(k^2 n-kn+k)!\, L(\bar r)}{(k!)^{kn+1}\, (kn+1)!}.
\end{align*}
Since $H(\bar r)\equiv 1\pmod k$, we have
\begin{equation}\label{eq:modfn}
f(n)\equiv \sum_{\bar r\in\mathcal{\bar R}(kn+1)}M(\bar r) = M_{k^2 n-kn+k} \pmod k.
\end{equation}
Thus, we only need to calculate $f(n)$:
\begin{align*}
f(n)&= \displaystyle\sum_{\bar r} H(\bar r)M(\bar r) \\
&= \displaystyle\sum_{\bar r} \displaystyle\frac{H(\bar r) \times (k^2 n-kn+k)!\, L(\bar r)}{(k!)^{kn+1}\, (kn+1)!} \\
&= \displaystyle\sum_{\bar r} \displaystyle\frac{(k^2 n-kn+k)! \times \#\bar r}{(k!)^{kn+1}} \\
&= \displaystyle\frac{(k^2 n-kn+k)!}{(k!)^{kn+1}} C_k(n),
\end{align*}
where $C_k(n)$ is the number of all (unlabelled) complete $k$-ary trees, which is equal to the Fuss-Catalan number \cite{Aval2008}
\begin{align*}
C_k(n)=\displaystyle\frac{(kn)!}{n!(kn-n+1)!}.
\end{align*}
Consequently,
\begin{align}
f(n) &= \displaystyle\frac{(k^2 n-kn+k)!}{(k!)^{kn-n+1}(kn-n+1)!}\cdot\frac{(kn)!}{(k!)^nn!} \label{eq:fn1} \\
&= \displaystyle\prod_{i=0}^{kn-n}\binom{ik+k-1}{k-1}\times \displaystyle \prod_{j=0}^{n-1}\binom{jk+k-1}{k-1}.\label{eq:fn2}
\end{align}
For proving the second part (b), there are three cases to be considered, depending on the value of $k$.

(b1) $k=p$ is a prime integer. We have
\begin{equation*}
\binom{ip+p-1}{p-1} =\frac{(ip+1)(ip+2)\cdots(ip+p-1)}{1\times 2\times \cdots \times (p-1)} \equiv 1\pmod{p}.
\end{equation*}
Thus $f(n)\equiv 1 \pmod p$ by identity \eqref{eq:fn2}.

\smallskip
(b2) $k=p^t \ (t\ge 2)$ where $p$ is a prime integer. If $p\ge 3$, by Lemma \ref{th:p2}, we have
\begin{align*}
\binom{ip^t+p^t-1}{p^t-1} &=\prod_{s=0}^{p^{t-1}-1}\frac{(ip^t+sp+1)\cdots(ip^t+sp+p-1)}{(sp+1)\cdots(sp+p-1)}\cdot \prod_{s=1}^{p^{t-1}-1}\frac{ip^t+sp}{sp} \\
& \equiv \left[\frac{(p-1)!}{(p-1)!}\right]^{p^{t-1}}\cdot \binom{ip^{t-1}+p^{t-1}-1}{p^{t-1}-1} \pmod{p^2} \\
& \equiv \binom{ip^{t-1}+p^{t-1}-1}{p^{t-1}-1} \pmod{p^2}\\
& \equiv \cdots\\
& \equiv \binom{ip+p-1}{p-1} \pmod{p^2}\\
& = \frac{(ip+1)(ip+2)\cdots(ip+p-1)}{1\times 2\times \cdots \times (p-1)} \\
& \equiv 1\pmod{p^2}.
\end{align*}
Thus $f(n)\equiv 1\pmod {p^2}$ for $k=p^t$ with $p\geq 3$ and $t\geq 2$. Now suppose $p=2$ and $k=2^t$ ($t\geq 2$). We have
\begin{align*}
\binom{i2^t+2^t-1}{2^t-1} &=\prod_{s=0}^{2^{t-1}-1}\frac{i\cdot 2^t+2s+1}{2s+1}\cdot \prod_{s=1}^{2^{t-1}-1}\frac{i\cdot 2^t+2s}{2s} \\
& = \prod_{s=0}^{2^{t-2}-1}\frac{(i\cdot 2^t+4s+1)(i\cdot 2^t +4s+3)}{(4s+1)(4s+3)}\cdot \prod_{s=1}^{2^{t-1}-1}\frac{i\cdot 2^{t-1}+s}{s} \\
& \equiv \left(\frac{-1}{-1}\right)^{2^{t-2}}\cdot \binom{i\cdot 2^{t-1}+2^{t-1}-1}{2^{t-1}-1} \pmod{4}\\
& \equiv\binom{i\cdot 2^{t-1}+2^{t-1}-1}{2^{t-1}-1}\pmod{4}\\
& \equiv \cdots \\
& \equiv\binom{i\cdot 2+2-1}{2-1}\pmod{4}\\
& = 2i+1.
\end{align*}
Therefore, by identity \eqref{eq:fn2}, we can check that
\begin{align*}
f(n)\equiv \prod_{i=0}^{(2^t-1)n}(2i+1)\times \prod_{j=0}^{n-1}(2j+1) \equiv 1 \pmod 4.
\end{align*}

(b3) Suppose that $k$ has more than one prime factor. We want to prove $f(n)\equiv 0\pmod k$. Let $p$ be a prime factor of $k$, and write $k=b p^m$ with $b\geq 2$ and $p\nmid b$. Notice that $f(n)\mid f(n+1)$ by identity \eqref{eq:fn1}. Thus, it suffices to show that
\begin{equation}\label{eq:f1}
f(1)= \frac{(k^2)!}{(k!)^{k+1}} \equiv 0\pmod {p^m},
\end{equation}
which is equivalent to
\begin{equation}\label{eq:f1'}
\alpha(b^2 p^{2m}) - (bp^m+1)\, \alpha(b p^m) \geq m.
\end{equation}
By Legendre's formula \eqref{eq:Legendre}, the left-hand side of \eqref{eq:f1'} is equal to
\begin{align*}
\Delta &= \frac 1{p-1} \Bigl( b^2 p^{2m} - \beta(b^2) -(b p^m+1) (bp^m -\beta(b)) \Bigr) \\
&= \frac 1{p-1} \Bigl( \beta(b) - \beta(b^2) +b p^m \beta(b) -b p^m \Bigr).
\end{align*}
Since $\beta(b^2) \leq b \beta(b)$ and $\beta(b)\geq 2,\, b\geq 2$, we have
\begin{align}
\Delta &\geq \frac 1{p-1} \Bigl( (bp^m -b +1)\beta(b) -b p^m \Bigr)\nonumber\\
&\geq \frac 1{p-1} \Bigl( b(p^m -2) +2 \Bigr)\nonumber\\
&\geq \frac 1{p-1} \Bigl( 2p^m -2 \Bigr)\nonumber\\
&\geq m.
\end{align}
This completes the proof.
\end{proof}

\smallskip
{\bf Acknowledgments}. The first author would like to thank Zhi-Ying Wen for the invitation to Tsinghua University, where the paper was finalized.
\section{Datasets}
\begin{table}[t]
\centering
\caption{Summary statistics from datasets used in this study.}
\label{tab:DataSets}
\begin{tabular}{ccC{0.6in}C{0.5in}cC{0.55in}c}
\hline
 & HYCCUPS & Friends \& Family & High School & Infectious & Primary School & HOPE \\
\hline
Number of nodes & $43$ & $123$ & $126$ & $201$ & $242$ & $1178$ \\
Sensor type & Wi-Fi & Bluetooth & RFID & RFID & RFID & RFID \\
Proximity range & N/A & $5$ m & $1$--$1.5$ m & $1$--$1.5$ m & $1$--$1.5$ m & Room \\
Graph density & $0.326$ & $0.228$ & $0.217$ & $0.0328$ & $0.285$ & $0.569$ \\
Clustering coefficient & $0.604$ & $0.496$ & $0.522$ & $0.459$ & $0.480$ & $0.748$ \\
Average degree & $14.0$ & $27.8$ & $27.1$ & $6.56$ & $68.7$ & $671$ \\
Maximum degree & $28$ & $73$ & $55$ & $21$ & $134$ & $1072$ \\
\hline
\end{tabular}
\end{table}
We consider a variety of contact network datasets in this paper. Table \ref{tab:DataSets} shows summary statistics for each dataset along with the sensor type. The HYCCUPS dataset was collected at the University Politehnica of Bucharest in 2012 using a background application for Android smartphones that captures a device's encounters with Wi-Fi access points \cite{Marin2012}. The Friends \& Family (F\&F) dataset was collected from the members of a residential community near a major research university using Android phones loaded with an app that records many features, including proximity to other Bluetooth devices \cite{Aharony2011643}. The High School (HS) dataset was collected among students from $3$ classes in a high school in Marseilles, France \cite{10.1371/journal.pone.0107878}, using wearable sensors that capture face-to-face proximity lasting more than $20$ seconds. The Infectious dataset was collected at a science gallery in Dublin using wearable electronic badges to sense sustained face-to-face proximity between visitors \cite{Isella:2011qo}. We use data for one arbitrarily selected day (April 30) on which $201$ people came to visit. The Primary School (PS) dataset was collected from $232$ students and $10$ teachers at a primary school in Lyon, France, in a similar manner to the HS dataset \cite{Gemmetto2014}. Lastly, the HOPE dataset was collected from the Attendee Meta-Data project at the seventh Hackers on Planet Earth (HOPE) conference \cite{hope-amd-20080807}. We create a contact network where the attendees at each talk form a clique; that is, each person is assumed to be in contact with every other person in the same room, which is why this network is much denser than the others.
\section{Discussion}
\label{sec:Discussion}
The purpose of our study was to evaluate the effects of contact network models on the results of simulated epidemics over the contact network. While it is well known and expected that more complex models of contact network topology do a better job of reproducing features of the contact network, such as the degree distribution and community structure, we demonstrated that, in general, they also result in more accurate epidemic simulations. That is, the results of simulating an epidemic on a more complex network model are usually closer to the results obtained when simulating the epidemic on the actual network than if we had used a simpler network model. Moreover, models that preserve node degrees are shown to produce the most accurate epidemic simulations.
Unlike most prior studies such as \cite{Machens2013,Stehlé2011}, we measure the quality of a network model by the area between its SIR curves and the SIR curves of the actual network, which allows us to capture differences while the disease is still spreading rather than just the difference in the final outcome, i.e.~how many people were infected. Our findings suggest that the degree-corrected stochastic block model (DC-SBM) is the best choice of contact network model in epidemic simulations because it resulted in the minimum average area between SIR curves. Interestingly, the degree model resulted in an average area between SIR curves only slightly larger than that of the DC-SBM, despite having less than half as many parameters, as shown in Table \ref{tab:AvgMetrics}. The SBM (without degree correction) also has half as many parameters as the DC-SBM, but has over twice the area between SIR curves. We note that the difference between the degree model and the SBM \emph{cannot} be observed using log-likelihood as the quality measure, as both models are very close in log-likelihood. This leads us to believe that preserving degree has a greater effect on the accuracy of epidemic simulations than preserving community structure. Furthermore, this finding demonstrates that one cannot simply evaluate the accuracy of a contact network model for epidemic simulations only by examining goodness-of-fit on the actual contact network!

In practice, one often cannot collect high-resolution contact data on a large scale, so having accurate contact network models is crucial to provide realistic network topologies on which we can simulate epidemics. In this paper, we estimated the parameters for each contact network model using the contact network itself, which we cannot do in practice because the contact network is often unknown. As a result, one would have to estimate the model parameters from prior knowledge or partial observation of the contact network, which introduces additional error that was not studied in this paper. It would be of great interest to perform this type of sensitivity analysis to identify whether the DC-SBM and degree model are still superior even when presented with less accurate parameter estimates. Also, there is a risk of overfitting in more complex models, which should be examined in a future extension of this work. Both issues could potentially be addressed by considering hierarchical Bayesian variants of network models such as the degree-generated block model \cite{Zhu2012}, which add an additional generative layer to the model with a smaller set of hyperparameters.

Another limitation of this study is our consideration of static unweighted networks. Prior work \cite{karimi2013threshold,Machens2013,Smieszek2009a,Stehlé2011} has shown that it is important to consider the time duration of contacts between people, which can be reflected as weights in the contact network, as well as the contact times themselves, which can be accommodated by using models of dynamic rather than static networks, such as dynamic SBMs \cite{Xu2014a}. We plan to expand this work in the future by incorporating models of weighted and dynamic networks to provide a more thorough investigation.
\section{Introduction}
The study of transmission dynamics of infectious diseases often involves simulations using stochastic epidemic models. In a compartmental stochastic epidemic model, transitions between compartments occur randomly with specified probabilities.
For example, in a stochastic Susceptible-Infectious-Recovered (SIR) model \cite{Britton2010,Greenwood2009}, a person may transition from S to I with a certain probability upon contact with an infectious person, or a person may transition from I to R with a certain probability to simulate recovering from the disease. Infection spreads through contact with infectious individuals; hence, the contact network in a population is a major factor in the transmission dynamics. Collecting an actual contact network over a large population is difficult because of limitations in capturing all the contact information. This makes it necessary to represent the network with some level of abstraction, e.g.~using a statistical model. A variety of statistical models for networks have been proposed \cite{Goldenberg:2010:SSN:1734794.1734795}; such models can be used to simulate contact networks that resemble actual contact networks.

Our aim in this paper is to evaluate different models for contact networks in order to find the model best suited to simulating contact networks that are close to an actual observed network. We do this by comparing the disease dynamics of a stochastic SIR model over the simulated networks with the disease dynamics over the actual network. One commonly used approach is to compare the epidemic size at the end of the simulation, i.e.~what fraction of the population caught the disease \cite{Machens2013,Stehlé2011}. A drawback of this approach is that it only considers the steady-state outcome and not the dynamics of the disease as it is spreading.

\begin{figure}[t]
\centering
\subfloat[Susceptible]{\includegraphics[width=1.5in]{AreaSusceptible}}
\quad
\subfloat[Infectious]{\includegraphics[width=1.5in]{AreaInfected}}
\quad
\subfloat[Recovered]{\includegraphics[width=1.5in]{AreaRecovered}}
\caption{For each of the susceptible (S), infectious (I), and recovered (R) compartments, the mean curve for simulations on the model (shown in blue) is compared to the mean curve for simulations on the actual network (shown in red). The closeness between the model and actual network is given by the sum of the shaded areas between the curves for each compartment (smaller is better).}
\label{fig:AreaSIR}
\end{figure}

We propose to compare the dynamics at each time instant in the simulation by calculating the area between the mean SIR curves for the epidemic over the simulated and actual networks, shown in Fig.~\ref{fig:AreaSIR}. A small area indicates that the dynamics of the epidemic over the simulated contact networks are close to those of the actual network. We use this approach to compare four contact network models (in increasing order of number of parameters): the Erd\H{o}s-R\'{e}nyi model, the degree model, the stochastic block model, and the degree-corrected stochastic block model. Our experimental results over six different real network datasets suggest that the degree-corrected stochastic block model provides the closest approximation to the dynamics of an epidemic on the actual contact networks. Additionally, we find that preserving node degrees appears to be more important than preserving community structure for the accuracy of epidemic simulations.
\section{Methods}
We construct actual networks from the datasets by connecting the individuals (nodes) with an edge if they have a contact at any point in time.
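A sketch of this aggregation step, assuming contact records of the form (person $i$, person $j$, timestamp) — the record format here is our assumption:
\begin{verbatim}
import networkx as nx

def build_contact_network(contacts):
    # contacts: iterable of (i, j, t) proximity records
    G = nx.Graph()
    # one unweighted edge per pair with at least one contact
    G.add_edges_from((i, j) for i, j, t in contacts)
    return G
\end{verbatim}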
We evaluate the quality of a contact network model for simulations of epidemics by conducting the following steps for each dataset:
\begin{enumerate}
\item Simulate $5,000$ epidemics over the actual network.
\item Fit contact network model to actual network.
\item Simulate $100$ networks from contact network model. For each simulated network, simulate $50$ epidemics over the network for $5,000$ epidemics total.
\item Compare the results of the epidemic simulations over the actual network with those over the simulated networks.
\end{enumerate}
These steps are repeated for each contact network model that we consider. We describe the stochastic epidemic model we use to simulate epidemics in Section \ref{sec:EpiModel} and the contact network models we use in Section \ref{sec:NetModel}. To get a fair evaluation of the dynamics of epidemics spreading over different contact network models, all of the parameters that are not related to the contact network model, e.g.~the probabilities of infection and recovery, are kept constant. Our aim is to single out the effect of using a particular contact network model while simulating an epidemic.
\subsection{Stochastic Epidemic Model}
\label{sec:EpiModel}
An actual infection spreading in a population experiences randomness in several factors, which may aggravate or inhibit the spread. This is accounted for in stochastic epidemic models. The initial condition is, in general, to have a set of infectious individuals, while the rest of the population is considered susceptible. We consider a discrete-time process, where at each time step, the infectious individuals can spread the disease with some probability of infection to susceptible individuals they have been in contact with. Also, the infectious individuals can recover from the disease with some probability, independent of the individuals' contacts with others. This model is known as the stochastic SIR model and is one of the standard models used in epidemiology \cite{Britton2010,Greenwood2009}.

We randomly choose $1$ infectious individual from the population as the initial condition and simulate the epidemic over $30$ time steps. We set the probability of infection for every interaction between people to be $0.025$. The probability of recovery is also set to be $0.025$. Note that the rate at which the disease spreads across the population is dependent not only on the infection probability but also on the topology of the contact network; thus, by fixing these probabilities, we are exploring only the effects of the contact network.
\subsection{Contact Network Models}
\label{sec:NetModel}
In practice, it is extremely difficult to obtain accurate contact network data. An alternative is to simulate a contact network by using a statistical network model. We consider several such models, which we briefly describe in the following. We refer interested readers to the survey by Goldenberg et al.~\cite{Goldenberg:2010:SSN:1734794.1734795} for details.
\subsubsection{Erd\H{o}s-R\'{e}nyi (E-R) Model}
In the E-R model, an edge between any two nodes is formed with probability $p$ independent of all other edges. To fit the E-R model to a network, we set the single parameter, the estimated edge probability, to $\hat{p} = M/\binom{N}{2}$, where $N$ and $M$ denote the number of nodes and edges in the actual network, respectively. By doing so, the expected number of edges in the E-R model will be $\binom{N}{2}\hat{p} = M$, the number of edges in the actual network.
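Before turning to the remaining network models, the following is a minimal sketch of one run of the discrete-time stochastic SIR simulation described above; synchronous updates and one independent Bernoulli trial per infectious contact are our reading of the model:
\begin{verbatim}
import random

def simulate_sir(adj, beta=0.025, gamma=0.025, steps=30):
    # adj: dict mapping each node to the list of its contacts
    seed = random.choice(list(adj))        # 1 initial infectious node
    S, I, R = set(adj) - {seed}, {seed}, set()
    history = [(len(S), len(I), len(R))]
    for _ in range(steps):
        # every infectious contact is an independent infection trial
        new_inf = {v for u in I for v in adj[u]
                   if v in S and random.random() < beta}
        recovered = {u for u in I if random.random() < gamma}
        S -= new_inf
        I = (I | new_inf) - recovered
        R |= recovered
        history.append((len(S), len(I), len(R)))
    return history
\end{verbatim}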
\subsubsection{Degree Model} In several network models, including the configuration model and preferential attachment models, the edge probability depends upon the degrees of the nodes it connects \cite{Newman:2010:NI:1809753}. We consider a model that preserves the expected rather than actual degree of each node, often referred to as the Chung-Lu model \cite{Chung2002}. In this model, the probability of an edge between two nodes is proportional to the product of their node degrees, and all edges are formed independently. The model has $N$ parameters, the expected degrees of each node. To fit the degree model to a network, we compute the degrees of all nodes to obtain the degree vector $\vec{d}$. We then set the estimated edge probabilities $\hat{p}_{ij} = \alpha d_i d_j$, where the constant $\alpha$ is chosen so that the sum of all edge probabilities (number of expected edges) is equal to the number of edges in the actual network. \subsubsection{Stochastic Block Model (SBM)} In the SBM \cite{Holland1983}, the network is divided into disjoint sets of individuals forming $K$ communities. The probability of edge formation between two nodes depends only upon the communities to which they belong. This model takes as input a vector of community assignments $\vec{c}$ (length $N$) and a matrix of edge formation probabilities $\Phi$ (size $K \times K$), where $\phi_{ab}$ denotes the probability that a node in community $a$ forms an edge with a node in community $b$, independent of all other edges. For an undirected graph, $\Phi$ is symmetric so the SBM has $N + \binom{K+1}{2}$ parameters in total. To estimate community assignments, we use a regularized spectral clustering algorithm \cite{Qin2013} that is asymptotically consistent and has been demonstrated to be very accurate in practice. We select the number of communities using the eigengap heuristic \cite{VonLuxburg2007}. Once the community assignments $\hat{\vec{c}}$ are estimated, the edge probabilities can be estimated by $\hat{\phi}_{ab} = m_{ab} / n_{ab}$, where $m_{ab}$ denotes the number of edges in the block formed by the communities $a,b$ in the observed network, and $n_{ab}$ denotes the number of possible edges in the block \cite{PhysRevE.83.016107}. \subsubsection{Degree-corrected Stochastic Block Model (DC-SBM)} The DC-SBM is an extension to the SBM in a way that incorporates the concepts of the degree model within an SBM \cite{PhysRevE.83.016107}. The parameters of the DC-SBM are the vector of community assignments $\vec{c}$ (length $N$), a node-level parameter vector $\vec{\theta}$ (length $N$), and a block-level parameter matrix $\Omega$ (size $K \times K$). In a DC-SBM, an edge between a node $i \in a$ (meaning node $i$ is in community $a$) and node $j \in b$ is formed with probability $\theta_i \theta_j \omega_{ab}$ independent of all other edges. $\Omega$ is symmetric, so the DC-SBM has $2N + \binom{K+1}{2}$ parameters in total. To fit the DC-SBM to an actual network, we first estimate the community assignments in the same manner as in the SBM using regularized spectral clustering. We then estimate the remaining parameters to be $\hat{\theta}_i = d_i / \sum_{j \in a} d_j$, for node $i \in a$, and $\hat{\omega}_{ab} = m_{ab}$ \cite{PhysRevE.83.016107}. Using these estimates, we arrive at the estimated edge probabilities $\hat{p}_{ij} = \hat{\theta}_i \hat{\theta}_j \hat{\omega}_{ab}$. 
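The estimators above are straightforward to implement. The following sketch is our own (it follows the block conventions of \cite{PhysRevE.83.016107}, where within-block sums count each edge twice); it fits the DC-SBM to an adjacency matrix with given community labels and samples a surrogate network:
\begin{verbatim}
import numpy as np

def fit_dcsbm(A, c):
    # A: symmetric 0/1 adjacency matrix; c: community labels in 0..K-1
    K = c.max() + 1
    deg = A.sum(axis=1)
    kappa = np.array([deg[c == a].sum() for a in range(K)])
    theta = deg / kappa[c]   # theta_i = d_i / sum of degrees in i's block
    omega = np.array([[A[np.ix_(c == a, c == b)].sum()
                       for b in range(K)] for a in range(K)])
    return theta, omega

def sample_network(theta, omega, c, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    P = np.outer(theta, theta) * omega[np.ix_(c, c)]  # p_ij
    upper = np.triu(rng.random(P.shape) < P, 1)       # independent edges
    return (upper | upper.T).astype(int)
\end{verbatim}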
\section{Related Work} A significant amount of previous work deals with the duration \cite{Smieszek2009a}, frequency \cite{larson07}, and type \cite{eames08,Smieszek2009} of contacts in a contact network. These findings are often incorporated into simulations of epidemics over different types of contact models. The R package EpiModel \cite{jenness2017package} allows for simulation of a variety of epidemics over temporal exponential random graph models for contact networks and has been used in studies of various different infectious diseases including HIV \cite{doi:10.1093/infdis/jiw223}. There has also been prior work simulating the spread of disease over a variety of contact network models with the goal of finding a good approximation to the actual high resolution data in terms of the epidemic size, i.e.~the final number of people infected \cite{Machens2013,Stehlé2011}. Such work differs from our proposed area metric, which considers the dynamics as the disease is spreading and not just the steady-state outcome. In \cite{Bioglio2016}, the authors use the squared differences between the I curves (fraction of infectious individuals) of an epidemic model on simulated contact networks and on an actual contact network to calibrate parameters of the epidemic model when used on simulated contact networks. Although this metric does consider the dynamics of the epidemic, our proposed metric also involves the S and R curves for a more complete evaluation of population dynamics. \section{Results} To evaluate the quality of a contact network model, we compare the mean SIR curves resulting from epidemic simulations on networks generated from that model to the mean SIR curves from epidemic simulations on the actual network. If the two curves are close, then the network model is providing an accurate representation of what is likely to happen on the actual network. To measure the closeness of the two sets of mean SIR curves, we use the sum of the areas between each set of curves as shown in Fig.~\ref{fig:AreaSIR}. By measuring the area between the curves rather than just the final outcome of the epidemic simulation (e.g.~the fraction of recovered people after the disease dies out as in \cite{Machens2013,Stehlé2011}), we capture the difference in transient dynamics (e.g.~the rate at which the infection spreads) rather than just the difference in final outcomes. \begin{figure}[t] \centering \subfloat[]{\includegraphics[width=2.25in]{AvgAllBars3} \label{fig:AreaBars3}} \quad \subfloat[]{\includegraphics[width=2.25in]{LoglikBars3} \label{fig:LoglikBars3}} \caption[]{Comparison of \subref{fig:AreaBars3} area between SIR curves of each model with respect to actual network for each dataset and \subref{fig:LoglikBars3} negative log-likelihood per node pair for each model (lower is better for both measures). The DC-SBM model appears to be the best model according to both quality measures, but the two measures disagree on the quality of the degree model compared to the SBM.} \end{figure} The area between the SIR curves for each model over each dataset is shown in Fig.~\ref{fig:AreaBars3}. According to this quality measure, the DC-SBM is the most accurate model on F\&F, HS, and PS; the degree model is the most accurate on HYCCUPS and HOPE; and the SBM is most accurate on Infectious. However, the SBM appears to be only slightly more accurate than the E-R model overall, despite having $N+\binom{K+1}{2}$ parameters compared to the single parameter E-R model. 
The contact network models were most accurate on the HOPE network, which is the densest, causing the epidemics to spread rapidly. We also compute the log-likelihood for each contact network model on each dataset, shown in Fig.~\ref{fig:LoglikBars3}. To normalize across the different sized networks, we compute the log-likelihood per node pair. Since all of the log-likelihoods are less than $0$, we show the negative log-likelihood (i.e.~lower is better) in Fig.~\ref{fig:LoglikBars3}. Unsurprisingly, the DC-SBM, with the most parameters, also has the highest log-likelihood, whereas the relative ordering of the log-likelihoods of the degree model and SBM, both with roughly the same number of parameters, varies depending on the dataset. \begin{table}[t] \centering \caption{Quality measures (lower is better) averaged over all datasets for each model. The best model according to each measure is shown in bold.} \label{tab:AvgMetrics} \setlength\tabcolsep{0.5em} \begin{tabular}[b]{ccccc} \hline Quality Measure & E-R & Degree & SBM & DC-SBM \\ \hline Area between SIR curves & $1.82$ & $0.73$ & $1.43$ & $\b{0.71}$ \\ Negative log-likelihood per node pair & $0.597$ & $0.496$ & $0.504$ & $\b{0.385}$ \\ Number of parameters & $\b{1}$ & $319$ & $328$ & $647$ \\ \hline \end{tabular} \end{table} Both the proposed area between SIR curves and the log-likelihood can be viewed as quality measures for a contact network model. A third quality measure is given by the number of parameters, which reflects the simplicity of the model. A simpler model is generally more desirable to avoid overfitting. These three quality measures for each model (averaged over all datasets) are shown in Table \ref{tab:AvgMetrics}. The DC-SBM achieves the highest quality according to the area between SIR curves and the log-likelihood, at the expense of having the most parameters. On the other hand, the E-R model has only a single parameter but performs worst on the other two quality measures. Interestingly, the degree model and SBM appear to be roughly equal in terms of the number of parameters and log-likelihood, but the area between SIR curves for the two models differs significantly. This suggests that the degree model may be better than the SBM at reproducing features of contact networks that are relevant to disease propagation.
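Since every model treats edges as independent Bernoulli variables, the log-likelihood per node pair can be computed as in the following sketch (again a simplified illustration; \texttt{A} is the symmetric observed adjacency matrix and \texttt{P} the matrix of estimated edge probabilities).
\begin{verbatim}
import numpy as np

def loglik_per_pair(A, P, eps=1e-12):
    """Bernoulli log-likelihood of the observed adjacency A under the
    estimated edge probabilities P, normalised by the number of pairs."""
    iu = np.triu_indices_from(A, k=1)      # each node pair counted once
    a = A[iu]
    p = np.clip(P[iu], eps, 1 - eps)       # avoid log(0)
    ll = a * np.log(p) + (1 - a) * np.log(1 - p)
    return ll.sum() / ll.size
\end{verbatim}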
\section{Introduction} \label{sec:introduction} {{\em Acoustic Event Classification} (AEC) plays an essential role in enabling environmental awareness for intelligent machines, and has recently attracted considerable attention~\cite{Chu08-Unstructured,Stowell15-Detection,Ye15-Acoustic,Phan16-Label}.} One central goal of AEC is to extract discriminative representations that are robust enough to capture the acoustic event content. In the past decade, many efforts have been reported in this direction. For example, following their success in speech recognition, Mel-Frequency Cepstral Coefficients (MFCC) have been applied as the most dominant feature type for AEC~\cite{Ma06-Acoustic,Giannoulis13-Detection}. However, unlike speech recognition, AEC relies heavily on longer temporal information to make a decision~\cite{Chu09-Environmental}. In this regard, spectro-temporal features were introduced to capture the event modulation patterns in both time and frequency domains~\cite{Schroeder15-Spectro,Rakotomamonjy15-Histogram,Ren17-Sound}. For instance, a filterbank of two-dimensional Gabor functions was used to decompose the spectro-temporal power density into multiple components~\cite{Schroeder15-Spectro}. Similar work was done in~\cite{Chu09-Environmental}, where a Gabor dictionary was implemented with atom parameters (i.\,e., scale, time, frequency, and phase) for the Matching Pursuit (MP) decomposition, and the generated MP features have demonstrated their effectiveness~\cite{Chu09-Environmental}. Furthermore, another trend aims to build higher-level features from the spectro-temporal features of acoustic events. In this context, the Histogram of Gradients (HOG) representation was investigated to provide information on the changing direction of spectral power~\cite{Rakotomamonjy15-Histogram}. Although huge efforts have been made on designing optimal features, AEC still remains a challenging task, since the audio exhibits high variability owing to the complex acoustic environment. For this reason, one solution is to combine a variety of features extracted in either the time or frequency domain into a fused high-dimensional space~\cite{Zhang12-Semi}. The primary assumption is that the classification model can automatically select important features for a specific class, which, however, can be quite challenging during model building. More recently, deep unsupervised representation learning techniques have achieved tremendous success in machine learning~\cite{Hinton06-fast,Bengio06-Greedy,Bengio13-Representation}. The key idea is to learn more complex abstractions as data representations in the higher layers of artificial deep neural networks from simple features in the lower layers, in an unsupervised training fashion. Recently, unsupervised representation learning has begun to be applied to AEC, and has shown its efficiency in state-of-the-art research. In~\cite{McLoughlin15-Robust}, Deep Belief Networks (DBN) were employed for pre-training with unlabelled data. The extracted bottleneck features were then fed into a concatenated softmax layer for final classification. To capture the temporal information, sequential frames within a sliding window were batched as the network input. Similarly, a fully Deep Neural Network (DNN) structure was introduced in~\cite{Xu16-Fully}, where the raw features in continuum were scaled into one super high-dimensional vector and then considered to be the input for a deep `pyramid' structure.
All these unsupervised representation learning studies have advanced the performance of AEC systems significantly. However, they either attempt to learn high-level representations at the frame level, as done in the field of speech recognition~\cite{McLoughlin15-Robust,Lee09-Unsupervised}, or assume that the analysed recordings share a fixed duration~\cite{Xu16-Fully}. Indeed, many event sounds have a strong temporal domain signature, as aforementioned. For instance, the chirping of insects is typically noise-like with a broad and flat spectrum, which makes it hard for a system to decide whether it is noise or an insect sound within one or several audio frames. Moreover, acoustic events are often presented in arbitrary lengths, rather than fixed lengths. This renders the work in~\cite{Xu16-Fully} infeasible in realistic applications. To overcome these problems for AEC, we propose an {\em unsupervised sequence representation learning} approach, which employs multilayer Gated Recurrent Unit Recurrent Neural Networks (GRU-RNN) to learn representations of audio sequences. The model consists of an RNN encoder that maps an input sequence into a fixed-length vector, and an RNN decoder that reconstructs the input sequence from the generated vector in a sequence-to-sequence learning strategy. Our primary assumption is that the representation captures the sequence information, as it integrates a `restoration ability' with the help of the decoder. {The employed encoder-decoder architecture is similar to the ones widely used in natural language processing~\cite{Sutskever14-Sequence}, where the architecture was used for, for example, translating sentences from one language to another~\cite{Sutskever14-Sequence,Bahdanau14-Neural,Luong15-Effective}, or predicting the next sentence from previous ones~\cite{Shang15-Neural}. Significantly differing from these works, the essential idea of the proposed framework in this paper is to learn a {\em vector representation} of a sequence with an arbitrary length. The learnt representations can then be utilised for pattern recognition by any classification model. The proposed approach is partially motivated by the work in~\cite{Srivastava15-Unsupervised}, where a Long Short-Term Memory (LSTM) encoder-decoder architecture was employed for video reconstruction and future prediction. In addition, it relates to~\cite{Dai15-Semi} as well, where the LSTM encoder-decoder was utilised for initialising neural networks and further improving their generalisation capability. The proposed approach, however, is an attempt to obtain a vector representation in a purely unsupervised learning procedure.} The contributions of this paper mainly include: i) We propose an unsupervised learning framework to extract high-level audio sequence representations via a GRU-RNN encoder-decoder for AEC. Compared with previous works, this framework can not only deal with variable-length audio recordings, but also holds the potential to distil the inherent event characteristics embedded in audio sequences from virtually unlimited unlabelled data. ii) We evaluate the performance of the learnt sequence representations on a large-scale acoustic event database. The results demonstrate the high effectiveness and robustness of the learnt representations. \section{Related Work} \label{sec:relatedWork} There are two dominant methods to represent an audio sequence for AEC.
The first method is likely inspired by speech recognition technology, in which the whole sequence is represented by sequential Low-Level Descriptors (LLDs) (e.\,g., MFCCs) frame by frame. It then uses generative models to estimate the joint probability distribution of features and labels to arrive at a final judgment~\cite{Stowell15-Detection}, or uses discriminative models like a Support Vector Machine (SVM) to predict the frames successively and then vote for a final decision~\cite{McLoughlin15-Robust}. While these methods make some use of the temporal information of the sequence, as mentioned above, it is still far from being well exploited. The second method expands all descriptors and concatenates them into one long vector, which is then fed into a model for discriminative training and evaluation~\cite{Xu16-Fully}. This method simply assumes that all audio recordings have a fixed length. Also, this method possibly results in a curse-of-dimensionality issue when the recording duration increases. Rather than straightforwardly using the sequential frame-wise LLDs, recent promising methods are more in favour of sequence-wise {\em statistic} features. These methods are able to handle {\em arbitrary-length} recordings and map them into fixed-dimensional vector representations. One efficient method is the {\it Bag-of-Audio-Words} (BoAW)~\cite{Aucouturier07-bag,Lu14-Sparse}. It uses a codebook of acoustic words (i.\,e., frame-level LLDs) that are randomly selected or generated via a clustering method (i.\,e., $k$-means) on the training set, to quantise the frame-wise LLDs. Then, a histogram of the occurrences of each word in the dictionary is built over the whole sequence and regarded as the sequence representation. Another popular method is based on {\it functionals} (e.\,g., mean, standard deviation, skewness, kurtosis, maximum, minimum), which are applied to each of the LLD contours to extract statistic information over the whole sequence~\cite{Zhang12-Semi}. However, all of these audio sequence features are still hand-crafted. In this paper, we propose to learn the audio sequence representation in an unsupervised way for the application of AEC. Although related work has been done in~\cite{Chung16-Audio}, it is mainly focused on word-level audio for spoken term detection. To the best of our knowledge, this is the first effort in this direction towards modelling long audio sequences for a classification task. \section{Unsupervised Learning of Audio Sequence Representations} \label{sec:methodology} In this paper, we are interested in evaluating the performance of an RNN-based sequence-to-sequence encoder-decoder approach for AEC. Before the empirical evaluation, we first describe the proposed method in this section. \subsection{Gated Recurrent Unit} To implement the RNN encoder-decoder, we select the Gated Recurrent Unit (GRU) as the recurrent hidden unit of our RNNs, which was initially proposed by Cho et al.~\cite{Cho14-properties}. Analogous to the LSTM unit, this recurrent unit can also capture long-term dependencies in sequence-based tasks and addresses the vanishing gradient problem well~\cite{Chung14-Empirical}. Hence, the GRU is often regarded as an alternative to LSTM units. However, the GRU has fewer parameters since it has neither a separate memory cell nor an output gate, which results in a faster training process and a lower data demand for generalisation.
Besides, many experiments have shown that the GRU performs competitively with, or slightly better than, the LSTM unit in most tasks~\cite{Chung14-Empirical,Jozefowicz15-empirical}. \begin{figure}[!t] \centering \input{gru.tex} \vspace{-.0cm} \caption{Illustration of Gated Recurrent Unit.} \label{fig:gru} \vspace{-.0cm} \end{figure} The typical structure of a GRU is depicted in Fig.~\ref{fig:gru}; it consists of a reset gate $r$, an update gate $z$, an activation $h$, and a candidate activation $\tilde{h}$. Mathematically, let $\mathbf{x}=(\mathbf{x}_1,\mathbf{x}_2, \ldots,\mathbf{x}_T)$ be an input audio sequence, where $\mathbf{x}_t\in \Re^d$ lies in a $d$-dimensional feature space (e.\,g., MFCC). The activation ${h}_t^j$ of the $j$-th GRU at time $t$ is updated by the previous activation ${h}_{t-1}^j$ and the candidate activation $\tilde{{h}}_t^j$, that is \begin{equation}\label{eq:1} h_t^j=(1-z_t^j)h_{t-1}^j+z_t^j\tilde{h}_t^j. \end{equation} The update gate $z_t$ is calculated by \begin{equation} {z}_t^j=\mathrm{sigm}(W_{xz}\mathbf{x}_t+W_{hz}{\mathbf{h}_{t-1}}+\mathbf{b}_z)^j, \end{equation} where $W$ denotes a weight matrix and $\mathbf{b}$ stands for a bias vector. The update gate $z_t^j$ decides how much the activation $h_t^j$ is updated with the new candidate activation $\tilde{h}_t^j$. Thus, when $z_t^j$ is close to zero, the hidden state remains almost unchanged in the next time step. In contrast, when $z_t^j$ is close to one, the hidden state will be overwritten by a new version. In this way, the update gate enables the GRU to maintain important features over many time steps. The candidate activation $\tilde{h}_t^j$ is computed mainly from the input $\mathbf{x}_t$, the reset gate $\mathbf{r}_t$, and the previous hidden activation $\mathbf{h}_{t-1}$, as follows \begin{equation} \tilde{{h}}_t^j=\mathrm{tanh}(W_{xh}\mathbf{x}_t+W_{hh}(\mathbf{r}_t\odot\mathbf{h}_{t-1})+\mathbf{b}_h)^j, \end{equation} where $\odot$ is an element-wise multiplication, and $\mathbf{r}_t$ is a set of reset gates. Here, the reset gate ${r}_t^j$ decides how much the previous activation ${h}_{t-1}^j$ impacts the candidate activation $\tilde{{h}}_t^j$. Only when ${r}_t^j$ equals zero will the candidate activation be overwritten by the current inputs. Similar to the update gate, the $j$-th reset gate is computed by \begin{equation}\label{eq:4} {r}_t^j=\mathrm{sigm}(W_{xr}\mathbf{x}_t+W_{hr}\mathbf{h}_{t-1}+\mathbf{b}_r)^j. \end{equation} \subsection{Audio Sequence Representation Learning} \begin{figure}[!t] \centering \includegraphics[trim=2.2cm 0cm 2cm 0cm, width=2.8in,height=1.2in]{seqAE.pdf} \caption{Unsupervised framework for learning of audio sequence representations with a sequence-to-sequence Recurrent Neural Network (RNN) encoder-decoder.} \label{fig:rnnAutoencoder} \end{figure} The proposed unsupervised representation learning framework of audio sequences is illustrated in Fig.~\ref{fig:rnnAutoencoder}, which comprises a GRU-RNN {\em encoder} and a GRU-RNN {\em decoder}. The primary objective of this framework is to transform an arbitrary-length audio segment, given as a sequence of feature vectors $\mathbf{x}=(\mathbf{x}_1,\mathbf{x}_2, \ldots,\mathbf{x}_T)$, into {\em one fixed-length} vector representation $\mathbf{v}$.
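To make the GRU update rules in Eqs.~(\ref{eq:1})--(\ref{eq:4}) above concrete, the following minimal NumPy sketch performs one GRU step; the weight matrices and bias vectors are assumed to be given (e.\,g., already trained), and the naming of the dictionary keys is ours.
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, W, b):
    """One GRU update following Eqs. (1)-(4). W is a dict of weight
    matrices (keys 'xz', 'hz', 'xr', 'hr', 'xh', 'hh'); b is a dict of
    bias vectors (keys 'z', 'r', 'h')."""
    z = sigmoid(W['xz'] @ x_t + W['hz'] @ h_prev + b['z'])   # update gate
    r = sigmoid(W['xr'] @ x_t + W['hr'] @ h_prev + b['r'])   # reset gate
    h_cand = np.tanh(W['xh'] @ x_t + W['hh'] @ (r * h_prev) + b['h'])
    return (1.0 - z) * h_prev + z * h_cand                   # Eq. (1)
\end{verbatim}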
Specifically, the RNN encoder reads the acoustic features $\mathbf{x}_t$ sequentially and {\em reversely}, as done in~\cite{Sutskever14-Sequence}, and the hidden state $\mathbf{h}_t$ is updated accordingly by \begin{equation} \mathbf{h}_t=f(\mathbf{x}_t,\mathbf{h}_{t+1},\mathbf{z}_{t},\mathbf{r}_{t}) \end{equation} where $f$ denotes the GRU activation function as introduced in the above section. After the last acoustic feature $\mathbf{x}_1$ has been read and processed, the hidden state $\mathbf{h}_1$ of the RNN encoder can be viewed as the learnt vector representation $\mathbf{v}$ of the whole input sequence. The decoder aims to reconstruct the input sequence of the encoder in the {\em `normal'} direction, as $\mathbf{\hat{x}}=(\mathbf{\hat{x}}_1, \mathbf{\hat{x}}_{2}, \ldots, \mathbf{\hat{x}}_{T})$. To do this, the last hidden state $\mathbf{h}_1$ of the encoder is copied to the decoder as its initial hidden state, i.\,e., $\mathbf{\hat{h}}_1=\mathbf{h}_1$. Then, the decoder predicts the feature vector $\mathbf{\hat{x}}_{t}$ given its hidden state $\mathbf{\hat{h}}_{t}$, update gate $\mathbf{\hat{z}}_{t-1}$, reset gate $\mathbf{\hat{r}}_{t-1}$, and its input $\mathbf{x}_{t-1}$, that is, \begin{equation} \mathbf{{\hat{x}}}_t = g(\mathbf{x}_{t-1},\mathbf{\hat{h}}_{t},\mathbf{\hat{z}}_{t-1},\mathbf{\hat{r}}_{t-1}), \end{equation} where $g$ is the GRU activation function as well. Note that, rather than using the previously predicted feature vectors, we utilise the original feature sequence as the decoder input, which is motivated by the findings of~\cite{Bengio15-Scheduled}: feeding the original feature sequence helps to improve the model's robustness to its own errors during training. The RNN encoder and decoder are jointly trained by minimising the reconstruction error, measured by the averaged Mean Square Error (MSE): \begin{equation} \frac{1}{T}\sum_{t=1}^T\parallel\mathbf{x}_t-\mathbf{\hat{x}}_t\parallel^2. \end{equation} The whole training process is carried out in a fully {\em unsupervised} manner, since no label information is required at all. Finally, when the audio sequences are fed into the pre-trained encoder-decoder, the last hidden state of the encoder for each audio sequence is taken as its fixed-dimensional vector representation $\mathbf{v}$. Since this vector is able to reconstruct the sequence via the RNN decoder, we believe that it contains the whole sequence information in a compressed way. \section{Experiments and Results} \label{sec:experiments} In this section, we evaluate the effectiveness and the robustness of the proposed framework for learning audio sequence representations. Extensive experiments are conducted on a large acoustic event database, and the empirical results are compared with other state-of-the-art baselines.
\begin{table}[!t] \centering \caption{Quantitative description of Findsounds2016.} \vspace{.1cm} \begin{threeparttable} \begin{tabular}{lrrr} \toprule \bf category & \bf \# classes & \bf \# segments & \bf duration\\ \midrule Animals & 67 & 1\,998 & 1h 53m \\ Birds & 102 & 1\,766 & 1h 53m \\ Household & 53 & 2\,097 & 1h 27m \\ Insects & 7 & 235 & 16m \\ Mayhem & 35 & 1\,471 & 50m \\ Miscellaneous & 70 & 2\,628 & 1h 45m \\ Musical Instruments & 57 & 4\,112 & 3h 35m \\ Nature & 18 & 754 & 1h 3m \\ Office & 18 & 1\,188 & 50m \\ People & 45 & 2\,165 & 1h 44m \\ Sports Recreation & 22 & 266 & 9m \\ Tools & 21 & 296 & 18m \\ TV Movies & 22 & 645 & 24m \\ Vehicles & 33 & 1\,714 & 2h 9m \\ \midrule \bf{Total} & \bf 570 & \bf 21\,335 & \bf 18h 23m \\ \bottomrule \end{tabular} \end{threeparttable} \label{tab:findsounds} \vspace{-.0cm} \end{table} \subsection{Database Description} The database selected for our experiments -- {\em Findsounds2016} -- was among the largest publicly available databases for AEC research at the time of conducting the experiments~\cite{Piczak15-ESC:}. It was collected from the website `www.findsounds.com', which provides a comprehensive set of event-annotated audio recordings from real environments, ranging from nature (e.\,g., nature and animals) through human beings (e.\,g., people) to manufactured articles (e.\,g., musical instruments and vehicles). Specifically, we discarded two categories (i.\,e., Holidays and Noisemakers) from the original dataset due to sample overlap with other categories, resulting in a final set of 14 common acoustic-event categories. Each category further includes a number of classes (subsets), giving rise to a total of 570 classes and 21\,335 independent audio segments, with a total duration of more than 18 hours. More details on the number of segments and recording time per category are summarised in Table~\ref{tab:findsounds}. The average duration over all audio segments is 3.1\,s, with a maximum of 10.0\,s and a minimum of 0.1\,s. {In detail, Fig.~\ref{fig:dataDistribution} illustrates the duration distribution for each acoustic-event category over the whole database. Obviously, these duration distributions overlap strongly and mainly range from one to six seconds.} Moreover, owing to the diversity of the audio formats in the original dataset retrieved from the web, we converted all audio files into a uniform format with 16-bit encoding, mono channel, and 16\,kHz sampling rate. \begin{figure}[!t] \centering \resizebox{0.45\textwidth}{2.7in}{\input{box_duration.pgf}} \caption{Duration distribution of audio segments for each acoustic-event category over the whole Findsounds2016 database.} \label{fig:dataDistribution} \end{figure} For training the back-end classifier, each subset of the Findsounds2016 database was equally and sequentially partitioned into a training set (7\,312 instances), test set (7\,106 instances), and validation set (6\,917 instances). In addition, we always upsampled the training set to alleviate the unbalanced class-distribution problem. \subsection{Experimental Setup} \begin{figure*} \centering \subfigure[{SVMs}]{ \includegraphics[width=0.48\textwidth]{SVM_WF1.pdf} } \subfigure[{GRU-RNNs}]{ \includegraphics[width=0.48\textwidth]{RNN_WF1.pdf} } \vspace{0cm} \caption{Performance comparison (F1-measure) between the {\em learnt audio sequence representations} via a variety of RNN Encoder-Decoders (ED) and four {\em hand-crafted features} on the {\em validation} set of Findsounds2016.
Performance was evaluated by (a) the SVMs with various complexity values, $C$, or (b) the GRU-RNNs with various numbers of hidden units, $n$.} \label{fig:resultsValid} \vspace{0cm} \end{figure*} For training the RNN encoder-decoders to learn the audio sequence representations, we could theoretically feed the raw signals into the network directly. However, the long sequence length leads to a high demand on computational resources. As MFCCs have repeatedly been verified to be efficient features for most acoustic recognition tasks, and as we had limited computational resources, we extracted 13 MFCCs (including one logarithmic energy) per frame using a window size of 60\,ms at a step size of 60\,ms. Compared with the conventional parameters for extracting MFCCs (i.\,e., window size: 25\,ms, step size: 10\,ms), the ones we selected further reduce the sequence length and significantly speed up the network training process. In this case, the longest sequence of Findsounds2016 has 167 MFCC feature vectors. Finally, all the extracted features were standardised by the means and variances of the training set. To accelerate the RNN encoder-decoder training process, we used a mini-batch of 64 sequences as network input. In this case, we padded zeros at the end of each sequence to make them equally long. The padded zeros, however, are ignored when calculating the reconstruction errors (i.\,e., the training loss) by setting their weights to zero. Further, to monitor the learning process, we checked the training loss after every 500 batches. To update the network weights, we employed classic Stochastic Gradient Descent (SGD) with an initial learning rate of 0.7. This value was dynamically reduced with a decay factor of 0.99 whenever the training loss had not improved over the previous three checking points. Additionally, a gradient norm clipping operation was performed with a clipping ratio of 5 to handle the gradient blow-up problem. The whole learning process was stopped once there was no training loss improvement over 20 successive checking points. To assess the discrimination and robustness of the audio sequence representations learnt via the pre-trained RNN encoder-decoder, we further adopted two of the most frequently used classification models. One of them is the SVM trained with the sequential minimal optimisation algorithm. The complexity value $C$ was optimised in \{0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5\} on the validation set. The other is the GRU-RNN with one hidden layer, where the number of hidden units was optimised in \{128, 256, 512, 1024\} on the validation set. Additionally, the GRU-RNNs were trained with Adam SGD with an initial learning rate of $10^{-4}$, to which an exponential decay was applied every $10^{4}$ steps with a decay rate of 0.96. Further, the gradient norm clipping ratio was set to 1.2, and the batch size was set to 128. For a fair comparison, the training processes of all networks were stopped at the 500th epoch.
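As an illustration of how the padded zeros can be excluded from the reconstruction loss, the following minimal NumPy sketch (our simplification, not the MXNet implementation used in this work) computes the averaged MSE of Eq.~(7) over a zero-padded mini-batch:
\begin{verbatim}
import numpy as np

def masked_mse(x, x_hat, lengths):
    """Averaged MSE over a zero-padded mini-batch.
    x, x_hat: arrays of shape (batch, T_max, d); lengths: the true
    length of each sequence, so padded steps receive weight zero."""
    batch, T_max, _ = x.shape
    steps = np.arange(T_max)[None, :]              # (1, T_max)
    mask = steps < np.asarray(lengths)[:, None]    # (batch, T_max)
    se = ((x - x_hat) ** 2).sum(axis=2)            # per-frame squared error
    return (se * mask).sum() / mask.sum()
\end{verbatim}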
To measure the system performance, we utilised the {\em F1-measure} (F1) as the primary metric, {mainly for two reasons: i) F1 provides an overall performance picture in a multi-class setting, as it is calculated by the harmonic mean of unweighted precision and recall; ii) F1 is among the most widely used evaluation metrics in AEC, for example, in the series of challenges on Detection and Classification of Acoustic Scenes and Events (DCASE)~\cite{Stowell15-Detection,Mesaros16-TUT}.} Additionally, we took the {\em Unweighted Accuracy} (UA, or unweighted recall) as a complementary metric. It is obtained as the sum of the accuracies over all classes divided by the number of classes. Thus, UA also indicates the system performance well in a class-unbalanced task. {Further, unless stated otherwise, a one-sided $z$-test was undertaken to evaluate the statistical significance of performance improvements.} \subsection{Compared Features of Audio Sequence} To verify the effectiveness of the learnt representations of audio sequences, we selected one BoAW feature set and three functional-based feature sets for comparison. All these feature sets are widely used for AEC or related acoustic tasks (e.\,g., emotion recognition) nowadays. A brief description of the four feature sets is as follows: \begin{itemize} \item {\it BoAW} feature set: The codebook includes 2\,048 audio words. Each frame of the sequence is then assigned to the nearest 256 audio words. Afterwards, a normalised histogram is applied to convert the word occurrence counts into a fixed-length vector~\cite{Lu14-Sparse}. \item the extended Geneva Minimalistic Acoustic Parameter Set ({\it eGeMAPS}): It consists of 88 important acoustic attributes, which were selected by extensive experiments on acoustic pattern classification tasks~\cite{Eyben16-Real}. \item the 2011 Audio-Visual Emotion recognition Challenge \\({\it AVEC11}) feature set: It contains 1\,941 attributes and was used in~\cite{Zhang12-Semi} for AEC. \item the INTERSPEECH 2013 Computational Paralingusitics ChallengE ({\it ComParE13}) feature set: It includes a large set of up to 6\,373 acoustic attributes~\cite{Eyben16-Real}. \end{itemize} \subsection{Results} To evaluate the robustness of the proposed framework, we constructed the RNN encoder-decoders with several structures, extending them in either a {\em deep} or a {\em wide} direction. To assess deeper networks, we fixed the number of hidden units per layer at 512 and set the number of hidden layers to one, two, or three, resulting in three RNN encoder-decoders of different depths. To assess wider networks, we fixed the depth at one hidden layer but set the number of hidden units to 512, 1\,024, or 2\,048, leading to two additional RNN encoder-decoders of different widths. Note that the RNN encoders and corresponding decoders always share the same structures. Fig.~\ref{fig:resultsValid} illustrates the performance of the learnt representations obtained by the diverse RNN encoder-decoders, as well as the four conventional feature sets based on BoAW or functionals (i.\,e., eGeMAPS, AVEC11, and ComParE13). The performance was estimated on the validation set of Findsounds2016 for 14 acoustic-event categories. Specifically, Fig.~\ref{fig:resultsValid}~(a) depicts the feature performance when taking SVMs as the discriminative models. From this figure, one can clearly observe that the results delivered by the learnt representations are remarkably higher than those of the other four state-of-the-art baselines.
The best result is achieved at 85.6\,\% of F1 by using the representations learnt by the RNN encoder-decoder with one hidden layer of 2\,048 hidden units (ED: 2048-1). This result is almost double that of the best baseline, achieved by using the ComParE13 or AVEC11 feature set (i.\,e., 50.2\,\% of F1). Further, when increasing the depth of the neural networks from one to two and three layers, one can see a steady and significant performance improvement. Similarly, when extending the width of the neural networks from 512 to 1\,024 and 2\,048 units, again a large performance improvement is obtained. This indicates that appropriately increasing the complexity of the sequence-to-sequence model, either in a deep way or in a wide way, can notably improve the effectiveness of the learnt representations. \begin{table}[!t] \centering \caption{Performance comparison (F1 and UA) between the {\em learnt audio sequence representations} via a variety of RNN Encoder-Decoders (ED) and four {\em hand-crafted features} on the {\em test} set of Findsounds2016. Performance was evaluated by both SVMs and GRU-RNNs.} \vspace{.15cm} \begin{threeparttable} \begin{tabular}{c|cc|cc} \toprule $[\%]$ & \multicolumn{2}{c|}{\bf SVMs} & \multicolumn{2}{c}{\bf GRU-RNNs} \\ feature types & F1 & UA & F1 & UA \\ \midrule BoAW &41.9 &35.3 &44.4 &39.5 \\ eGeMAPS &36.4 &34.9 &47.6 &41.4 \\ AVEC11 &50.4 &42.8 &54.0 &48.7 \\ ComParE13 &49.7 &43.6 &53.2 &46.2 \\ \midrule ED: 512-1 &58.1 &52.9 &61.1 &54.9 \\ ED: 512-2 &68.4 &63.4 &71.8 &67.4 \\ ED: 512-3 &80.6 &76.6 &80.5 &78.4 \\ ED: 1024-1 &72.0 &65.8 &72.6 &70.0 \\ ED: 2048-1 &\bf85.2 &\bf80.4 &\bf89.0 &\bf87.6 \\ \bottomrule \end{tabular} \end{threeparttable} \label{tab:resultsTest} \end{table} Similar observations can be made in Fig.~\ref{fig:resultsValid}~(b), where GRU-RNNs were employed as the discriminative models. Generally speaking, however, GRU-RNNs yield better performance than SVMs in all cases. The best result further rises to 88.8\,\% of F1. Additionally, an interesting observation is that the learnt representations perform better with relatively simple classification networks, whereas the hand-crafted features tend to require relatively complex classification networks to obtain better results. This indicates that the learnt representations can be modelled more easily by a simple machine learning model than the selected hand-crafted features. We further evaluated the learnt representations on the test set by employing both SVMs and GRU-RNNs for classification, with the best parameter settings optimised on the validation set. Table~\ref{tab:resultsTest} displays the corresponding results in terms of F1 and UA. Consistently, the RNN encoder-decoder with 2\,048 hidden units offers the most effective features, contributing to 85.2\,\% of F1 and 80.4\,\% of UA by means of SVMs, and 89.0\,\% of F1 and 87.6\,\% of UA by means of GRU-RNNs. Compared with the best baseline, they provide absolute gains as high as 35.0\,\% of F1 and 38.9\,\% of UA. To further investigate the effectiveness of the learnt representations, we randomly selected 20 samples from each category and projected them into the leading two discriminant directions found by Linear Discriminant Analysis (LDA). The visualisation of the audio sequence representations is displayed in Fig.~\ref{fig:representVisual}. Notably, the samples belonging to different categories are well separated, which is consistent with the high prediction accuracy.
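The LDA-based visualisation can be reproduced along the following lines; this is a sketch using scikit-learn, where \texttt{V} denotes the matrix of learnt representations and \texttt{y} the category labels, both assumed given.
\begin{verbatim}
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def project_lda_2d(V, y, n_per_class=20, seed=0):
    """Select n_per_class random samples per category and project the
    learnt representations onto the two leading discriminant directions."""
    rng = np.random.default_rng(seed)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), n_per_class, replace=False)
        for c in np.unique(y)])
    lda = LinearDiscriminantAnalysis(n_components=2).fit(V[idx], y[idx])
    return lda.transform(V[idx]), y[idx]   # 2-D points and their labels
\end{verbatim}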
{To intuitively demonstrate the best performance, achieved by the GRU-RNN based classifier with one hidden layer and 128 hidden units, Fig.~\ref{fig:confusionMatrix} illustrates the prediction confusion matrix on the test set, obtained by using the vector representations learnt by the RNN encoder-decoder comprising one hidden layer with 2\,048 hidden units. Generally speaking, the acoustic segments represented by the proposed vectors can be well assigned to their corresponding categories. In more detail, one can notice that the category `Miscellaneous' (labelled as 5 in Fig.~\ref{fig:confusionMatrix}) is more easily misclassified as one of the others, which is in line with the fact that its contents include many acoustic events quite similar to those of the other categories. } \begin{figure} \centering \resizebox{0.48\textwidth}{2.15in}{\input{LDA.pgf}} \caption{Visualisation of the audio sequence representations learnt from the RNN encoder-decoder. 20 samples per class are randomly selected through the whole dataset and projected into the leading two discriminant directions found by Linear Discriminant Analysis (LDA). Each sample is marked by the number of the category (0$\sim$13) it belongs to. } \label{fig:representVisual} \vspace{-.0cm} \end{figure} \begin{figure} \centering \resizebox{0.45\textwidth}{2.8in}{\input{confusion_matrix.pgf}} \caption{Confusion matrix of predictions on the test set obtained by the best GRU-RNN classification model (one hidden layer with 128 hidden units). The labels from 0 to 13 sequentially indicate the categories from `Animals' to `Vehicles' as listed in Table~\ref{tab:findsounds} in the same order. } \label{fig:confusionMatrix} \end{figure} In addition, we performed the same experiments on the acoustic-event classes. Rather than utilising all 570 classes, we discarded those classes having extremely sparse samples (fewer than 20). This leads to a subset of 229 selected classes, and slightly smaller training (6\,277 instances), test (6\,202 instances), and validation (6\,122 instances) sets. Table~\ref{tab:resultsSubclasses} shows the corresponding results for the various features and representations. Interestingly, the learnt representations consistently outperform the frequently used feature sets, and yield the highest F1 and UA of 47.7\,\% and 39.0\,\%, respectively, for 229 types of acoustic events. \begin{table}[!t] \centering \caption{Performance comparison (F1 and UA) for classifying {\bf 229} classes of acoustic events between the {\em learnt audio sequence representations} via a variety of RNN Encoder-Decoders (ED) and four {\em hand-crafted features} on the {\em test} set of Findsounds2016.
Performance was evaluated by both SVMs and GRU-RNNs.} \vspace{.15cm} \begin{threeparttable} \begin{tabular}{c|cc|cc} \toprule $[\%]$ & \multicolumn{2}{c|}{\bf SVMs} & \multicolumn{2}{c}{\bf GRU-RNNs} \\ feature types & F1 & UA & F1 & UA \\ \midrule BoAW &18.5 &17.8 &17.7 &17.5 \\ eGeMAPS &18.2&20.1&21.8&20.9 \\ AVEC11 &26.8&23.6&20.0&18.8 \\ ComParE13 &27.1&23.8&23.1&21.5 \\ \midrule ED: 512-1 &19.9 &20.1 &25.6&23.0 \\ ED: 512-2 &29.0 &26.4 &31.5&27.3 \\ ED: 512-3 &34.6 &\bf32.0 &43.2&36.5 \\ ED: 1024-1 &24.5 &23.8 &32.6&28.5 \\ ED: 2048-1 &\bf35.1 &30.8 &\bf47.7&\bf39.0 \\ \bottomrule \end{tabular} \end{threeparttable} \label{tab:resultsSubclasses} \end{table} \section{Conclusions} \label{sec:conclusions} In this paper, we proposed an unsupervised framework to learn the essential patterns of acoustic events that are embedded throughout the whole audio sequence. In this framework, a Recurrent Neural Network (RNN) based sequence-to-sequence encoder-decoder is used, where the inputs are the acoustic feature vectors in reverse order and the targets are their counterparts in normal order. This encoder-decoder is trained without any category information, such that it has great potential to exploit large amounts of unlabelled real-world data. We then extracted the bottleneck features as the audio sequence representations for acoustic event classification, and evaluated them with traditional machine learning algorithms. This framework can handle audio sequences of arbitrary duration and compress them into vector representations of a fixed dimension. Since the learnt representation can be well recovered to its original version by the decoder, it is thus supposed to contain the most important sequence information. The effectiveness and robustness of the proposed framework were extensively examined in experiments on a large dataset, which raised the state-of-the-art baselines to a significantly higher level. {Encouraged by the achieved results, we will further evaluate our proposed method on the recently released weakly labelled dataset AudioSet~\cite{Gemmeke17-Audio}. We believe that the proposed representation learning approach is a major breakthrough in the development of RNN-based encoder-decoder models, which could potentially lead to a range of exciting applications well beyond our chosen exemplary application. These applications, which are highly characterised by sequential patterns in either audio or video signals, include activity detection, emotion recognition, polyphonic sound tagging, and the like. } \section*{Acknowledgment} \begin{wrapfigure}{l}{0.07\textwidth} \vspace{-15pt} \begin{center} \includegraphics[width=0.07\textwidth]{eu_small.png} \end{center} \vspace{-10pt} \end{wrapfigure} \noindent This work was supported by the European Union's Seventh Framework Programme through the ERC Starting Grant No.\ 338164 (iHEARu), and Horizon 2020 Programme through the Research Innovation Action No.\ 645094 (SEWA). \bibliographystyle{IEEEtran}
\section{Conclusion} \label{sec:conclusion} In this paper we presented a system that can be seen as a step towards solving end-to-end scene text recognition, using only a single multi-task deep neural network. We trained the text detection component of our model in a semi-supervised way and are able to extract the localization results of the text detection component. The network architecture of our system is simple, but it is not easy to train, as a successful training requires extensive pre-training on easier sub-tasks before the model can converge on the real task. We also showed that the same network architecture can be used to reach competitive or state-of-the-art results on a range of different public benchmark datasets for scene text detection/recognition. At the current state we note that our models are not fully capable of detecting text in arbitrary locations in the image, as we saw during our experiments with the \ac{FSNS} dataset. Right now our model is also constrained to a fixed maximum number of textlines/characters that can be detected at once. In our future work we want to redesign the network in a way that makes it possible for the network to determine the number of textlines in an image by itself. \section{Experiments} \label{sec:experiments} In this section we evaluate our presented network architecture on several standard scene text detection/recognition datasets. We present the results of experiments for three different datasets, where the difficulty of the task at hand increases for each dataset. We begin with experiments on the SVHN dataset \cite{Netzer2011Reading}, which we used to prove that our concept as such is feasible. The second type of dataset we performed experiments on consists of datasets for focused scene text recognition, where we explored the performance of our model when it comes to finding and recognizing single characters. The third dataset we experimented with was the \acf{FSNS} dataset \cite{Smith2016EndToEnd}, which is the most challenging dataset we used, as it contains a vast amount of irregular, low-resolution text lines that are more difficult to locate and recognize than the text lines from the SVHN dataset. We begin this section by introducing our experimental setup. We will then present the results and characteristics of the experiments for each of the aforementioned datasets. \subsection{Experimental Setup} \label{ssec:experimental_setup} \paragraph{Localization Network} The localization network used in every experiment is based on the ResNet architecture \cite{He2016Deep}. The input to the network is the image in which text shall be localized and later recognized. Before the first residual block the network performs a $3 \times 3$ convolution followed by a $2 \times 2$ average pooling layer with stride 2. After these layers three residual blocks with two $3 \times 3$ convolutions, each followed by batch normalization \cite{Ioffe2015Batcha}, are used. The number of convolutional filters is 32, 48 and 48, respectively, and ReLU \cite{Nair2010Rectified} is used as the activation function for each convolutional layer. A $2 \times 2$ max-pooling with stride 2 follows after the second residual block. The last residual block is followed by a $5 \times 5$ average pooling layer, and this layer is followed by a \ac{BLSTM} with 256 hidden units. For each time step of the \ac{BLSTM} a fully connected layer with 6 hidden units follows.
This layer predicts the affine transformation matrix that is used to generate the sampling grid for the bilinear interpolation. As rectification of scene text is beyond the scope of this work, we disabled skew and rotation in the affine transformation matrices by setting the corresponding parameters to 0. We will discuss the rectification capabilities of Spatial Transformers for scene text detection in our future work. \paragraph{Recognition Network} The inputs to the recognition network are $N$ crops from the original input image that represent the text regions found by the localization network. The recognition network has the same structure as the localization network, but the number of convolutional filters is higher: 32, 64 and 128, respectively. Depending on the experiment we either used an ensemble of $T$ independent softmax classifiers as used in \cite{Goodfellow2014MultiDigit} and \cite{Jaderberg2014Deep}, where $T$ is the maximum length that a word may have, or we used \ac{CTC} with best path decoding as used in \cite{He2016Reading} and \cite{Shi2016EndToEnd}. \paragraph{Implementation} We implemented all our experiments using MXNet \cite{Chen2015Mxnet}. We conducted all our experiments on a workstation equipped with an Intel(R) Core(TM) i7-6900K CPU, 64 GB RAM and 4 TITAN X (Pascal) GPUs. \subsection{Experiments on the SVHN dataset} \label{ssec:svhn_experiments} With our first experiments on the SVHN dataset \cite{Netzer2011Reading} we wanted to prove that our concept works and can be used with real world data. We therefore first conducted experiments similar to those in \cite{Jaderberg2015Spatial} on SVHN image crops with a single house number in each crop, where each crop is centered around the number and also contains background noise. \autoref{tab:svhn_results} shows that we are able to reach competitive recognition accuracies. \begin{table} \begin{center} \begin{tabular}{|l|c|} \hline Method & 64px \\ \hline Maxout CNN \cite{Goodfellow2014MultiDigit} & 96 \\ ST-CNN \cite{Jaderberg2015Spatial} & 96.3 \\ \hline Ours & 95.2 \\ \hline \end{tabular} \end{center} \caption{Sequence recognition accuracies on the SVHN dataset, when recognizing house numbers on crops of $64 \times 64$ pixels, following the experimental setup of \cite{Goodfellow2014MultiDigit}} \label{tab:svhn_results} \end{table} Based on this experiment we wanted to determine whether our model is able to detect different lines of text that are arranged in a regular grid or placed at random locations in the image. In \autoref{fig:svhn_grid_dataset} we show samples from two purpose-built datasets\footnote{datasets are available here: \url{https://bartzi.de/research/stn-ocr}} that we used for our other experiments based on SVHN data. We found that our network performs well on the task of finding and recognizing house numbers that are arranged in a regular grid. An interesting observation we made during training on this data was that we achieved our best results with a two-step training procedure. The first step was to train the complete model from scratch (all weights initialized randomly) and then train the model on the same data again, but this time with the localization network pre-initialized with the weights obtained from the first training and the recognition net initialized with random weights. This strategy leads to better localization results of the localization network and hence improved recognition results.
During our experiments on the second dataset, created by us, we found that it is not possible to train a model from scratch that can find and recognize more than two textlines scattered across the whole image. It is possible to train such a network by first training the model on easier tasks (few textlines, textlines closer to the center of the image) and then gradually increasing the difficulty of the task. In the supplementary material we provide short video clips that show how the network explores the image while learning to detect text, for a range of different experiments. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{images/svhn_data_one_line.eps} \end{center} \caption{Samples from our generated datasets, including BBoxes predicted by our model. \textit{Left:} Sample from regular grid dataset, \textit{Right:} Sample from dataset with randomly positioned house numbers.} \label{fig:svhn_grid_dataset} \end{figure} \subsection{Experiments on Robust Reading Datasets} \label{ssec:icdar2013_experiments} In our next experiments we used datasets where text regions are already cropped from the input images. We wanted to see whether our text localization network can be used as an intelligent sliding window generator that adapts to irregularities of the text in the cropped text region. Therefore we trained our recognition model using \ac{CTC} on a dataset of synthetic cropped word images that we generated using our own data generator, which works similarly to the data generator introduced by Jaderberg \etal \cite{Jaderberg2014Synthetic}. In \autoref{tab:focused_scene_text_results} we report the recognition results of our model on the ICDAR 2013 robust reading \cite{Karatzas2013Icdar}, the Street View Text (SVT) \cite{Wang2011EndToEnd} and the IIIT5K \cite{Mishra2012Scene} benchmark datasets. For evaluation on the ICDAR 2013 and SVT datasets, we filtered out all images that contain non-alphanumeric characters and discarded all images that have fewer than 3 characters, as done in \cite{Shi2016Robust,Wang2011EndToEnd}. We obtained our final results by post-processing the predictions using the standard hunspell english (en-US) dictionary. Overall we find that our model achieves state-of-the-art performance for unconstrained recognition models on the ICDAR 2013 and IIIT5K datasets and competitive performance on the SVT dataset. In \autoref{fig:robust_reading_text_localization} we show that our model learns to follow the slope of the individual text regions, demonstrating that our model produces sliding windows in an intelligent way. \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Method & ICDAR 2013 & SVT & IIIT5K \\ \hline Photo-OCR \cite{Bissacco2013Photoocr} & 87.6 & 78.0 & -\\ CharNet \cite{Jaderberg2014Deepa} & 81.8 & 71.7 & -\\ DictNet* \cite{Jaderberg2015Reading} & \textbf{90.8} & 80.7 & - \\ CRNN \cite{Shi2016EndToEnd} & 86.7 & 80.8 & 78.2 \\ RARE \cite{Shi2016Robust} & 87.5 & \textbf{81.9} & 81.9 \\ \hline Ours & \textbf{90.3} & 79.8 & \textbf{86} \\ \hline \end{tabular} \end{center} \caption{Recognition accuracies on the ICDAR 2013, SVT and IIIT5K robust reading benchmarks. Here we only report results that do not use per image lexicons.
(*\cite{Jaderberg2015Reading} is not lexicon-free in the strict sense, as the outputs of the network itself are constrained to a 90k dictionary.)} \label{tab:focused_scene_text_results} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{images/icdar_svt_samples_text.eps} \end{center} \caption{Samples from ICDAR, SVT and IIIT5K datasets that show how well our model finds text regions and is able to follow the slope of the words.} \label{fig:robust_reading_text_localization} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[width=0.99\linewidth]{images/fsns_images_smaller.eps} \end{center} \caption{Samples from the \ac{FSNS} dataset; these examples show that our system is able to detect a range of differently arranged text lines and also recognize the content of these words} \label{fig:fsns_examples} \end{figure*} \subsection{Preliminary Experiments on the \ac{FSNS} dataset} \label{ssec:fsns_experiments} Following our scheme of increasing the difficulty of the task to be solved by the network, we chose the \acf{FSNS} dataset by Smith \etal \cite{Smith2016EndToEnd} as the third dataset for our experiments. The results we report here are preliminary and are only meant to show that our network architecture is also applicable to this kind of data, although it does not yet reach state-of-the-art results. The \ac{FSNS} dataset contains images of French street name signs that have been extracted from Google Streetview. This dataset is the most challenging one for our approach as it \begin{enumerate*}[label={(\arabic*)}] \item contains multiple lines of text with varying length embedded in natural scenes with distracting backgrounds and \item contains a lot of images that do not include the full name of the street. \end{enumerate*} During our first experiments with this dataset we found that our model was not able to converge when trained on the supplied groundtruth. We argue that this is because the labels of the original dataset do not include any hint on which words can be found in which text line. We therefore changed our approach and started with experiments where we tried to find individual words instead of textlines with more than one word. We adapted the groundtruth accordingly and used all images that contain a maximum of three words for our experiments, which leaves us with approximately \SI{80}{\percent} of the data from the original dataset. \autoref{fig:fsns_examples} shows some examples from the \ac{FSNS} dataset where our model correctly localized the individual words and also correctly recognized them. Using this approach we were able to achieve a reasonably good character recognition accuracy of \SI{97}{\percent} on the test set, but only a word accuracy of \SI{71.8}{\percent}. The discrepancy between the character recognition rate and the word recognition rate is caused by the fact that the model we trained for this task uses independent softmax classifiers for each character in a word. A character recognition accuracy of \SI{97}{\percent} still leaves a high probability that at least one classifier makes a mistake, which increases the sequence error.
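A back-of-the-envelope calculation makes this concrete: under the simplifying assumption of independent per-character errors with per-character accuracy $p$ and a hypothetical word length of $T = 10$ characters, the probability of recognizing the whole word correctly is approximately $p^{T} = 0.97^{10} \approx 0.74$, which is of the same order as the observed word accuracy of \SI{71.8}{\percent}. The exact figure depends on the true distribution of word lengths, which we do not model here.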
Automatically detecting and reading text from natural scene images is an important part of systems that can be used for several challenging tasks such as image-based machine translation, autonomous cars or image/video indexing. In recent years the task of detecting and recognizing text in natural scenes has seen much interest from the computer vision and document analysis community. Furthermore, recent breakthroughs \cite{He2016Deep,Jaderberg2015Spatial,Redmon2016You,Ren2015Faster} in other areas of computer vision enabled the creation of even better scene text detection and recognition systems than before \cite{Bigorda2016Textproposalsa,Gupta2016Syntheticb,Shi2016Robust}. Although the problem of \ac{OCR} can be seen as solved for printed document texts, it is still challenging to detect and recognize text in natural scene images. Images containing natural scenes exhibit large variations of illumination, perspective distortions, image qualities, text fonts, diverse backgrounds, \etc. The majority of existing research works developed end-to-end scene text recognition systems that consist of complex two-step pipelines, where the first step is to detect regions of text in an image and the second step is to recognize the textual content of that identified region. Most of the existing works only concentrate on one of these two steps. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{images/schematic_overview.eps} \end{center} \caption{Schematic overview of our proposed system. The input image is fed to a single neural network that consists of a text detection part and a text recognition part. The text detection part learns to detect text in a semi-supervised way, by being jointly trained with the recognition part.} \label{fig:schematic_of_system} \end{figure} In this paper, we present a solution that consists of a single \ac{DNN} that can learn to detect and recognize text in a semi-supervised way. This is contrary to existing works, where text detection and text recognition systems are trained separately in a fully-supervised way. Recent work \cite{Dai2016InstanceAware} showed that \acp{CNN} are capable of learning how to solve complex multi-task problems, while being trained in an end-to-end manner. Our motivation is to use these capabilities of \acp{CNN} and create an end-to-end scene text recognition system that behaves more like a human by dividing the task at hand into smaller subtasks and solving these subtasks independently of each other. In order to achieve this behavior we learn a single \ac{DNN} that is able to divide the input image into subtasks (single characters, words or even lines of text) and solve these subtasks independently of each other. This is achieved by jointly learning a localization network that uses a recurrent spatial transformer \cite{Jaderberg2015Spatial,Snderby2015Recurrent} as attention mechanism and a text recognition network (see \autoref{fig:schematic_of_system} for a schematic overview of the system). In this setting the network only receives the image and the labels for the text contained in that image as input. The localization of the text is learned by the network itself, making this approach semi-supervised. Our contributions are as follows: \begin{enumerate*}[label={(\arabic*)}] \item We present a system that is a step towards solving end-to-end scene text recognition by integrating spatial transformer networks. \item We train our proposed system end-to-end in a semi-supervised way.
\item We demonstrate that our approach is able to reach state-of-the-art/competitive performance on a range of standard scene text detection and recognition benchmarks. \item We provide our code\footnote{\url{https://github.com/Bartzi/stn-ocr}} and trained models\footnote{\url{https://bartzi.de/research/stn-ocr}} to the research community. \end{enumerate*} This paper is structured in the following way: In \autoref{sec:related_work} we outline the work of other researchers related to ours. Section \ref{sec:proposed_system} describes our proposed system in detail and provides best practices on how to train such a system. We show and discuss our results on standard benchmark datasets in \autoref{sec:experiments} and conclude our findings in \autoref{sec:conclusion}. \section{Proposed System} \label{sec:proposed_system} A human trying to find and read text will do so in a sequential manner. The first action is to put attention on a line of text, read each character sequentially and then attend to the next line of text. Most current end-to-end systems for scene text recognition do not behave in that way. These systems rather try to solve the problem by extracting all information from the image at once. Our system first tries to attend sequentially to different text regions in the image and then recognizes the textual content of each text region. In order to do this we created a simple \ac{DNN} consisting of two stages: \begin{enumerate*}[label={(\arabic*)}] \item text detection \item text recognition \end{enumerate*}. In this section we will introduce the attention concept used by the text detection stage and the overall structure of the proposed system. We also report best practices for successfully training such a system. \subsection{Detecting Text with Spatial Transformers} \label{subsec:ps_spatial_transformer_networks} A spatial transformer, proposed by Jaderberg \etal \cite{Jaderberg2015Spatial}, is a differentiable module for \acp{DNN} that takes an input feature map $I$ and applies a spatial transformation to this feature map, producing an output feature map $O$. Such a spatial transformer module is a combination of three parts. The first part is a localization network computing a function $f_{loc}$ that predicts the parameters $\theta$ of the spatial transformation to be applied. These predicted parameters are used in the second part to create a sampling grid, which defines which features of the input feature map should be mapped to the output feature map. The third part is a differentiable interpolation method that takes the generated sampling grid and produces the spatially transformed output feature map $O$. We will briefly describe each component in the following paragraphs. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{images/grid_and_sampler.eps} \end{center} \caption{Operation method of grid generator and image sampler. First the grid generator uses the $N$ affine transformation matrices $A^{n}_{\theta}$ to create $N$ equally spaced sampling grids (red and blue grids on the left side). These sampling grids are used by the image sampler to extract the image pixels at that location, in this case producing the two output images $O^1$ and $O^2$.
The corners of the generated sampling grids provide the vertices of the bounding box for each text region that has been found by the network.} \label{fig:transformation_params_overview} \end{figure} \paragraph{Localization Network} The localization network takes the input feature map $I \in \mathbb{R}^{C \times H \times W}$, with $C$ channels, height $H$ and width $W$, and outputs the parameters $\theta$ of the transformation that shall be applied. In our system we use the localization network ($f_{loc}$) to predict $N$ two-dimensional affine transformation matrices $A^{n}_{\theta}$, where $n \in \{0, \ldots, N - 1\}$: \begin{equation} f_{loc}(I) = A^{n}_{\theta} = \begin{bmatrix} \theta^{n}_1 & \theta^{n}_2 & \theta^{n}_3 \\ \theta^{n}_4 & \theta^{n}_5 & \theta^{n}_6 \\ \end{bmatrix} \end{equation} Here, $N$ is the number of characters, words or text lines the localization network shall localize. The affine transformation matrices predicted in that way allow the network to apply translation, rotation, zoom and skew to the input image; hence the network learns to produce transformation parameters that zoom in on the characters, words or text lines that are to be extracted from the image. In our system the $N$ transformation matrices $A^{n}_{\theta}$ are produced by a feed-forward \ac{CNN} together with a \ac{RNN}. Each of the $N$ transformation matrices is computed using the hidden state $h_n$ for each time-step of the \ac{RNN}: \begin{align} c &= f^{conv}_{loc}(I) \\ h_n &= f^{rnn}_{loc}(c, h_{n-1}) \\ A^{n}_{\theta} &= g_{loc}(h_n) \end{align} where $g_{loc}$ is another feed-forward network, and each transformation matrix $A^{n}_{\theta}$ is conditioned on the globally extracted convolutional features ($f^{conv}_{loc}$) together with the hidden state of the previous time-step. The \ac{CNN} in the localization network is a variant of the well-known ResNet by He \etal \cite{He2016Deep}. We use a variant of ResNet because we found that with this network structure our system learns faster and more successfully, as compared to experiments with other network structures like the VGGNet \cite{Simonyan2015Very}. We argue that this is due to the fact that the residual connections of the ResNet help with retaining a strong gradient down to the very first convolutional layers. In addition to this structure, we also used Batch Normalization \cite{Ioffe2015Batcha} for all our experiments. The \ac{RNN} used in the localization network is a \ac{BLSTM} \cite{Graves2013Hybrid,Hochreiter1997Long} unit. This \ac{BLSTM} is used to generate the hidden states $h_n$, which in turn are used to predict the affine transformation matrices. We used the same network structure for all experiments we report in \autoref{sec:experiments}. \autoref{fig:localization_net_structure} provides a structural overview of this network. \begin{figure*}[t] \begin{center} \includegraphics[width=0.9\textwidth]{images/structure_of_network_2.eps} \end{center} \caption{The network used in our work consists of two major parts. The first is the localization network that takes the input image and predicts $N$ transformation matrices, which are applied to $N$ identical grids, forming $N$ different sampling grids. The generated sampling grids are used in two ways: (1) for calculating the bounding boxes of the identified text regions; (2) for sampling the input image with $N$ sampling grids to extract $N$ text regions.
The $N$ extracted text images are then used in the recognition network to perform text recognition. The whole system is trained end-to-end by only supplying information about the text labels for each text region.} \label{fig:localization_net_structure} \end{figure*} \paragraph{Grid Generator} The grid generator uses a regularly spaced grid $G_o$ with coordinates $y_{h_o},x_{w_o}$, of height $H_o$ and width $W_o$, together with the affine transformation matrices $A^{n}_{\theta}$, to produce $N$ regular grids $G^n_i$ of coordinates $u^n_{i},v^n_{j}$ of the input feature map $I$, where $i \in H_o$ and $j \in W_o$: \begin{equation} \begin{pmatrix} u^n_{i} \\ v^n_{j} \end{pmatrix} = A^{n}_{\theta} \begin{pmatrix} x_{w_o} \\ y_{h_o} \\ 1 \end{pmatrix} = \begin{bmatrix} \theta^{n}_1 & \theta^{n}_2 & \theta^{n}_3 \\ \theta^{n}_4 & \theta^{n}_5 & \theta^{n}_6 \\ \end{bmatrix} \begin{pmatrix} x_{w_o} \\ y_{h_o} \\ 1 \end{pmatrix} \end{equation} During inference we can extract the $N$ resulting grids $G^n_i$, which contain the bounding boxes of the text regions found by the localization network. The height $H_o$ and width $W_o$ can be chosen freely; if they are lower than the height $H$ or width $W$ of the input feature map $I$, the grid generator produces a grid that performs a downsampling operation in the next step. \paragraph{Image Sampling} The $N$ sampling grids $G^n_i$ produced by the grid generator are now used to sample values of the feature map $I$ at their corresponding coordinates $u^n_{i},v^n_{j}$ for each $n \in \{0, \ldots, N-1\}$. Naturally, these points will not always perfectly align with the discrete grid of values in the input feature map. Because of that, we use bilinear sampling, which extracts the value at a given coordinate by bilinearly interpolating the values of the nearest neighbors. With that we define the values of the $N$ output feature maps $O^n$ at a given location $i,j$, where $i \in H_o$ and $j \in W_o$: \begin{equation} O^n_{ij} = \sum^{H}_{h} \sum^{W}_{w} I_{hw} \max(0, 1 - \lvert u^n_{i} - h \rvert) \max(0, 1 - \lvert v^n_{j} - w \rvert) \end{equation} (Note that the sums run over the height $H$ and width $W$ of the input feature map, since $I_{hw}$ indexes the input.) This bilinear sampling is (sub-)differentiable, hence it is possible to propagate error gradients to the localization network using standard backpropagation. The combination of localization network, grid generator and image sampler forms a spatial transformer and can in general be used in every part of a \ac{DNN}. In our system we use the spatial transformer as the first step of our network. The localization network receives the input image as its input feature map and produces a set of affine transformation matrices that are used by the grid generator to calculate the position of the pixels that shall be sampled by the bilinear sampling operation. \subsection{Text Recognition Stage} The image sampler of the text detection stage produces a set of $N$ regions that are extracted from the original input image. The text recognition stage (a structural overview of this stage can be found in \autoref{fig:localization_net_structure}) uses each of these $N$ different regions and processes them independently of each other. The processing of the $N$ different regions is handled by a \ac{CNN}. This \ac{CNN} is also based on the ResNet architecture, as we found that we could only achieve good results if we use a variant of the ResNet architecture for our recognition network.
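As a concrete illustration of the detection stage described in \autoref{subsec:ps_spatial_transformer_networks}, the following minimal NumPy sketch shows how a single affine matrix $A^{n}_{\theta}$ is turned into a sampling grid and how the bilinear kernel of the sampling equation is evaluated. It is an illustration under our own conventions (normalized $[-1,1]$ coordinates and a single-channel feature map), not the code used in our experiments.
\begin{verbatim}
import numpy as np

def affine_grid(A_theta, H_o, W_o):
    # Grid generator: map the regular output grid G_o through the 2x3
    # affine matrix A_theta; coordinates are normalized to [-1, 1].
    y, x = np.meshgrid(np.linspace(-1, 1, H_o),
                       np.linspace(-1, 1, W_o), indexing="ij")
    G_o = np.stack([x.ravel(), y.ravel(), np.ones(H_o * W_o)])
    uv = A_theta @ G_o
    # Convention of this sketch: u is the row coordinate (paired with h
    # below) and v is the column coordinate (paired with w below).
    return uv[1].reshape(H_o, W_o), uv[0].reshape(H_o, W_o)

def bilinear_sample(I, u, v):
    # Image sampler: direct transcription of the bilinear kernel for a
    # single-channel input feature map I of shape (H, W).
    H, W = I.shape
    u_pix = (u + 1) * (H - 1) / 2   # back to pixel coordinates
    v_pix = (v + 1) * (W - 1) / 2
    O = np.zeros(u.shape)
    for h in range(H):
        for w in range(W):
            O += I[h, w] * np.maximum(0, 1 - np.abs(u_pix - h)) \
                         * np.maximum(0, 1 - np.abs(v_pix - w))
    return O

# The identity transform reproduces a resampled version of the input.
I = np.random.rand(32, 100)
u, v = affine_grid(np.array([[1.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0]]), 16, 50)
print(bilinear_sample(I, u, v).shape)   # (16, 50)
\end{verbatim}
The recognition stage consumes the regions $O^n$ produced this way.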
We argue that using a ResNet in the recognition stage is even more important than in the detection stage, because the detection stage needs to receive strong gradients from the recognition stage in order to successfully update the weights of the localization network. The \ac{CNN} of the recognition stage predicts a probability distribution $\hat{y}$ over the label space $L_{\epsilon}$, where $L_{\epsilon} = L \cup \{\epsilon\}$, with $L = \{0-9a-z\}$ and $\epsilon$ representing the blank label. Depending on the task, this probability distribution is either generated by a fixed number of $T$ softmax classifiers, where each softmax classifier is used to predict one character of the given word: \begin{align} x^n &= O^n \\ \hat{y}^n_t &= softmax(f_{rec}(x^n)) \\ \hat{y}^n &= \sum_{t=1}^{T} \hat{y}^n_t \end{align} where $f_{rec}(x)$ is the result of applying the convolutional feature extractor to the sampled input $x$. Another possibility is to train the network using \ac{CTC} \cite{Graves2006Connectionist} and retrieve the most probable labeling by setting $\hat{y}$ to be the most probable labeling path $\pi$, which is given by: \begin{align} p(\pi|x^n) &= \prod^{T}_{t=1} \hat{y}^n_{\pi_t}, \forall \pi \in L^T_{\epsilon} \\ \hat{y}^n_t &= \text{argmax}_{\pi}\, p(\pi|x^n) \\ \hat{y}^n &= B(\sum^T_{t=1} \hat{y}^n_t) \end{align} with $L^T_{\epsilon}$ being the set of all labels of length $T$ and $p(\pi|x^n)$ being the probability that path $\pi \in L^T_{\epsilon}$ is predicted by the \ac{DNN}. $B$ is a function that removes all predicted blank labels and all repeated labels (e.g. $B(\text{-IC-CC-V}) = B(\text{II--CCC-C--V-}) = \text{ICCV}$). \subsection{Model Training} The training set $X$ used for training the model consists of a set of input images $I$ and a set of text labels $L_I$ for each input image. We do not use any labels for training the text detection stage. This stage learns to detect regions of text solely from the error gradients obtained by calculating either the cross-entropy loss or the \ac{CTC} loss between the predictions and the textual labels. During our experiments we found that, when trained from scratch, a network that shall detect and recognize more than two text lines does not converge. The solution to this problem is to perform a series of pre-training steps in which the difficulty gradually increases. Furthermore, we found that the optimization algorithm chosen to train the network has a great influence on the convergence of the network. We found that it is beneficial to use \ac{SGD} for pre-training the network on a simpler task and Adam \cite{Kingma2015Adam} for finetuning the already pre-trained network on images with more text lines. We argue that \ac{SGD} performs better during pre-training because the learning rate $\eta$ is kept constant during a longer period of time, which enables the text detection stage to explore the input images and better find text regions. With decreasing learning rate, the updates in the detection stage become smaller and the text detection stage (ideally) settles on already found text regions. At the same time, the text recognition network can start to use the extracted text regions and learn to recognize the text in those regions. While training the network with \ac{SGD}, it is important to note that choosing too high a learning rate will result in divergence of the model early on.
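A runnable miniature of this two-phase schedule is given below; the linear model and random data are stand-ins for the full network and the scene-text datasets, and PyTorch is our choice of framework for this illustration only.
\begin{verbatim}
import torch
from torch import nn, optim

# Stand-in model and data; in the real system these are the full
# detection + recognition network and the scene-text datasets.
model = nn.Linear(8, 4)
loss_fn = nn.CrossEntropyLoss()   # or a CTC loss, as described above
simple_data = [(torch.randn(16, 8), torch.randint(0, 4, (16,)))
               for _ in range(10)]
hard_data = [(torch.randn(16, 8), torch.randint(0, 4, (16,)))
             for _ in range(10)]

def run(optimizer, data):
    for x, y in data:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

# Phase 1: pre-train on the simpler task with plain SGD and a constant
# learning rate, letting the text detection stage explore the images.
run(optim.SGD(model.parameters(), lr=1e-5), simple_data)

# Phase 2: fine-tune on harder images with Adam, whose adaptive
# per-parameter step sizes keep the detector near regions already found.
run(optim.Adam(model.parameters(), lr=1e-5), hard_data)
\end{verbatim}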
Concretely, we found that initial learning rates between $10^{-5}$ and $5 \times 10^{-7}$ tend to work in nearly all cases, except in cases where the network should only be fine-tuned. Here we found that using Adam is the more reliable choice, as Adam chooses the learning rate for each parameter in an adaptive way and hence does not allow the detection network to explore as radically as it does when using \ac{SGD}. \section{Related Work} \label{sec:related_work} Over the years, a rich variety of approaches to scene text detection and recognition has been developed and published. Nearly all systems use a two-step process for performing end-to-end recognition of scene text. The first step is to detect regions of text and extract these regions from the input image. The second step is to recognize the textual content and return the text strings of these extracted text regions. It is further possible to divide these approaches into three broad categories: \begin{enumerate*}[label={(\arabic*)}] \item Systems relying on hand-crafted features and human knowledge for text detection and text recognition. \item Systems using deep learning approaches together with hand-crafted features, or two different deep networks for each of the two steps. \item Systems that do not consist of a two-step approach but rather perform text detection and recognition using a single deep neural network. \end{enumerate*} We will discuss some of these systems for each category below. \paragraph{Hand-Crafted Features} Early methods relied on hand-crafted features and human knowledge to perform text detection. These systems used features like MSERs \cite{Neumann2010Method}, Stroke Width Transforms \cite{Epshtein2010Detecting} or HOG features \cite{Wang2011EndToEnd} to identify regions of text and provide them to the text recognition stage of the system. In the text recognition stage, sliding window classifiers \cite{Mishra2012Scene}, ensembles of SVMs \cite{Yao2014Strokelets} and k-Nearest-Neighbor classifiers using HOG features \cite{Wang2010Word} were used. All of these approaches use hand-crafted features that have a large variety of hyperparameters requiring expert knowledge to tune correctly for the best results. \paragraph{Deep Learning Approaches} More recent systems replace hand-crafted features in one or both steps of end-to-end recognition systems with approaches based on \acp{DNN}. Gómez and Karatzas \cite{Bigorda2016Textproposalsa} propose a text-specific selective search algorithm that, together with a \ac{DNN}, can be used to detect (distorted) text regions in natural scene images. Gupta \etal \cite{Gupta2016Syntheticb} propose a text detection model based on the YOLO architecture \cite{Redmon2016You} that uses a fully convolutional deep neural network to identify text regions. The text regions identified by these approaches can then be used as input for further systems based on \acp{DNN} that perform text recognition. Bissacco \etal \cite{Bissacco2013Photoocr} propose a complete end-to-end architecture that performs text detection using hand-crafted features. The identified text regions are binarized and then used as input to a deep fully connected neural network that classifies each found character independently. Jaderberg \etal \cite{Jaderberg2015Reading,Jaderberg2014Deep} propose several systems that use deep neural networks for text detection and text recognition.
In \cite{Jaderberg2014Deep}, Jaderberg \etal propose a sliding window text detection approach that slides a convolutional text detection model across the image in multiple resolutions. The text recognition stage uses a single-character \ac{CNN}, which is slid across the identified text region. This \ac{CNN} shares its weights with the \ac{CNN} used for text detection. In \cite{Jaderberg2015Reading}, Jaderberg \etal propose to use a region proposal network with an extra bounding box regression CNN for text detection, and a CNN that takes the whole text region as input and performs classification across a pre-defined dictionary of words, making this approach only applicable to one given language. Goodfellow \etal \cite{Goodfellow2014MultiDigit} propose a text recognition system for house numbers, which has been refined by Jaderberg \etal \cite{Jaderberg2014Deepa} for unconstrained text recognition. This system uses a single \ac{CNN}, which takes the complete extracted text region as input and provides the text contained in that text region. This is achieved by having one independent classifier for each possible character in the given word. Based on this idea, He \etal \cite{He2016Reading} and Shi \etal \cite{Shi2016EndToEnd,Shi2016Robust} propose text recognition systems that treat the recognition of characters from the extracted text region as a sequence recognition problem. He \etal \cite{He2016Reading} use a naive sliding window approach that creates slices of the text region, which are used as input to their text recognition \ac{CNN}. The features produced by the text recognition \ac{CNN} are used as input to a \ac{RNN} that predicts the sequence of characters. In our experiments on pure scene text recognition (see section \ref{ssec:icdar2013_experiments} for more information) we use a similar approach, but our system uses a more sophisticated sliding window approach, where the choice of the sliding windows is automatically learned by the network and not engineered by hand. Shi \etal \cite{Shi2016EndToEnd} utilize a CNN that uses the complete text region as input and produces a sequence of feature vectors, which are fed to a RNN that predicts the sequence of characters in the extracted text region. This approach generates a fixed number of feature vectors based on the width of the text region. That means that for a text region that only contains a few characters, but has the same width as a text region with sufficiently more characters, this approach will produce the same number of feature vectors used as input to the RNN. In our pure text recognition experiments we utilized the strength of our approach to learn to attend to the most important information in the extracted text region, hence producing only as many feature vectors as necessary. Shi \etal \cite{Shi2016Robust} improve their approach by first adding an extra step that utilizes the rectification capabilities of Spatial Transformer Networks \cite{Jaderberg2015Spatial} to rectify the extracted text line. Second, they added a soft-attention mechanism to their network that helps to produce the sequence of characters in the input image. In their work, Shi \etal make use of Spatial Transformers as an extra pre-processing step to make it easier for the recognition network to recognize the text in the image. In our system we use the Spatial Transformer as a core building block for detecting text in a semi-supervised way.
\paragraph{End-to-End Trainable Approaches} The systems presented so far always use a two-step approach for detecting and recognizing text from scene text images. Although recent approaches make use of deep neural networks, they still rely on a substantial amount of hand-crafted knowledge in one of the steps or at the point where the results of both steps are fused together. Smith \etal \cite{Smith2016EndToEnd} propose an end-to-end trainable system that is able to detect and recognize text on French street name signs, using a single \ac{DNN}. In contrast to our system, it is not possible for their system to provide the location of the text in the image; only the textual content can be extracted. Furthermore, the attention mechanism used in our approach shows a more human-like behaviour, because it sequentially localizes and recognizes text from the given image.
\section{Introduction} People exhibit systematic deviations from the predictions of game theory. For example, they do not always act so as to maximize their expected utility in games such as Prisoner's Dilemma (to the extent that their utility is accurately characterized by the payoffs in the game). Many alternative models have been proposed to explain these deviations; the explanations range from players having \emph{other-regarding} preferences, so that they prefer to avoid inequity, or prefer to maximize social welfare (see, e.g., \cite{Bo-Oc,Ch-Ra,Fe-Sc}), to \emph{quantal-response equilibrium}, which assumes that, with some probability, players make mistakes and do not play rationally \cite{MK-Pa95}. The literature here is enormous; a complete bibliography would be almost as long as this paper. Nevertheless, we propose yet another approach to explaining what people do. Our explanation assumes that people have some \emph{tolerance} for not getting the optimal payoff; the degree of tolerance is measured by how large a shortfall from the optimal payoff they find acceptable. This idea is certainly not new: it is implicit in notions like $\epsilon$-equilibrium and \emph{satisficing} \cite{Simon55,Simon56}, although the details are different. Moreover, it is clear that people do have some tolerance. There are many potential reasons for this. First, although we often identify payoffs and utilities, the payoffs in a game may not represent a player's true utility; a player may in fact be indifferent between receiving a payoff of $a$ and $a-t$ if $t$ is sufficiently small. (This is in the spirit of the ``satisficing'' view.) Or there may be a recommended strategy, and some overhead in switching (which again is not being captured in the game's payoffs). \posted{ Or players might work at a coarse level, using a language that can't distinguish situations that a modeler might think should get different utility (cf.~\cite{BHP13}).} For whatever reason, it seems reasonable to assume that players may have some tolerance regarding payoffs. However, there is no reason to believe that all players have the same tolerance. We thus assume that there is a distribution over possible tolerances for each player, captured by a profile $\pi = (\pi_1, \ldots, \pi_n)$ of distributions, where $\pi_i$ is a distribution over the possible tolerances of player $i$. Intuitively, we imagine that we have a large population of players who could be player $i$; if we choose player $i$ at random from this population, then with probability $\pi_i(t)$, she will have tolerance $t$. \fullv{ (Of course, in many applications, it is reasonable to assume that all players are chosen from the same pool, so that $\pi_1 = \cdots = \pi_n$.) We can relate this to more traditional game-theoretic considerations by thinking of these tolerances as representing different \emph{types} of player $i$; that is, a type is associated with a tolerance. There is some psychological evidence to support this viewpoint; specifically, there is some evidence that whether someone uses satisficing-style behavior, and the extent to which it is used, is a personality trait, with a strong genetic component that endures over time \cite{SS11}. } In this setting, we define an equilibrium notion that we call \emph{$\pi$-tolerant equilibrium}.
A profile $\sigma$ of possibly mixed strategies, one for each player, is a $\pi$-tolerant equilibrium if, roughly speaking, for each type $t$ of player $i$ (i.e., each possible tolerance that player $i$ can have), we can assign a mixed strategy to type $t$ in such a way that (1) each of the pure strategies in the mixture is a best response to $\sigma_{-i}$ (i.e., what the other players are doing) and (2) $\sigma_i$ represents the convex combination of what all the types of player $i$ are doing. Intuitively, the other players don't know what type of player $i$ they are facing; $\sigma_i$ describes the aggregate behavior over all types of player $i$. We can show that a Nash equilibrium is a 0-tolerant equilibrium (i.e., if we take $\pi_1 = \cdots = \pi_n$ to be the distribution that assigns probability 1 to players having tolerance 0); moreover, every Nash equilibrium is a $\pi$-tolerant equilibrium for all $\pi$. Similarly, if $\pi_1^\epsilon = \cdots = \pi_n^\epsilon$ all assign probability 1 to players having tolerance $\epsilon$, then a $\pi^{\epsilon}$-tolerant equilibrium is an $\epsilon$-Nash equilibrium. (The converse is not quite true; see Section~\ref{sec:toleranteq}.) After defining $\pi$-tolerant equilibria in Section~\ref{sec:toleranteq}, in Section~\ref{sec:dilemma} we review the definition of social dilemmas, discuss the observed behavior in social dilemmas that we seek to explain, and show how tolerance can explain it. Our interest in social dilemmas is only part of why we are interested in tolerance. We are also interested in taking advantage of tolerance when designing mechanisms. We illustrate the potential in Section~\ref{sec:PD} by investigating this issue in the context of Prisoner's Dilemma. Although Prisoner's Dilemma may seem to be a limited domain, it can model a range of two-player interactions, with appropriate meanings ascribed to the actions of cooperating and defecting. Our analysis of Prisoner's Dilemma with tolerance isolates the factors that determine the equilibrium level of cooperation in the game, providing guidelines (to the extent to which tolerance is indeed the explanation for observed cooperation) for how a designer, who may be able to modify or control the payoffs from certain actions, can adjust them to achieve particular levels of cooperative behavior in equilibrium. \section{$\pi$-Tolerant Equilibrium}\label{sec:toleranteq} We consider normal-form games here. A \emph{normal-form game} is a tuple $\Gamma = (N, (S_{i})_{i \in N}, (u_i)_{i \in N})$, where $N$ is a finite set of \emph{players}, which for convenience we take to be the set $\{1, \ldots, n\}$, $S_{i}$ is the set of pure strategies available to player $i$, which for convenience we also take to be finite, and $u_i$ is $i$'s utility function. As usual, we take $S = S_{1} \times \cdots \times S_n$. A \emph{mixed strategy} for player $i$ is a distribution on $S_i$. Let $\Sigma_i$ denote the set of mixed strategies for player $i$, and let $\Sigma = \Sigma_1 \times \cdots \times \Sigma_n$. Elements of $\Sigma$ are called \emph{mixed-strategy profiles}; given $\sigma \in \Sigma$, we denote by $\sigma_{i}$ the $i$th component of the tuple $\sigma$, and by $\sigma_{-i}$ the element of $\Sigma_{-i}$ consisting of all but the $i$th component of $\sigma$. The utility function $u_i: S \rightarrow \mbox{$I\!\!R$}$ associates with each pure strategy profile a real number, which we can think of as $i$'s utility. We can extend $u_i$ to $\Sigma$ in the obvious way, by linearity.
We take $T_i$ to be the set of possible tolerances for player $i$. Each element of $T_i$ is a non-negative real number. For simplicity in this discussion, we take $T_i$ to be finite, although the definitions that we are about to give go through with minimal change if $T_i$ is infinite (typically, summations have to be changed to integrations). We identify a tolerance with a \emph{type}; it can be viewed as private information about player $i$. Let $\pi_i$ be a distribution on $T_i$, the set of possible types of $i$ (under this viewpoint), and let $\pi = (\pi_1, \ldots, \pi_n)$. We want to define what it means for a mixed-strategy profile $(\sigma_1, \ldots, \sigma_n)$ to be a $\pi$-tolerant equilibrium. The intuition is that $\sigma_i$ represents a population distribution. If $\sigma_i$ puts probability $p_i$ on the pure strategy $s_i$, then a fraction $p_i$ of the population (of agents who could play the role of player $i$) plays $s_i$. Similarly, if $\pi_i$ puts probability $p_i$ on a tolerance $t$, then a fraction $p_i$ of the population of agents who could be player $i$ has type $t$. Given our view that a mixed strategy for player $i$ really represents a population of players each playing a pure strategy, in a $\pi$-tolerant equilibrium we want all players of tolerance $t$ to be playing a mixed strategy such that each strategy in the support is within a tolerance $t$ of being a best response to what the other players are doing.\footnote{We should stress that although we view a mixed strategy for player $i$ as representing a population of players, each playing a pure strategy, nothing in the formal definitions requires this. There could equally well be a single player $i$ playing a mixed strategy.} \begin{definition}\label{def:consistent} {\rm A pure strategy $s_i$ for player $i$ is \emph{consistent with a tolerance $t$ for player $i$ and a mixed-strategy profile $\sigma_{-i}$} for the players other than $i$ if $s_i$ is a $t$-best response to $\sigma_{-i}$; that is, if, for all strategies $s_i'$ for player $i$, $$u_i(s_i',\sigma_{-i}) \le u_i(s_i,\sigma_{-i}) + t.$$} \end{definition} \begin{definition} {\rm $\sigma$ is a \emph{$\pi$-tolerant equilibrium} if, for each player $i$, there is a mapping $g_i$ from the set $T_i$ of possible tolerances of player $i$ to mixed strategies for player $i$ such that the following conditions hold: \begin{itemize} \item[E1.] The support of $g_i(t)$ consists of only pure strategies that are consistent with $t$ and $\sigma_{-i}$. (Intuitively, a player $i$ of type $t$ will play only strategies that are $t$-best responses to $\sigma_{-i}$.) \item[E2.] $\sum_t \pi_i(t) g_i(t) = \sigma_i$. \end{itemize} } \end{definition} Note that if $(\sigma_1, \ldots, \sigma_n)$ is a $\pi$-tolerant equilibrium, then there might not be any type of player $i$ that plays strategy $\sigma_i$. Rather, $\sigma_i$ describes the other players' perception of what a ``random'' instance of player $i$ is doing. Thus, if player $i$ has two possible types, say $t$ and $t'$, where $t$ occurs with probability $1/3$ and $t'$ occurs with probability $2/3$, then E2 requires that $\sigma_i = \frac{1}{3}g_i(t) + \frac{2}{3}g_i(t')$. Every Nash equilibrium is clearly a $\pi$-tolerant equilibrium for all $\pi$: for if $\sigma$ is a Nash equilibrium, then each pure strategy in the support of $\sigma_i$ is a best response to $\sigma_{-i}$, so it must be consistent with $t$ and $\sigma_{-i}$ for all types $t$. Thus, if we take $g_i(t) = \sigma_i$ for all $t$, then E1 and E2 above are clearly satisfied.
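To make Definition~\ref{def:consistent} and condition E1 concrete, here is a small two-player sketch (our own illustration; the payoff matrix is an arbitrary Prisoner's Dilemma with benefit $3$ and cost $1$, in the cost--benefit form used in Section~\ref{sec:dilemma}):
\begin{verbatim}
import numpy as np

def consistent(U_i, sigma_j, s_i, t):
    # Two-player case of the consistency definition: is the pure
    # strategy s_i a t-best response to the other player's mixed
    # strategy sigma_j?  Here U_i[s, s'] = u_i(s, s').
    eu = U_i @ sigma_j          # expected utility of each pure strategy
    return eu.max() <= eu[s_i] + t

# Rows/columns: 0 = cooperate, 1 = defect.
U = np.array([[2.0, -1.0],
              [3.0,  0.0]])
sigma = np.array([0.5, 0.5])
print(consistent(U, sigma, 0, t=1.0))   # True: defecting gains exactly 1
print(consistent(U, sigma, 0, t=0.5))   # False
\end{verbatim}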
Moreover, the Nash equilibria are precisely the $\delta^0$-tolerant equilibria, where $\delta^0 = (\delta_1^0, \ldots, \delta_n^0)$ and $\delta_i^0$ puts probability 1 on type 0. It is similarly easy to check that if $\delta^\epsilon = (\delta_1^\epsilon, \ldots, \delta_n^\epsilon)$, where $\delta_i^\epsilon$ puts probability 1 on type $\epsilon$, then every $\delta^\epsilon$-tolerant equilibrium is an $\epsilon$-Nash equilibrium. The converse is \emph{not} true, at least not the way that $\epsilon$-Nash is typically defined (see, e.g., \cite{M97}). For example, consider Prisoner's Dilemma. As is well known, defecting is the dominant strategy. Given $\epsilon > 0$, there exists a $\delta > 0$ sufficiently small such that the mixed strategy $\delta C + (1-\delta) D$ (cooperating with probability $\delta$ and defecting with probability $1-\delta$) is an $\epsilon$-best response no matter what the other player does; thus, both players using this strategy is an $\epsilon$-Nash equilibrium. However, it is not a $\delta^\epsilon$-tolerant equilibrium if $C$ is not an $\epsilon$-best response. Interestingly, Goldberg and Papadimitriou \cite{GP06} defined a (nonstandard) notion of $\epsilon$-Nash equilibrium where all strategies in the support of a mixed strategy are required to be $\epsilon$-best responses. This corresponds exactly to our notion of $\delta^\epsilon$-tolerant equilibrium. Thus, $\pi$-tolerant equilibrium refines Nash equilibrium and $\epsilon$-Nash equilibrium in an arguably natural way that allows for beliefs regarding agents' tolerance. We can also view it as a generalization of $\epsilon$-Bayes-Nash equilibrium in Bayesian games. Recall that in a Bayesian game, each player $i$ has a type in a set $T_i$. It is typically assumed that there is a (commonly known) distribution over $T = T_1 \times \cdots \times T_n$, and that a player's utility can depend on his type. The notion of \emph{$\epsilon$-Bayes-Nash equilibrium} in a Bayesian game is a natural extension of $\epsilon$-Nash equilibrium. If we take a player's type to be his tolerance, and take all types to agree on the utilities, then a $\pi$-tolerant equilibrium is an $\epsilon$-Bayes-Nash equilibrium in the sense of the Goldberg-Papadimitriou definition, \emph{provided that the $\epsilon$ can depend on the player's type}. That is, rather than having a uniform $\epsilon$, we have a type-dependent $\epsilon$. We believe that focusing on tolerance and its consequences gives more insight than thinking in terms of this nonstandard notion of Bayes-Nash equilibrium; that is what we do in the remainder of the paper. We conclude this section by showing that greater tolerance leads to more equilibria. While this is intuitively clear, the proof \fullv{(which can be found in the appendix)} is surprisingly nontrivial. Given a distribution $\pi_i$, let $F^{\pi_i}$ denote the corresponding cumulative distribution; that is, $F^{\pi_i}(t) = \sum_{t' \le t} \pi_i(t')$. Say that $\pi_i'$ \emph{stochastically dominates} $\pi_i$ if $F^{\pi_i'} \le F^{\pi_i}$; that is, $F^{\pi_i'}(t) \le F^{\pi_i}(t)$ for all $t$. Thus, the probability of getting a tolerance greater than $t$ with $\pi_i'$ is at least as high as the probability of getting a tolerance greater than $t$ with $\pi_i$. Intuitively, $\pi_i'$ stochastically dominates $\pi_i$ if $\pi_i'$ is the result of shifting $\pi_i$ to the right. A profile $\pi' = (\pi_1', \ldots, \pi_n')$ stochastically dominates $\pi = (\pi_1, \ldots, \pi_n)$ if $\pi_i'$ stochastically dominates $\pi_i$ for all $i$.
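Stochastic dominance of finite tolerance distributions is straightforward to check; a minimal sketch, assuming both distributions are given as probability vectors on a common sorted support:
\begin{verbatim}
import numpy as np

def stoch_dominates(pi_prime, pi):
    # F^{pi'} <= F^{pi} pointwise; cumulative sums give the CDFs of
    # distributions on a common sorted support.
    return bool(np.all(np.cumsum(pi_prime) <= np.cumsum(pi) + 1e-12))

# pi' shifts probability mass from tolerance 0 to tolerance 1
# (support {0, 1, 2}), so pi' stochastically dominates pi.
pi = np.array([0.5, 0.3, 0.2])
pi_prime = np.array([0.2, 0.6, 0.2])
print(stoch_dominates(pi_prime, pi))   # True
print(stoch_dominates(pi, pi_prime))   # False
\end{verbatim}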
\begin{theorem}\label{thm:stochastic} If $\pi'$ stochastically dominates $\pi$, then every $\pi$-tolerant equilibrium is a $\pi'$-tolerant equilibrium. \end{theorem} \noindent{\bf Proof:} See the appendix. \vrule height7pt width4pt depth1pt\vspace{0.1in} \section{Social Dilemmas}\label{sec:dilemma} Social dilemmas are situations in which there is a tension between the collective interest and individual interests: every individual has an incentive to deviate from the common good and act selfishly, but if everyone deviates, then they are all worse off. Following Capraro and Halpern \cite{CH2014}, we formally define a social dilemma as a normal-form game with a unique Nash equilibrium $s^N$ and a unique welfare-maximizing profile $s^W$, both pure strategy profiles, such that each player's expected utility if $s^W$ is played is higher than his utility if $s^N$ is played. While this is a quite restricted set of games, it includes many of the best-studied games in the game-theory literature. We examine the same four games as Capraro and Halpern~\cite{CH2014}, and show that \fullv{the} experimentally observed regularities in these games can \fullv{also} be explained using tolerance.\footnote{The description of the games and observations is taken almost verbatim from Capraro and Halpern~\cite{CH2014}.} \begin{description} \item[\textbf{Prisoner's Dilemma.}] Two players can either cooperate ($C$) or defect ($D$). To relate our results to experimental results on Prisoner's Dilemma, we consider a subclass of Prisoner's Dilemma games where we think of cooperation as meaning that a player pays a cost $c > 0$ to give a benefit $b>c$ to the other player. If a player defects, he pays nothing and gives nothing. Thus, the payoff of $(D,D)$ is $(0,0)$, the payoff of $(C,C)$ is $(b-c,b-c)$, and the payoffs of $(D,C)$ and $(C,D)$ are $(b,-c)$ and $(-c,b)$, respectively. The condition $b>c$ implies that $(D,D)$ is the unique Nash equilibrium and $(C,C)$ is the unique welfare-maximizing profile. \item[\textbf{Traveler's Dilemma.}] Two travelers have identical luggage, which is damaged (in an identical way) by an airline. The airline offers to recompense them for their luggage. They may ask for any dollar amount between $L$ and $H$ (where $L$ and $H$ are both positive integers). There is only one catch. If they ask for the same amount, then that is what they will both receive. However, if they ask for different amounts---say one asks for $m$ and the other for $m'$, with $m < m'$---then whoever asks for $m$ (the lower amount) will get $m+b$ ($m$ and a bonus of $b$), while the other player gets $m-b$ (the lower amount less a penalty of $b$). $(L,L)$ is thus the unique Nash equilibrium, while $(H,H)$ maximizes social welfare, independent of $b$. \item[\textbf{Public Goods game.}] $N\geq2$ contributors are endowed with 1 dollar each; they must simultaneously decide how much, if anything, to contribute to a public pool. (The contributions must be in whole cent amounts.) The total contribution pot is then multiplied by a constant strictly between 1 and $N$, and then evenly redistributed among all players. So the payoff of player $i$ is $u_i(x_1,\ldots,x_N)=1-x_i+\rho(x_1+\ldots+x_N)$, where $x_i$ denotes $i$'s contribution, and $\rho\in\left(\frac1N,1\right)$ is the \emph{marginal return}. (Thus, the pool is multiplied by $\rho N$ before being split evenly among all players.)
Everyone contributing nothing to the pool is the unique Nash equilibrium, and everyone contributing their whole endowment to the pool is the unique welfare-maximizing profile. \item[\textbf{Bertrand Competition.}] $N\geq2$ firms compete to sell their identical product at a price between the ``price floor'' $L\geq 2$ and the ``reservation value'' $H$. (Again, we assume that $H$ and $L$ are integers, and all prices must be integers.) The firm that chooses the lowest price, say $s$, sells the product at that price, getting a payoff of $s$, while all other firms get a payoff of 0. If there are ties, then the sales are split equally among all firms that choose the lowest price. Now everyone choosing $L$ is the unique Nash equilibrium, and everyone choosing $H$ is the unique welfare-maximizing profile.\footnote{We require that $L \ge 2$, for otherwise we would not have a unique Nash equilibrium, a condition we imposed on social dilemmas. If $L = 1$ and $N=2$, we get two Nash equilibria: $(2,2)$ and $(1,1)$; similarly, for $L=0$, we also get multiple Nash equilibria, for all values of $N \ge 2$.} \end{description} From here on, we say that a player \emph{cooperates} if he plays his part of the social-welfare-maximizing strategy profile and \emph{defects} if he plays his part of the Nash equilibrium strategy profile. While Nash equilibrium predicts that people should always defect in social dilemmas, in practice we see a great deal of cooperative behavior. But the cooperative behavior exhibits a great deal of regularity. Here are some regularities that have been observed (although it should be noted that in some cases the evidence is rather limited---see the discussion of Bertrand Competition at the end of this section): \begin{itemize} \item The degree of cooperation in the Prisoner's Dilemma depends positively on the benefit of mutual cooperation and negatively on the cost of cooperation \cite{capraro2014heuristics,EZ,Rapoport}. \item The degree of cooperation in the Traveler's Dilemma depends negatively on the bonus/penalty \cite{CGGH99}. \item The degree of cooperation in the Public Goods game depends positively on the constant marginal return \cite{Gu,IWT}. \item The degree of cooperation in the Public Goods game depends positively on the number of players \cite{IWW,Ze}. \item The degree of cooperation in the Bertrand Competition depends negatively on the number of players \cite{DG00}. \item The degree of cooperation in the Bertrand Competition depends negatively on the price floor \cite{D07}. \end{itemize} Of course, as mentioned in the introduction, there have been many attempts to explain the regularities that have been observed in social dilemmas. However, very few can actually explain all the regularities mentioned above. Indeed, the only approaches seem to be Charness and Rabin's \cite{Ch-Ra} approach, which assumes that agents care about maximizing social welfare and the utility of the worst-off individual as well as their own utility, and the translucency approach introduced by Halpern and Pass \cite{HaPa13} and adapted by Capraro and Halpern \cite{CH2014} to explain social dilemmas: roughly speaking, a player is translucent to the degree that he believes that, with some probability, other players will know what he is about to do. To show how tolerance can explain cooperation in social dilemmas, we first examine the relationship between tolerance and the parameters of the various social dilemmas we are considering. We consider two settings.
In the first, we ask when it is consistent for a player $i$ of type $t$ to cooperate. In the second, we ask when it is rational for a player $i$ of type $t$ who believes (as assumed by Capraro and Halpern \cite{CH2014}) that each other player $j$ is playing $\beta s_j^W + (1-\beta)s_j^N$ to cooperate (i.e., $i$ believes that each other player is cooperating with probability $\beta$, defecting with probability $(1-\beta)$, and not putting positive probability on any other strategy). We write $\beta s_{-i}^W + (1-\beta) s_{-i}^N$ for the corresponding mixed-strategy profile. Say that a player has type $(t,\beta)$ in the second case; in the spirit of Definition~\ref{def:consistent}, say that cooperation is \emph{consistent with type $(t,\beta)$ for player $i$} if for all strategies $s_i'$ for player $i$, $$u_i(s_i',\beta s_{-i}^W + (1-\beta) s_{-i}^N) \le u_i(s_i^W,\beta s_{-i}^W + (1-\beta) s_{-i}^N) + t.$$ For a Prisoner's Dilemma of the form described in Section~\ref{sec:dilemma}, consistency is independent of the strategy the other player is using (and hence of the player's beliefs). \begin{proposition}\label{prop:PD} For the Prisoner's Dilemma of the form described in Section~\ref{sec:dilemma}, cooperation is consistent for a player of type $t$ and mixed strategy $\sigma$ for the other player iff $t \ge c$. \end{proposition} \noindent{\bf Proof:} Switching from cooperating to defecting gives the player an additional payoff of exactly $c$, independent of whether the other player is cooperating or defecting. Thus, cooperation is consistent iff $t \ge c$. \vrule height7pt width4pt depth1pt\vspace{0.1in} We next consider the Traveler's Dilemma. \begin{proposition}\label{prop:TD} For the Traveler's Dilemma, \begin{itemize} \item[(a)] cooperation is consistent with $t$ and a mixed strategy $\sigma$ for the other player if $t \ge 2b-1$; \item[(b)] there exists a strategy $\sigma$ for the other player such that cooperation is consistent with $t$ and $\sigma$ iff $t \ge 2b-1$; \item[(c)] cooperation is consistent with $(t,\beta)$ iff $t \ge \max(\beta(b-1), b - \beta(H-L))$. \end{itemize} \end{proposition} \noindent{\bf Proof:} If player 2 plays $m$, then player 1's best response is to play $m-1$ (or $L$ if $m=L$). If $m < H$, then player 1 gets a payoff of $m-b$ if he cooperates (i.e., plays $H$), and could have gotten $m-1+b$ by making a best response (or $L$, in case $m=L$). Thus, he can gain at most $2b-1$ by playing a best response. This proves part (a). If $m=H-1$ and $H-1 > L$, then cooperation is consistent iff $t \ge 2b-1$; this proves part (b). Finally, if player 1 has type $(t,\beta)$, then he believes that player 2 plays $H$ with probability $\beta$ and $L$ with probability $1-\beta$. Thus, player 1 believes his expected payoff from playing $H$ is $\beta H + (1-\beta)(L-b)$. The best response for player 1 is to play one of $H-1$ or $L$. His payoff from playing $H-1$ is $\beta (H + b-1) + (1-\beta)(L-b)$; his payoff from playing $L$ is $\beta(L+b) +(1-\beta)L$. Thus, cooperation is consistent for player 1 of type $(t,\beta)$ iff $t \ge \max(\beta(b-1), b-\beta(H-L))$. \vrule height7pt width4pt depth1pt\vspace{0.1in} \fullv{For the Public Goods game, consistency is again independent of the other players' strategies.} \shortv{The next two results are proved in the full paper.} \begin{proposition}\label{prop:PG} For the Public Goods game, cooperation is consistent for a player $i$ of tolerance $t$ and mixed strategy $\sigma_{-i}$ for the other players iff $t \ge (1-\rho)$.
\end{proposition} \fullv{ \noindent{\bf Proof:} It is easy to see that, no matter what the other players do, defection (contributing 0) is the best response in this game, and a player gets a payoff that is $1-\rho$ higher if he defects than if he cooperates. Thus, cooperation is consistent iff $t \ge (1-\rho)$. \vrule height7pt width4pt depth1pt\vspace{0.1in} } \begin{proposition}\label{prop:BC} For Bertrand Competition with $n$ players, cooperation for player $i$ is consistent with $(t,\beta)$ iff $t \ge \max(\beta^{n-1} (H-1), f(n)L) - \beta^{n-1}H/n$, where $f(n) = \sum_{k=0}^{n-1} \beta^k(1-\beta)^{n-1-k} \binom{n-1}{k}/(n-k)$. \end{proposition} \fullv{\noindent{\bf Proof:} Consider a player of type $(t,\beta)$. If player $i$ cooperates, he will get $H/n$ if all the other players cooperate, which happens with probability $\beta^{n-1}$; otherwise, he gets 0. Thus, his expected payoff from cooperation is $\beta^{n-1} H/n$. His best response, given his beliefs, is to play one of $H-1$ or $L$. If he plays $H-1$, then his payoff is $H-1$ if all the remaining players play $H$, which happens with probability $\beta^{n-1}$; otherwise his payoff is 0. Thus, his expected payoff is $\beta^{n-1} (H-1)$. If he plays $L$, then his payoff if $k$ players play $H$ and $n-1-k$ play $L$ is $L/(n-k)$; this event occurs with probability $\beta^{k}(1-\beta)^{n-1-k}\binom{n-1}{k}$. Thus, his expected payoff is $f(n) L$. It follows that cooperation is consistent with $(t,\beta)$ iff $t \ge \max(\beta^{n-1} (H-1), f(n)L) - \beta^{n-1}H/n$. \vrule height7pt width4pt depth1pt\vspace{0.1in}} From here it is but three short steps to our desired result. First, observe that, up to now, we have looked at games in isolation. But now we want to compare tolerances in different games, with different settings of the relevant parameters. Intuitively, having a tolerance of 2 in a Traveler's Dilemma where $L=2$ and $H=100$ should have a different meaning than it does in a version of Traveler's Dilemma where payoffs are multiplied by a factor of 10, so that $L=20$ and $H=1000$. Thus, when considering a family of related games, rather than considering absolute tolerances, it seems more appropriate to consider \emph{relative tolerance}. There are many ways of defining a notion of relative tolerance. For our purposes, we take a player's relative tolerance to be an element of $[0,1]$; player $i$'s actual tolerance in a game $\Gamma$ is his relative tolerance multiplied by the payoff that player $i$ gets if everyone cooperates in $\Gamma$. For example, since the payoff obtained by $i$ if everyone cooperates in Traveler's Dilemma is $H$, the actual tolerance of a player of type $(\tilde{t},\beta)$ is $\tilde{t}H$. (Here and elsewhere, if we wish to emphasize that we are considering relative tolerance, we write $\tilde{t}$, reserving $t$ for actual tolerance.) There are other ways we could define relative tolerance. For example, we could multiply by the difference between the payoff obtained if everyone cooperates and the payoff obtained if everyone defects, or multiply by the maximum possible social welfare. The exact choice does not affect our results. Second, recall that the fact that cooperation is consistent with a given type does not mean that a player of that type will actually cooperate. We add an extra component to the type of a player to indicate whether the player will cooperate if it is consistent to do so, given his beliefs.
We thus consider \emph{relative types} of the form $(\tilde{t},\beta,C)$ and $(\tilde{t},\beta,D)$; such a type will cooperate in Traveler's Dilemma if $\tilde{t}H \ge \max(\beta(b-1), b - \beta(H-L))$ and the third component is $C$. Finally, we need to assume that there are a reasonable number of players of each type. Formally, we assume that the set of types of each player is infinite and that there is a distribution on relative types such that for all intervals $(u,v)$ and $(u',v')$ in $[0,1]$, there is a positive probability of finding someone of relative type $(\tilde{t},\beta,C)$ with $\tilde{t} \in (u,v)$ and $\beta \in (u',v')$. An analogous assumption is made by Capraro and Halpern \cite{CH2014}. With these assumptions, it follows from Propositions~\ref{prop:PD}--\ref{prop:BC} that the regularities discussed in Section~\ref{sec:dilemma} hold. \begin{itemize} \item In the case of Prisoner's Dilemma, $b-c$ is the payoff obtained if everyone cooperates, so if $\tilde{t}$ is the relative tolerance, $\tilde{t}(b-c)$ is the actual tolerance. Thus, if a player's relative type is $(\tilde{t},\beta)$, then cooperation is consistent if $\tilde{t}(b-c) \ge c$. Clearly, as $b$ increases, there are strictly more relative types for which cooperation is consistent, so, by our assumptions, we should see more cooperation. Similarly, if $c$ increases (keeping $b$ fixed), there are fewer relative types for which cooperation is consistent, so we should see less cooperation. \item In the case of Traveler's Dilemma, as we have observed, a relative type will cooperate if $\tilde{t}H \ge \max(\beta(b-1), b - \beta(H-L))$. Clearly, if $b$ increases, then there will be fewer relative types for which cooperation is consistent. \item In the Public Goods game, if everyone cooperates, the payoff to player $i$ is $n\rho$. So it is consistent to cooperate if $\tilde{t}\rho n \ge (1-\rho)$. Clearly, as $n$ increases, we should see more cooperation, given our assumptions. Moreover, tolerance explains the increase of cooperation as the marginal return increases. \item Finally, in the Bertrand Competition, since the payoff if everyone cooperates is $H/n$, it is consistent to cooperate if $\tilde{t}H/n \ge \max(\beta^{n-1} (H-1), f(n)L) - \beta^{n-1}H/n$, or equivalently, if $$\tilde{t} \ge \max(n\beta^{n-1} (H-1)/H, nf(n)L/H) - \beta^{n-1}.$$ Clearly, cooperation decreases if $L$ increases. The effect of increasing $n$ is more nuanced. For $n$ large, $\beta^{n-1}$ is essentially 0, as is $n\beta^{n-1}$; it can be shown that $f(n)$ is roughly $1/((1-\beta) n)$. Thus, if $n$ is large, cooperation is consistent if $\tilde{t} > L/((1-\beta)H)$. What happens for small values of $n$ is very much dependent on $\beta$, $H$, and $L$. The actual experiments on this topic \cite{DG00} considered only 2, 3, and 4 players, with $L=2$ and $H = 100$. For these values, we get the desired effect if $\beta$ is sufficiently large ($\beta > .7$ suffices; see the numerical sketch following this list). \end{itemize}
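These threshold computations are mechanical; the sketch below collects them (our own illustration; the choice $\beta = 0.8$ is arbitrary apart from being above the $0.7$ cutoff just mentioned):
\begin{verbatim}
from math import comb

# Minimal relative tolerance at which cooperation is consistent.
def t_min_PD(b, c):           return c / (b - c)
def t_min_TD(beta, b, L, H):  return max(beta * (b - 1),
                                         b - beta * (H - L)) / H
def t_min_PG(rho, n):         return (1 - rho) / (rho * n)

def t_min_BC(beta, n, L, H):
    f = sum(beta**k * (1 - beta)**(n - 1 - k) * comb(n - 1, k) / (n - k)
            for k in range(n))
    return max(n * beta**(n - 1) * (H - 1) / H,
               n * f * L / H) - beta**(n - 1)

# Bertrand Competition with L = 2, H = 100 (the values used in the
# experiments cited in the last bullet) and beta = 0.8: the threshold
# rises with n, so fewer types cooperate as the number of firms grows.
print([round(t_min_BC(0.8, n, 2, 100), 3) for n in (2, 3, 4)])
# -> [0.784, 1.261, 1.516]
\end{verbatim}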
As we said earlier, of all the approaches to explaining social dilemmas in the literature, only Capraro and Halpern \cite{CH2014} and Charness and Rabin~\cite{Ch-Ra} can explain all these regularities; see Capraro and Halpern \cite{CH2014} for a detailed discussion and a comparison to other approaches. Of course, this leaves open the question of which approach is a better description of what people are doing. We suspect that translucency, care for others, and tolerance all influence behavior. We hope that further investigation of social dilemmas will reveal other regularities that can be used to compare our approach to others, and give us a more fine-grained understanding of what is going on. \section{Prisoner's Dilemma with Tolerance}\label{sec:PD} We now take a closer look at the impact of tolerance on perhaps the best-studied social dilemma, Prisoner's Dilemma. The analysis suggests how thinking in terms of tolerance might help us design better mechanisms. The general Prisoner's Dilemma (PD) game has payoffs $(a_1,a_2)$, $(b_1,c_2)$, $(c_1,b_2)$, and $(d_1,d_2)$ corresponding to action profiles $(C,C)$, $(C,D)$, $(D,C)$, and $(D,D)$, respectively, with $c_i > a_i > d_i > b_i$. To analyze equilibrium outcomes in PD with tolerances, consider a player $i$, and suppose she believes that the other player $j$ will play $C$ with probability $\alpha_j$. Her payoffs from choosing action $C$ and action $D$ are, respectively, \begin{equation} \nonumber u_C = \alpha_j a_i + (1-\alpha_j) b_i; ~~ u_D = \alpha_j c_i + (1-\alpha_j) d_i. \end{equation} Since $D$ is a dominant strategy in Prisoner's Dilemma, $u_D > u_C$. Agent $i$ is willing to play $C$ if $u_D-u_C$ is within her tolerance, that is, if \begin{equation} \nonumber t \geq \alpha_j (c_i - a_i) + (1-\alpha_j) (d_i - b_i) \triangleq \alpha_j \Delta C_i + (1-\alpha_j) \Delta D_i, \end{equation} where $\Delta C_i$ is the gain to player $i$ from defecting when the other player plays $C$, and similarly for $\Delta D_i$. Taking $F_i$ to denote the cumulative probability distribution on agent $i$'s tolerances, it follows that the probability that agent $i$ has the tolerance required to allow cooperation is $1-F_i(\alpha_j \Delta C_i + (1-\alpha_j) \Delta D_i)$. Note that the minimum tolerance at which $i$ can cooperate, which depends upon the probability $\alpha_j$ with which $j$ plays $C$, need {\em not} decrease with $\alpha_j$: If the payoffs in the game are such that $\Delta C_i > \Delta D_i$ (the gain from defecting is larger when the other player is cooperating rather than defecting), then increasing $\alpha_j$ {\em increases} the tolerance $i$ must have to be willing to cooperate. Suppose that agents break ties in favor of cooperation, that is, if cooperating yields a payoff within an agent's tolerance, that agent will cooperate rather than defect. Call a $\pi$-tolerant equilibrium satisfying this condition a \emph{particularly cooperative} ($\pi$-tolerant) equilibrium. Note that if some player with tolerance $t$ cooperates in a particularly cooperative equilibrium, then all players with tolerance $t$ cooperate. It follows from the discussion above that a particularly cooperative equilibrium is determined by a pair of mutually consistent probabilities of cooperation $(\alpha_i, \alpha_j)$ satisfying $$\begin{array}{l} \alpha_i = 1-F_i(\alpha_j \Delta C_i + (1-\alpha_j) \Delta D_i)\\ \alpha_j = 1-F_j(\alpha_i \Delta C_j + (1-\alpha_i) \Delta D_j). \end{array} $$ Note that a particularly cooperative equilibrium may not exist in PD, although a $\pi$-tolerant equilibrium always does (since both agents defecting is a $\pi$-tolerant equilibrium, no matter what $\pi$ is). For a simple example, suppose that $a_1 = a_2 = 3$, $b_1 = b_2 = -1$, $c_1 = c_2 = 5$, $d_1 = d_2 =0$, and players are drawn from a population where everyone has a tolerance of 1.5. Now suppose that there is a particularly cooperative equilibrium where a fraction $\alpha$ of the players cooperate.
Thus, a player's expected payoff from cooperation is $3\alpha -(1-\alpha) = 4\alpha - 1$; a player's expected payoff from defection is $5\alpha$. Thus, a player gains $\alpha + 1$ by defecting. If $0 \le \alpha \le .5$, then a player gains at most $1.5$ by switching from cooperate to defect, so all players should cooperate in a particularly cooperative equilibrium (i.e., $\alpha$ should be 1); on the other hand, if $\alpha > .5$, then $1 + \alpha > 1.5$, so all players should defect (so $\alpha$ should be 0). In this example, we have a point mass of 1 on tolerance 1.5, so there is only one type of each player. This is inconsistent with the assumption in the previous section that the cumulative probability increases continuously. If we assume that the cumulative probability increases continuously, then there is always a particularly cooperative equilibrium in PD. We provide an analysis here, making some symmetry assumptions for simplicity. Specifically, we assume (1) symmetry in payoffs: $a_1 = a_2, \ldots, d_1 = d_2$; (2) symmetry in tolerance distributions: $F_1 = F_2 = F$; and (3) that $F$ is continuous (so that there are infinitely many types). Under these assumptions, we show that a \emph{symmetric} particularly cooperative equilibrium (where $\alpha_1 = \alpha_2$) always exists; this is a solution $\alpha^*$ to \begin{equation}\label{e-PDeq} 1- \alpha = F(\alpha \Delta C + (1-\alpha) \Delta D). \end{equation} (Note that for Prisoner's Dilemma, $0 < F(\Delta C), F(\Delta D) \leq 1$, since $\Delta C = c-a$ and $\Delta D = d-b$ are positive.) \begin{theorem}[Equilibrium structure.] Under our assumptions, a symmetric particularly cooperative equilibrium always exists, is a solution to (\ref{e-PDeq}), and has the following structure: \begin{enumerate} \item[(a)] There is an equilibrium with $\alpha = 0$ (in which case $(D,D)$ is necessarily played, so there is no cooperation) if and only if $F(\Delta D) =1$. \item[(b)] (Uniqueness.) If $\Delta C > \Delta D$, there is a unique equilibrium; if $\Delta C \leq\Delta D$, multiple equilibria corresponding to different cooperation probabilities may exist. \end{enumerate} \end{theorem} We omit the formal proofs; the results follow from applying the Intermediate Value Theorem using the continuity of $F$, and noting that the LHS in (\ref{e-PDeq}) is greater than the RHS at $\alpha = 0$ when $F(\Delta D) < 1$, and is smaller than the RHS at $\alpha = 1$. Uniqueness (and non-uniqueness, respectively) follows from the fact that the RHS is increasing (respectively, decreasing) in $\alpha$ when $\Delta D < \Delta C$ (respectively, $\Delta C < \Delta D$). The idea is perhaps best illustrated by Figure \ref{f-eqbmPD}, where the intersections correspond to equilibrium cooperation probabilities $\alpha^*$. Note that these results depend critically on $F$, the cumulative distribution, being continuous. \begin{figure}[htb] \vspace{-.5in} \begin{minipage}[c]{0.46\textwidth} \begin{center} \includegraphics[scale = .14]{Eqbm1.png}\\ (a) $\Delta C > \Delta D$ \end{center} \end{minipage} \ \ \ \ \begin{minipage}[c]{0.46\textwidth} \begin{center} \includegraphics[scale = .14]{Eqbm2.png}\\ (b) $\Delta C \le \Delta D$ \end{center} \end{minipage} \caption{Equilibrium structure for PD.}\label{f-eqbmPD} \end{figure}
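The equilibrium cooperation probabilities are the intersection points in Figure \ref{f-eqbmPD}, and can be found numerically by bisection on (\ref{e-PDeq}); a minimal sketch, in which the exponential tolerance distribution is purely illustrative:
\begin{verbatim}
from math import exp

def coop_fixed_point(F, dC, dD, iters=60):
    # Bisection for 1 - a = F(a*dC + (1-a)*dD), with F a continuous CDF.
    # g(a) = 1 - a - F(...) satisfies g(0) >= 0 >= g(1), so a root
    # exists; it is the unique equilibrium when dC > dD.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if 1 - mid - F(mid * dC + (1 - mid) * dD) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

F = lambda t: 1 - exp(-t)                    # continuous tolerance CDF
print(round(coop_fixed_point(F, dC=2.0, dD=1.0), 3))   # -> 0.278
\end{verbatim}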
The next result gives insight into how the probability of cooperation changes as we change various parameters. As we change the relevant parameters (the payoffs and the probabilities of tolerance) slightly in a continuous way, each particularly cooperative equilibrium ``shifts'' slightly, also in a continuous way, so that we can talk about corresponding equilibria; we omit the formal definition here. \begin{theorem}\label{t-cs} The equilibrium probability of cooperation $\alpha^*$ in corresponding particularly cooperative equilibria (a) decreases as $\Delta C$ and $\Delta D$ increase, and therefore (b) decreases as the payoffs $c$ and $d$ increase, and increases as $a$ and $b$ increase, and (c) ``increases'' in $\pi$, in that if $\pi'$ stochastically dominates $\pi$, then the payoffs in a particularly cooperative $\pi'$-tolerant equilibrium are higher than in the corresponding $\pi$-tolerant equilibrium. \end{theorem} These results follow easily by noting that, for a fixed $\alpha$, the RHS in (\ref{e-PDeq}) is increasing in both $\Delta C$ and $\Delta D$; the value of $\alpha$ at which the RHS equals the LHS therefore decreases when either of $\Delta C$ and $\Delta D$ increases. The monotonicity in $\pi$ is similar: if $\pi'_1$ stochastically dominates $\pi_1$, the value of $\alpha$ at the intersection is larger with $\pi'_1$ than with $\pi_1$. These results, in addition to providing testable predictions for how cooperation levels should behave when payoffs are varied in an experiment, also provide guidelines for a designer who may be able to manipulate some of the payoffs in the game (via either social or monetary means), by isolating the factors that influence the nature of equilibria and the extent of equilibrium cooperation. First, the extent of cooperation in equilibrium depends on the marginal, rather than the absolute, payoffs: it is the {\em differences} $\Delta C$ and $\Delta D$ that determine equilibrium levels of cooperation, rather than any other function of the payoffs $a,b,c,d$.\footnote{We must be a little careful here. To do comparative statics, we should consider relative tolerances, for the reasons discussed in Section~\ref{sec:dilemma}. Changing the payoff parameters may well change the actual tolerance, while keeping the relative tolerance fixed. Parts (a) and (b) of Theorem~\ref{t-cs} hold only for a fixed tolerance distribution.} Second, perhaps surprisingly, which of $\Delta C$ or $\Delta D$ is larger makes a difference to the structure of equilibria, which also has implications for design. If, for instance, the designer prefers a game where there is a unique equilibrium with non-trivial cooperation, Theorem \ref{t-cs} suggests that the designer should manipulate payoffs so that $\Delta D$, the marginal gain from defecting instead of cooperating when the other player also defects, is smaller than $\Delta C$, the marginal gain from defecting when the other player cooperates. (This might be achieved, for instance, by providing additional rewards---either extra compensation, or social rewards such as public acknowledgement---to a player who continues to cooperate despite defection by her partner, increasing the payoff $b$ and therefore decreasing $d-b$.)
These results, in addition to providing testable predictions for how cooperation levels should behave when payoffs are varied in an experiment, also provide guidelines for a designer who may be able to manipulate some of the payoffs in the game (via either social or monetary means), by isolating the factors that influence the nature of equilibria and the extent of equilibrium cooperation. First, the extent of cooperation in equilibrium depends on the marginal, rather than the absolute, payoffs: it is the {\em differences} $\Delta C$ and $\Delta D$ that determine equilibrium levels of cooperation, rather than any other function of the payoffs $a,b,c,d$.\footnote{We must be a little careful here. To do comparative statics, we should consider relative tolerances, for the reasons discussed in Section~\ref{sec:dilemma}. Changing the payoff parameters may well change the actual tolerance, while keeping the relative tolerance fixed. Parts (a) and (b) of Theorem~\ref{t-cs} hold only for a fixed tolerance distribution.} Second, perhaps surprisingly, which of $\Delta C$ or $\Delta D$ is larger makes a difference to the structure of equilibria, which also has implications for design. If, for instance, the designer prefers a game where there is a unique equilibrium with non-trivial cooperation, Theorem \ref{t-cs} suggests that the designer should manipulate payoffs so that $\Delta D$, the marginal gain from defecting instead of cooperating when the other player also defects, is smaller than $\Delta C$, the marginal gain from defecting when the other player cooperates. (This might be achieved, for instance, by providing additional rewards---either extra compensation or social rewards such as public acknowledgement---to a player who continues to cooperate despite defection by her partner, increasing the payoff $b$ and therefore decreasing $d-b$.) On the other hand, if there is a means to ``nudge'' behavior towards a particular equilibrium when there are multiple equilibria, a designer might prefer to manipulate payoffs to fall in the $\Delta C \leq \Delta D$ regime and nudge behavior towards the equilibrium with the most cooperation (again, this could be achieved by imposing social or monetary penalties for defecting on a cooperating partner, decreasing $c$ and thereby $\Delta C$).
\section{Conclusion} \label{sec:conclusion}
We have defined a notion of $\pi$-tolerant equilibrium, which takes into account that players have some tolerance regarding payoffs. This solution concept generalizes Nash and $\epsilon$-Nash equilibrium in a natural way. We showed that this solution concept can explain cooperation in social dilemmas. Although we focused on social dilemmas, tolerance can also explain other well-known observations, such as the fact that people give some money to the other person in the \emph{Dictator Game} \cite{KKT86} (where one person is given a certain amount of money, and can split it as he chooses between himself and someone else) and that people give intermediate amounts and reject small positive offers in the \emph{Ultimatum Game} \cite{GSS82} (where one person can decide on how to split a certain amount of money, but the other person can reject the split, in which case both players get nothing). We also examined the structure of particularly cooperative $\pi$-tolerant equilibria, where players are as cooperative as they can be, given their tolerances, in Prisoner's Dilemma. To the extent that cooperation is due to tolerance, our results provide guidance to a mechanism designer who has some control over the payoffs in a game, and suggest ways that cooperation can be increased. Since many practical situations of interest can be modeled as Prisoner's Dilemmas, these results may suggest how mechanism designers can take advantage of players' tolerance in practice. We believe that a study of convergence towards, and stability and robustness of, particularly cooperative equilibria in Prisoner's Dilemma in an appropriate model for dynamics can potentially provide useful insights into the emergence and sustainability of trust in online economies.
\section{Introduction}
Relation extraction is the task of predicting attributes and relations for entities in a sentence~\cite{zelenko2003kernel,bunescu2005subsequence,guodong2005exploring, DBLP:journals/corr/Yu0HSXZ17}. For example, given a sentence \emph{``\textbf{Barack Obama} was born in \textbf{Honolulu}, Hawaii.''}, a relation classifier aims at predicting the relation of \emph{``\textbf{bornInCity}''}. Relation extraction is the key component for building relation knowledge graphs, and it is of crucial significance to natural language processing applications such as structured search, sentiment analysis, question answering, and summarization. A major issue for relation extraction is the lack of labeled training data. In recent years, distant supervision~\cite{mintz2009distant,hoffmann2011knowledge,surdeanu2012multi} has emerged as the most popular method for relation extraction---it uses knowledge base facts to select a set of noisy instances from unlabeled data. Among all the machine learning approaches for distant supervision, the recently proposed Convolutional Neural Networks (CNNs) model~\cite{zeng2014relation} achieved the state-of-the-art performance. Following their success, Zeng et al.~\shortcite{zeng2015distant} proposed a piece-wise max-pooling strategy to improve the CNNs. Various attention strategies~\cite{lin2016attention,shen-huang:2016:COLING} for CNNs have also been proposed, obtaining impressive results. However, most of these neural relation extraction models are relatively shallow CNNs---typically only one convolutional layer and one fully connected layer are involved, and it was not clear whether deeper models would be beneficial for distilling signals from noisy inputs in this task. In this paper, we investigate the effects of training deeper CNNs for distantly-supervised relation extraction. More specifically, we design a convolutional neural network based on residual learning~\cite{he2016deep}---we show how one can incorporate word embeddings and position embeddings into a deep residual network, while feeding identity feedback to convolutional layers for this noisy relation prediction task. Empirically, we evaluate on the NYT-Freebase dataset~\cite{riedel2010modeling}, and demonstrate state-of-the-art performance using deep CNNs with identity mapping and shortcuts. In contrast to the popular belief in vision that deep residual networks only work for very deep CNNs, we show that even with moderately deep CNNs, there are substantial improvements over vanilla CNNs for relation extraction. Our contributions are three-fold:
\begin{itemize}
\item We are the first to consider deeper convolutional neural networks for weakly-supervised relation extraction using residual learning;
\item We show that our deep residual network model outperforms CNNs by a large margin empirically, obtaining state-of-the-art performance;
\item Our identity mapping with shortcut feedback approach can be easily applied to any variant of CNNs for relation extraction.
\end{itemize}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\textwidth]{architecture.png}
\caption{The architecture of ResCNN used for relation extraction.}
\label{fig:arch}
\vspace{-2ex}
\end{figure*}
\section{Deep Residual Networks for Relation Extraction}
In this section, we describe a novel deep residual learning architecture for distantly supervised relation extraction. Figure~\ref{fig:arch} describes the architecture of our model.
\subsection{Vector Representation}
Let $\textbf{x}_i$ be the \textit{i}-th word in the sentence and \textit{e1}, \textit{e2} be the two corresponding entities. Each word accesses two embedding look-up tables to get the word embedding $\textbf{WF}_i$ and the position embedding $\textbf{PF}_i$. Then, we concatenate the two embeddings and denote each word by a vector $\textbf{v}_i = [\textbf{WF}_i, \textbf{PF}_i]$.
\subsubsection{Word Embeddings}
Each representation $\textbf{v}_i$ corresponding to $\textbf{x}_i$ is a real-valued vector. All of the vectors are encoded in an embedding matrix $\textbf{V}_w \in \mathbb{R}^{{d_w} \times |V|}$ where $V$ is a fixed-sized vocabulary.
\subsubsection{Position Embeddings}
In relation classification, we focus on finding a relation for entity pairs. Following \cite{zeng2014relation}, a PF is the combination of the relative distances of the current word to the first entity $\textit{e}_1$ and the second entity $\textit{e}_2$. For instance, in the sentence ``\textit{Steve\_Jobs is the founder of Apple.}'', the relative distances from \textit{founder} to $\textit{e}_1$ (\textit{Steve\_Jobs}) and $\textit{e}_2$ are 3 and -2, respectively. We then transform the relative distances into real-valued vectors by looking up a randomly initialized position embedding matrix \(\textbf{V}_p \in \mathbb{R}^{{d_p}\times |P|}\), where $P$ is a fixed-sized set of distances. It should be noted that if a word is too far from the entities, it may not be related to the relation. Therefore, we choose a maximum value $\textit{e}_{max}$ and a minimum value $\textit{e}_{min}$ for the relative distance. In the example shown in Figure~\ref{fig:arch}, it is assumed that $\textit{d}_w$ is 4 and $\textit{d}_p$ is 1. There are two position embeddings: one for $\textit{e}_1$, the other for $\textit{e}_2$. Finally, we concatenate the word embeddings and position embeddings of all words and denote a sentence of length \textit{n} (padded where necessary) as a vector
\vspace{-1ex}
\[
\textbf{v} = \textbf{v}_1 \oplus \textbf{v}_2 \oplus ... \oplus \textbf{v}_n
\vspace{-1ex}
\]
where $\oplus$ is the concatenation operator and \(\textbf{v}_i \in \mathbb{R}^{d}\) (\(d = d_w+d_p\times 2\)).
\subsection{Convolution}
Let $\textbf{v}_{i:i+j}$ refer to the concatenation of words $\textbf{v}_i,\textbf{v}_{i+1},...,\textbf{v}_{i+j}$. A convolution operation involves a \textit{filter} \(\textbf{w}\in\mathbb{R}^{hd}\), which is applied to a window of \textit{h} words to produce a new feature. A feature $\textit{c}_i$ is generated from a window of words $\textbf{v}_{i:i+h-1}$ by
\vspace{-1ex}
\[
\textit{c}_i = \textit{f}(\textbf{w}\cdot\textbf{v}_{i:i+h-1}+\textit{b})
\vspace{-1ex}
\]
Here $\textit{b} \in \mathbb{R}$ is a bias term and \textit{f} is a non-linear function. This filter is applied to each possible window of words from $\textbf{v}_1$ to $\textbf{v}_n$ to produce the \textit{feature} $\textbf{c}=[c_1,c_2,...,c_{n-h+1}]$ with $\textbf{c} \in \mathbb{R}^s$ ($s=n-h+1$).
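To make the input layer concrete, here is a small NumPy sketch (our illustration only, with toy sizes; the embedding tables would normally be learned or pre-trained rather than random):
\begin{verbatim}
# Sketch of the input layer: each token maps to [word embedding,
# position embedding w.r.t. e1, position embedding w.r.t. e2].
import numpy as np

d_w, d_p = 50, 5
e_min, e_max = -30, 30
vocab_size = 100                    # illustrative; real vocabulary is larger
n_pos = e_max - e_min + 1

V_w  = np.random.randn(vocab_size, d_w) * 0.01   # word embedding table
V_p1 = np.random.randn(n_pos, d_p) * 0.01        # position table for e1
V_p2 = np.random.randn(n_pos, d_p) * 0.01        # position table for e2

def token_vector(word_id, i, e1_pos, e2_pos):
    # Clip relative distances to [e_min, e_max], then shift to table indices.
    p1 = int(np.clip(i - e1_pos, e_min, e_max)) - e_min
    p2 = int(np.clip(i - e2_pos, e_min, e_max)) - e_min
    return np.concatenate([V_w[word_id], V_p1[p1], V_p2[p2]])

v = token_vector(word_id=3, i=3, e1_pos=0, e2_pos=5)
print(v.shape)  # (60,) = d_w + 2 * d_p
\end{verbatim}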
\subsection{Residual Convolution Block}
Residual learning connects low-level to high-level representations directly, and tackles the vanishing gradient problem in deep networks. In our model, we design the residual convolution block by applying shortcut connections. Each residual convolution block is a sequence of two convolutional layers, each followed by a ReLU activation. The kernel size of all convolutions is $h$, with padding such that the new feature has the same size as the original one. There are two convolutional \textit{filters} $\textbf{w}_1$, $\textbf{w}_2 \in \mathbb{R}^{h\times1}$. For the first convolutional layer:
\vspace{-1ex}
\[
\tilde{c}_i = \textit{f}(\textbf{w}_1\cdot\textbf{c}_{i:i+h-1}+\textit{b}_1)
\vspace{-1ex}
\]
For the second convolutional layer:
\vspace{-1ex}
\[
\acute{c}_i = \textit{f}(\textbf{w}_2\cdot\tilde{\textbf{c}}_{i:i+h-1}+\textit{b}_2)
\vspace{-1ex}
\]
Here $\textit{b}_1$, $\textit{b}_2$ are bias terms. The residual learning operation is
\vspace{-1ex}
\[
\textbf{c} = \textbf{c} + \acute{\textbf{c}},
\vspace{-1ex}
\]
where, with a slight abuse of notation, the $\textbf{c}$ on the left-hand side denotes the output vector of the block. This operation is performed by a shortcut connection and element-wise addition. This block is stacked multiple times in our architecture.
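The residual block above can be sketched in a few lines; we use PyTorch purely for illustration (the paper's implementation was written in TensorFlow 0.11):
\begin{verbatim}
# Sketch of one residual convolution block: two same-padded convolutions
# with ReLU activations plus an identity shortcut (element-wise addition).
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    def __init__(self, channels, h=3):
        super().__init__()
        pad = h // 2  # 'same' padding for odd kernel size h
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=h, padding=pad)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=h, padding=pad)
        self.relu = nn.ReLU()

    def forward(self, c):
        out = self.relu(self.conv1(c))    # first convolution + ReLU
        out = self.relu(self.conv2(out))  # second convolution + ReLU
        return c + out                    # shortcut connection

block = ResidualConvBlock(channels=128, h=3)
x = torch.randn(64, 128, 98)  # (batch, m filters, feature length s)
print(block(x).shape)         # torch.Size([64, 128, 98])
\end{verbatim}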
\subsection{Max Pooling, Softmax Output}
We then apply a max-pooling operation over the \textit{feature} and take the maximum value \(\hat{c}=\max\{\textbf{c}\}\). We have described the process by which \textit{one} feature is extracted from \textit{one} filter. We collect all features into one high-level feature vector \(\textbf{z}=[\hat{c}_1,\hat{c}_2,...,\hat{c}_m]\) (note that here we have \textit{m} filters). Then, these features are passed to a fully connected softmax layer whose output is the probability distribution over relations. Instead of using $y=\textbf{w}\cdot\textbf{z}+b$ for output unit \textit{y} in forward propagation, dropout uses $y=\textbf{w}\cdot(\textbf{z}\circ\textbf{r})+b$ where $\circ$ is the element-wise multiplication operation and $\textbf{r}\in\mathbb{R}^m$ is a ``masking'' vector of Bernoulli random variables with probability \textit{p} of being 1. At test time, the learned weight vectors are scaled by \textit{p} such that \(\hat{\textbf{w}}=\textit{p}\textbf{w}\) and used (without dropout) to score unseen instances.
\section{Experiments}
\subsection{Experimental Settings}
In this paper, we use the word embeddings released by \cite{lin2016attention}, which are trained on the NYT-Freebase corpus~\cite{riedel2010modeling}. We tune our model using validation on the training data. The word embedding is of size 50. The input text is padded to a fixed size of 100. Training is performed with the TensorFlow Adam optimizer, using a mini-batch size of 64 and an initial learning rate of 0.001. We initialize our convolutional layers following \cite{glorot2010understanding}. The implementation is done using TensorFlow 0.11. All experiments are performed on a single NVIDIA Titan X (Pascal) GPU. In Table \ref{table:parameter} we show all parameters used in the experiments.
\begin{table}[h!]
\small
\centering
\begin{tabular}{ |c|c| }
\hline
Window size \textit{h} & 3 \\
Word dimension $\textit{d}_w$ & 50 \\
Position dimension $\textit{d}_p$ & 5 \\
Position maximum distance $\textit{e}_{max}$ & 30 \\
Position minimum distance $\textit{e}_{min}$ & -30 \\
Number of filters \textit{m} & 128 \\
Batch size \textit{B} & 64 \\
Learning rate $\lambda$ & 0.001 \\
Dropout probability \textit{p} & 0.5 \\
\hline
\end{tabular}
\caption{Parameter settings}
\vspace{-2ex}
\label{table:parameter}
\end{table}
\noindent We experiment with several state-of-the-art baselines and variants of our model.
\begin{itemize}
\item \textbf{CNN-B}: Our implementation of the CNN baseline~\cite{zeng2014relation}, which contains one convolutional layer and one fully connected layer.
\item \textbf{CNN+ATT}: CNN-B with attention over instance learning \cite{lin2016attention}.
\item \textbf{PCNN+ATT}: Piecewise CNN-B with attention over instance learning \cite{lin2016attention}.
\item \textbf{CNN}: Our CNN model, which includes one convolutional layer and three fully connected layers.
\item \textbf{CNN-x}: Deeper CNN model with x convolutional layers. For example, CNN-9 is a model constructed with 9 convolutional layers (1 + 4 residual CNN blocks without identity shortcuts) and three fully connected layers.
\item \textbf{ResCNN-x}: Our proposed CNN-x model with residual identity shortcuts.
\end{itemize}
We evaluate our models on the widely used NYT-Freebase dataset~\cite{riedel2010modeling}. Note that the ImageNet dataset used by the original ResNet paper~\cite{he2016deep} has 1.28 million training instances. The NYT-Freebase dataset includes 522K training sentences; it is the largest dataset in relation extraction, and the only one suitable for training deeper CNNs.
\subsection{NYT-Freebase Dataset Performance}
The advantage of this dataset is that there are 522,611 sentences in the training data and 172,448 sentences in the testing data; this size is sufficient to train a deep network. Similar to previous work \cite{zeng2015distant,lin2016attention}, we evaluate our model using the held-out evaluation. We report both the aggregate precision/recall curves and Precision@N (P@N).
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{CNN_ResCNN.png}
\caption{Comparing ResCNN to different CNNs.}
\label{fig:CNN_ResCNN}
\vspace{-2ex}
\end{figure}
In Figure \ref{fig:CNN_ResCNN}, we compare the proposed ResCNN model with various CNNs. First, CNNs with multiple fully-connected layers obtained very good results, which is a novel finding. Second, the results also suggest that deeper CNNs with residual learning help extract signals from noisy distant supervision data. We observe that overfitting happens when we add more layers: the performance of CNN-9 is much worse than that of CNN. We find that ResNet can solve this problem; ResCNN-9 obtains better performance than CNN-B and CNN, and dominates the precision/recall curve overall.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{DeepCNN.png}
\caption{Varying the depths of our models.}
\label{fig:DeepCNN}
\vspace{-2ex}
\end{center}
\end{figure}
We show the effect of depth in residual networks in Figure \ref{fig:DeepCNN}.
We observe that ResCNN-5 is worse than CNN-5, because residual learning does not work well for shallow CNNs; this is consistent with the original ResNet paper. As we increase the network depth, we see that CNN-9 does overfit to the training data. With residual learning, both ResCNN-9 and ResCNN-13 provide significant improvements over the CNN-5 and ResCNN-5 models. Contrary to the popular belief that ResNet only works well for very deep networks, we found that even with 9 convolutional layers, using identity mapping significantly improves learning performance in a noisy input setting.
\begin{table}[t]
\small
\centering
\begin{tabular}{ |p{1.8cm}||p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}| }
\hline
P@N(\%)& 100 & 200 & 300 & Mean \\
\hline
CNN+ATT & 76.2 & 68.6 & 59.8 & 68.2 \\
\hline
PCNN+ATT & \textbf{76.2} & \textbf{73.1} & \textbf{67.4} & \textbf{72.2} \\
\hline\hline
CNN-B & 41.0 & 40.0 & 41.0 & 40.7 \\
\hline
CNN & 64.0 & 61.0 & 55.3 & 60.1 \\
\hline
CNN-5 & 64.0 & 58.5 & 54.3 & 58.9 \\
\hline
ResCNN-5 & 57.0 & 57.0 & 54.3 & 56.1 \\
\hline
CNN-9 & 56.0 & 54.0 & 49.7 & 53.2 \\
\hline
ResCNN-9 & \textbf{79.0} & \textbf{69.0} & \textbf{61.0} & \textbf{69.7} \\
\hline
ResCNN-13 & 76.0 & 65.0 & 60.3 & 67.1 \\
\hline
\end{tabular}
\caption{P@N for relation extraction with different models. Top: models that select training data. Bottom: models without selective attention.}
\label{table:1}
\vspace{-2ex}
\end{table}
The intuition for why ResNet helps this task is two-fold. First, if the lower, middle, and higher levels learn hidden lexical, syntactic, and semantic representations respectively, it sometimes helps to bypass the syntactic level and connect the lexical and semantic spaces directly. Second, ResNet tackles the vanishing gradient problem, which decreases the effect of noise in the distant supervision data. In Table \ref{table:1}, we compare the performance of our models to state-of-the-art baselines. We show that our ResCNN-9 outperforms all models that do not select training instances; even without piecewise max-pooling and instance-based attention, our model is on par with the PCNN+ATT model. For a more practical evaluation, we compare the results for Precision@N with small N (1, 5, 10, 20, 50) in Table \ref{table:2}. We observe that our ResCNN-9 model dominates when predicting relations with high confidence. ResNet helps CNNs focus on the most probable candidates and mitigates the noise effect of distant supervision. We believe that residual connections can be seen as a form of renormalizing the gradients, which prevents the model from overfitting to the noisy distant supervision data.
\begin{table}[t]
\small
\centering
\begin{tabular}{ |p{1.8cm}||p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}| }
\hline
P@N(\%)& 1 & 5 & 10 & 20 & 50 \\
\hline
PCNN+ATT & \textbf{1} & 0.8 & \textbf{0.9} & 0.75 & 0.7 \\
\hline
ResCNN-9 & \textbf{1} & \textbf{1} & \textbf{0.9} & \textbf{0.9} & \textbf{0.88} \\
\hline
\end{tabular}
\caption{P@N for relation extraction with different models where N is small. We get the result of PCNN+ATT using their public source code.}
\label{table:2}
\vspace{-2ex}
\end{table}
From our distantly-supervised relation extraction experiments, we draw two important observations: (1) We get significant improvements by adding multiple fully-connected layers to CNNs. (2) Residual learning significantly improves the performance of deeper CNNs.
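For reference, the P@N metric reported above can be sketched as follows (a minimal version of the standard held-out evaluation; variable names are ours):
\begin{verbatim}
# Precision@N: sort test predictions by confidence, take the top N, and
# compute the fraction whose extracted fact appears in the knowledge base.
def precision_at_n(scores, labels, n):
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])[:n]
    return sum(label for _, label in ranked) / float(n)

scores = [0.95, 0.90, 0.80, 0.70, 0.60]
labels = [1, 1, 0, 1, 0]
print(precision_at_n(scores, labels, 3))  # 2 of the top 3 are correct
\end{verbatim}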
\section{Conclusion}
In this paper, we introduce a deep residual learning method for distantly-supervised relation extraction. We show that deeper convolutional models help distill signals from noisy inputs. With shortcut connections and identity mapping, the performance is significantly improved. These results align with a recent study~\cite{conneau2016very}, suggesting that deeper CNNs do have positive effects on noisy NLP problems.
\input{all.bbl}
\section{Introduction and preliminary}
\noindent With the aid of a system of partial differential equations, we proved an expansion theorem for the bivariate Hermite polynomials in \cite[Theorem~1.8]{LiuHermite2017}. This expansion theorem allows us to develop a systematic method to prove identities involving the Hermite polynomials. We find that the idea of \cite{LiuHermite2017} has wider applicability, which stimulates us to develop a new method to treat the complex Hermite polynomials.
\begin{defn}\label{comHermite}
For complex numbers $x, y$ and non-negative integers $m, n$, the complex Hermite polynomials are defined by
\begin{equation*}
H_{m, n}(x, y)=\sum_{k=0}^{m\land n}(-1)^k k! {m\choose k} {n\choose k} x^{m-k} y^{n-k},
\end{equation*}
where $m\land n=\min\{m, n\}.$
\end{defn}
The polynomials $H_{m, n}(z, \bar{z})$ were first considered by It\^{o} \cite{Ito1952} in his study of complex multiple Wiener integrals and their applications to normal stochastic processes. These polynomials are also applied in \cite{AliBH2019} to coherent states, and in \cite{Wunsche1998}, \cite{Wunsche1999} to quantum optics and quasi-probabilities, respectively. Several papers about this topic have been published in recent years; see for example \cite{Ghanmi2013}, \cite{Ismail2016}, \cite{IsmailZhang}, \cite{IsmailZeng2015}. For our purpose, we need to slightly extend the complex Hermite polynomials by adding an extra parameter, and for convenience we still refer to the extended polynomials as the complex Hermite polynomials.
\begin{defn}\label{triHermitedefn}
For any complex numbers $x, y$ and $z$, the complex Hermite polynomials $H_{m, n}(x, y, z)$ are defined as
\begin{equation*}
H_{m, n}(x, y, z)=\sum_{k=0}^{m\land n} k! {m\choose k} {n\choose k} x^{m-k} y^{n-k} z^k.
\end{equation*}
\end{defn}
It is obvious that when $z=-1,$ $H_{m, n}(x, y, z)$ reduce to the usual complex Hermite polynomials $H_{m, n}(x, y).$ By a simple calculation, we also find the following proposition.
\begin{prop}\label{Hermiterelation}
The polynomials $H_{m, n}(x,y,z)$ and the polynomials $H_{m, n}(x, y)$ satisfy
\[
H_{m, n}(x, y, z)=\left( \sqrt{-z}\right)^{m+n} H_{m,n} \left(\frac{x}{\sqrt{-z}}, \frac{y}{\sqrt{-z}}\right).
\]
\end{prop}
Thus we may regard $H_{m, n}(x, y, z)$ as a variant form of the usual complex Hermite polynomials $H_{m, n}(x, y)$. Although $H_{m, n}(x, y, z)$ are equivalent to the complex Hermite polynomials $H_{m, n}(x, y)$, the former have a richer mathematical structure than the latter.
\begin{rem}
\rm The polynomials $H_{m, n}(x, y, -z)$ have been considered by Dattoli et al. \cite[pp.23--24]{DattoliOTTOVA1997}, and several basic properties of $H_{m, n}(x, y, -z)$ were obtained by them.
\end{rem}
To state our expansion theorem, we now introduce the definition of the $k$-fold complex Hermite series in several variables.
\begin{defn}\label{kfoldHermite:pp}
The $k$-fold complex Hermite series are defined as
\[
\sum_{m_1, n_1, \ldots, m_k, n_k=0}^\infty \lambda_{m_1, n_1, \ldots, m_k, n_k} H_{m_1, n_1} (x_1, y_1, z_1) \cdots H_{m_k, n_k} (x_k, y_k, z_k),
\]
where $\lambda_{m_1, n_1, \ldots, m_k, n_k}$ are complex numbers independent of $x_1, y_1, z_1, \ldots, x_k, y_k, z_k.$
\end{defn}
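Definition~\ref{triHermitedefn} and Proposition~\ref{Hermiterelation} are easy to check symbolically; the following sympy sketch (ours, for illustration only) writes $z=-w^2$ with $w>0$ so that $\sqrt{-z}=w$:
\begin{verbatim}
# Symbolic check of the relation between H_{m,n}(x,y,z) and
# H_{m,n}(x,y) = H_{m,n}(x,y,-1), for small m, n.
import sympy as sp

x, y = sp.symbols('x y')
w = sp.symbols('w', positive=True)  # z = -w**2, so sqrt(-z) = w

def H(m, n, x, y, z):
    return sp.expand(sum(sp.factorial(k) * sp.binomial(m, k)
                         * sp.binomial(n, k) * x**(m - k) * y**(n - k) * z**k
                         for k in range(min(m, n) + 1)))

m, n = 3, 2
lhs = H(m, n, x, y, -w**2)
rhs = sp.expand(w**(m + n) * H(m, n, x / w, y / w, -1))
print(sp.simplify(lhs - rhs))  # 0
\end{verbatim}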
The principal result of this paper is the following expansion theorem for analytic functions in several variables.
\begin{thm}\label{LiutriHermite:eqna}
If $f(x_1, y_1, z_1, \ldots, x_k, y_k, z_k)$ is a $3k$-variable analytic function at $(0, 0, \ldots, 0)\in \mathbb{C}^{3k}$, then, $f$ can be expanded in an absolutely and uniformly convergent $k$-fold complex Hermite series, if and only if, for $j\in \{1, 2, \ldots, k\}, f$ satisfies the partial differential equations
\[
\frac{\partial f}{\partial z_j} =\frac{\partial^2 f}{\partial x_j \partial y_j}.
\]
\end{thm}
This theorem is a powerful tool for proving formulas involving the complex Hermite polynomials, which allows us to develop a systematic method to derive identities involving the complex Hermite polynomials.
\section{The proof of Theorem~\ref{LiutriHermite:eqna}}
Using $\exp(sx+ty+stz)=\exp(sx)\exp(ty)\exp(stz)$ and the Maclaurin expansion for the exponential function, one can easily derive Proposition~\ref{triH:propa}.
\begin{prop}\label{triH:propa}
For any complex numbers $x, y, z$ and $s, t$, we have
\[
\sum_{m, n=0}^{\infty} H_{m, n} (x, y, z) \frac{s^m t^n}{m! n!} =\exp(sx+ty+stz).
\]
\end{prop}
In order to prove Theorem~\ref{LiutriHermite:eqna}, we need the following three propositions.
\begin{prop}\label{triH:propb}
The complex Hermite polynomials $H_{m, n}(x, y, z)$ satisfy the partial differential equation
\[
\frac{\partial H_{m, n}}{\partial z} =\frac{\partial^2 H_{m, n}}{\partial x \partial y}.
\]
\end{prop}
\begin{proof}
Applying the partial differential operator ${\partial^2}/{\partial x \partial y}$ to both sides of the equation in Proposition~\ref{triH:propa}, we find that
\[
\sum_{m, n=0}^{\infty} \frac{\partial^2 H_{m, n}}{\partial x \partial y} \frac{s^m t^n}{m! n!} =st\exp(sx+ty+stz).
\]
Upon differentiating both sides of the equation in Proposition~\ref{triH:propa} with respect to $z$, we arrive at
\[
\sum_{m, n=0}^{\infty} \frac{\partial H_{m, n}}{\partial z} \frac{s^m t^n}{m! n!}=st\exp(sx+ty+stz).
\]
A comparison of these two equations immediately gives us that
\[
\sum_{m, n=0}^{\infty} \frac{\partial H_{m, n}}{\partial z} \frac{s^m t^n}{m! n!}=\sum_{m, n=0}^{\infty} \frac{\partial^2 H_{m, n}}{\partial x \partial y} \frac{s^m t^n}{m! n!}.
\]
Equating the coefficients of like powers of $s$ and $t$, we complete the proof of the proposition.
\end{proof}
\begin{prop}\label{triH:propc}
The following exponential operator representation for the complex Hermite polynomials holds:
\[
H_{m, n} (x, y, z) =\exp \left( z\frac{\partial^2 }{\partial x \partial y} \right)\{x^m y^n\}.
\]
\end{prop}
This operational identity for the complex Hermite polynomials is equivalent to \cite[Eq.(1.5.2d)]{DattoliOTTOVA1997}.
\begin{rem}\rm
Applying the exponential operator $\exp \left( -z\frac{\partial^2 }{\partial x \partial y} \right)$ to both sides of the equation in Proposition~\ref{triH:propc}, we have
\begin{align}
x^my^n&=\exp \left( -z\frac{\partial^2 }{\partial x \partial y} \right)\{H_{m, n}(x, y, z)\}\\
&=\sum_{k=0}^{m\land n} k! {m\choose k} {n\choose k} \exp \left( -z\frac{\partial^2 }{\partial x \partial y} \right)\{x^{m-k} y^{n-k}\} z^k\nonumber\\
&=\sum_{k=0}^{m\land n} k! {m\choose k} {n\choose k} H_{m-k, n-k}(x, y, -z) z^k.\nonumber
\end{align}
\end{rem}
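The partial differential equation of Proposition~\ref{triH:propb} can likewise be verified directly for small degrees (our sketch, for illustration):
\begin{verbatim}
# Symbolic check that dH/dz = d^2 H/(dx dy) for all m, n < 4.
import sympy as sp

x, y, z = sp.symbols('x y z')

def H(m, n):
    return sum(sp.factorial(k) * sp.binomial(m, k) * sp.binomial(n, k)
               * x**(m - k) * y**(n - k) * z**k
               for k in range(min(m, n) + 1))

for m in range(4):
    for n in range(4):
        h = H(m, n)
        assert sp.simplify(sp.diff(h, z) - sp.diff(h, x, y)) == 0
print("PDE verified for all m, n < 4")
\end{verbatim}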
\begin{prop}\label{mcvarapp}
If $f(x_1, x_2, \ldots, x_k)$ is analytic at the origin $(0, 0, \ldots, 0)\in \mathbb{C}^k$, then, $f$ can be expanded in an absolutely and uniformly convergent power series,
\[
f(x_1, x_2, \ldots, x_k)=\sum_{n_1, n_2, \ldots, n_k=0}^\infty \lambda_{n_1, n_2, \ldots, n_k} x_1^{n_1} x_2^{n_2}\cdots x_k^{n_k}.
\]
\end{prop}
This proposition can be found in the standard textbooks for complex analysis in several variables (see, for example \cite[p. 5, Proposition~ 1]{Malgrange}). Now we begin to prove Theorem~\ref{LiutriHermite:eqna} with the help of the above three propositions.
\begin{proof}
The theorem can be proved by mathematical induction. We first prove the theorem for the case $k=1.$ Since $f$ is analytic at $(0, 0, 0),$ we know that $f$ can be expanded in an absolutely and uniformly convergent power series in a neighborhood of $(0, 0, 0)$. Thus there exists a sequence $\{\lambda_{m, n, p}\}$ independent of $x_1, y_1$ and $z_1$ such that
\begin{equation}
f(x_1, y_1, z_1)=\sum_{m, n, p=0}^\infty \lambda_{m, n, p} x_1^m y_1^n z_1^p.
\label{liu:eqn1}
\end{equation}
The series on the right-hand side of the equation above is absolutely and uniformly convergent. Upon substituting the equation above into the following partial differential equation:
\[
\frac{\partial f}{\partial z_1} =\frac{\partial^2 f}{\partial x_1 \partial y_1},
\]
and then using the identity $D_{ z_1}\{z_1^p\}=p z_1^{p-1}$ in the resulting equation, we obtain
\begin{equation*}
\sum_{m, n, p=0}^\infty p \lambda_{m, n, p} x_1^{m} y_1^{n} z_1^{p-1} =\frac{\partial^2 }{\partial x_1 \partial y_1}\left\{\sum_{m, n, p=0}^\infty \lambda_{m, n, p} x_1^m y_1^{n}z_1^p\right\}.
\end{equation*}
Upon equating the coefficients of $z_1^{p-1}$ on both sides of the equation, we deduce that
\begin{align*}
p\sum_{m, n=0}^\infty \lambda_{m, n, p} x_1^m y_1^n =\frac{\partial^2 }{\partial x_1 \partial y_1}\left\{\sum_{m, n=0}^\infty \lambda_{m, n, p-1} x_1^m y_1^n\right\}.
\end{align*}
If we iterate this relation $(p-1)$ times and interchange the order of differentiation and summation, we deduce that
\begin{align*}
\sum_{m, n=0}^\infty \lambda_{m, n, p} x_1^m y_1^n &=\frac{1}{p!}\frac{\partial^{2p} }{\partial x_1^p \partial y_1^p}\left\{\sum_{m, n=0}^\infty \lambda_{m, n, 0} x_1^m y_1^n\right\}\\
&=\frac{1}{p!}\sum_{m, n=0}^\infty \lambda_{m, n, 0} \frac{\partial^{2p} }{\partial x_1^p \partial y_1^p} \{x_1^m y_1^n\}.
\end{align*}
Substituting this equation into (\ref{liu:eqn1}) and using a simple calculation, we conclude that
\begin{align*}
f(x_1, y_1, z_1)&=\sum_{ p=0}^\infty z_1^p \sum_{m, n=0}^\infty \lambda_{m, n, p} x_1^m y_1^n\\
&=\sum_{ p=0}^\infty \frac{z_1^p}{p!}\sum_{m, n=0}^\infty \lambda_{m, n, 0} \frac{\partial^{2p} }{\partial x_1^p \partial y_1^p} \{x_1^m y_1^n\} .
\end{align*}
Interchanging the order of summation and using Proposition~\ref{triH:propc}, we deduce that
\begin{align*}
f(x_1, y_1, z_1)&=\sum_{m, n=0}^\infty \lambda_{m, n, 0} \exp \left(z_1 \frac{\partial^2 }{\partial x_1 \partial y_1}\right) \{x_1^m y_1^n\}\\
&=\sum_{m, n=0}^\infty \lambda_{m, n, 0} H_{m, n}(x_1, y_1, z_1).
\end{align*}
This indicates that $f(x_1, y_1, z_1)$ can be expanded in terms of $H_{m, n}(x_1, y_1, z_1).$ Conversely, if $f(x_1, y_1, z_1)$ can be expanded in terms of $H_{m, n}(x_1, y_1, z_1)$, then, using Proposition~\ref{triH:propb}, we find that $f(x_1, y_1, z_1)$ satisfies the partial differential equation
\[
\frac{\partial f}{\partial z_1} =\frac{\partial^2 f}{\partial x_1 \partial y_1}.
\]
This shows that Theorem~\ref{LiutriHermite:eqna} holds for the case with $k=1$. Now, we assume that the theorem is true for the case $k-1$ and consider the case $k$.
If we regard $f(x_1, y_1, z_1, \ldots, x_k, y_k, z_k)$ as a function of $x_1, y_1 $ and $z_1$, then, $f$ is analytic at $(0, 0, 0)$ and satisfies the partial differential equation
\[
\frac{\partial f}{\partial z_1} =\frac{\partial^2 f}{\partial x_1 \partial y_1}.
\]
Hence there exists a sequence $\{c_{m_1, n_1}(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)\}$ independent of $x_1, y_1$ and $z_1$ such that
\begin{align}
&f(x_1, y_1, z_1, \ldots, x_k, y_k, z_k)\label{liu:eqn2}\\
&=\sum_{m_1, n_1=0}^\infty c_{m_1, n_1}(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)H_{m_1, n_1}(x_1, y_1, z_1). \nonumber
\end{align}
Setting $z_1=0$ in the equation and using the obvious equation $H_{m_1, n_1}(x_1, y_1, 0)=x_1^{m_1} y_1^{n_1},$ we obtain
\begin{align*}
&f(x_1, y_1, 0, \ldots, x_k, y_k, z_k)\\
&=\sum_{m_1, n_1=0}^\infty c_{m_1, n_1}(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)x_1^{m_1} y_1^{n_1}.
\end{align*}
Using the Maclaurin expansion for analytic functions of two variables, we immediately deduce that
\begin{align*}
&c_{m_1, n_1}(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)\\
&=\frac{\partial^{m_1+n_1} f(x_1, y_1, 0, \ldots, x_k, y_k, z_k)}{m_1! n_1! \partial {x_1}^{m_1} \partial {y_1}^{n_1}}\Big|_{x_1=y_1=0}.
\end{align*}
Since $f(x_1, y_1, z_1, \ldots, x_k, y_k, z_k)$ is analytic at $(0, \ldots, 0)\in \mathbb{C}^{3k},$ from the above equation, we know that $c_{m_1, n_1}(x_2, y_2, z_2,\ldots, x_k, y_k, z_k)$ is analytic at
\[
(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)=(0, \ldots, 0)\in \mathbb{C}^{3k-3}.
\]
Substituting (\ref{liu:eqn2}) into the partial differential equations in Theorem~\ref{LiutriHermite:eqna}, we find that for $j=2, \ldots, k,$
\begin{align*}
&\sum_{m_1, n_1=0}^\infty \frac{\partial c_{m_1, n_1}(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)} {\partial {z_j}}H_{m_1, n_1}(x_1, y_1, z_1)\\
&=\sum_{m_1, n_1=0}^\infty \frac{\partial ^2 c_{m_1, n_1}(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)}{\partial {x_j} \partial {y_j}} H_{m_1, n_1}(x_1, y_1, z_1).
\end{align*}
By equating the coefficients of $H_{m_1, n_1}(x_1, y_1, z_1)$ in the above equation, we find that for $j=2, \ldots, k,$
\begin{align*}
\frac{\partial c_{m_1, n_1}(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)} {\partial {z_j}} =\frac{\partial ^2c_{m_1, n_1}(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)}{\partial {x_j}\partial {y_j}}.
\end{align*}
Thus by the inductive hypothesis, there exists a sequence $\lambda_{m_1, n_1, \ldots, m_k, n_k}$ independent of $x_2, y_2, z_2, \ldots, x_k, y_k, z_k$ (of course independent of $x_1, y_1$ and $z_1$) such that
\begin{align*}
&c_{m_1, n_1}(x_2, y_2, z_2, \ldots, x_k, y_k, z_k)\\
&=\sum_{m_2, n_2, \ldots, m_k, n_k=0}^\infty \lambda_{m_1, n_1, \ldots, m_k, n_k} H_{m_2, n_2}( x_2, y_2, z_2)\ldots H_{m_k, n_k}(x_k, y_k, z_k).
\end{align*}
Substituting this equation into (\ref{liu:eqn2}), we find that $f$ can be expanded into a $k$-fold complex Hermite series. Conversely, if $f$ is a $k$-fold complex Hermite series, then it satisfies the partial differential equations in Theorem~\ref{LiutriHermite:eqna} by Proposition~\ref{triH:propb}. Hence we complete the proof of the theorem.
\end{proof}
To determine whether a given function is analytic in several complex variables, we can use the following theorem due to Hartogs (see, for example, \cite[p. 28]{Taylor}).
\begin{thm}\label{hartogthm}
If a complex valued function $f(z_1, z_2, \ldots, z_n)$ is holomorphic (analytic) in each variable separately in a domain $U\subseteq\mathbb{C}^n,$ then, it is holomorphic (analytic) in $U.$
\end{thm}
\section{The Poisson Kernel for the complex Hermite polynomials }
In this section we will use Theorem~\ref{LiutriHermite:eqna} to give a completely new proof of the following Poisson kernel for the complex Hermite polynomials. This formula was first derived by Carlitz \cite[p.13]{carlitz1978} in 1978, and rediscovered by W\"{u}nsche \cite{Wunsche1999} without proof in 1999. Ismail \cite[Theorem~3.3]{Ismail2016} recovered it as a specific case of his Kibble--Slepian formula. For other different proofs, please see \cite[Theorem~4.1]{Ghanmi2017}, \cite{IsmailZhang}.
\begin{thm}\label{mehlerthm}
For $|stz_1z_2|<1,$ the Mehler formula for the complex Hermite polynomials states that
\begin{align*}
&\sum_{m, n=0}^\infty \frac{H_{m, n}(x_1, y_1, z_1)H_{m, n}(x_2, y_2, z_2)}{m! n!} {s^mt^n}\\
&=\frac{1}{1-stz_1z_2} \exp \left(\frac{sx_1x_2+ty_1y_2+(z_1x_2y_2+z_2x_1y_1)st}{1-stz_1z_2}\right).
\end{align*}
\end{thm}
\begin{proof}
If we use $f(x_1, y_1, z_1)$ to denote the right-hand side of the equation in Theorem~\ref{mehlerthm}, then, it is easily seen that $f(x_1, y_1, z_1)$ is an analytic function of $x_1, y_1, z_1$ for any $x_1, y_1$ and $|stz_1z_2|<1.$ Hence $f(x_1, y_1, z_1)$ is analytic at $(x_1, y_1, z_1)=(0, 0, 0).$ By a direct computation, we find that
\[
\frac{\partial f}{\partial z_1}=\frac{\partial^2 f}{\partial x_1 \partial y_1} =\left(\frac{stz_2}{1-stz_1z_2}+\frac{st(x_2+tz_2y_1)(y_2+sz_2x_1)}{(1-stz_1z_2)^2} \right)f.
\]
Thus, by Theorem~\ref{LiutriHermite:eqna}, there exists a sequence $\{\lambda_{m, n}\}$ independent of $x_1, y_1$ and $z_1$ such that
\begin{align}
&\frac{1}{1-stz_1z_2} \exp \left(\frac{sx_1x_2+ty_1y_2+(z_1x_2y_2+z_2x_1y_1)st}{1-stz_1z_2}\right)\label{mehler:eqn1}\\
&=\sum_{m, n=0}^{\infty} \lambda_{m, n} H_{m, n}(x_1, y_1, z_1).\nonumber
\end{align}
Setting $z_1=0$ in this equation and using $H_{m, n}(x_1, y_1, 0)=x_1^m y_1^n$ in the resulting equation, we immediately find that
\[
\exp (sx_1x_2+ty_1y_2+x_1y_1z_2st) =\sum_{m, n=0}^{\infty} \lambda_{m, n} x_1^m y_1^n.
\]
Using the generating function for the complex Hermite polynomials in Proposition~\ref{triH:propa}, we have
\[
\exp (sx_1x_2+ty_1y_2+x_1y_1z_2st)=\sum_{m, n=0}^\infty \frac{H_{m, n}(x_2, y_2, z_2)}{m! n!} {(sx_1)^{m}(ty_1)^n}.
\]
Comparing the right-hand sides of these two equations, we conclude that
\[
\lambda_{m, n}=\frac{H_{m, n}(x_2, y_2, z_2)}{m! n!}s^m t^n.
\]
Substituting this into (\ref{mehler:eqn1}), we complete the proof of Theorem~\ref{mehlerthm}.
\end{proof}
Using Proposition~\ref{triH:propc}, we easily find that the Poisson kernel for the complex Hermite polynomials is equivalent to the following exponential operator identity, which is \cite[Equation (5.1)]{Wunsche2015}.
\begin{thm}\label{Mehleroperator}
For $|stz_1z_2|<1,$ we have the exponential operator identity
\begin{align*}
&\exp\left( z_2 \frac{\partial^2}{\partial x_2 \partial y_2}\right) \left\{ \exp (sx_1x_2+ty_1y_2+stz_1x_2y_2) \right\}\\
&=\frac{1}{1-stz_1z_2} \exp \left(\frac{sx_1x_2+ty_1y_2+(z_1x_2y_2+z_2x_1y_1)st}{1-stz_1z_2}\right).
\end{align*}
\end{thm}
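Theorem~\ref{mehlerthm} can be checked to any fixed order by comparing Taylor coefficients; the following sympy sketch (ours, for illustration only) compares the coefficients of $s^mt^n$ for $m, n\le 2$:
\begin{verbatim}
# Coefficient check of the Poisson kernel up to order N in s and t.
import sympy as sp

s, t, x1, y1, z1, x2, y2, z2 = sp.symbols('s t x1 y1 z1 x2 y2 z2')

def H(m, n, x, y, z):
    return sum(sp.factorial(k) * sp.binomial(m, k) * sp.binomial(n, k)
               * x**(m - k) * y**(n - k) * z**k
               for k in range(min(m, n) + 1))

N = 2
lhs = sum(H(m, n, x1, y1, z1) * H(m, n, x2, y2, z2) * s**m * t**n
          / (sp.factorial(m) * sp.factorial(n))
          for m in range(N + 1) for n in range(N + 1))
rhs = sp.exp((s*x1*x2 + t*y1*y2 + (z1*x2*y2 + z2*x1*y1)*s*t)
             / (1 - s*t*z1*z2)) / (1 - s*t*z1*z2)
# Truncate the right-hand side to the same orders in s and t.
rhs = sp.series(sp.series(rhs, s, 0, N + 1).removeO(), t, 0, N + 1).removeO()
print(sp.expand(lhs - rhs) == 0)  # True
\end{verbatim}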
\section{The Nielsen type formulas for the complex Hermite polynomials}
\noindent We begin this section with the following formula for the complex Hermite polynomials.
\begin{thm}\label{Nielsen:thma}
For any complex numbers $x, y, z, s_1, s_2, t_1$ and $t_2$, we have
\begin{align*}
&\exp\left((s_1+s_2)x+(t_1+t_2)y+(s_1+s_2)(t_1+t_2)z\right)\\
&=\sum_{m_1, n_1, m_2, n_2=0}^\infty H_{m_1+m_2, n_1+n_2} (x, y, z) \frac{s_1^{m_1} s_2^{m_2}t_1^{n_1}t_2^{n_2}}{m_1! m_2! n_1! n_2!}.
\end{align*}
\end{thm}
\begin{proof}
Denote the left-hand side of the equation in Theorem~\ref{Nielsen:thma} by $f(x, y, z)$. It is easily seen that $f(x, y, z)$ is analytic at $(0, 0, 0)$. A simple computation shows that
\[
\frac{\partial f}{\partial z} =\frac{\partial^2 f}{\partial x \partial y} =(s_1+s_2)(t_1+t_2) f(x, y, z).
\]
Thus, by Theorem~\ref{LiutriHermite:eqna}, there exists a sequence $\{\lambda_{k, l}\}$ independent of $x, y$ and $z$ such that
\begin{align}
&\exp\left((s_1+s_2)x+(t_1+t_2)y+(s_1+s_2)(t_1+t_2)z\right)\label{Tnielsen:eqn1}\\
&=\sum_{k, l=0}^\infty \lambda_{k, l} H_{k, l}(x, y, z).\nonumber
\end{align}
Upon setting $z=0$ in the equation and using $H_{k, l}(x, y, 0)=x^ky^l,$ we deduce that
\[
\exp\left((s_1+s_2)x+(t_1+t_2)y\right) =\sum_{k, l=0}^\infty \lambda_{k, l} x^k y^l.
\]
Equating the coefficients of $x^ky^l$ on both sides of this equation, we find that $k! l! \lambda_{k, l}=(s_1+s_2)^k(t_1+t_2)^l.$ Substituting this into the right-hand side of (\ref{Tnielsen:eqn1}), expanding $(s_1+s_2)^k (t_1+t_2)^l$ using the binomial theorem and interchanging the order of summation, we complete the proof of Theorem~\ref{Nielsen:thma}.
\end{proof}
Using Theorem~\ref{Nielsen:thma} and the method of equating coefficients of like powers, we can derive the following Nielsen type formula for the complex Hermite polynomials, which is equivalent to \cite[Equation (3.11)]{Ghanmi2013} and \cite[Equation (4.7)]{Ismail2016}.
\begin{thm}\label{Nielsen:thmb}
For any non-negative integers $m_j, n_j$, $j\in\{1, 2\}$, we have
\begin{align*}
&\frac{H_{m_1+m_2, n_1+n_2}(x, y, z)}{m_1! m_2! n_1! n_2!}\\
&=\sum_{p_1=0}^{m_1\land n_2}\sum_{p_2=0}^{n_1\land m_2} \frac{H_{m_1-p_1, n_1-p_2}(x, y, z)H_{m_2-p_2, n_2-p_1}(x, y, z) z^{p_1+p_2}}{p_1!p_2!(m_1-p_1)!(m_2-p_2)!(n_1-p_2)!(n_2-p_1)!}.
\end{align*}
\end{thm}
Upon multiplying both sides of the equation in Theorem~\ref{Nielsen:thma} by $\exp(-(s_1t_2+s_2t_1)z)$ and then equating the coefficients of like powers, we can also derive the following formula due to Ismail \cite[Theorem~4.1]{Ismail2016}.
\begin{thm}\label{Nielsen:thmc}
For any non-negative integers $m_j, n_j$, $j\in\{1, 2\}$, we have
\begin{align*}
&\frac{H_{m_1, n_1}(x, y, z)H_{m_2, n_2}(x, y, z)}{m_1! m_2! n_1! n_2!}\\
&=\sum_{p_1=0}^{m_1\land n_2}\sum_{p_2=0}^{n_1\land m_2} \frac{H_{m_1+m_2-p_1-p_2, n_1+n_2-p_1-p_2}(x, y, z) (-z)^{p_1+p_2}}{p_1!p_2!(m_1-p_1)!(m_2-p_2)!(n_1-p_2)!(n_2-p_1)!}.
\end{align*}
\end{thm}
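As a sanity check on Theorem~\ref{Nielsen:thmc}, here is a symbolic verification (ours, for one choice of indices):
\begin{verbatim}
# Symbolic check of the linearization formula for (m1,n1,m2,n2) = (2,1,1,2).
import sympy as sp

x, y, z = sp.symbols('x y z')

def H(m, n):
    return sum(sp.factorial(k) * sp.binomial(m, k) * sp.binomial(n, k)
               * x**(m - k) * y**(n - k) * z**k
               for k in range(min(m, n) + 1))

m1, n1, m2, n2 = 2, 1, 1, 2
lhs = H(m1, n1) * H(m2, n2) / (sp.factorial(m1) * sp.factorial(n1)
                               * sp.factorial(m2) * sp.factorial(n2))
rhs = sum(H(m1 + m2 - p1 - p2, n1 + n2 - p1 - p2) * (-z)**(p1 + p2)
          / (sp.factorial(p1) * sp.factorial(p2) * sp.factorial(m1 - p1)
             * sp.factorial(m2 - p2) * sp.factorial(n1 - p2)
             * sp.factorial(n2 - p1))
          for p1 in range(min(m1, n2) + 1)
          for p2 in range(min(n1, m2) + 1))
print(sp.simplify(lhs - rhs))  # 0
\end{verbatim}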
\section{ Addition formula for the complex Hermite polynomials }
\begin{thm}\label{additionthm}
If $M, N$ are two non-negative integers, then, we have the following addition formula for the complex Hermite polynomials:
\begin{align*}
&H_{M, N} (a_1x_1+\cdots+a_kx_k, b_1y_1+\cdots+b_ky_k, a_1b_1z_1+\cdots+a_kb_kz_k)\\
&=\sum_{m_1, n_1, \ldots, m_k, n_k} \frac{M!N!}{m_1!n_1!\ldots m_k!n_k!} a_1^{m_1}\cdots a_k^{m_k} b_1^{n_1}\cdots b_k^{n_k}\\
&\qquad \qquad \qquad \qquad \times H_{m_1, n_1}(x_1, y_1, z_1)\cdots H_{m_k, n_k}(x_k, y_k, z_k).
\end{align*}
The sum is taken over all combinations of non-negative integer indices $m_1$ through $m_k$ and $n_1$ through $n_k$ such that
\[
m_1+\cdots+m_k=M,~\text{and}~n_1+\cdots+n_k=N.
\]
\end{thm}
\begin{proof}
Upon denoting the left-hand side of the equation in Theorem~\ref{additionthm} by
\[
f(x_1, y_1, z_1, \ldots, x_k, y_k, z_k),
\]
it is obvious that this function is analytic at $(0, \ldots, 0)\in \mathbb{C}^{3k}.$ For simplicity, we temporarily denote
\begin{align*}
x&=a_1x_1+\cdots+a_kx_k,\\
y&=b_1y_1+\cdots+b_ky_k,\\
z&=a_1b_1z_1+\cdots+a_kb_kz_k.
\end{align*}
By a simple calculation, we find that for $j=1, \ldots, k,$
\[
\frac{\partial f}{\partial z_j} =\frac{\partial^2 f}{\partial x_j \partial y_j}=a_jb_j \frac{\partial H_{M,N}}{\partial z}.
\]
Thus, by Theorem~\ref{LiutriHermite:eqna}, there exists a sequence $\{\lambda_{m_1, n_1, \ldots, m_k, n_k}\}$ independent of
\[
x_1, y_1, z_1, \ldots, x_k, y_k, z_k
\]
such that
\begin{align*}
&H_{M, N} (a_1x_1+\cdots+a_kx_k, b_1y_1+\cdots+b_ky_k, a_1b_1z_1+\cdots+a_kb_kz_k)\\
&=\sum_{m_1, n_1, \ldots, m_k, n_k=0}^\infty \lambda_{m_1, n_1, \ldots, m_k, n_k} H_{m_1, n_1}(x_1, y_1, z_1)\cdots H_{m_k, n_k}(x_k, y_k, z_k).
\end{align*}
Setting $z_1=\cdots=z_k=0$ and using the fact that
\[
H_{m_j, n_j} (x_j, y_j, 0)=x_j^{m_j}y_j^{n_j}
\]
in the resulting equation, we deduce that
\begin{align*}
&(a_1x_1+\cdots+a_kx_k)^M (b_1y_1+\cdots+b_ky_k)^N\\
&=\sum_{m_1, n_1, \ldots, m_k, n_k=0}^\infty \lambda_{m_1, n_1, \ldots, m_k, n_k} x_1^{m_1} y_1^{n_1}\cdots x_k^{m_k} y_k^{n_k}.
\end{align*}
Expanding the left-hand side by the multinomial theorem and then equating the coefficients of the multiple power series, we complete the proof of Theorem~\ref{additionthm}.
\end{proof}
\section{A multilinear generating function for the complex Hermite polynomials }
\begin{thm}\label{multipl:mehler}
If $|s_1t_1z_1+\cdots+s_rt_rz_r|<1$ and $a, b, c$ are defined by
\begin{align*}
a&=s_1x_1+\cdots+s_rx_r,\\
b&=t_1y_1+\cdots+t_ry_r,\\
c&=s_1t_1z_1+\cdots+s_rt_rz_r,
\end{align*}
then, we have the following multilinear generating function for the complex Hermite polynomials:
\begin{equation}
\frac{1}{(1-cz)} \exp \left(\frac{ax+by+cxy+abz}{1-cz}\right)
\label{mul:eqn1}
\end{equation}
\begin{align}
&=\sum_{m_1, n_1, \ldots, m_r, n_r=0}^\infty H_{m_1, n_1}(x_1, y_1, z_1)\cdots H_{m_r, n_r}(x_r, y_r, z_r)\nonumber\\
&\qquad \qquad \qquad \qquad \qquad \times H_{m_1+\cdots+m_r, n_1+\cdots+n_r}(x, y, z) \frac{s_1^{m_1}t_1^{n_1}\cdots s_r^{m_r}t_r^{n_r}}{m_1! n_1!\cdots m_r! n_r!}.\nonumber
\end{align}
\end{thm}
\begin{proof}
If we use $f(x, y, z)$ to denote the left-hand side of (\ref{mul:eqn1}), then, it is easily seen that $f$ is an analytic function of $x, y, z$ for any $x, y$ and $|cz|<1.$ Hence $f(x, y, z)$ is analytic at $(x, y, z)=(0, 0, 0)$. By a straightforward computation, we conclude that
\[
\frac{\partial f}{\partial z} =\frac{\partial^2 f}{\partial x \partial y}= \left(\frac{c}{1-cz}+\frac{(a+cy)(b+cx)}{(1-cz)^2}\right)f.
\]
Thus, by Theorem~\ref{LiutriHermite:eqna}, there exists a sequence $\lambda_{k, l}$ independent of $x, y, z$ such that
\begin{equation}
f(x, y, z)=\sum_{k, l=0}^\infty \lambda_{k, l} H_{k, l}(x, y, z).
\label{mul:eqn2}
\end{equation}
Setting $z=0$ in the above equation and using the fact that $H_{k, l}(x, y, 0)=x^ky^l,$ we find that
\begin{equation}
f(x, y, 0)=\sum_{k, l=0}^\infty \lambda_{k, l} x^k y^l.
\label{mul:eqn3}
\end{equation}
On the other hand, from the definition of $f(x, y, z)$, it is easily seen that
\[
f(x, y, 0)=\prod_{j=1}^r \exp (s_jx_jx+t_jy_jy+s_jt_jz_jxy).
\]
Using the generating function of the exponential type for the complex Hermite polynomials in Proposition~\ref{triH:propa}, we find that
\begin{align*}
f(x, y, 0)&=\sum_{m_1, n_1, \ldots, m_r, n_r=0}^\infty H_{m_1, n_1}(x_1, y_1, z_1)\cdots H_{m_r, n_r}(x_r, y_r, z_r)\\
&\qquad \qquad \qquad \qquad \times \frac{(s_1x)^{m_1}(t_1y)^{n_1}\cdots (s_rx)^{m_r}(t_ry)^{n_r}}{m_1! n_1!\cdots m_r! n_r!}.
\end{align*}
Comparing this equation with (\ref{mul:eqn3}) and equating the coefficients of $x^k y^l$, we conclude that
\begin{align*}
\lambda_{k, l}&=\sum_{{m_1+\cdots+m_r=k}\atop{n_1+\cdots+n_r=l}}^\infty H_{m_1, n_1}(x_1, y_1, z_1)\cdots H_{m_r, n_r}(x_r, y_r, z_r)\\
&\qquad \qquad \qquad \qquad \times \frac{s_1^{m_1}t_1^{n_1}\cdots s_r^{m_r}t_r^{n_r}}{m_1! n_1!\cdots m_r! n_r!}.
\end{align*}
Substituting this into (\ref{mul:eqn2}), we complete the proof of Theorem~\ref{multipl:mehler}.
\end{proof}
\section{A generating function for the products of the Hermite polynomials and the complex Hermite polynomials}
\noindent As usual, for any real number $x$, we use $[x]$ to denote the greatest integer function. For any complex number $x$, the Hermite polynomials are defined by
\begin{equation}
H_n(x)=\sum_{k=0}^{[\frac{n}{2}]} \frac{n!}{k!(n-2k)!} (2x)^{n-2k}.
\label{Hermite:eqn1}
\end{equation}
The exponential generating function for the Hermite polynomials $H_n(x)$ is given by
\begin{equation}
\exp (2xt-t^2)=\sum_{n=0}^\infty \frac{H_n(x)}{n!} t^n, \quad |t|<\infty.
\label{Hermite:eqn2}
\end{equation}
The following formula is equivalent to W\"{u}nsche \cite[Equation (7.4)]{Wunsche2015}. In that paper, W\"{u}nsche only remarked that the formula can be proved by using auxiliary formulae prepared in Appendix A, without giving details. Here we will use Theorem~\ref{LiutriHermite:eqna} to give a very simple proof of Theorem~\ref{mixed:mehler}.
\begin{thm}\label{mixed:mehler}
For $|2stz|<1,$ we have the following generating function for the Hermite polynomials and the complex Hermite polynomials.
\begin{align*}
&\sum_{m, n=0}^\infty (-1)^{m+n} H_{m, n}(x, y, z) H_m(u)H_n(v)\frac{s^mt^n}{m!n!}\\
&=\frac{\exp(u^2+v^2)}{\sqrt{1-4s^2t^2z^2}} \exp\left(\frac{4stz(sx+u)(ty+v)-(sx+u)^2-(ty+v)^2}{1-4s^2t^2z^2}\right).
\end{align*}
\end{thm}
\begin{proof}
If we use $f(x, y, z)$ to denote the right-hand side of the equation in Theorem~\ref{mixed:mehler}, then, it is easily seen that $f$ is analytic at $(0, 0, 0)$. An elementary calculation shows that
\begin{align*}
&\frac{\partial f}{\partial z} =\frac{\partial^2 f}{\partial x \partial y}=\\
& \left\{\frac{4s^2t^2z}{1-4s^2t^2z^2} +\frac{4st(2stz(sx+u)-(ty+v))(2stz(ty+v)-(sx+u))}{(1-4s^2t^2z^2)^2} \right\}f.
\end{align*}
Hence, by Theorem~\ref{LiutriHermite:eqna}, there exists a sequence $\lambda_{m, n}$ independent of $x, y, z$ such that
\begin{equation}
f(x, y, z)=\sum_{m, n=0}^\infty \lambda_{m, n} H_{m, n}(x, y, z).
\label{mix:eqn1}
\end{equation}
Setting $z=0$ in the above equation and using the fact that $H_{m, n}(x, y, 0)=x^my^n,$ we deduce that
\begin{equation}
\exp(-(sx)^2-2sxu-(ty)^2-2tyv)=\sum_{m, n=0}^\infty \lambda_{m, n} x^m y^n.
\label{mix:eqn2}
\end{equation}
Using the exponential generating function for the Hermite polynomials, we find that
\begin{align*}
\exp(-(sx)^2-2sxu)=\sum_{m=0}^\infty H_m(u)\frac{(-sx)^m}{m!},\\
\exp(-(ty)^2-2tyv)=\sum_{n=0}^\infty H_n(v) \frac{(-ty)^n}{n!}.
\end{align*}
Upon substituting these two equations into the left-hand side of (\ref{mix:eqn2}) and equating the coefficients of like powers, we obtain
\[
\lambda_{m, n}=(-1)^{m+n}H_m(u)H_n(v) \frac{s^m t^n}{m! n!}.
\]
Combining this equation with (\ref{mix:eqn1}), we complete the proof of Theorem~\ref{mixed:mehler}.
\end{proof}
Theorem~\ref{mixed:mehler} contains the Mehler formula for the Hermite polynomials as a special case, which was discovered by Mehler \cite[p.174, Equation (18)]{MehlerFG1866} in 1866. One can also find this important formula in most books on special functions, for example, \cite[p.280, Equation (6.1.13)]{AndAskRoy1999}, \cite[p.111, Equation (4.417)]{BealsWong2010}, \cite[p.108, Equation (4.7.6)]{Ismail2009}, \cite[p. 198, Equation (2)]{Rainville1960}. One very simple proof of this formula can be found in \cite{LiuHermite2017}.
\begin{thm}\label{mixed:mehlera}
For $|2t|<1,$ we have the Mehler formula for the Hermite polynomials:
\[
\sum_{n=0}^{\infty} \frac{H_n(u)H_n(v)}{n!}t^n =\frac{1}{\sqrt{1-4t^2}} \exp\left(\frac{4tuv-4(u^2+v^2)t^2}{1-4t^2}\right).
\]
\end{thm}
\begin{proof}
Upon taking $x=y=0$ in the equation in Theorem~\ref{mixed:mehler} and using the fact that
\[
H_{m, n} (0, 0, z)=\delta_{m,n} n! z^n
\]
in the resulting equation, we immediately conclude that
\[
\sum_{n=0}^{\infty} \frac{H_n(u)H_n(v)}{n!}(stz)^n =\frac{\exp(u^2+v^2)}{\sqrt{1-4s^2t^2z^2}} \exp\left(\frac{4stuv-(u^2+v^2)}{1-4s^2t^2z^2}\right).
\]
Putting $s=z=1$ in this equation and simplifying, we complete the proof of Theorem~\ref{mixed:mehlera}.
\end{proof}
In the same way we can prove the following more general generating function formula, which appears to be new.
\begin{thm}\label{mixed:mehlerb}
If $k$ is a non-negative integer and $|2stz|<1,$ we have the following generating function for the Hermite polynomials and the complex Hermite polynomials:
\begin{align*}
&\sum_{m, n=0}^\infty (-1)^{m+n} H_{m, n}(x, y, z) H_{m+k}(u)H_n(v)\frac{s^mt^n}{m!n!}\\
&=\frac{\exp(u^2+v^2)}{(1-4s^2t^2z^2)^{(k+1)/2}} H_k\left(\frac{u+sx-2stz(v+ty)}{\sqrt{1-4s^2t^2z^2}}\right)\\
&\qquad \times \exp\left(\frac{4stz(sx+u)(ty+v)-(sx+u)^2-(ty+v)^2}{1-4s^2t^2z^2}\right).
\end{align*}
\end{thm}
Upon putting $x=y=0$ in Theorem~\ref{mixed:mehlerb}, using the fact that
\[
H_{m, n} (0, 0, z)=\delta_{m,n} n! z^n
\]
in the resulting equation, and finally setting $s=z=1,$ we derive the following formula due to Weisner \cite[Equation (4.9)]{Weisner1959}.
\begin{thm}\label{mixed:mehlerd}
For $|2t|<1,$ we have
\begin{align*}
&\sum_{n=0}^{\infty} \frac{H_{n+k}(u)H_n(v)}{n!}t^n\\
&=\frac{1}{(1-4t^2)^{(k+1)/2}} H_k \left( \frac{u-2tv}{\sqrt{1-4t^2}}\right) \exp\left(\frac{4tuv-4(u^2+v^2)t^2}{1-4t^2}\right).
\end{align*}
\end{thm}
\section{Acknowledgments}
The author is grateful to the editor and the referees for their valuable comments and suggestions.
\section{Introduction}
With the recent improvements in remote sensing technology, there has been a lot of work in building detection and classification from high resolution satellite imagery. To our knowledge, however, we are the first to implement such a system at a global scale. Other work uses handpicked features to define buildings \cite{DBLP:journals/corr/Cohen0KCD16} \cite{autorooftop}, which would not scale well across countries with very different styles of buildings. The work closest to ours is done by Yuan \cite{DBLP:journals/corr/Yuan16}, which also uses pixel level convolutional neural networks for building detection, but is only validated on a handful of cities in the US and likely would not transfer well to smaller settlements or other countries. In order to speed up our pipeline we need a fast bounding box proposal algorithm to limit the number of images that need to be run through our convolutional neural network. To maintain high recall, however, we need to be careful to not filter out too many candidates. We used a naive bounding box proposal algorithm that performs straight-edge detection to extract smaller masks to run through our classification network. This reduced the amount of landmass to process by 50\%. The distribution of buildings is still heavily skewed: only 2\% of proposals are positive. This also means we need to sample a large number of masks in order to have confident precision and recall numbers by country. We also use a weak building classifier to filter masks with over 0.3 IoU (intersection over union), keeping the mask with the highest probability of containing a building in the center, since such overlapping masks are likely to contain the same building. Discovering systematic issues with our models is also a slow, manual problem that requires visualization of .kmz files, pinpointing large numbers of false positive or false negative areas, and debugging the causes. The problems encountered included noise, contrast issues, cloud cover, and plain deficiencies in the model; we set up a feedback loop to fix those problems. We will be open sourcing our population density results as well as our labeled dataset as a benchmark for future efforts.
\section{Dataset Collection Issues}
We have two goals for data collection: obtaining labels for training, and obtaining accuracy numbers at a country level. Obtaining accuracy numbers of the entire pipeline for a single country requires randomly sampling from all possible 64x64 masks. That distribution is incredibly skewed, and randomly sampling enough masks to obtain a reasonable confidence interval on accuracy is expensive. Instead, we measure how well our neural network performs building classification by randomly sampling from the distribution of masks generated by our bounding box proposal algorithm. The assumption is that the bounding box proposal algorithm only eliminates clear negatives, so it reduces skew in the underlying distribution without affecting the recall of the overall pipeline. This drops the number of labels we need by a factor of 10, because the new distribution is 2\%--5\% positive. Collecting a training set went through several iterations, because we want a more balanced dataset for training so the model sees enough samples of both the background and building classes. We also employ simple active learning techniques by sampling from masks the network was ``less sure'' about, where the probability was closer to the threshold.
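The overlap-filtering step described in the introduction can be sketched as a greedy, score-ordered selection, much like non-maximum suppression (the box representation and numbers below are illustrative):
\begin{verbatim}
# Keep only the highest-scoring mask among groups overlapping with IoU > 0.3.
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def filter_overlaps(boxes, scores, thresh=0.3):
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 64, 64), (8, 8, 64, 64), (200, 200, 64, 64)]
scores = [0.9, 0.6, 0.8]
print(filter_overlaps(boxes, scores))  # [0, 2]; the 0.6 mask is suppressed
\end{verbatim}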
\section{Generalizing a Global Model}
Training a global building classification model has trade-offs. Buildings can look very different across different countries, but there is still a lot of information that can be transferred from country to country. We initially started with a model trained only on Tanzania, which suffered a large drop in accuracy when applied to a new country. However, we found that as we labeled data in more countries and re-trained our model with the new data, our new global model performed better on Tanzania than a Tanzania-specific model. The generalizations learned from other countries made the model more robust. Another argument for training a global model is that building a large training set takes time, and the amount of data required to train a model from scratch for each country was prohibitive. The trade-off is that the global model doesn't work equally well on all countries, and we found it necessary to perform some amount of model specialization. We fine-tuned the global model with the same samples it had seen from the initial training, but only from a handful of countries that we wanted it to improve upon. We saw gains of 20-40\% in precision and recall on the validation set using the extra fine-tuning step, but noticed there were trade-offs. The training and validation sets gave no evidence of overfitting, but we saw an increase in systematic false positives in certain countries when visualizing the results at a country level.
\subsection{Building Classification Model}
The classification model we trained was a weakly supervised version of SegNet \cite{DBLP:journals/corr/BadrinarayananK15}, which is a fast yet accurate pixel classification network that uses deconvolution layers. We trained with weak ``pixel level'' labels, and generated a mask-level probability using global average pooling over the final pixel-level probabilities on the 64x64 mask. We have 500TB of satellite imagery, and being able to run the model over all these countries (multiple times) is crucial for fast iteration. It was a non-trivial task to develop a model that was large enough to capture the complex idea of what defines a building, while also being small enough to run quickly at inference time. SegNet performed well on this by saving the indices from the max pooling layers to perform non-linear upsampling in the deconvolution layers.
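A minimal sketch of the weakly supervised head follows (in PyTorch, for illustration only; the actual model is a SegNet variant):
\begin{verbatim}
# Mask-level building probability from pixel-level scores by global average
# pooling, trained against mask-level (weak) labels.
import torch

def mask_probability(pixel_logits):
    # pixel_logits: (batch, 1, 64, 64) raw scores from the decoder
    pixel_probs = torch.sigmoid(pixel_logits)       # per-pixel P(building)
    return pixel_probs.mean(dim=(2, 3)).squeeze(1)  # global average pooling

logits = torch.randn(8, 1, 64, 64)
print(mask_probability(logits).shape)  # torch.Size([8])
\end{verbatim}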
By combining the results from both models, we successfully suppress false positives and obtain the best results; an example is shown in Figure~\ref{fig:segmentation}.

\section{Dealing with Systematic Errors}
\subsection{Finding Systematic Errors}

The precision and recall numbers we measure by randomly sampling from the mask candidates do not account for systematic errors arising from varying satellite image quality. To discover those systematic errors, we adopt both visual inspection and evaluation against external data. We visualize our results by constructing \emph{KMZ} files and overlaying them on Google Earth to manually pinpoint areas of concern. We also use this strategy to sample \emph{ambiguous} training data with which to fine-tune our model, reducing the chance of further systematic errors. Moreover, we quantitatively measure systematic errors at a coarser scale by comparing our results with external datasets in areas with adequate data coverage. However, discovering systematic errors at large scale with less manual work remains an open question.

\begin{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{./fp.png}
\caption{False Positives in Uzbekistan}
\label{fig:fp}
\end{figure}
Figure \ref{fig:fp} shows a comparison of a systematic error before and after fine-tuning.
\end{comment}

\subsection{Data Quality}
\begin{figure}[h!]
\centering
\subfloat[Before Denoising]{%
\includegraphics[height=5cm]{./denoising_before.png}%
}%
\subfloat[After Denoising]{%
\includegraphics[height=5cm]{./denoising_after.png}%
}%
\caption{Classification Results before and after denoising.}
\label{fig:denoising}
\end{figure}

Another source of systematic errors is data quality. The satellite images are taken at various times of day and are pre-processed across multiple layers to obtain the highest quality image. However, areas with heavy cloud cover tend to have far fewer clear images, and so quality suffers. This has an impact on our model: since most of the data is randomly or semi-randomly sampled, the model does not get much exposure to these poorer quality images during training. We use geographical meta-information to further detect cloud occlusion at deployment time. Another key factor in low data quality is noise, introduced in either the imaging or the image-enhancement phase. Traditional image denoising approaches such as BM3D \cite{dabov2006image} are computationally expensive on large imagery files and only handle limited types of noise, such as white noise. To this end, we train a shallow neural network end-to-end on simulated versions of several kinds of noise found in satellite images. The trained denoising model is inserted as a pre-processing transform before imagery is fed to the classification network. A comparison of classification results for the same low-quality area before and after denoising is shown in Figure~\ref{fig:denoising}.

\section{Results}

Overall the SegNet model by itself achieves a precision and recall of $pr=0.90$, $re=0.89$ on a global dataset in which the imbalance is such that $93\%$ of the randomly sampled testing data is not a building. Below we show heat maps of building density in three countries: Mozambique, Madagascar, and India.
\begin{figure}[h!]
\centering
\subfloat[Mozambique]{%
\includegraphics[height=5cm]{./moz.png}%
\label{fig:a}%
}%
\subfloat[Madagascar]{%
\includegraphics[height=5cm]{./madagascar.png}%
\label{fig:b}%
}%
\subfloat[India]{%
\includegraphics[height=5cm]{./india.png}%
\label{fig:c}%
}%
\caption{Building Heat Maps}
\end{figure}

So far we have released datasets for 5 countries: Haiti, Malawi, Ghana, South Africa, and Sri Lanka. The rest are pending validation with third party groups. Below we show precision--recall curves and the best F-score with confidence intervals for each of the countries released.
\begin{figure}[h!]
\centering
\subfloat[Pr/Re Curves]{%
\includegraphics[height=5cm]{./pr-curve.png}%
\label{fig:prcurve}%
}%
\subfloat[Confidence Intervals for F-Score]{%
\includegraphics[height=5cm]{./f-score.png}%
\label{fig:fscore}%
}%
\caption{Classification Performance}
\end{figure}

Estimating population density with settlement buildings as a proxy yields a significant improvement over previous efforts. Figure~\ref{fig:stateofart} compares the previous highest-resolution estimate, from Galantis, with our results. This offers a new level of detail to social and economic research.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{./stateofart.png}
\caption{Comparison of Galantis and our results}
\label{fig:stateofart}
\end{figure}

\section{Conclusion}

We have built one of the first building detection systems that can be deployed at a global scale. Future work includes reducing the amount of iteration required to achieve a robust model as we roll out to more countries; the biggest remaining problem is detecting systematic errors. Detecting and solving these systematic issues in classification is still a work in progress. We are also looking into ways to further automate the data validation and data collection processes, which will shorten each iteration required to improve our dataset accuracy.

\small
\bibliographystyle{unsrt}
\section{\label{sec:level1}First-level heading:\protect\\ The line break was forced \lowercase{via} \textbackslash\textbackslash}

This sample document demonstrates proper use of REV\TeX~4 (and \LaTeXe) in manuscripts prepared for submission to APS journals. Further information can be found in the REV\TeX~4 documentation included in the distribution or available at \url{http://publish.aps.org/revtex4/}.

When commands are referred to in this example file, they are always shown with their required arguments, using normal \TeX{} format. In this format, \verb+#1+, \verb+#2+, etc. stand for required author-supplied arguments to commands. For example, in \verb+\section{#1}+ the \verb+#1+ stands for the title text of the author's section heading, and in \verb+\title{#1}+ the \verb+#1+ stands for the title text of the paper.

Line breaks in section headings at all levels can be introduced using \textbackslash\textbackslash. A blank input line tells \TeX\ that the paragraph has ended. Note that top-level section headings are automatically uppercased. If a specific letter or word should appear in lowercase instead, you must escape it using \verb+\lowercase{#1}+ as in the word ``via'' above.

\subsection{\label{sec:level2}Second-level heading: Formatting}

This file may be formatted in both the \texttt{preprint} and \texttt{twocolumn} styles. \texttt{twocolumn} format may be used to mimic final journal output. Either format may be used for submission purposes; however, for peer review and production, APS will format the article using the \texttt{preprint} class option. Hence, it is essential that authors check that their manuscripts format acceptably under \texttt{preprint}. Manuscripts submitted to APS that do not format correctly under the \texttt{preprint} option may be delayed in both the editorial and production processes.

The \texttt{widetext} environment will make the text the width of the full page, as on page~\pageref{eq:wideeq}. (Note the use of \verb+\pageref{#1}+ to get the page number right automatically.) The width-changing commands only take effect in \texttt{twocolumn} formatting. They have no effect if \texttt{preprint} formatting is chosen instead.

\subsubsection{\label{sec:level3}Third-level heading: References and Footnotes}

Reference citations in text use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+. \verb+#1+ may contain letters and numbers. The reference itself is specified by a \verb+\bibitem{#1}+ command with the same argument as the \verb+\cite{#1}+ command. \verb+\bibitem{#1}+ commands may be crafted by hand or, preferably, generated by using Bib\TeX. REV\TeX~4 includes Bib\TeX\ style files \verb+apsrev.bst+ and \verb+apsrmp.bst+ appropriate for \textit{Physical Review} and \textit{Reviews of Modern Physics}, respectively. REV\TeX~4 will automatically choose the style appropriate for the journal specified in the document class options. This sample file demonstrates the basic use of Bib\TeX\ through the use of the \verb+\bibliography+ command, which references the \verb+apssamp.bib+ file. Running Bib\TeX\ (typically \texttt{bibtex apssamp}) after the first pass of \LaTeX\ produces the file \verb+apssamp.bbl+ which contains the automatically formatted \verb+\bibitem+ commands (including extra markup information via \verb+\bibinfo+ commands). If not using Bib\TeX, the \verb+thebibliography+ environment should be used instead.

To cite bibliography entries, use the \verb+\cite{#1}+ command.
Most journal styles will display the corresponding number(s) in square brackets: \cite{feyn54,witten2001}. To avoid the square brackets, use \verb+\onlinecite{#1}+: Refs.~\onlinecite{feyn54} and \onlinecite{witten2001}. REV\TeX\ ``collapses'' lists of consecutive reference numbers where possible. We now cite everyone together \cite{feyn54,witten2001,epr}, and once again (Refs.~\onlinecite{epr,feyn54,witten2001}). Note that the references are also sorted into the correct numerical order.

When the \verb+prb+ class option is used, the \verb+\cite{#1}+ command displays the reference's number as a superscript rather than using square brackets. Note that the location of the \verb+\cite{#1}+ command should be adjusted for the reference style: the superscript references in \verb+prb+ style must appear after punctuation; otherwise the reference must appear before any punctuation. This sample was written for the regular (non-\texttt{prb}) citation style. The command \verb+\onlinecite{#1}+ in the \texttt{prb} style also displays the reference on the baseline.

Footnotes are produced using the \verb+\footnote{#1}+ command. Most APS journal styles put footnotes into the bibliography. REV\TeX~4 does this as well, but instead of interleaving the footnotes with the references, they are listed at the end of the references\footnote{This may be improved in future versions of REV\TeX.}. Because the correct numbering of the footnotes must occur after the numbering of the references, an extra pass of \LaTeX\ is required in order to get the numbering correct.

\section{Math and Equations}

Inline math may be typeset using the \verb+$+ delimiters. Bold math symbols may be achieved using the \verb+bm+ package and the \verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and Blackboard (or open face or double struck) characters should be typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands respectively. Both are supplied by the \texttt{amssymb} package. For example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and \verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.

In \LaTeX\ there are many different ways to display equations, and a few preferred ways are noted below. Displayed math will center by default. Use the class option \verb+fleqn+ to flush equations left.

Below we have numbered single-line equations; this is the most common type of equation in \textit{Physical Review}:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
px+ip_y
\end{array}\right)\;,
\\
\left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}). Not all numbered equations will fit within a narrow column this way. The equation number will move down automatically if it cannot fit on the same line with a one-line equation:
\begin{equation}
\left\{
ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}
When the \verb+\label{#1}+ command is used [cf. input for Eq.~(\ref{eq:one})], the equation can be referred to in text without knowing the equation number that \TeX\ will assign to it. Just use \verb+\ref{#1}+, where \verb+#1+ is the same name that was used in the \verb+\label{#1}+ command.
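For instance, a minimal (illustrative) pairing of the two commands looks like this; the label name is arbitrary:
\begin{verbatim}
\begin{equation}
  E = mc^2 \label{eq:energy}
\end{equation}
As shown in Eq.~(\ref{eq:energy}), ...
\end{verbatim}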
Unnumbered single-line equations can be typeset using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow q^+g^+g^+ \dots ~. \]

\subsection{Multiline equations}

Multiline equations are obtained by using the \verb+eqnarray+ environment. Use the \verb+\nonumber+ command at the end of each line to avoid assigning a number:
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
\delta_{\sigma_1,-\sigma_2}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1),
\end{eqnarray}
\begin{eqnarray}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\nonumber \\
& &\times \left( \sum_{i<j}\right)
\sum_{\text{perm}}
\frac{1}{S_{12}}
\frac{1}{S_{12}}
\sum_\tau c^f_\tau~.
\end{eqnarray}
\textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline equation if \verb+\nonumber+ is also used on that line. Incorrect cross-referencing will result. Notice the use of \verb+\text{#1}+ for using a Roman font within a math environment.

To set a multiline equation without \emph{any} equation numbers, use the \verb+\begin{eqnarray*}+, \verb+\end{eqnarray*}+ format:
\begin{eqnarray*}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\\
& &\times \left( \sum_{i<j}\right)
\left(
\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}
\right)
\frac{1}{S_{12}}~.
\end{eqnarray*}
To obtain numbers not normally produced by the automatic numbering, use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired equation number. For example, to get an equation number of (\ref{eq:mynum}),
\begin{equation}
g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum}
\end{equation}
A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires \texttt{amsmath}. The \verb+\tag{#1}+ must come before the \verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is \textit{transparent} to the automatic numbering in REV\TeX{}; therefore, the number must be known ahead of time, and it must be manually adjusted if other equations are added. \verb+\tag{#1}+ works with both single-line and multiline equations. \verb+\tag{#1}+ should only be used in exceptional cases; do not use it to number all equations in a paper.

Enclosing single-line and multiline equations in \verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce a set of equations that are ``numbered'' with letters, as shown in Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below:
\begin{subequations}
\label{eq:whole}
\begin{equation}
\left\{
abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}
\right\},\label{subeq:1}
\end{equation}
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2}
\end{eqnarray}
\end{subequations}
Putting a \verb+\label{#1}+ command right after \verb+\begin{subequations}+ allows one to reference all the equations in a subequations environment. For example, the equations in the preceding subequations environment were Eqs.~(\ref{eq:whole}).

\subsubsection{Wide equations}

The equation that follows is set in a wide format, i.e., it spans across the full page.
The wide format is reserved for long equations that cannot be easily broken into four lines or less:
\begin{widetext}
\begin{equation}
{\cal R}^{(\text{d})}=
g_{\sigma_2}^e
\left(
\frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)
+ x_WQ_e
\left(
\frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)\;.
\label{eq:wideeq}
\end{equation}
\end{widetext}
This is typed to show the output is in wide format. (Since there is no input line between \verb+\equation+ and this paragraph, there is no paragraph indent for this paragraph.)

\section{Cross-referencing}

REV\TeX{} will automatically number sections, equations, figure captions, and tables. In order to reference them in text, use the \verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a particular page, use the \verb+\pageref{#1}+ command.

The \verb+\label{#1}+ should appear in a section heading, within an equation, or in a table or figure caption. The \verb+\ref{#1}+ command is used in the text where the citation is to be displayed. Some examples: Section~\ref{sec:level1} on page~\pageref{sec:level1}, Table~\ref{tab:table1}, and Fig.~\ref{fig:epsart}.

\section{Figures and Tables}

Figures and tables are typically ``floats'', which means that their final position is determined by \LaTeX\ while the document is being typeset. \LaTeX\ isn't always successful in placing floats optimally.

Figures may be inserted by using either the \texttt{graphics} or \texttt{graphicx} packages. These packages both define the \verb+\includegraphics{#1}+ command, but they differ in how optional arguments are used to specify the orientation, scaling, and translation of the figure. Fig.~\ref{fig:epsart} shows a figure that is small enough to fit in a single column. It is embedded using the \texttt{figure} environment, which provides both the caption and imports the figure file.
\begin{figure}
\includegraphics{fig_1}
\caption{\label{fig:epsart} A figure caption. The figure captions are automatically numbered.}
\end{figure}
Fig.~\ref{fig:wide} is a figure that is too wide for a single column, so instead the \texttt{figure*} environment has been used.
\begin{figure*}
\includegraphics{fig_2}
\caption{\label{fig:wide}Use the figure* environment to get a wide figure that spans the page in \texttt{twocolumn} formatting.}
\end{figure*}

The heart of any table is the \texttt{tabular} environment, which gives the rows of the table. Each row consists of column entries separated by \verb+&+'s and terminates with \textbackslash\textbackslash. The required argument for the \texttt{tabular} environment specifies how data are displayed in the columns. For instance, entries may be centered, left-justified, right-justified, or aligned on a decimal point. Extra column-spacing may be specified as well, although REV\TeX~4 sets this spacing so that the columns fill the width of the table. Horizontal rules are typeset using the \verb+\hline+ command. The doubled (or Scotch) rules that appear at the top and bottom of a table can be achieved by enclosing the \texttt{tabular} environment within a \texttt{ruledtabular} environment. Rows whose columns span multiple columns can be typeset using the \verb+\multicolumn{#1}{#2}{#3}+ command (for example, see the first row of Table~\ref{tab:table3}).

Tables~\ref{tab:table1}-\ref{tab:table4} show various effects. Tables that fit in a narrow column are contained in a \texttt{table} environment.
Table~\ref{tab:table3} is a wide table set with the \texttt{table*} environment. Long tables may need to break across pages. The most straightforward way to accomplish this is to specify the \verb+[H]+ float placement on the \texttt{table} or \texttt{table*} environment. However, the standard \LaTeXe\ package \texttt{longtable} will give more control over how tables break and will allow headers and footers to be specified for each page of the table. A simple example of the use of \texttt{longtable} can be found in the file \texttt{summary.tex} that is included with the REV\TeX~4 distribution. There are two methods for setting footnotes within a table (these footnotes will be displayed directly below the table rather than at the bottom of the page or in the bibliography). The easiest and preferred method is just to use the \verb+\footnote{#1}+ command. This will automatically enumerate the footnotes with lowercase roman letters. However, it is sometimes necessary to have multiple entries in the table share the same footnote. In this case, there is no choice but to manually create the footnotes using \verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+. \texttt{\#1} is a numeric value. Each time the same value for \texttt{\#1} is used, the same mark is produced in the table. The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular} environment. Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and \ref{tab:table2} for examples. \begin{table} \caption{\label{tab:table1}This is a narrow table which fits into a narrow column when using \texttt{twocolumn} formatting. Note that REV\TeX~4 adjusts the intercolumn spacing so that the table fills the entire width of the column. Table captions are numbered automatically. This table illustrates left-aligned, centered, and right-aligned columns. } \begin{ruledtabular} \begin{tabular}{lcr} Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\ \hline 1 & 2 & 3\\ 10 & 20 & 30\\ 100 & 200 & 300\\ \end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{\label{tab:table2}A table with more columns still fits properly in a column. Note that several entries share the same footnote. Inspect the \LaTeX\ input for this table to see exactly how it is done.} \begin{ruledtabular} \begin{tabular}{cccccccc} &$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$& &$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\ \hline Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1] & 0.680 & 1.870 & 3.700 \\ Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2] & 0.450 & 1.930 & 3.760 \\ Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3] & 0.750 & 2.170 & 3.560 \\ Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4] & 0.900 & 2.370 & 3.720 \\ Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2] & 0.380 & 1.730 & 2.830 \\ Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5] & 0.760 & 2.110 & 3.120 \\ Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5] & 1.120 & 2.620 & 3.480 \\ Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3] & 1.330 & 2.800 & 3.590 \\ Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4] & 1.420 & 3.030 & 3.740 \\ In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5] & 0.960 & 2.460 & 3.780 \\ Tl& 0.480 & 18.90 & 3.550 & & & & \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.} \footnotetext[2]{Here's the second.} \footnotetext[3]{Here's the third.} \footnotetext[4]{Here's the fourth.} \footnotetext[5]{And etc.} \end{table} \begin{table*} \caption{\label{tab:table3}This is a wide table that spans the page width in \texttt{twocolumn} mode. 
It is formatted using the \texttt{table*} environment. It also demonstrates the use of \textbackslash\texttt{multicolumn} in rows with entries that span more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&1st alternative
&2nd alternative\\
\hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.}
&$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnotemark[1]
&$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table}
\caption{\label{tab:table4}Numbers in columns Three--Five have been aligned by using the ``d'' column specifier (requires the \texttt{dcolumn} package). Non-numeric entries (those entries without a ``.'') in a ``d'' column are aligned on the decimal point. Use the ``D'' specifier for more complex layouts. }
\begin{ruledtabular}
\begin{tabular}{ccddd}
One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\
\hline
one&two&\mbox{three}&\mbox{four}&\mbox{five}\\
He&2& 2.77234 & 45672. & 0.69 \\
C\footnote{Some tables require footnotes.}
&C\footnote{Some tables need more than one footnote.}
& 12537.64 & 37.66345 & 86.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}

\textit{Physical Review} style requires that the initial citation of figures or tables be in numerical order in text, so don't cite Fig.~\ref{fig:wide} until Fig.~\ref{fig:epsart} has been cited.

\begin{acknowledgments}
We wish to acknowledge the support of the author community in using REV\TeX{}, offering suggestions and encouragement, testing new versions, \dots.
\end{acknowledgments}

\section{Introduction}
\label{sect:intro}

The Nuclear Energy Density Functional (NEDF) theory allows us to describe properties of nuclei from light to heavy and from drip-line to drip-line~\cite{ben03}. Several functionals have been developed in recent years, but the most widely used~\cite{per04,rai11} are those derived from the non-relativistic zero-range Skyrme interaction~\cite{sky59}. Since its first applications to atomic nuclei~\cite{vau72}, this interaction has proven to be very well suited to describe nuclear observables at very reduced computational cost~\cite{gor09}.

A crucial aspect in building a functional is to determine the values of its coupling constants. Despite its apparent simplicity, this is a very delicate aspect: a badly determined coupling constant can give rise to unphysical instabilities~\cite{les06,sch10,frac12,hel13,Pas15T,report,dep16} and thus to unphysical results. A possibility for avoiding them is to find an adequate set of observables so that all coupling constants are properly constrained during the optimization procedure~\cite{dob14,nik16}. In Ref.~\cite{pas13}, we have presented an alternative solution to avoid unphysical instabilities based on the linear response (LR) formalism in the infinite nuclear medium. This solution is particularly simple and very efficient, especially for those terms of the functional that are odd under time-reversal symmetry and contribute very little to the masses of odd systems~\cite{sch10}. However, avoiding unphysical instabilities is not the only requirement for an efficient functional: one also has to check how well it describes nuclear observables.
On this point, the UNEDF collaboration~\cite{ber07} has recently studied in great detail the properties of Skyrme functionals against a large set of nuclear observables~\cite{kor10,kor12,kor13}. The main conclusion of their last article~\cite{kor13} is that the standard Skyrme functional~\cite{per04} has reached its limits. If we want to improve the description of experimental data (such as masses, radii, fission barriers, ...), we need to follow two paths: explore different functional forms or develop functionals at the multi-reference level~\cite{dug15}.

Following the idea of Carlsson and collaborators~\cite{car08,rai11}, we have decided to explore the first path and to study the impact of additional gradient terms in the Skyrme pseudo-potential~\cite{dav13}. The gradient terms have been introduced in a systematic way by considering all possible combinations allowed by the symmetries of the problem, up to sixth order in the gradients. The resulting pseudo-potential has been called N$\ell$LO, which by definition incorporates gradients up to order $2\ell$. Within this language, the standard Skyrme interaction~\cite{cha97} is named N1LO. In Ref.~\cite{Dav16AN}, we have shown the explicit connection between the Taylor momentum expansion of \emph{any} finite-range interaction and the actual form of the N$\ell$LO pseudo-potential~\cite{rai11}. In that article, we have also proven that such an expansion works fairly well in the infinite nuclear medium and that the main properties of the Equation of State (EoS) of a finite-range interaction can be reproduced fairly well by truncating the momentum expansion at fourth order (N2LO). This result is consistent with previous findings based on the Density Matrix Expansion (DME)~\cite{car10}: the role of the fourth-order terms is important, and it leads to a remarkable improvement of the DME results when compared to finite-range interactions. Higher-order terms can thus be neglected as a first step, since their contribution becomes systematically less important.

At present, the only existing parametrizations of the extended Skyrme N2LO/N3LO pseudo-potentials have been obtained by considering only properties of the infinite nuclear medium~\cite{Dav15,Dav15AA}, that is, without taking into account properties of finite nuclei. In order to remedy this aspect, we present here a new Skyrme Hartree-Fock-Bogoliubov (HFB) code that incorporates the higher-order derivative terms appearing at N2LO. It is worth recalling at this point that an alternative code named HOSPHE~\cite{hosphe} has already been published. This code, based on a Harmonic-Oscillator (HO) basis, also considers the most general functional form of the N3LO functional~\cite{car08} using a spherical basis representation. However, following our previous findings of Ref.~\cite{dav13}, we have decided to express the N$\ell$LO pseudo-potential in Cartesian coordinates and to develop for this specific case a numerical code working in coordinate space: the r-space representation is in fact more convenient for a fitting procedure, since we do not need a very large number of basis states to achieve convergence. See Ref.~\cite{sch15} for more details.

The article is organized as follows: in Sec.~\ref{sec:n2lo} we present the general functional formalism for the N2LO pseudo-potential, and in Sec.~\ref{sec:n2lo:spheric} we specialize the formalism to the spherically symmetric case. In Sec.~\ref{sec:hfb} we present in detail the generalization of the Hartree-Fock-Bogoliubov equations to include the N2LO pseudo-potential.
In Sec.~\ref{sec:fit} we present the fitting protocol used to determine the parameters of the new N2LO functionals. Finally, we give our conclusions in Sec.~\ref{sec:conclusions}.

\section{N2LO Skyrme functional}\label{sec:n2lo}

The N2LO Skyrme pseudo-potential, as described in Refs.~\cite{car08,rai11}, is a generalization of the standard Skyrme interaction, corresponding to the expansion of the momentum-space matrix elements of a generic interaction in powers of the relative momenta $\mathbf{k}, \mathbf{k}'$ up to fourth order. Following~\cite{dav14c}, the form considered in this article respects both Galilean and local gauge invariance~\cite{dob95}. It is written as the sum of three terms
\begin{eqnarray}
V_{\text{N2LO}} =V_{\rm N2LO}^{C}+V_{\rm N1LO}^{LS}+V_{\rm N1LO}^{DD}\;.
\end{eqnarray}
The central term reads
\begin{eqnarray}
\label{eq:N2LO:c}
V_{\rm N2LO}^{C} &=& t_0 (1+x_0 P_{\sigma}) \nonumber \\
&& + \frac{1}{2} t_1 (1+x_1 P_{\sigma}) ({\mathbf{k}}^2 + {\mathbf k'}^2) \nonumber \\
&& + t_2 (1+x_2 P_{\sigma}) ({\mathbf k} \cdot {\mathbf k'}) \nonumber\\
& & + \frac{1}{4} t_1^{(4)} (1+x_1^{(4)} P_{\sigma}) \left[({\mathbf k}^2 + {\mathbf k'}^2)^2 + 4 ({\mathbf k'} \cdot {\mathbf k})^2\right] \nonumber\\
& &+ t_2^{(4)} (1+x_2^{(4)} P_{\sigma}) ({\mathbf k'} \cdot {\mathbf k}) ({\mathbf k}^2 + {\mathbf k'}^2) .
\end{eqnarray}
In these expressions, a Dirac function $\delta({\mathbf r}_1-{\mathbf r}_2)$ is to be understood, but has been omitted for the sake of clarity. See Ref.~\cite{ben03} for details on the adopted notations.

The spin-orbit term $V_{\rm N1LO}^{LS}$ is not affected by the inclusion of higher-order gradient terms: in Ref.~\cite{Dav16AN}, we have shown that other possible spin-orbit terms are suppressed once the local gauge invariance~\cite{rai11,rai11b} is imposed. In Ref.~\cite{Dav16AN}, we have also discussed in detail the problem of local gauge invariance for the spin-orbit term and in particular the possible violation of such a symmetry for finite-range spin-orbit terms. The density-dependent term $V_{\rm N1LO}^{DD}$ also has exactly the same structure as in the standard Skyrme interaction~\cite{cha97}, since its purpose is to mimic the effect of a three-body term~\cite{vau72,sad13}.

Tensor terms should also be included in Eq.~(\ref{eq:N2LO:c}). In Ref.~\cite{Dav15}, we have discussed them based on the partial-wave decomposition of the total EoS. In finite nuclei it is actually very difficult to constrain them in NEDF~\cite{sag14} because of their strong competition with the spin-orbit term in modifying the underlying single-particle structure~\cite{les07}. For this preliminary exploration, we have thus decided to neglect them. Finally, it is worth mentioning that in the present article we always use the complete interaction, in the sense that we do not discard the so-called $J^2$ tensor terms~\cite{les07}, as is often done in the literature. For the Coulomb interaction between protons, we adopt the same procedure as described in Ref.~\cite{cha97}, \emph{i.e.} using the standard Slater approximation for the exchange term~\cite{ska01}.

Starting from Eq.~(\ref{eq:N2LO:c}), it is possible to derive the explicit form of the Skyrme functional in Cartesian coordinates.
We write it as
\begin{eqnarray}\label{eq:func:gen}
\mathcal{E}=\sum_{t}\mathcal{E}^{(1),\text{even}}_t +\mathcal{E}^{(1),\text{odd}}_t +\mathcal{E}^{(2),\text{even}}_t +\mathcal{E}^{(2),\text{odd}}_t \;,
\end{eqnarray}
where $t=0,1$ is the isospin index and even/odd refers to the behaviour of the terms of the functional under time-reversal symmetry~\cite{per04}. In the above equation, we have explicitly separated the contributions originating from the N$\ell$LO terms $\mathcal{E}^{(\ell=1,2)}$. The standard terms $\mathcal{E}^{(1)}_t$ read~\cite{les07}
\begin{eqnarray}
\mathcal{E}^{(1),\text{even}}_t & = & C_t^\rho [\rho_0 ] \, \rho_t^2 + C_t^{\Delta \rho} \, \rho_t \, \Delta \rho_t + C_t^\tau \, \rho_t \, \tau_t - C^{T}_t \sum_{\mu, \nu = x}^{z} J_{t,\mu \nu} J_{t,\mu \nu} +C_t^{\nabla J} \; \rho_t \, \nabla \cdot {\bf J}_t \,, \\
\label{eq:centEDFo}
\mathcal{E}^{(1),\text{odd}}_t & = & C_t^s [\rho_0 ] \, {\bf s}_t^2 - C_t^\tau \, {\bf j}^2_t + C^{\Delta s}_t \, {\bf s}_t \cdot \Delta {\bf s}_t + C^{T}_t \, {\bf s}_t \cdot {\bf T}_t + C^{\nabla J}_t \; {\bf s}_t \cdot \nabla \times {\bf j}_t \,,
\end{eqnarray}
while the new terms can be written as
\begin{eqnarray}
\mathcal{E}_t^{\text{(2),even}} & = & C^{( \Delta \rho)^2}_t \left( \Delta \rho_t \right)^2 + C^{ M \rho}_t \boldsymbol{\mathbb M}_t^{M \rho,\text{even}} + C^{ M s}_t \boldsymbol{\mathbb M}_t^{Ms,\text{even}} \, , \\
\label{eq:ef:DKo}
\mathcal{E}_t^{\text{(2),odd}} & = & C^{(\Delta s)^2}_t \left( \Delta \boldsymbol{\mathbf s}_t \right)^2 + C^{ M \rho}_t \boldsymbol{\mathbb M}_t^{M \rho,\text{odd}} + C^{ M s}_t \boldsymbol{\mathbb M}_t^{Ms,\text{odd}} \, ,
\end{eqnarray}
where
\begin{eqnarray}
\boldsymbol{\mathbb M}^{M \rho,\text{even}} & = & \left\{ \, \rho \, Q \, + \, \tau^2 \, \right\} + 2 \,\left[ \mbox{Re}(\tau_{\mu \nu}) \mbox{Re}(\tau_{\mu \nu}) \, - \, \mbox{Re}(\tau_{\mu \nu}) \nabla_{\mu} \nabla_{\nu} \rho \; \right] \, , \\
\label{taumunuH}
\boldsymbol{\mathbb M}^{Ms,\text{even}} & = & \left\{ \, \left( \nabla_{\mu} J_{\mu \nu} \right)^2 \, + \, 4 J_{\mu \nu} V_{\mu \nu} - \mbox{Im}(K_{\mu \nu \kappa}) \mbox{Im}(K_{\mu \nu \kappa}) \, \right\} \, , \\
\boldsymbol{\mathbb M}^{M \rho,\text{odd}} & = & \left\{ \, \left( \boldsymbol{\mathbf\nabla} \cdot \boldsymbol{\mathbf j} \right)^2 \, + \, 4 \, \boldsymbol{\mathbf j} \cdot \boldsymbol{\mathbf\Pi} \, \right\} \, , \\
\boldsymbol{\mathbb M}^{Ms,\text{odd}} & = & \left\{ \, \boldsymbol{\mathbf s} \cdot \boldsymbol{\mathbf S} \, + \, \boldsymbol{\mathbf T}^2 \, \right\} + 2 \, \left[ \mbox{Re}(K_{\mu \nu \kappa}) \mbox{Re}(K_{\mu \nu \kappa}) - \mbox{Im}(\tau_{\mu \nu})\mbox{Im}(\tau_{\mu \nu}) \, - \, \mbox{Re}(K_{\mu \nu \kappa}) \nabla_{\mu} \nabla_{\nu} s_{\kappa} \right].
\label{KmunuH}
\end{eqnarray}
These terms contain six new densities: $\tau_{\mu \nu}$, $V_{\mu \nu}$, $\mathbf{\Pi}$, $K_{\mu\nu\kappa}$, $Q$ and $\mathbf{S}$. Their explicit definition is given in Appendix~\ref{app:dens}.

\section{N2LO functional in spherical symmetry}\label{sec:n2lo:spheric}

In the present section, we limit ourselves to the case of spherical symmetry.
In this case, the single-particle wave function can be written as follows
\begin{eqnarray}
\psi_{n \ell j m q} (\boldsymbol{\mathbf r}) = \frac{1}{r} R_{n\ell j q} (r) \; \Omega_{\ell j m} ({\hat r}) \, ,
\label{fosphe}
\end{eqnarray}
where $n$ is the principal quantum number, $\Omega_{\ell j m} ({\hat r})$ is a solid spherical harmonic~\cite{var88}, and $\ell, j, m$ refer, respectively, to the orbital angular momentum, the total angular momentum and its projection along the $z$-axis. Here $q\equiv n,p$ stands for proton (p) or neutron (n). In our formalism the two nuclear species are not mixed explicitly~\cite{per04,sat13}. By considering only even-even systems, we can further simplify the expressions given in Eq.~(\ref{eq:func:gen})
\begin{eqnarray}
\label{eq:EDF_N1LO_C_sphere}
\mathcal{E}^{(1)} = && C^{\rho}_0 \, \rho_0^2 \, + \, C^{\rho}_1 \, \rho_1^2 \, + \, C^{\Delta \rho}_0 \, \rho_0 \Delta \rho_0 \, + \, C^{\Delta \rho}_1 \, \rho_1 \Delta \rho_1 \\
& + & C^{\tau}_0 \, \rho_0 \tau_0 \, + \, C^{\tau}_1 \, \rho_1 \tau_1 \, - \, \tfrac{1}{2}\, C^T_0 \, J_0^2 \, - \, \tfrac{1}{2}\, C^T_1 \, J_1^2 \nonumber \\
& + & C^{\nabla J}_0\, \rho_0 \, \boldsymbol{\mathbf\nabla} \cdot \boldsymbol{\mathbf J}_0 \, + \, C^{\nabla J}_1 \, \rho_1 \, \boldsymbol{\mathbf\nabla} \cdot \boldsymbol{\mathbf J}_1 \;; \nonumber
\end{eqnarray}
\begin{eqnarray}
\label{eq:EDF_N2LO_even_cc_C}
\qquad \qquad \qquad \qquad \mathcal{E}^{(2)} =&& C^{(4) \Delta \rho}_0 \, \left( \Delta \rho_0 \right)^2 \, + \, \, C^{(4) \Delta \rho}_1 \, \left( \Delta \rho_1 \right)^2 \nonumber \\
& +& C^{(4) M \rho}_0 \, \Big\{ \, \left[ \, \rho_0 \, Q_0 \, + \, \tau_0^2 \, \right] \Big\} \nonumber \\
& +& C^{(4) M \rho}_0 \, \Big\{\, \left[ \mbox{Re}(\tau_{0, \mu \nu}) \mbox{Re}(\tau_{0, \mu \nu}) \, - \mbox{Re}(\tau_{0, \mu \nu}) \nabla_{\mu} \nabla_{\nu} \rho_0 \right] \, \Big\} \nonumber \\
& + & \, C^{(4) M \rho}_1 \, \Big\{ \, \left[ \, \rho_1 \, Q_1 \, + \, \tau_1^2 \, \right] \Big\} \nonumber \\
& +& C^{(4) M \rho}_1 \, \Big\{\left[ \mbox{Re}(\tau_{1, \mu \nu}) \mbox{Re}(\tau_{1, \mu \nu}) \, - \, \mbox{Re}(\tau_{1, \mu \nu}) \nabla_{\mu} \nabla_{\nu} \rho_1 \right] \, \Big\} \nonumber \\
& - & C^{(4) M s}_0 \, \left[ \, \left( \nabla_{\mu} J_{0, \mu \nu} \right)^2 \, + \, 4 J_{0, \mu \nu} V_{0, \mu \nu} - \mbox{Im}(K_{0,\mu \nu \kappa}) \mbox{Im}(K_{0,\mu \nu \kappa}) \, \right] \nonumber \\
& - & C^{(4) M s}_1 \, \left[ \, \left( \nabla_{\mu} J_{1, \mu \nu} \right)^2 \, + \, 4 J_{1, \mu \nu} V_{1, \mu \nu} - \mbox{Im}(K_{1,\mu \nu \kappa}) \mbox{Im}(K_{1,\mu \nu \kappa}) \, \right] \,.
\label{N2LOfonct}
\end{eqnarray}

\subsection{Local densities}

Let us introduce the short-hand notation $\alpha=\{n\ell jq\}$ and $C_\alpha = j(j+1) - \ell(\ell+1) - \frac{3}{4}$. The explicit expressions of the densities in spherical symmetry (we limit ourselves to systems that are even under time-reversal) up to second order take the form~\cite{ben05}
\begin{eqnarray}
\rho_{0} (r) & = & \ \sum_{\alpha} \ \frac{(2 j + 1 )}{4 \pi} \ \frac{R_{\alpha}^2(r)}{r^2} \,, \\
\tau_{0} (r) & =& \ \sum_{\alpha} \ \frac{(2 j + 1 )}{4 \pi r^2} \ \left[ \left(R_\alpha^\prime(r) - \frac{R_\alpha(r)}{r} \right)^2 + \frac{\ell (\ell+1)}{r^2} R_{\alpha}^2(r) \right] \,, \\
\label{Jraddens}
J_0(r) &=& \sum_{\alpha} \frac{(2 j + 1 )}{4 \pi} \mbox{C}_\alpha \frac{R_{\alpha}^2 (r)}{r^3} \,.
\end{eqnarray}
$\tau_{0} (r)$ can be conveniently decomposed into a radial and a centrifugal part as $\tau_0=\tau_{R,0}+\tau_{C,0}$, where
\begin{eqnarray}
\tau_{R,0}(r)&=&\ \sum_\alpha \frac{(2 j + 1 )}{4 \pi r^2} \left [ R_{\alpha}^\prime (r)-\frac{R_{\alpha}(r)}{r}\right] ^2 \,,\\
\tau_{C,0} (r) &= & \sum_{\alpha} \frac{(2 j + 1 )}{4 \pi} \frac{\ell ( \ell + 1)}{r^2} \frac{R^2_{\alpha} (r)}{r^2} \,.
\end{eqnarray}
Eq.~(\ref{Jraddens}) corresponds to the radial part of the $J_{\mu\nu,0} (r)$ spin-orbit vector density defined as
\begin{eqnarray}
\label{eq:Jmunu}
J_{\mu \nu,0} & = & \frac{1}{2} \, \epsilon_{\mu \nu \kappa} \, J_{\kappa} \, = \, \frac{1}{2} \, \epsilon_{\mu \nu \kappa} \, \frac{X_\kappa}{r} \, J_0(r) \, ,
\end{eqnarray}
where $X_{\mu}$ represents the Cartesian coordinates. If we now come to fourth order, the explicit expressions of the new densities in spherical symmetry take the form
\begin{eqnarray}
\tau_{\mu\nu, 0} (r) &=& \frac{1}{2} \ \tau_{C,0}(r) \; \delta_{\mu\nu} \, + \, \frac{X_\mu X_\nu}{r^2} \left[\tau_{R,0}(r) - \frac{1}{2} \ \tau_{C,0}(r)\right] \;, \\
V_{0}(r)&=& \sum_{\alpha} \frac{(2 j + 1 )}{4 \pi r^2} \ \mbox{C}_\alpha \left[ \frac{ R_{\alpha}^2}{r^3} \ \left[ \ell(\ell+1) +2 \right] +\frac{R^{\prime2}_{\alpha}(r)}{r} -4\frac{R^{\prime}_{\alpha}(r)R_{\alpha}(r) }{r^2}\right] \; ,\\
Q_{0}(r) &=& \sum_{\alpha} \frac{(2 j + 1 )}{4 \pi r^2} \left[R^{\prime\prime}_{\alpha}(r)- \ell(\ell+1)\frac{R_{\alpha}(r)}{r^2}\right]^2 \;,\\
K_{\mu \nu \kappa, 0}(\boldsymbol{\mathbf r}) &=& i \ \mbox{K1}_0 (r) \epsilon_{\mu \nu \kappa} + i \ \mbox{K2}_0 (r) \left[ \epsilon_{\mu \kappa M} \frac{X_M X_\nu}{r^2} + \epsilon_{\mu \nu M} \frac{X_M X_\kappa}{r^2} + \epsilon_{\kappa \nu M} \frac{X_M X_\mu}{r^2} \right] .
\end{eqnarray}
We have defined $\mbox{K1}_{0}(r)$ and $\mbox{K2}_{0}(r)$ as
\begin{eqnarray}
\label{K_comp}
\mbox{K1}_0 (r) &=& \sum_\alpha \frac{(2 j + 1 )}{16 \pi r^3} \mbox{C}_\alpha R_\alpha^{\prime}(r) R_\alpha(r) \;,\\
\mbox{K2}_0 (r) &=& \sum_\alpha \frac{(2 j + 1 )}{16 \pi r^3} \mbox{C}_\alpha \left[ \frac{2}{r} R_\alpha(r)^2 - R_\alpha^{\prime}(r) R_\alpha(r) \right].
\end{eqnarray}
$\tau_{\mu\nu, 0}(r)$ is the kinetic density tensor. The usual N1LO density $\tau_0(r)$ is given by its trace
\begin{equation}
\sum_{\mu} \tau_{\mu\mu,0}(r) = \tau_0(r).
\end{equation}
The even part of the N2LO functional only receives a non-vanishing contribution from the real part of this density (Eq.~\ref{N2LOfonct}). Given that the imaginary part is zero under spherical symmetry, we will write $\tau_{\mu\nu, 0}(r)$ instead of Re($\tau_{\mu\nu, 0}(r)$) in the following. Similarly to $J_{0}(r)$, $V_{0}(r)$ is the radial part of the vector density $V_{\mu \nu, 0}$
\begin{equation}
V_{\mu \nu, 0}= \frac{1}{2} \ \epsilon_{ \mu \nu \kappa} \ \frac{X_\kappa}{r} \ V_0 (r),
\label{Vmunusph}
\end{equation}
and it can be decomposed into a radial and a centrifugal part as $V_0 =V_{R,0}+V_{C,0}$, where
\begin{eqnarray}
V_{R,0}(r) &=& \sum_{\alpha} \frac{(2 j + 1 )}{4 \pi r^3} \mbox{C}_\alpha \left[ R_\alpha^{\prime 2}(r) - \frac{4}{r} R_\alpha^{\prime}(r) R_\alpha(r) + \frac{2}{r^2} R_\alpha^2(r) \right] \,, \\
V_{C, 0}(r) &=& \sum_{\alpha} \frac{(2 j + 1 )}{4 \pi r^3} \mbox{C}_\alpha \left[ \frac{\ell(\ell+1)}{r^2} R_\alpha^2(r) \right] .
\end{eqnarray}
Since the $K_{\mu \nu \kappa, 0}(\boldsymbol{\mathbf r})$ density is imaginary in spherical symmetry, the N2LO functional (Eq.~\ref{N2LOfonct}) only receives a contribution from this density through its square.
As for $\tau_{\mu\nu, 0}(r)$, we will write $K_{\mu \nu \kappa, 0}(\boldsymbol{\mathbf r})$ without mentioning again that it actually stands for the imaginary part of this density. \\
Some additional expressions, which represent the new contributions to the functional, are also written below for completeness:
\begin{eqnarray}
\tau_{\mu \nu, 0}(r) \tau_{\mu \nu, 0} (r)&=& \tau_{R,0}^2 (r) + \frac{1}{2} \tau_{C,0}^2(r)
\end{eqnarray}
\begin{equation}
\tau_{\mu \nu,0} \nabla_\mu \nabla_\nu \rho_0 =\rho_0^{(2)} \tau_{R,0} + \frac{\rho_0^{(1)}}{r} \tau_{C,0}
\end{equation}
\begin{eqnarray}
J_{\mu \nu,0} V_{\mu \nu,0} &=& \frac{1}{2 } J_0 (r) V_0 (r)
\end{eqnarray}
\begin{equation}\label{k2fonct}
K_{\mu \nu \kappa,0} K_{\mu \nu \kappa,0} = 6 \mbox{K1}_{0}(r)^2 + 6 \mbox{K2}_{0}(r)^2 - 4 \mbox{K1}_{0}(r) \mbox{K2}_{0}(r).
\end{equation}
In order to have a qualitative and quantitative idea of all these densities, we represent in Fig.~\ref{WS:density} the isoscalar densities in $^{208}$Pb. These densities have been determined using a single-particle basis obtained from a fully-converged Hartree-Fock (HF) solution based on the SLy5 functional~\cite{cha97}. We observe that all the densities used here are well-behaved at the origin of the coordinate system.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{Denstot.eps}
\end{center}
\caption{(Colors online) Isoscalar densities in $^{208}$Pb calculated using single-particle wave functions obtained by a SLy5 mean-field solution. See text for details. }
\label{WS:density}
\end{figure}

\section{Hartree-Fock-Bogoliubov equations in spherical symmetry}\label{sec:hfb}

In this section we describe the method used to solve the complete Hartree-Fock-Bogoliubov (HFB) equations and the numerical tests we have performed.

\subsection{Hartree-Fock}

We start by considering closed-shell nuclei, for which the HFB equations can be safely reduced to the standard Hartree-Fock (HF) equations. They read~\cite{vau72,rin80}
\begin{eqnarray}\label{eq:hf}
h_q(r) R_{nljq}(r)=\varepsilon^q_{nlj} R_{nljq}(r)\;,
\end{eqnarray}
where $R_{nljq}(r)$ is the radial part of the single-particle wave function given in Eq.~(\ref{fosphe}). The corresponding single-particle Hamiltonian is obtained as a functional derivative and reads
\begin{eqnarray}\label{sp:eq:4th}
h_q(r) &=&A^q_4 \frac{d^4}{dr^4} + A^q_3 \frac{d^3}{dr^3} + A^q_{2 R} \frac{d^2}{dr^2} + A^q_{1 R} \frac{d}{dr} + A^q_{0 R} \nonumber\\
&+& \frac{ \ell (\ell+1)}{r^2} \left[ A^q_{2 C} \frac{d^2}{dr^2} + A^q_{1 C} \frac{d}{dr} + \frac{ \ell (\ell+1)}{r^2} A^q_{0 CC} + A^q_{0 C} \right] \nonumber \\
&+& W^q_{2 R} \frac{d^2}{dr^2} + W^q_{1 R} \frac{d}{dr} + W^q_{0 R} + \frac{ \ell (\ell+1)}{r^2} W^q_{0 C} \;.
\label{eqndiff4}
\end{eqnarray}
We observe that the inclusion of the 4th-order terms in the interaction translates into a fourth-order differential equation. Although this is quite unusual in nuclear physics, a 4th-order differential equation is routinely solved in other physical systems, for example to describe the behaviour of a bending solid beam~\cite{ban01}.
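To make the structure of the discretized problem explicit, the toy sketch below assembles such a fourth-order operator with central differences: the $d^4/dr^4$ stencil simply turns the usual tridiagonal mesh Hamiltonian into a pentadiagonal one. This is a deliberately simplified illustration with constant coefficients; all numerical values (and the well itself) are invented, and it is not the solver described in the following subsections.

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Toy, constant-coefficient caricature of Eq. (sp:eq:4th):
#   [ A4 d^4/dr^4 + A2 d^2/dr^2 + V(r) + (hbar^2/2m) l(l+1)/r^2 ] R = e R
hbar2_2m = 20.736                 # MeV fm^2, approximate hbar^2/2m
h, n, ell = 0.1, 300, 0           # mesh step [fm], interior points, orbital l
r = h * np.arange(1, n + 1)
A4, A2 = 2.0, -hbar2_2m           # invented A4; A2 is the usual kinetic term
V = -50.0 / (1.0 + np.exp((r - 6.7) / 0.65))   # Woods-Saxon-like toy well
V += hbar2_2m * ell * (ell + 1) / r**2

# Central stencils (R vanishes at both ends of the box):
#   d2: (f[i-1] - 2 f[i] + f[i+1]) / h^2
#   d4: (f[i-2] - 4 f[i-1] + 6 f[i] - 4 f[i+1] + f[i+2]) / h^4
H = np.zeros((n, n))
i = np.arange(n)
H[i, i] = 6 * A4 / h**4 - 2 * A2 / h**2 + V
H[i[:-1], i[:-1] + 1] = H[i[:-1] + 1, i[:-1]] = -4 * A4 / h**4 + A2 / h**2
H[i[:-2], i[:-2] + 2] = H[i[:-2] + 2, i[:-2]] = A4 / h**4

print(eigh(H, eigvals_only=True)[:3])   # lowest eigenvalues [MeV]
\end{verbatim}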
The coefficients in Eq.~(\ref{sp:eq:4th}) are defined as
\begin{eqnarray}
A^q_4 &=& \, C^{ M \rho}_- \, \rho_0 \, + 2 \, \, C^{ M \rho}_1 \, \rho_q \label{eq:a4} \;,\\
A^q_3 &=& 2 \, C^{M \rho}_- \, \rho_0^{(1)} + 4 \, C^{ M \rho}_1 \, \rho_q^{(1)}\;, \label{eq:a3}\\
A^q_{2 R} &=& - \frac{\hbar^2}{2m} - C_-^{\tau} \ \rho_0 - 2 C_1^{\tau} \rho_q + C_-^{ M \rho} \left[ 3 \rho_0^{(2)} - 6 \tau_{R,0} - 2 \tau_{C,0} \right] + 2 \, C_1^{ M \rho}\left[ 3 \rho_q^{(2)} - 6 \tau_{R,q} - 2 \tau_{C,q} \right]\;, \label{eq:a2r}\\
A^q_{2 C} &=& - 2 \, C^{ M \rho}_- \, \rho_0 \, - \, 4 \, C^{ M \rho}_1 \, \rho_q \;,\label{eq:a2c} \\
A^q_{1 R} &=& - \ C_-^{\tau} \rho_0^{(1)} - 2 C_1^{\tau} \rho_q^{(1)} + 2 \, C_-^{ M \rho} \left[\rho_0^{(3)}-3 \tau_{R,0}^{(1)} - \tau_{C,0}^{(1)} \ \right] + 4 \, C_1^{ M \rho}\left[\rho_q^{(3)}-3 \tau_{R,q}^{(1)} - \tau_{C,q}^{(1)} \ \right] \;, \label{eq:a1r} \\
A^q_{1 C} &=& 2 \, C^{ M \rho}_- \, \left(-\rho_0^{(1)} + 2\frac{\rho_0}{r}\right) \, + \, 4 \, C^{ M \rho}_1 \, \left(-\rho_q^{(1)} + 2\frac{\rho_q}{r}\right) \;, \label{eq:a1c} \\
A^q_{0 R} &=& U_q (r) + C_-^\tau \frac{\rho_0^{(1)}}{r} + 2 \ C_1^\tau \frac{\rho_q^{(1)}}{r} \nonumber \\
&+& 2 \, C^{M \rho}_- \left[ 3 \frac{\tau_{R,0}^{(1)}}{r} + \frac{\tau_{C,0}^{(1)}}{r} - \frac{\rho_0^{(3)}}{r} \right] + 4 \, C^{ M \rho}_1 \left[ 3 \frac{\tau_{R,q}^{(1)}}{r} + \frac{\tau_{C,q}^{(1)}}{r} - \frac{\rho_q^{(3)}}{r} \right] \;, \label{eq:a0r}\\
A^q_{0 C} &=& \frac{\hbar^2}{2m} + C_-^\tau \rho_0 + 2 C_1^\tau \rho_q + C^{M \rho}_- \left[ 2 \, \tau_{R,0} + 4 \tau_{C,0} + 2 \frac{\rho_0^{(1)}}{r} - \rho_0^{(2)} - 6 \frac{\rho_0}{r^2} \right] \nonumber \label{eq:a0c} \\
&+& 2 \, C^{ M \rho}_1 \left[ 2 \, \tau_{R,q} + 4 \tau_{C,q} + 2 \frac{\rho_q^{(1)}}{r} - \rho_q^{(2)} - 6 \frac{\rho_q}{r^2} \right] \;, \\
A^q_{0 CC} &=& \, C^{ M \rho}_- \rho_0 + 2 \, C^{ M \rho}_1 \rho_q \label{eq:a00c} \,.
\end{eqnarray}
Here we used the shorthand notation $C_{-}^x=C_0^x-C_1^x$ with $x=\rho,\Delta\rho,\dots$. The superscript $(i)$, with $i=1,2,3,4$, on the densities stands for the order of the derivative. Finally, the central field appearing in the previous equation reads
\begin{eqnarray}
\label{eq:uqD}
U_q (r)&=& 2 C^{\rho}_- \rho_0 + 4 \, C^{\rho}_1 \rho_q + 2 C^{\Delta \rho}_- \Delta \rho_0 + 4 \, C^{\Delta \rho}_1 \Delta \rho_q + C^{\tau}_- \tau_0 + 2 \, C^{\tau}_1 \tau_q \nonumber \\
&+& 2 C^{ (\Delta \rho)^2}_- \Delta \Delta \rho_0 + 4 \, C^{ (\Delta \rho)^2}_1 \Delta \Delta \rho_q + C^{M \rho}_- \left[Q_0 - 2 \nabla_\mu \nabla_\nu \tau_{\mu \nu , 0} \right] + 2 C^{M \rho}_1 \left[ Q_q - 2 \ \nabla_\mu \nabla_\nu\tau_{\mu\nu , q}\right] \nonumber \\
&+& C^{\nabla J}_- \nabla \cdot J_0 + 2 C^{\nabla J}_1 \nabla \cdot J_q.
\end{eqnarray}
This field is obtained from the variational principle by varying the matter density $\rho$, and it receives contributions from both the N1LO and N2LO terms. In Fig.~\ref{208pb:field} we show the coefficients $A^q_R$ and the central field $U_q$ obtained with a fully converged HF calculation (cf. Tab.~\ref{tab:inter}) in $^{208}$Pb using an N2LO pseudo-potential. We refer the reader to Sec.~\ref{sec:fit} for more details on this parametrisation.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.45\textwidth,angle=0]{a4.eps}
\includegraphics[width=0.45\textwidth,angle=0]{a3.eps}\\
\includegraphics[width=0.45\textwidth,angle=0]{a2r.eps}
\includegraphics[width=0.45\textwidth,angle=0]{a1r.eps}
\includegraphics[width=0.45\textwidth,angle=0]{a0r.eps}
\includegraphics[width=0.45\textwidth,angle=0]{utot.eps}
\end{center}
\caption{(Colors online) Radial dependence of the coefficients defined in Eq.~(\ref{sp:eq:4th}) for $^{208}$Pb obtained using the SN2LO1 and SLy5 interactions. See text for details.}
\label{208pb:field}
\end{figure}

On the same figure we also report the corresponding values obtained with SLy5. As it should be, SLy5 induces non-zero contributions only for the terms originating from the N1LO part of the functional. In Fig.~\ref{208pb:fieldc} we show the other set of fields appearing in Eq.~(\ref{sp:eq:4th}), corresponding to the centrifugal parts. These fields are active only for non-zero orbital-momentum states. All the fields behave normally around $r=0$, apart from $A^q_{1C}$ and $A^q_{0C}$, which diverge. Such a behaviour, which already exists at the N1LO level for the centrifugal field, is actually not a problem, as we will see in Sec.~\ref{sec:asym} when we examine the asymptotic properties of our 4th-order differential equation. We will then demonstrate that there exists a particular solution of Eq.~(\ref{sp:eq:4th}) that exhibits no divergence.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.45\textwidth,angle=0]{a2c.eps}
\includegraphics[width=0.45\textwidth,angle=0]{a1c.eps}\\
\includegraphics[width=0.45\textwidth,angle=0]{a0c.eps}
\includegraphics[width=0.45\textwidth,angle=0]{a0cc.eps}
\end{center}
\caption{(Colors online) Same as Fig.~\ref{208pb:field}, but for the centrifugal fields given in Eq.~(\ref{sp:eq:4th}).}
\label{208pb:fieldc}
\end{figure}

Although we have only one explicit spin-orbit term in the effective interaction, we obtain four distinct contributions to the mean-field equation
\begin{eqnarray}
W^q_{0 R } (r) &= & - C_\alpha \left[ C^{T}_- \frac{J_0}{r} + 2 C^{T}_1\frac{J_q}{r} + C^{\nabla J}_- \frac{\rho_0^{(1)}}{r} + 2 C^{\nabla J}_1 \frac{\rho_q^{(1)}}{r} \right] \label{eq:so1} \\
&& + C_\alpha \left[2 C^{(4) M s}_- \left( \frac{J_0}{r^3} -\frac{J_0^{(1)}}{r^2} - \frac{V_0(r)}{r} + 2 \frac{K_0(r)}{r}\right) + 4 C^{(4) M s}_1 \left( \frac{J_q}{r^3} -\frac{J_q^{(1)}}{r^2} - \frac{V_q(r)}{r}+ 2 \frac{K_q(r)}{r}\right) \right]\;, \nonumber \\
W^q_{0 C } (r) &=& C_\alpha \left[ - 2 C^{ M s}_- \frac{ J_0 (r)}{r} - 4 C^{ M s}_1 \frac{J_q (r)}{r} \right],\label{eq:so2}\\
%
W^q_{1 R} (r) &=& C_\alpha \left[ 2 C^{ M s}_- \left( \frac{J_0^{(1)}(r)}{r} - \frac{ J_0 (r)}{r^2} \right) + 4 C^{ M s}_1 \left(\frac{ J_q^{(1)}(r)}{r} - \frac{ J_q (r)}{r^2} \right) \right],\label{eq:so3}\\
W^q_{2 R} (r) &=& C_\alpha \left[ 2 C^{ M s}_- \frac{ J_0 (r)}{r}+ 4 C^{M s}_1 \frac{ J_q (r)}{r} \right].\label{eq:so4}
\end{eqnarray}
This is a very interesting feature of our functional, which has more flexibility than N1LO. This new dependence could be of particular interest in different situations, for instance in adjusting the centroids of single-particle states without using an explicit tensor term. Moreover, these terms are associated with the first two derivatives in the differential equation, contrary to the standard Skyrme interaction, and one of them is a centrifugal term. Since $C_\alpha=\ell$ for $j=\ell+1/2$ and $C_\alpha=-(\ell+1)$ for $j=\ell-1/2$, such a term makes it possible to act on the single-particle levels with a new dependence on $\ell$.
It is worth mentioning that several Skyrme functionals use different coupling constants in the spin-orbit sector to enrich the freedom of the corresponding field~\cite{rei99}. In such a case, the link with the underlying interaction is broken. The new N2LO functional presented here has the advantage of keeping such a link while also gaining a more complex spin-orbit structure, thus making it a suitable candidate for multi-reference calculations. In Fig.~\ref{208pb:so}, we show the different spin-orbit contributions. The current parametrisation SN2LO1 leads to relatively small values, but we should not exclude a priori the possibility of finding significant corrections with a different set of parameters.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.45\textwidth,angle=0]{W2r.eps}
\includegraphics[width=0.45\textwidth,angle=0]{W1r.eps}
\includegraphics[width=0.45\textwidth,angle=0]{W0R.eps}
\includegraphics[width=0.45\textwidth,angle=0]{W0c.eps}
\end{center}
\caption{(Colors online) Same as Fig.~\ref{208pb:field} but for the spin-orbit fields given in Eq.~(\ref{sp:eq:4th}).}
\label{208pb:so}
\end{figure}

\subsection{Asymptotic properties}\label{sec:asym}

Before entering the numerical details of the solution of Eq.~(\ref{eq:hf}), we want to prove that a solution with a well-behaved asymptotic behaviour (at the origin and at infinity) exists. It has been well established for the standard Skyrme second-order differential equation~\cite{vau72} that the radial part of the wave function Eq.~(\ref{fosphe}) behaves as $R_\alpha \propto r^{\ell+1}$ at the origin, so that it compensates the behaviour of the centrifugal term, which diverges as $1/r^2$. In the case of the present fourth-order differential equation, this result is a priori no longer true. We thus assume that $R_\alpha(r) \propto r^\beta$ around $r=0$ and determine the possible physical values of $\beta$. Inserting this ansatz into the HF equations, Eq.~(\ref{eq:hf}), we obtain
\begin{eqnarray}
\epsilon_\alpha r^4 = && \beta (\beta-1) (\beta-2) (\beta-3) A_4 + \beta (\beta-1) (\beta-2) A_3 r + \beta (\beta-1) A_{2 R} r^2 + \beta A_{1 R} r^3 \nonumber \\
&& + A_{0 R} r^4 + \ell (\ell+1) \left[ \beta (\beta-1) A_{2 C} + \beta A_{1 C} r + A_{0 C} r^2 + \ell (\ell+1) A_{0 CC} \right] \nonumber \\
&& + \left( j(j+1) - \ell(\ell+1) - \frac{3}{4} \right) \left[ W_{0 R} r^4 + \ell (\ell+1) W_{0 C} r^2 + \beta W_{1 R} r^3 + \beta (\beta -1) W_{2 R} r^2 \right]\;.
\end{eqnarray}
All irrelevant single-particle quantum numbers are omitted in this discussion to lighten the notation. By inspecting the formal expressions of the coefficients $A_i$ in Eqs.~(\ref{eq:a4}-\ref{eq:a00c}), we observe that some fields diverge around the origin
\begin{eqnarray}
A_{1C}\xrightarrow[r\rightarrow0]{} \frac{1}{r}\;,\\
A_{0C}\xrightarrow[r\rightarrow0]{} \frac{1}{r^2}\;.
\end{eqnarray}
The term $A_{0R}$ does not diverge, since the derivative of the density is zero at the origin. This is typically the case for nuclear densities, even in the presence of strong shell effects~\cite{ber01}. The spin-orbit fields do not diverge, so we can drop them from this analysis. To have a well-behaved wave function at $r=0$ we thus only need to require that the following combination vanishes
\begin{eqnarray}
\beta (\beta-1) (\beta-2) (\beta-3) A_4 + \beta (\beta-1) (\beta-2) A_3 r + \beta (\beta-1) A_{2 R} r^2 + \beta A_{1 R} r^3 \nonumber \\
+ A_{0 R} r^4 + \ell (\ell+1) \left[ \beta (\beta-1) A_{2 C} + \beta A_{1 C} r + A_{0 C} r^2 + \ell (\ell+1) A_{0 CC} \right] \approx 0\;.
\end{eqnarray}
First we notice that $A_{3}$, $A_{2R}$ and $A_{1R}$ do not diverge at the origin. When multiplied by powers of $r$, they thus go to zero at the origin. By inspecting Eqs.~(\ref{eq:a4}-\ref{eq:a00c}), we can then notice that to leading order the following relations hold
\begin{eqnarray}
\ \ \ \ \ \ A_{2 C} = - 2 A_4 \ \ \ \ \ \ A_{1 C} = 4 A_4 \ \ \ \ \ \ A_{0 C} = - 6 A_4 \ \ \ \ \ \ A_{0 CC} = A_4\;,
\end{eqnarray}
so that we can simplify
\begin{equation}
\beta (\beta-1) (\beta-2) (\beta-3) A_4 + \ell (\ell+1) \left[ \beta (\beta-1) A_{2 C} + \beta A_{1 C} + A_{0 C} + \ell (\ell+1) A_{0 CC} \right] \simeq 0\;.
\end{equation}
We finally obtain
\begin{equation}
\beta^ 4 - 6 \beta^3 + \beta^2 \left( - 2 \ell^2 - 2 \ell +11 \right) + 6 \beta \left(\ell^2 + \ell -1 \right) + \ell (\ell+1) \left( \ell^2 + \ell - 6 \right) \simeq 0\;.
\end{equation}
This equation has 4 solutions
\begin{eqnarray}
\beta = 2 - \ell ,\ \ \ \ \ \ \beta = - \ell , \ \ \ \ \ \ \beta =\ell + 1 , \ \ \ \ \ \ \beta =\ell + 3\,.
\end{eqnarray}
Indeed, the left-hand side factorizes as $(\beta+\ell)(\beta+\ell-2)(\beta-\ell-1)(\beta-\ell-3)$. The first two solutions diverge for some specific values of $\ell$ and cannot represent the physical behaviour of the radial wave function. The last two solutions are physically well-behaved, but since the nuclear density needs to be non-zero at the center of the nucleus, only the solution $\beta =\ell + 1$ can be accepted. The radial part therefore has the same behaviour at N1LO and N2LO. At infinity, all the fields vanish, as one can easily see from Figs.~\ref{208pb:field}-\ref{208pb:so}; thus we recover the typical asymptotic behaviour of the solutions of the N1LO functional.

\subsection{Numerical methods to solve 4th order equations}

The solution of HF equations with 4th-order derivative terms represents a major numerical challenge. The standard technique at N1LO is usually to project the HF equations onto a Harmonic-Oscillator basis, since one can use particular properties of orthonormal polynomials to avoid explicit numerical differentiation~\cite{hosphe}. However, the main drawback is the slow convergence as a function of the number of basis states, as compared to the solution of the HF equations via direct integration~\cite{sch15}. We have thus decided to develop a new numerical solver named \verb|WHISKY|~\cite{bec17}: the code has been built in a modular way so that it can accept the central part of the N$\ell$LO Skyrme pseudo-potential with $\ell=1,2,3$. The code has been written with a fitting procedure in mind; it has therefore been designed to be fast and accurate.

To reconcile high accuracy with reduced execution time, we have decided to use a two-basis method to solve the HF equations~\cite{sch12}. The 4th-order differential equation governing the properties of single-particle states is then solved using the finite-difference method, and more particularly the Hooverman method~\cite{hoo72}. With this method, we obtain a wave function for each point of the mesh in each $(\ell,j,q)$-block. As a consequence the number of basis functions grows quite quickly, especially when we include pairing correlations (see Sec.~\ref{subsec:pair}), so we introduced an auxiliary Woods-Saxon (WS) basis and an additional energy cut-off. Since the WS wave functions are reasonably close to the final single-particle solutions, the number of basis states needed to ensure convergence is quite small. An alternative to the WS basis would be the use of the self-consistent HF basis.
An alternative to the WS basis would be the use of the self-consistent HF basis. However, we did not explore this possibility: since we are not currently working with very neutron-rich nuclei, a WS approximation is expected to give a result close to the final solution. In the next version of the code, we plan to add this option to explore the properties of the extended N2LO functional close to stability. \begin{figure}[!h] \begin{center} \includegraphics[width=0.45\textwidth,angle=0]{cutofflog.eps} \end{center} \caption{(Colors online) Precision obtained with WHISKY against LENTEUR as a function of the cutoff energy in the Woods-Saxon basis for $^{40}$Ca (+) and $^{208}$Pb ($\times$). See text for details.} \label{208pb:prec2} \end{figure} In Fig.~\ref{208pb:prec2}, we compare the accuracy of our HF code against the HF code named \verb|LENTEUR|~\cite{rot09a,rot09b} as a function of the intermediate WS basis size. The calculations are done in both cases using the SLy5 interaction~\cite{cha97} with Coulomb included and a mesh of $h=0.05$ fm within a box of 20 fm. It is worth recalling that the code \verb|LENTEUR| works with a similar two-basis method: HF and r-space representation, with direct integration of the HF equations in coordinate space~\cite{ben05}. The total energy difference for different nuclei obtained with the two codes is defined as $\Delta E=\left|E_{\text{WHISKY}}-E_{\text{LENTEUR}}\right|$. We observe that the accuracy of our code is very good already for a reasonably small basis size. By considering states up to 300 MeV, we obtain an accuracy of $\approx1$ keV and an execution time of a few seconds. In Tab.~\ref{tab:energy}, we give a more detailed comparison of the resulting energies for a fully converged calculation in $^{208}$Pb using \verb|LENTEUR| and \verb|WHISKY| with a cut-off of 300 MeV in the WS basis. We see that the agreement is very good (8 keV at worst). We conclude that the chosen basis size is clearly an excellent compromise between efficiency and accuracy, since all energy contributions are described by the two codes at the keV level of accuracy. This cutoff is consequently used in the fit. \begin{table} \begin{center} \begin{tabular}{c|cc} \hline \hline \multicolumn{3}{c}{$^{208}$Pb}\\ \hline [MeV] & \verb|WHISKY| & \verb|LENTEUR| \\ \hline Total Energy & -1636.10\textbf{6} & -1636.10\textbf{5}\\ Kinetic energy & 3874.7\textbf{89} & 3874.7\textbf{95}\\ Field energy & -6209.6\textbf{42} & -6209.6\textbf{50}\\ Spin-orbit & -99.081 & -99.081 \\ Direct Coulomb & 829.143 & 829.143\\ Exchange Coulomb & -31.314 & -31.314\\ \hline \hline \end{tabular} \end{center} \caption{Energies obtained by WHISKY and LENTEUR with self-consistent HF calculations using the SLy5 interaction. The differences appear on the last digits and are written in bold.} \label{tab:energy} \end{table} The code \verb|LENTEUR| accepts only N1LO Skyrme-like functionals. Therefore, in order to test the energy contributions of the terms originating from higher order derivatives, we benchmarked our code against the latest version of \verb|MOCCA|~\cite{rys15,mocca}. \verb|MOCCA| is a 3D solver working in a cubic box and using an imaginary-time algorithm to solve the HF equations~\cite{bon05}. For the current comparison, we used a mesh of $dx=0.4$ fm and 32 points in each direction. Since we deal with spherical even-even nuclei, we can impose several symmetries and thus perform the calculations in only one octant of the whole box. See Ref.~\cite{rys15} for more details. For our tests, we have used SLy5 and added a random set of higher order parameters. The results are presented in Tab.~\ref{tab:energy2}.
The different energy terms refer to the different components of the N2LO functional as given in Eq.~(\ref{eq:ef:DKo}). In this case, the total energy difference between the two codes is at the level of 10 keV. This is also the typical discrepancy between the different 4th-order terms of the N2LO functional. This is a very strong test, since the two codes have been developed in a completely independent way and, moreover, they use completely different algorithms to solve the HF equations. \begin{table} \begin{center} \begin{tabular}{c|cc} \hline \hline \multicolumn{3}{c}{$^{208}$Pb}\\ \hline [MeV] & \verb|WHISKY| & \verb|MOCCA| \\ \hline Total Energy & -1539.2\textbf{53} & -1539.2\textbf{63}\\ Total energy N2LO & 89.\textbf{278} & 89.\textbf{360}\\ E[$(\Delta\rho)^2$] & 4.39\textbf{4} & 4.39\textbf{5}\\ E[$\rho Q$] & 37.4\textbf{77} & 37.4\textbf{88}\\ E[$\tau^2$] & 27.2\textbf{12} & 27.2\textbf{21}\\ E[$\tau_{\mu\nu}\tau_{\mu\nu}-\tau_{\mu\nu}\nabla_{\mu}\nabla_{\nu}\rho$] & 19.8\textbf{55} & 19.8\textbf{61}\\ E[$K_{\mu\nu\kappa}K_{\mu\nu\kappa}$] & 0.0546\textbf{0} & 0.0546\textbf{1}\\ E[$J_{\mu\nu}V_{\mu\nu}$] & 0.3385\textbf{0} & 0.3385\textbf{8}\\ \hline \hline \end{tabular} \end{center} \caption{Comparison of the results for WHISKY and MOCCA: the different N2LO functional contributions to the total energy after a self-consistent calculation with a toy N2LO interaction. The discrepancies are presented in bold.} \label{tab:energy2} \end{table} \subsection{Pairing correlations}\label{subsec:pair} Once we move away from closed-shell nuclei, we need to consider extra pairing correlations~\cite{bri05}. For this purpose, we have generalized the \verb|WHISKY| code to solve the complete HFB equations. Since we use a two-basis method, we first solve the HF equations in coordinate space and then transform back to the WS basis. The HFB equations in this basis read~\cite{pas08} \begin{eqnarray} \sum_{\alpha'}(h_{\alpha'\alpha}^{lj,q}- \mu_{F}^{q})U^{nlj,q}_{\alpha'}+\sum_{\alpha'}\Delta_{\alpha \alpha '}^{lj,q}V^{nlj,q}_{\alpha'}&=&E^{nlj,q}U^{nlj,q}_{\alpha} , \\ \sum_{\alpha'}\Delta^{lj,q}_{\alpha \alpha'}U^{nlj,q}_{\alpha'} -\sum_{\alpha'}(h^{lj,q}_{\alpha'\alpha}- \mu_{F}^{q})V^{nlj,q}_{\alpha'} &=&E^{nlj,q}V^{nlj,q}_{\alpha} , \label{paper:eq:HFBeq} \end{eqnarray} where $\mu_{F}^{q}$ is the chemical potential, $U^{nlj,q}_{\alpha}$ and $V^{nlj,q}_{\alpha}$ are the Bogoliubov amplitudes for the quasi-particle of energy $E^{nlj,q}$, $\alpha$ is the index of the WS basis and $n$ is the index of the quasi-particle state. The field $h_{\alpha'\alpha}^{lj,q}$ is derived from Eq.~(\ref{sp:eq:4th}) {\it via} a unitary transformation. For the pairing channel, we used a simple pairing interaction of the form~\cite{ber91,gar99} \begin{eqnarray} \label{pairing_int_contact} \qquad \quad v(\mathbf{r}_{1},\mathbf{r}_{2})=V^q_{0}\left[ 1- \eta \left( \frac{\rho_0\left( \mathbf{R}\right)}{\rho_{sat}}\right)\right] \delta(\mathbf{r}), \end{eqnarray} where $\mathbf{R}=(\mathbf{r_1}+\mathbf{r_2})/2$ is the centre of mass of the two interacting particles and $\mathbf{r}=\mathbf{r_1}-\mathbf{r_2}$ is their relative distance. In the present article, we use the so-called volume shape~\cite{san05} with parameters $V^n_{0}=V^p_{0}=-200$ MeV.fm$^3$, $\eta=0$, and $\rho_{sat}=0.16$ fm${^{-3}}$. Since this interaction has an ultraviolet divergence~\cite{bul02}, we use a simple cut-off procedure in quasi-particle space, $E_{cut}=60$ MeV.
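As a minimal numerical illustration of Eq.~(\ref{paper:eq:HFBeq}), the following toy script (ours, with assumed matrices; it is not the \verb|WHISKY| implementation) diagonalises the HFB supermatrix in a single $(\ell,j,q)$-block of the WS basis:
\begin{verbatim}
import numpy as np

# Toy sketch: h and Delta are assumed matrices in one (l, j, q)-block;
# in WHISKY they come from Eq. (sp:eq:4th) and the pairing interaction.
rng = np.random.default_rng(0)
n = 8                                      # states kept below the cut-off
h = np.diag(np.linspace(-40.0, 20.0, n))   # assumed HF energies [MeV]
Delta = 1.5 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
Delta = 0.5 * (Delta + Delta.T)            # symmetrise the pairing field
mu = -8.0                                  # chemical potential mu_F^q [MeV]

# supermatrix acting on (U, V): [[h - mu, Delta], [Delta, -(h - mu)]]
one = np.eye(n)
H = np.block([[h - mu * one, Delta],
              [Delta, -(h - mu * one)]])
E, W = np.linalg.eigh(H)
keep = E > 0                               # retain E > 0 quasi-particles
U, V = W[:n, keep], W[n:, keep]            # Bogoliubov amplitudes per column
print("lowest quasi-particle energies [MeV]:", np.round(E[keep][:3], 3))
\end{verbatim}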
For more details on this cut-off procedure we refer to Ref.~\cite{bor06}. The choice of the pairing interaction is crucial in determining the properties of nuclei far from stability~\cite{dob94,pas13b}. At present we follow the Saclay-Lyon fitting protocol, so we decouple the problem into two steps: after the complete fit of the N2LO functional, the $V_{0}$ parameters can be fixed to reproduce pairing effects. In this article we have used prefixed values of the $V_0$ parameters, but we plan to extend our fitting procedure to also take pairing effects into account more precisely~\cite{gom92}. The pairing interaction for protons and neutrons is not necessarily the same, since Coulomb effects should also be taken into account in the calculation of proton Cooper pairs~\cite{nak11}. We plan to include such effects in the next version of the code. In Tab.~\ref{tab:energy:pair}, we compare \verb|WHISKY| against \verb|LENTEUR| for $^{120}$Sn, using the SLy5 interaction plus the volume pairing of Eq.~(\ref{pairing_int_contact}). We observe that the accuracy is remarkably high. The small discrepancy of 4 keV originates from a different definition of the cut-off on single-particle states: \verb|LENTEUR| operates with a cut-off on the total angular momentum $j$ of the quasi-particle states entering the calculation, while \verb|WHISKY| operates with a cut-off on the orbital angular momentum. \begin{table} \begin{center} \begin{tabular}{c|cc} \hline \hline \multicolumn{3}{c}{$^{120}$Sn}\\ \hline [MeV] & \verb|WHISKY| & \verb|LENTEUR| \\ \hline Total energy & -1018.81\textbf{4} & -1018.81\textbf{8} \\ Kinetic energy & 2188.1\textbf{27} & 2188.1\textbf{42} \\ Field energy & -3485.1\textbf{15} & -3485.1\textbf{31} \\ Spin-orbit energy & -55.00\textbf{0} & -55.00\textbf{1} \\ Coulomb (direct) & 367.336 & 367.336 \\ Coulomb (exchange) & -19.147 & -19.147 \\ Neutron pairing energy & -15.01\textbf{4} & -15.01\textbf{7} \\ \hline \hline \end{tabular} \end{center} \caption{Comparison between the energies obtained by WHISKY and LENTEUR with self-consistent HFB calculations using the SLy5 interaction. See text for details.} \label{tab:energy:pair} \end{table} In Fig.~\ref{pair}, we compare the total density $\rho(r)$ for $^{120}$Sn obtained with the two codes, as well as the pairing density $\tilde{\rho}$. Following Refs.~\cite{dob84,ben05}, we define it as \begin{eqnarray} \tilde{\rho}_{q} (r) & = & -\sum_{nlj} \ \frac{(2 j + 1 )}{4 \pi} \ \frac{V^{nlj,q}(r)U^{nlj,q}(r)}{r^2} \,, \end{eqnarray} where $V^{nlj,q}(r), U^{nlj,q}(r)$ are the quasi-particle amplitudes expressed in r-space. \begin{figure}[!h] \begin{center} \includegraphics[width=0.45\textwidth,angle=0]{rho_pairing.eps} \end{center} \caption{(Colors online) Isoscalar particle density and pairing density for $^{120}$Sn obtained with a self-consistent mean-field calculation with the SLy5 interaction.} \label{pair} \end{figure} The agreement is excellent, thus demonstrating the very high accuracy of our new NEDF solver. \section{Fit of N2LO interaction}\label{sec:fit} To fit the N2LO pseudo-potential, we adopted a modified version of the Saclay-Lyon fitting protocol~\cite{cha97,kou12}: the protocol includes both properties of some selected doubly magic nuclei and some basic properties of the infinite nuclear medium, such as the saturation density, the incompressibility and the equation of state of pure neutron matter (PNM) derived from realistic nucleon-nucleon interactions~\cite{Wiringa}.
We consider \emph{all} terms of the interaction, and we treat the spurious centre-of-mass motion with the usual one-body approximation~\cite{cha97,ben03}. We also assume equal neutron and proton masses and we use the value $\frac{\hbar^2}{2m}=20.73553$ MeV.fm$^2$~\cite{cha97}. \subsection{Fitting protocol} To obtain the parameters of the pseudo-potential, we minimize the following penalty function~\cite{dob14} \begin{eqnarray}\label{eq:chi2} \chi^2=\sum_{i=1}^M\frac{\left( \mathcal{O}_i -f_i(\mathbf{p}) \right)^2}{\Delta \mathcal{O}_i^2}\;, \end{eqnarray} where the sum runs over all the $M$ (pseudo)-observables $\mathcal{O}_i $ we want to constrain in our fit, $f_i $ is the value obtained with our solver for a given array of parameters $\mathbf{p}=\left\{ t_0,t_1,t_2,\dots \right\}$, while $\Delta \mathcal{O}_i$ is the weight we give to each point in the fit. Let us mention that $\Delta \mathcal{O}_i$ does not necessarily correspond to the experimental uncertainty. In Tab.~\ref{Totcont}, we give the actual constraints used to build the $\chi^2$ function in Eq.~(\ref{eq:chi2}). On top of these constraints, we paid particular attention to tuning the spin-orbit parameter $W_0$ within a specific range of acceptable values. Finally, it is worth noticing that during the $\chi^2$ minimisation the parameters $\mathbf{p}$ cannot vary freely: in order to avoid finite-size instabilities~\cite{hel13}, the critical densities in all channels are computed at each iteration, and an asymmetric constraint is imposed in terms of a penalty function \begin{eqnarray}\label{eq:chi2:spurious} \chi^2_{fs}=\sum_{\alpha} e^{-2\beta \left(\mathcal{O}_{\alpha} -\rho_{crit}\right)}\;, \end{eqnarray} where $\mathcal{O}_{\alpha=(S,M,T)}$ is the lowest density at which an instability appears in symmetric nuclear matter (SNM), and $\rho_{crit}$ is an \emph{empirical} value defined in Refs.~\cite{hel13,Pas15T} to avoid unphysical instabilities. $\beta$ is an arbitrary parameter ($\beta =10$ here), fixed in such a way that the penalty function grows very fast when we approach the critical density from below, but gives no contribution above it. This constraint is applied in all channels for which we calculate the response function of the system (see Sec.~\ref{sec:finitesize}). Finite-size instabilities may also have an important impact at high density on astrophysical applications such as the neutrino mean free path~\cite{pas14bsk}. However, in this work we concentrate on finite-size instabilities only in the density ranges that are relevant for finite nuclei. In other words, in this preliminary work we allow the appearance of instabilities at densities above $\rho_{crit}$, which is slightly above saturation density.
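Schematically, the combined penalty of Eqs.~(\ref{eq:chi2}) and (\ref{eq:chi2:spurious}) can be sketched as in the following toy script (ours, with assumed input values; it is not our actual fitting code):
\begin{verbatim}
import numpy as np

RHO_CRIT, BETA = 0.24, 10.0   # fm^-3 and the value beta = 10 quoted above

def penalty(obs, model, weight, critical_densities):
    obs, model, weight = map(np.asarray, (obs, model, weight))
    chi2 = np.sum(((obs - model) / weight) ** 2)
    # the barrier grows very fast below rho_crit, is negligible above it
    chi2_fs = np.sum(np.exp(-2.0 * BETA *
                            (np.asarray(critical_densities) - RHO_CRIT)))
    return chi2 + chi2_fs

# assumed example values, for illustration only
print(penalty(obs=[0.160, -16.0], model=[0.162, -15.9],
              weight=[0.001, 0.2], critical_densities=[0.26, 0.31, 0.23]))
\end{verbatim}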
\begin{table} \centering \begin{tabular}{|l| c c c c |} \hline {\centering \large \quad \ \ Fit Constraints} & $\mathcal{O}_i$ & \ $\Delta \mathcal{O}_i$ & Units & Reference \\ \hline \hline \ \ \textbf{Infinite nuclear matter} \ \ & & & & \\ \ $\rho_{sat}$ & 0.1600 & 0.001 & fm$^{-3}$ & \cite{rhoelec,rhomassformula} \\ \ E/A ($\rho_{sat}$) & -16.0000 & 0.2 & MeV & \cite{rhoelec,rhomassformula} \\ \ $m^*/m$ & 0.7000 & 0.02 & & \cite{BlaizotKinf,meff} \\ \ $K_\infty$ & 230.00 & 10.00 & MeV & \cite{BlaizotKinf} \\ \ $J$ & 32.00 & 2.00 & MeV & \\ \hfill \textit{EoS PNM} & & & &\cite{Wiringa} \\ \ E/N ($\rho$=0.1) & 11.88 & 2.0 & MeV & \\ \ E/N ($\rho$=0.3) & 35.94 & 7.0 & MeV & \\ \ E/N ($\rho$=0.35) & 44.14 & 9.0 & MeV & \\ \hfill \textit{Stability} & & & & \cite{hel13} \\ \ INM(S,M,T)& $\rho_{crit} \geq 0.24$ & asymmetric & fm$^{-3}$ & \\ & & constraint & &\\ \hline \hline \ \ \textbf{Finite nuclei } \ \ & & & & \\ \hfill \textit{Binding energies} & & & & \cite{wan12}\\ \ $^{40}$Ca \hfill & -342.02300 & 1.5 & MeV & \\ \ $^{48}$Ca \hfill & -415.98300 & 1.0 & MeV & \\ \ $^{56}$Ni \hfill & -483.95300 & 1.5 & MeV & \\ \ $^{100}$Sn \hfill & -825.13000 & 1.5 & MeV & \\ \ $^{132}$Sn \hfill & -1102.67300 & 1.0 & MeV & \\ \ $^{208}$Pb \hfill & -1635.86100 & 1.0 & MeV & \\ \hline \hfill \textit{Proton radii } & & & & \cite{ang04} \\ \ $ ^{40}$Ca \hfill & 3.38282 & 0.03 & fm & \\ \ $ ^{48}$Ca \hfill & 3.39070 & 0.02 & fm & \\ \ $ ^{56}$Ni \hfill & 3.66189 & 0.03 & fm & \\ \ $^{132}$Sn \hfill & 4.64745 & 0.02 & fm & \\ \ $^{208}$Pb \hfill & 5.45007 & 0.02 & fm & \\ \hline \hline \quad \qquad \textbf{Parameter $W_0$} \ \ & 120.0 & 2.0 & MeV.fm$^5$ & \\ \hline \end{tabular} \caption{Constraints $\mathcal{O}_i$ used in the fitting procedure and the associated error $\Delta \mathcal{O}_i$. See text for details.} \label{Totcont} \end{table} At the end of the minimisation procedure, we obtained the parameters $\mathbf{p}=\left\{ t_0,t_1,t_2,\dots \right\}$ given in Tab.~\ref{tab:inter}. Notice that the exponent $\alpha$ of the density dependent term has been fixed from the beginning (see Sec.~\ref{sec:infm}). From the table, it is difficult to judge the quantitative relative importance of the different parameters. A way to bypass this problem is to use the concept of \emph{naturalness}. Following Ref.~\cite{kor10B}, we multiply each N2LO coupling constant by \begin{equation} S=f_\pi^{2(l-1)}\Lambda^{n+l-2}\;, \end{equation} where $f_\pi=93$ MeV is the pion decay constant, $\Lambda=687$ MeV, $l$ is the power of the density in the corresponding term and $n$ is the order. Special treatment is required for the density dependent coupling constant; see Ref.~\cite{kor10B} for details. It is important to keep in mind that the value of $\Lambda$ is somewhat arbitrary, since it has been derived in Ref.~\cite{kor10B} by observing the behaviour of several N1LO functionals. The results are presented in Tab.~\ref{tab:natural}. Owing to the arbitrariness of the value of $\Lambda$, one should not look too closely at the actual numbers, but only at the order of magnitude. By inspecting the table, we clearly observe that there is a natural hierarchy in the coupling constants: the N2LO coupling constants are one order of magnitude smaller than the N1LO ones. This is a very important aspect, since the entire idea behind the N$\ell$LO expansion is to have a fast convergence: from these results, we can expect that within this scheme the N3LO coupling constants would be another order of magnitude smaller.
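For illustration, the rescaling factor $S$ can be evaluated as in the following sketch (ours; the detailed unit bookkeeping of Ref.~\cite{kor10B}, involving powers of $\hbar c$, is omitted here):
\begin{verbatim}
F_PI, LAMBDA = 93.0, 687.0   # MeV

def scale_factor(n: int, l: int) -> float:
    """S = f_pi**(2*(l-1)) * Lambda**(n+l-2); l is the power of the
    density in the corresponding term, n is the order."""
    return F_PI ** (2 * (l - 1)) * LAMBDA ** (n + l - 2)

# e.g. a 2nd-order term linear in the density (n=2, l=1) versus a
# 4th-order one (n=4, l=1): the factor grows by Lambda**2 per two orders.
print(scale_factor(2, 1), scale_factor(4, 1))
\end{verbatim}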
\begin{table} \begin{center} \begin{tabular}{cc|cc} \hline \hline \multicolumn{4}{c}{SN2LO1}\\ \hline $n$ & $i$ & $t_i^{(n)}$ [MeVfm$^{3+n}$] & $x_i^{(n)}$\\ \hline 0 & 0 & -2486.90786 & 0.57160369\\ 2 & 1 & 497.51821 & -0.05521333\\ 2 & 2 & -451.60715 & -0.99803779\\ 4 & 1 & -11.95063 & 0.10279808\\ 4 & 2 & -15.04405 & -0.93024200\\ \hline \multicolumn{4}{c}{$t_3=13707.18320$ [MeVfm$^{3(1+\alpha)}$] $x_3=0.88704830$}\\ \hline \multicolumn{4}{c}{$\alpha$ = 1/6}\\ \multicolumn{4}{c}{$W_0$ = 117.904418 [MeVfm$^5$]}\\ \hline \hline \end{tabular} \end{center} \caption{Numerical values of the N2LO parameters.} \label{tab:inter} \end{table} \begin{table} \begin{center} \begin{tabular}{cc|cc} \hline \hline \multicolumn{4}{c}{SN2LO1}\\ \multicolumn{4}{c}{Natural units}\\ \hline $C^{\rho}_0$ & -1.06 & $C^{\rho}_1$ & 0.754 \\ $C^{\rho}_0 [\rho^\alpha]$ &13.0 & $C^{\rho}_1 [\rho^\alpha]$ & -12.1 \\ $C^{\tau}_0$ & 0.892 & $C^{\tau}_1$ & 0.00624 \\ $C^{\Delta\rho}_0$ & -1.06 & $C^{\Delta\rho}_1$ & 0.382 \\ $C^{\nabla J}_0$ & -1.22 & $C^{\nabla J}_1$ & -0.406 \\ $C^{T}_0$ & -0.0882 & $C^{T}_1$ & -0.816 \\ \hline $C^{(\Delta\rho)^2}_0$ & -0.115 & $C^{(\Delta\rho)^2}_1$ & 0.0396 \\ $C^{M \rho}_0$ & -0.288 & $C^{M \rho}_1$ & 0.143 \\ $C^{M s}_0$ & 0.117 & $C^{M s}_1$ & -0.0162 \\ \hline \hline \end{tabular} \end{center} \caption{Values of the parameters of the N2LO pseudo-potential expressed in natural units.} \label{tab:natural} \end{table} \subsection{Finite-size instabilities}\label{sec:finitesize} As discussed in the introduction, several effective interactions are biased by spurious instabilities~\cite{mar02,report,dep16}. To avoid such a problem, we developed in Ref.~\cite{pas13} a new fitting protocol based on the LR formalism~\cite{bec14}. From the previous analyses of Refs.~\cite{hel13,Pas15T}, we have noticed that when a pole in the response function appears at densities lower than $\approx1.2$ times the saturation density, it is very likely that an instability will also be observed in the atomic nucleus. Of course, such a criterion does not apply to the spinodal instability, which has a well-defined physical meaning~\cite{duc07}. We have thus added such an additional constraint on top of our fitting protocol to guarantee stable results (see Eq.~\ref{eq:chi2:spurious}). In principle, finite-size instabilities may appear in isospin asymmetric matter as well; see the discussion in Ref.~\cite{dav14a}. However, we have not derived the LR formalism for the N2LO functional in this case: as an empirical rule, we decided to also check the behaviour of finite-size instabilities in pure neutron matter, even if this does not guarantee that an instability cannot appear at a lower critical density for some specific asymmetry value. At present, a complete check for arbitrary asymmetry is not possible, and we leave this aspect for a future investigation. We start by considering the properties of the Landau parameters~\cite{lan59}. Their calculation for an extended Skyrme pseudo-potential has been reported in Ref.~\cite{dav14c}. These parameters can be related to properties of the infinite nuclear medium and help us constrain some important parts of the effective interaction~\cite{dav14c,Dav16H,back75,zuo03}. In Fig.~\ref{landau}, we show the density dependence of the Landau parameters in SNM. We observe that, apart from the physical spinodal instability observed in the $F_0$ parameter, all the Landau inequalities~\cite{mar02} are respected up to twice the saturation density. The only instability appears in the $G'_0$ parameter at $\rho\approx0.35$ fm$^{-3}$.
This does not represent a major issue for this study, since we consider only finite nuclei and not astrophysical applications~\cite{gor10}. \begin{figure}[!h] \begin{center} \includegraphics[width=0.45\textwidth,angle=-90]{Landau.eps} \end{center} \caption{(Colors online) Landau parameters in SNM for the SN2LO1 pseudo-potential as a function of the density of the system. See text for details. } \label{landau} \end{figure} In Fig.~\ref{lr}, we show the position of the critical densities obtained in SNM as a function of the transferred momentum $q$. The LR is calculated for each spin (S), spin projection (M) and isospin (I) channel (S,M,I). See Ref.~\cite{report} for more details on the adopted notation. We observe no finite-size instabilities, apart from the physical spinodal one~\cite{duc07}, around saturation density. This means that our interaction is stable in all spin-isospin channels~\cite{hel13,Pas15T}. These results confirm our preliminary findings of Ref.~\cite{pas13}: the LR formalism can be considered a very simple tool to be added to a fitting procedure in order to avoid exploring regions of parameter space that induce unphysical instabilities. \begin{figure}[!h] \begin{center} \includegraphics[width=0.45\textwidth]{lr_SNM_param.eps} \end{center} \caption{(Colors online) Critical densities in SNM as a function of transferred momentum $q$. The horizontal dashed lines represent the saturation density $\rho_0$ and the critical density $\rho_{crit}$.} \label{lr} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=0.50\textwidth,angle=-90]{EOS_SNM_param_art.eps} \end{center} \caption{(Colors online) Equation of state for SNM and PNM obtained with the N2LO Skyrme interaction. The squares represent the values obtained from BHF calculations. } \label{eos:tot} \end{figure} \subsection{Infinite nuclear matter}\label{sec:infm} In our fitting protocol, we include information about the infinite nuclear medium. Following Ref.~\cite{cha97}, we have used as constraints three points of the EoS in PNM derived in Ref.~\cite{Wiringa}. We can now benchmark our results against other well-known EoS, such as the one derived via Brueckner-Hartree-Fock (BHF)~\cite{bal97}. In Fig.~\ref{eos:tot}, we compare the EoS for symmetric matter and neutron matter obtained with BHF and with the SN2LO1 interaction. For completeness, the results with SLy5 are also given. SN2LO1 follows the BHF results quite closely, in particular the EoS of PNM up to 3 times saturation density; beyond this point the EoS becomes slightly softer. We remind the reader that SLy5 and SN2LO1 follow each other quite closely in PNM at low density, since they have been constrained on the same points in this density region. On the same figure, we also give the results for spin-polarised symmetric matter and spin-polarised pure neutron matter, comparing the SLy5 and SN2LO1 results. Although these two quantities have not been fitted explicitly, we observe a qualitatively similar behaviour for the two functionals. For completeness, in Tab.~\ref{tab:inm} we give the main features of the EoS of SN2LO1, \emph{i.e.} the saturation density $\rho_0$, the incompressibility $K_{\infty}$, the symmetry energy $J$ and the slope of the symmetry energy $L$ (not fitted). The values we obtain are in agreement with the existing constraints~\cite{dut12}. As already discussed in Ref.~\cite{cha97}, for N1LO there is a strong model correlation between the nuclear incompressibility and the effective mass.
In our case, the correlation between $K_\infty$ and ${m}/{ m^*}$ is of course different since the new parameters give us more freedom in adjusting these two values. It can be calculated analytically in infinite matter with the result \begin{equation}\label{corrN2LO} K_\infty = - 9 (\alpha+1) \frac{E}{A}(\rho_0) + \frac{3}{5} \frac{\hbar^2}{2m} k_F^2 \left( 3 \ (3 \alpha-1) - 2 \ (3 \alpha - 2) \frac{m}{\ m^*} \right) + \frac{3}{140} C_0^{(4)} \rho k_F^4 ( 3 \alpha + 10)\;. \end{equation} In Fig.~\ref{corr:kinf}, we observe that to obtain a reasonable value of the nuclear incompressibility, the allowed range for $\alpha$ is $\alpha\in[1/6,1/3]$. In a future work, we plan to remove the density dependent term and to replace it with a real three-body term~\cite{sad13} to make the pseudo-potential suitable also for multi-reference calculations~\cite{lac09,ben09}. \begin{table} \begin{center} \begin{tabular}{c|c|c} \hline & SN2LO1 & SLy5 \\ \hline $\rho_0$ [fm$^{-3}$] &0.162 &0.1603 \\ E/A($\rho_0$) [MeV]& -15.948&-15.98\\ $K_{\infty}$ [MeV] &221.9 & 229.92\\ $J$ [MeV] & 31.95 & 32.03\\ $L$ [MeV] & 48.9 &48.15\\ $m^*/m$ & 0.709 & 0.696 \\ \hline \hline \end{tabular} \end{center} \caption{Infinite matter properties at saturation for SN2LO1 and SLy5~\cite{cha97}. See text for details. } \label{tab:inm} \end{table} \begin{figure}[!h] \begin{center} \includegraphics[width=0.45\textwidth,angle=0]{CorrelationKmSN2LO1.eps} \end{center} \caption{(Colors online) Correlation between the effective mass and the nuclear incompressibility in infinite nuclear matter for different values of the power of the density dependent term.} \label{corr:kinf} \end{figure} \subsection{Finite nuclei} In this section, we analyse the properties of finite nuclei obtained with the extended Skyrme pseudo-potential. In Fig.~\ref{tikzmass}, we show the energy difference $\Delta E$ between the experimental values and the ones calculated using either SLy5 or SN2LO1 for the few selected double-magic nuclei used in the fit. The results obtained with SN2LO1 are of the same quality as SLy5. Moreover they are all very close to the tolerance $\Delta \mathcal{O}_i$ we used for the fit given in Tab.~\ref{Totcont}. \begin{figure} \centering \begin{tikzpicture}[scale=0.6] \draw (-5,0) grid (5,7); \draw (0,-0.7) node[below]{\large $\Delta$E = E$_{\mbox{\small{th}}}$ - E$_{\mbox{\small{exp}}}$ \quad [MeV]}; \draw[line width = 1.5pt] (0,0) -- (0,7); \draw(-5.2,1)node[left]{\Large $^{40}$ Ca}; \draw(-5.2,2)node[left]{\Large $^{48}$ Ca}; \draw(-5.2,3)node[left]{\Large $^{56}$ Ni}; \draw(-5.2,4)node[left]{\Large $^{100}$ Sn}; \draw(-5.2,5)node[left]{\Large $^{132}$ Sn}; \draw(-5.2,6)node[left]{\Large $^{208}$ Pb}; \foreach \x in {-5,...,5} \draw(\x,0)node[below]{\x}; \draw[white, fill, rounded corners,opacity=0.9] (1.4,0.6) rectangle (4.9,2.4); \draw (3.2,2) node {\Large $\bullet$ SN2LO1}; \draw (2.6,1) node[color=orange] {\Large $\bullet$ SLy5}; \draw[line width = 4pt] plot[xcomb,mark=*] coordinates {(-345.36828+342.02300,1.15) (-416.94469+415.98300,2.15)(-482.12111+483.95300,3.15)(-826.06845+825.13000,4.15)(-1104.44457+1102.67300,5.15)(-1634.36724+1635.86100,6.15)}; \draw[line width = 4pt, color=orange] plot[xcomb,mark=*] coordinates {(-344.05099+342.02300,0.85)(-415.90143+415.98300,1.85)(-482.65365+483.95300,2.85)(-827.80765+825.13000,3.85)(-1103.86452+1102.67300,4.85)(-1635.97989+1635.86100,5.85)}; \end{tikzpicture} \caption{Difference of binding energies obtained with SN2LO1 and SLy5 and experimental values extracted from Ref. 
\cite{wan12}.} \label{tikzmass} \end{figure} In Fig.~\ref{tikzrad}, we compare the differences of proton radii $\Delta r_p$ obtained with SLy5 and our new pseudo-potential SN2LO1. In this case, we see that SN2LO1 behaves marginally better than SLy5, giving results typically closer to the experimental values. It is worth noticing that, compared to SLy5, we have a few additional constraints concerning finite-size instabilities that were not present in the original fitting protocol of SLy5. The closest functional to SN2LO1, in terms of fitting protocol, is SLy5$^*$~\cite{pas13}. We do not report the direct comparison here, but we have checked that the results are qualitatively the same. \begin{figure} \centering \begin{tikzpicture}[scale=0.6] \draw (-5,0) grid (7,6); \draw (0,-0.7) node[below]{\large $\Delta$r$_p$ = r$_{\mbox{\small{th}}}$ - r$_{\mbox{\small{exp}}}$ \quad [10$^{-2}$ fm]}; \draw[line width = 1.5pt] (0,0) -- (0,6); \draw(-5.2,1)node[left]{\Large $^{40}$ Ca}; \draw(-5.2,2)node[left]{\Large $^{48}$ Ca}; \draw(-5.2,3)node[left]{\Large $^{56}$ Ni}; \draw(-5.2,4)node[left]{\Large $^{132}$ Sn}; \draw(-5.2,5)node[left]{\Large $^{208}$ Pb}; \draw (340.594-338.282,1.15) node {\huge $\bullet$}; \draw (344.018-339.070,2.15) node {\huge $\bullet$}; \draw (368.978-366.189,3.15) node {\huge $\bullet$}; \draw (464.828-464.745,4.15) node {\huge $\bullet$}; \draw (543.504-545.007,5.15) node {\huge $\bullet$}; \draw (341.583-338.282,0.85) node[color=orange] {\huge $\bullet$}; \draw (345.076-339.070,1.85) node[color=orange] {\huge $\bullet$}; \draw (369.798-366.189,2.85) node[color=orange] {\huge $\bullet$}; \draw (466.245-464.745,3.85) node[color=orange] {\huge $\bullet$}; \draw (544.953-545.007,4.85) node[color=orange] {\huge $\bullet$}; \draw[white, fill, rounded corners,opacity=0.9] (-4,0.6) rectangle (-0.5,2.4); \draw (-2.5,2) node {\Large $\bullet$ SN2LO1}; \draw (-3,1) node[color=orange] {\Large $\bullet$ SLy5}; \foreach \x in {-5,...,7} \draw(\x,0)node[below]{\x}; \draw[line width = 4pt,xscale=100.0,yscale=1.0] plot[xcomb] coordinates {(3.40594-3.38282,1.15)(3.44018-3.39070,2.15)(3.68978-3.66189,3.15)(4.64828-4.64745,4.15)(5.43504-5.45007,5.15)}; \draw[line width = 4pt,xscale=100.0,yscale=1.0,color=orange] plot[xcomb] coordinates {(3.41583-3.38282,0.85)(3.45076-3.39070,1.85)(3.69798-3.66189,2.85)(4.66245-4.64745,3.85)(5.44953-5.45007,4.85)}; \end{tikzpicture} \caption{Difference between the proton radii calculated with WHISKY for the two interactions (SN2LO1/SLy5) and the experimental radii obtained in Ref.~\cite{ang04}.} \label{tikzrad} \end{figure} In Fig.~\ref{be}, we compare the differences between the binding energies calculated for isotopic (isotonic) chains with Z(N)=20, 28, 50, 82 using our extended Skyrme interaction and the experimental measurements, which are taken from Ref.~\cite{wan12}. On the same figure, we also report the values obtained with SLy5. Notice that we did not optimise the value of the pairing strength to improve the reproduction of the experimental data. Moreover, since the effective masses are numerically quite similar for SLy5 and SN2LO1, we used exactly the same pairing interaction. \begin{figure}[!h] \begin{center} \includegraphics[width=0.34\textwidth,angle=-90]{isot_param2.eps} \hspace{-0.3cm} \includegraphics[width=0.34\textwidth,angle=-90]{isoton_param2.eps} \end{center} \caption{(Colors online) Systematic comparison of binding energies, expressed in MeV, for isotopic (isotonic) chains calculated with our extended Skyrme interaction SN2LO1 and experimental ones.
On the same figure we also compare with the SLy5 parametrisation. See text for details.} \label{be} \end{figure} The main features we observe are the strong arch-like structures. This is the main drawback of a fitting protocol that uses a very limited number of nuclei. A better fitting protocol has been designed, for example, for the UNEDF functionals~\cite{kor10,kor12,kor13}, and we plan to use it for a systematic exploration of the parameter space of the higher order terms. In Fig.~\ref{charge}, we compare the proton radii. The data are taken from Ref.~\cite{ang04}. The new interaction is somewhat closer to the experimental data than the original SLy5, and the main trends are reproduced. One of the biggest discrepancies we observe in the data is related to the anomalous isotopic dependence of the proton radii of the calcium isotopes. With the current parametrisation, we have not been able to reproduce both $^{40}$Ca and $^{48}$Ca. A recent article~\cite{rei17} suggests that a different form of the pairing functional, based on the Fayans form~\cite{fay00}, may be the key to solving this anomaly, while the specific form of the functional used for the calculation of the central potential is not relevant. Since we did not fix any particular pairing functional in our fit, we plan to test the results of Ref.~\cite{rei17} with our new functional. Finally, we have explored the behaviour of the single-particle spectra. In Fig.~\ref{fig:sparticle40ca}, we compare the Hartree-Fock neutron single-particle states for $^{40}$Ca obtained using SLy5 and SN2LO1. The values are compared with the experimental ones extracted from Ref.~\cite{Sch07}. The HF states obtained with the two functionals are very close to each other. SN2LO1 shows a slight compression of the spectrum, but this is simply related to its slightly larger effective mass (see Tab.~\ref{tab:inm}). A similar behaviour is also observed in Fig.~\ref{fig:sparticle208pb} for the neutron single-particle states in $^{208}$Pb. As discussed in Sec.~\ref{sec:hfb}, the higher order gradient terms induce three extra spin-orbit fields, Eqs.~(\ref{eq:so1})-(\ref{eq:so4}). In principle, this should provide us with some extra flexibility compared to a standard Skyrme interaction. However, the major problem encountered in this first analysis is to find the right observables that would let us explore a new region of parameter space and increase the importance of these fields. We recall that we completely neglected tensor terms at the N2LO level, which would mean two extra tensor parameters~\cite{Dav15,Dav16AN}. These could also give extra freedom to correct some known anomalies in the shell evolution of some particular states~\cite{col07}. The exploration of this particular aspect is currently under investigation. \begin{figure}[!h] \begin{center} \includegraphics[width=0.4\textwidth,angle=-90]{isot_rad_param_2} \end{center} \caption{(Colors online) Systematic comparison of proton radii.
Experimental data are taken from Ref.~\cite{ang04}.} \label{charge} \end{figure} \begin{figure} \centering \begin{tikzpicture}[scale=0.5] \draw [->, yscale = 0.5] (0,-24) -- (0,0); \draw [very thin, color=gray, opacity = 0.4, yscale=0.5] (0,-24) grid[step=1] (14 ,0); \draw (-1.6,-6) node[above,rotate=90] {\Large [MeV]}; \foreach \y in {-24, -20,...,0} \draw(-0.3,\y*0.5)node[left]{\Large \y}; \draw[yscale = 0.5] ( 3,-25) node {\Large Exp}; \black\draw [very thick, yscale = 0.5] (2,-22.39) -- (4,-22.39); \blue\draw [very thick, yscale = 0.5] (2,-18.19) -- (4,-18.19) ; \red\draw [very thick, yscale = 0.5] (2,-15.64) -- (4,-15.64) ; \black\draw [very thick, yscale = 0.5] (2,-8.36) -- (4,-8.36); \blue\draw [very thick, yscale = 0.5] (2,-5.84) -- (4,-5.84) ; \red\draw [very thick, yscale = 0.5] (2,-4.20) -- (4,-4.20) ; \black\draw [very thick, yscale = 0.5] (2,-1.56) -- (4,-1.56) ; \draw[yscale = 0.5] ( 7,-25) node {\Large SLy5}; \black\draw [very thick, yscale = 0.5] (6,-22.10) -- (8,-22.10) ; \blue\draw [very thick, yscale = 0.5] (6,-17.26) -- (8,-17.26) ; \red\draw [very thick, yscale = 0.5] (6,-15.17) -- (8,-15.17) ; \black\draw [very thick, yscale = 0.5] (6,-9.69) -- (8,-9.69); \blue\draw [very thick, yscale = 0.5] (6,-5.28) -- (8,-5.28); \red\draw [ very thick, yscale = 0.5] (6,-3.11) -- (8,-3.11); \black\draw [very thick, yscale = 0.5] (6,-1.26) -- (8,-1.26); \draw[yscale = 0.5] ( 11,-25) node {\Large SN2LO1}; \black\draw [very thick, yscale = 0.5] (10,-22.02) -- (12,-22.02) node[right]{\large 1$d_{5/2}$}; \blue\draw [very thick, yscale = 0.5] (10,-17.30) -- (12,-17.30) node[right]{\large 2$s_{1/2}$}; \red\draw [very thick, yscale = 0.5] (10,-15.46) -- (12,-15.46) node[right]{\large 1$d_{3/2}$}; \black\draw [very thick, yscale = 0.5] (10,-9.47) -- (12,-9.47) node[right]{\large 1$f_{7/2}$}; \blue\draw [very thick, yscale = 0.5] (10,-5.19) -- (12,-5.19) node[right]{\large 2$p_{3/2}$}; \red\draw [very thick, yscale = 0.5] (10,-3.11) -- (12,-3.11) node[right]{\large 2$p_{1/2}$}; \black\draw [very thick, yscale = 0.5] (10,-1.45) -- (12,-1.45) node[right]{\large 1$f_{5/2}$}; \black\draw [dashed,thick, yscale = 0.5] (4,-22.39) -- (6,-22.10) ; \black\draw [dashed,thick, yscale = 0.5] (8,-22.10) -- (10,-22.02) ; \blue\draw [dashed,thick, yscale = 0.5] (4,-18.19)--(6,-17.26) ; \blue\draw [dashed,thick, yscale = 0.5] (8,-17.26) -- (10,-17.30) ; \red\draw [dashed,thick, yscale = 0.5] (4,-15.64)--(6,-15.17) ; \red\draw [dashed,thick, yscale = 0.5] (8,-15.17) -- (10,-15.46) ; \black\draw [dashed,thick, yscale = 0.5](4,-8.36)--(6,-9.69) ; \black\draw [dashed,thick, yscale = 0.5] (8,-9.69) -- (10,-9.47) ; \blue\draw [dashed,thick, yscale = 0.5](4,-5.84)--(6,-5.28) ; \blue\draw [dashed,thick, yscale = 0.5](8,-5.28) -- (10,-5.19) ; \red\draw [dashed,thick, yscale = 0.5](4,-4.20)--(6,-3.11) ; \red\draw [dashed,thick, yscale = 0.5](8,-3.11) -- (10,-3.11) ; \black\draw [dashed,thick, yscale = 0.5](4,-1.56)--(6,-1.26) ; \black\draw [dashed,thick, yscale = 0.5] (8,-1.26) --(10,-1.45) ; \end{tikzpicture} \caption{Neutron single-particle energies around the Fermi energy in the $^{40}$Ca for SLy5 and SN2LO1 parametrisations. The experimental values are taken from Ref.~\cite{Sch07}. 
See text for details.} \label{fig:sparticle40ca} \end{figure} \begin{figure} \centering \begin{tikzpicture}[scale=0.5] \draw [->] (0,-13) -- (0,0); \draw [very thin, color=gray, opacity = 0.4] (0,-13) grid[step=1] (14,0); \draw (-1.4,-7) node[above,rotate=90] { \Large [MeV]}; \foreach \y in { -12,-10,...,0} \draw(-0.3,\y)node[left]{\Large \y}; \draw ( 3,-13) node[below] {\Large Exp}; \black\draw [very thick] (2,-11.40) -- (4,-11.40); \blue\draw [very thick] (2,-9.81) -- (4,-9.81) ; \red\draw [very thick] (2,-9.24) -- (4,-9.24) ; \black\draw [very thick] (2,-8.26) -- (4,-8.26); \blue\draw [very thick] (2,-7.94) -- (4,-7.94) ; \red\draw [very thick] (2,-7.37) -- (4,-7.37) ; \black\draw [very thick] (2,-3.94) -- (4,-3.94) ; \blue\draw [very thick] (2,-3.16) -- (4,-3.16) ; \black\draw ( 7,-13) node[below] {\Large SLy5}; \black\draw [very thick] (6,-12.76) -- (8,-12.76) ; \blue\draw [very thick] (6,-12.09) -- (8,-12.09) ; \red\draw [very thick] (6,-9.40) -- (8,-9.40) ; \black\draw [very thick] (6,-9.25) -- (8,-9.25); \blue\draw [very thick] (6,-9.13) -- (8,-9.13); \red\draw [ very thick] (6,-8.15) -- (8,-8.15); \black\draw [very thick] (6,-3.2) -- (8,-3.2); \blue\draw [very thick] (6,-1.91) -- (8,-1.91); \black\draw ( 11,-13) node[below] {\Large SN2LO1}; \black\draw [very thick] (10,-12.69) -- (12,-12.69) node[right]{1$h_{9/2}$}; \blue\draw [very thick] (10,-12.11) -- (12,-12.11) node[right]{2$f_{7/2}$}; \red\draw [very thick] (10,-9.47) -- (12,-9.47) node[below right]{1$i_{13/2}$}; \black\draw [very thick] (10,-9.37) -- (12,-9.37) node[right]{3$p_{3/2}$}; \blue\draw [very thick] (10,-9.27) -- (12,-9.27) node[above right]{2$f_{5/2}$}; \red\draw [very thick] (10,-8.31) -- (12,-8.31) node[above right]{3$p_{1/2}$}; \black\draw [very thick] (10,-3.35) -- (12,-3.35) node[right]{2$g_{9/2}$}; \blue\draw [very thick] (10,-2.03) -- (12,-2.03) node[right]{1$i_{11/2}$}; \black\draw [dashed,thick] (4,-11.40) -- (6,-12.76) ; \black\draw [dashed,thick] (8,-12.76) -- (10,-12.69) ; \blue\draw [dashed,thick] (4,-9.81)--(6,-12.09) ; \blue\draw [dashed,thick] (8,-12.09) -- (10,-12.11) ; \red\draw [dashed,thick] (4,-9.24)--(6,-9.40) ; \red\draw [dashed,thick] (8,-9.40) -- (10,-9.47) ; \black\draw [dashed,thick](4,-8.26)--(6,-9.25) ; \black\draw [dashed,thick] (8,-9.25) -- (10,-9.37) ; \blue\draw [dashed,thick](4,-7.94)--(6,-9.13) ; \blue\draw [dashed,thick](8,-9.13) -- (10,-9.27) ; \red\draw [dashed,thick](4,-7.37)--(6,-8.15) ; \red\draw [dashed,thick](8,-8.15) -- (10,-8.31) ; \black\draw [dashed,thick](4,-3.94)--(6,-3.2) ; \black\draw [dashed,thick] (8,-3.2) --(10,-3.35) ; \blue\draw [dashed,thick](4,-3.16)--(6,-1.91) ; \blue\draw [dashed,thick] (8,-1.91) --(10,-2.03) ; \end{tikzpicture} \caption{Same as Fig.~\ref{fig:sparticle40ca}, but for $^{208}$Pb. } \label{fig:sparticle208pb} \end{figure} \section{Conclusions}\label{sec:conclusions} In the present article, we have discussed the formalism to include fourth-order gradient terms of the N2LO Skyrme interaction. We have derived the functional, the complete expression of the densities in the case of spherical symmetry and the corresponding HF equation. The resulting 4-th order differential equation has been solved with a new numerical code named \verb|WHISKY|. This code has been tested against two different HFB solvers to check numerical accuracy of the new solver. Thanks to this new code, we have been able to perform for the very first time a complete fit of a stable N2LO Skyrme interaction including finite-nuclei. 
This achievement has been made possible by the use of the Linear Response (LR) formalism as a tool to prevent unphysical instabilities. We have thus been able to show that it is possible to go \emph{beyond} the standard Skyrme interaction by including physically motivated terms. Thanks to the work on the foundations of various non-relativistic effective interactions~\cite{Dav16AN}, we have been able to clarify the inner nature of the higher order gradient terms in the extended N$\ell$LO Skyrme pseudo-potential. With the LR formalism, we have also been able to address the long-standing problem of finite-size instabilities in effective functionals. Such instabilities seem to appear in various functionals, not only the Skyrme-like ones~\cite{dep16}. The LR formalism thus represents a simple tool that should be included in all modern fitting protocols to avoid the appearance of unphysical results. Combining all the previous results, we have derived the complete set of parameters of the N2LO pseudo-potential, named SN2LO1 in this paper. We have compared its performance on both infinite nuclear matter (pseudo)-observables and ground-state properties of some selected nuclei. The global performance is of the same quality as that of the standard SLy5. However, it is very important to underline that, since SN2LO1 has four additional parameters compared to SLy5, we have imposed extra stability constraints on our functional: SLy5 has a finite-size instability in the spin channel and thus cannot be used to perform calculations where the time-odd channel is open. To the best of our knowledge, SN2LO1 is free from such pathologies and can be safely used in various numerical codes. Finally, we stress that the higher order terms introduce several new features, for example three new spin-orbit fields, that have not been completely investigated in this article and may give rise to new properties of the functional: N2LO clearly offers some new degrees of freedom and goes beyond N1LO. \section*{Acknowledgments} We are grateful to W. Ryssens for providing us with the \verb|MOCCA| results, and to K. Bennaceur for providing us with the \verb|LENTEUR| code and for fruitful discussions. We also acknowledge interesting discussions with M. Bender. The work of J.N. has been supported by grant FIS2014-51948-C2-1-P, Mineco (Spain). \begin{appendix} \section{Coupling constants}\label{app:cc} In this section we give the explicit expressions of the new coupling constants of the N2LO functional in terms of the Skyrme parameters. The expressions of the coupling constants for the standard Skyrme functional can be found in Ref.~\cite{les07}.
\begin{eqnarray} \label{eq:taxa} C_0^{(\Delta \rho)^2} & = & \tfrac{1}{128} \, \left[ 9 t_1^{(4)} - t_2^{(4)} \left( 5 + 4 x_2^{(4)} \right) \right] \quad \\ C_1^{( \Delta \rho)^2} & = & - \tfrac{1}{128} \, \left[ 3 t_1^{(4)} \left( 1 + 2 x_1^{(4)} \right) + t_2^{(4)} \left( 1 + 2 x_2^{(4)} \right) \right] \quad \\ C_0^{ M \rho} & = & \tfrac{1}{32} \, \left[ 3 t_1^{(4)} + t_2^{(4)} \left( 5 + 4 x_2^{(4)} \right) \right] \quad \\ C_1^{ M \rho} & = & - \tfrac{1}{32} \, \left[ t_1^{(4)} \left( 1 + 2 x_1^{(4)} \right) - t_2^{(4)} \left( 1 + 2 x_2^{(4)} \right) \right] \quad \\ C_0^{ M s} & = & - \tfrac{1}{32} \, \left[ t_1^{(4)} \left( 1 - 2 x_1^{(4)} \right) - t_2^{(4)} \left( 1 + 2 x_2^{(4)} \right) \right] \quad \\ C_1^{ M s} & = & - \tfrac{1}{32} \, \left[ t_1^{(4)} - t_2^{(4)} \right] \quad \end{eqnarray} \section{Densities in Cartesian representation}\label{app:dens} We define the density matrix in coordinate space as in~\cite{rin80} \begin{eqnarray} \rho_q(\mathbf{r}\sigma,\mathbf{r}'\sigma')=\frac{1}{2}\rho_q(\mathbf{r},\mathbf{r}')\delta_{\sigma\sigma'}+\frac{1}{2}\mathbf{s}_q(\mathbf{r},\mathbf{r}')\langle \sigma'| \hat{\sigma}|\sigma\rangle \;, \end{eqnarray} where \begin{eqnarray} \rho_q(\mathbf{r},\mathbf{r}')&=&\sum_{\sigma}\rho_q(\mathbf{r}\sigma,\mathbf{r}'\sigma')\\ \mathbf{s}_q(\mathbf{r},\mathbf{r}')&=&\sum_{\sigma\sigma'}\rho_q(\mathbf{r}\sigma,\mathbf{r}'\sigma')\langle \sigma'| \hat{\sigma}|\sigma\rangle. \end{eqnarray} The Skyrme energy density functional up to 2nd order is composed of seven local densities, whose explicit expressions can be found, for example, in Ref.~\cite{les07}. The extension to fourth order requires the definition of six additional local densities \begin{eqnarray} \tau_{\mu\nu, q}(\mathbf{r})&=&\left. \nabla_\mu\nabla_\nu'\rho_q(\mathbf{r},\mathbf{r}') \right|_{\mathbf{r}=\mathbf{r}'}\\ K_{\mu\nu\kappa, q}(\mathbf{r})&=&\left. \nabla_\mu\nabla_\nu's_{\kappa q}(\mathbf{r},\mathbf{r}') \right|_{\mathbf{r}=\mathbf{r}'}\\ \Pi_{\mu, q}(\mathbf{r})&=&\left. \nabla \cdot \nabla' j_{\mu, q}(\mathbf{r},\mathbf{r}') \right|_{\mathbf{r}=\mathbf{r}'}\\ V_{\mu \nu, q}(\mathbf{r})&=&\left. \nabla \cdot \nabla' J_{\mu\nu, q}(\mathbf{r},\mathbf{r}') \right|_{\mathbf{r}=\mathbf{r}'}\\ Q_q(\mathbf{r})&=&\left. \Delta \Delta' \rho_q(\mathbf{r},\mathbf{r}') \right|_{\mathbf{r}=\mathbf{r}'}\\ S_{\mu, q}(\mathbf{r})&=&\left. \Delta \Delta' s_{\mu, q}(\mathbf{r},\mathbf{r}') \right|_{\mathbf{r}=\mathbf{r}'} \end{eqnarray} Similarly to the spin-current pseudo-tensor $J_{\mu\nu, q}(\mathbf{r})$, the density $\tau_{\mu\nu, q}(\mathbf{r})$ can be decomposed into pseudo-scalar, vector and traceless pseudo-tensor terms. For more details we refer to Ref.~\cite{bec14}. \end{appendix}
\section{Introduction} Determining semantic textual similarity (STS) is one of the most critical tasks in information retrieval and natural language processing. Vector-based sentence representation models have been widely used to compare and rank words, phrases or sentences using various similarity and relatedness scores~\cite{Find-similar,mitchell2010composition,DBLP:conf/icml/LeM14}. Recently, neural network-based sentence representation models~\cite{MuellerAAAI2016, hill-cho-korhonen:2016:N16-1} have been proposed for learning textual similarity. However, these vector-based models often use shallow information, such as words and characters, and whether they can account for phenomena such as negation and quantification is not clear. Consider the sentences: \textit{Tom did not meet some of the players} and \textit{Tom did not meet any of the players}. If functional words such as \textit{some} or \textit{any} are ignored or represented as the same vector, then these sentences will be represented by identical vectors. However, the first sentence implies that there is a player whom Tom did not meet, whereas the second sentence means that Tom did not meet anyone, so the sentences have different meanings. Conversely, logic-based approaches have been successful in representing the meanings of complex sentences, with a positive impact on applications such as recognizing textual entailment~\cite{D16-1242, mineshima2016building, abzianidze:2015:EMNLP, abzianidze:2016:*SEM}. However, purely logic-based approaches only assess entailment or contradiction relations between sentences and do not offer graded notions of semantic similarity. In this paper, we propose to leverage logic cues to learn textual similarity. Our hypothesis is that \emph{observing proof processes when testing the semantic relations is predictive of textual similarity}. We show that our approach can be more effective than systems that ignore these logic cues. \section{Related Work} Vector-based models of semantic composition have been widely studied with regard to calculating STS. \citet{mitchell-lapata:2008:ACLMain,mitchell2010composition} proposed a sentence vector model involving word vector addition or component-wise multiplication. Addition and multiplication are commutative and associative and thus ignore word order. \citet{polajnar-rimell-clark:2015:LSDSem} proposed a discourse-based sentence vector model considering extra- and intra-sentential context. Also, a categorical compositional distributional semantic model has been developed for recognizing textual entailment and for calculating STS~\cite{grefenstette-sadrzadeh:2011:EMNLP, kartsaklis-kalchbrenner-sadrzadeh:2014:P14-2, kartsaklis-sadrzadeh:2016:COLING}. However, these previous studies are mostly concerned with the structures of basic phrases or sentences and do not address logical and functional words such as negations and connectives. Neural network-based models of semantic composition~\cite{MuellerAAAI2016, hill-cho-korhonen:2016:N16-1} have also been proposed. Although these models achieve higher accuracy, their end-to-end nature introduces challenges in diagnosing the reasons that make two sentences similar or dissimilar to each other. These diagnostic capabilities may play an important role in making the system explainable and in guiding future system improvements in a more precise manner. The approach presented in this paper is partially inspired by these latter two objectives.
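As a concrete illustration of the \textit{some}/\textit{any} example from the introduction, the following toy script (ours, using random vectors; not taken from any of the cited models) shows that additive composition assigns identical representations to the two sentences once the functional words share a vector:
\begin{verbatim}
import numpy as np

# Toy sketch: with additive composition, sharing one vector for the
# functional words "some" and "any" makes the two example sentences
# indistinguishable, although their meanings differ.
rng = np.random.default_rng(0)
words = ["tom", "did", "not", "meet", "of", "the", "players"]
vocab = {w: rng.standard_normal(4) for w in words}
vocab["some"] = vocab["any"] = rng.standard_normal(4)  # same representation

def additive(sentence):
    return np.sum([vocab[w] for w in sentence.lower().split()], axis=0)

s1 = additive("Tom did not meet some of the players")
s2 = additive("Tom did not meet any of the players")
print(np.allclose(s1, s2))  # True: identical vectors, different meanings
\end{verbatim}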
Meanwhile, some previous studies have proposed logic systems for capturing the semantic relatedness of sentences. The Meaning Factory~\cite{bjerva:semeval14} uses both shallow and logic-based features for learning textual similarity. In this system, the overlap of predicates and entailment judgments are extracted as logic-based features. UTexas~\cite{beltagy:semeval14} uses Probabilistic Soft Logic for learning textual similarity. In this system, each ground atom in the logical formulas has a probability based on distributional semantics of a word. The weights of the logical formulas are calculated from the probabilities of their ground atoms and are extracted as features. These previous studies improved the accuracy by using logic-based features derived from the entailment results of first-order theorem proving in addition to using shallow features such as sentence lengths. In our study, we determine the semantic similarity of sentences based on the conception of proof-theoretic semantics~\cite{BekkiMineshima2016Luo}. The key idea is that not only the entailment results but also the \emph{theorem proving process} can be considered as features for learning textual similarity. That is, by taking into account not only whether a theorem is proved but also \textit{how} it is proved, we can capture the semantic relationships between sentence pairs in more depth. Another difference between our study and previous logic systems is that we use higher-order predicate logic. Higher-order predicate logic is able to represent complex sentence semantics such as generalized quantifiers more precisely than first-order predicate logic. In addition, higher-order predicate logic makes the logical structure of a sentence more explicit than first-order predicate logic does, so it can simplify the process of proof search~\cite{miller-nadathur:1986:ACL}. \section{System Overview} Figure 1 shows an overview of the system which extracts features for learning textual similarity from logical proofs. To produce semantic representations of sentences and prove them automatically, we use ccg2lambda~\cite{martinezgomez-EtAl:2016:P16-4}, which is a semantic parser combined with an inference system based on natural deduction. First, sentences are parsed into syntactic trees based on Combinatory Categorial Grammar (CCG)~\cite{Steedman00}. CCG is a syntactic theory suitable for semantic composition from syntactic structures. Meaning representations are obtained based on semantic templates and combinatory rules for the CCG trees. Semantic templates are defined manually based on formal semantics. Combinatory rules specify the syntactic behaviors of words and compositional rules for the CCG trees. In ccg2lambda, two wide-coverage CCG parsers, C\&C~\cite{clark2007wide} and EasyCCG~\cite{Lewis14a*ccg}, are used for converting tokenized sentences into CCG trees robustly. According to a previous study \cite{EACL2017}, EasyCCG achieves higher accuracy. Thus, when the output of both C\&C and EasyCCG can be proved, we use EasyCCG's output for creating features. Second, the meanings of words are described using lambda terms. Semantic representations are obtained by combining lambda terms in accordance with the meaning composition rules specified in the CCG tree. The semantic representations are based on Neo-Davidsonian event semantics~\cite{Parsons90,D16-1242}, in which every verb is decomposed into a predicate over events and a set of functional expressions relating the events. 
Adverbs and prepositions are also represented as predicates over events. \begin{figure} \centerline{\includegraphics[bb=0.000000 0.000000 822.000000 274.000000,width=1.0\hsize]{fig1.pdf}} \caption{System overview.} \label{Figure 1} \end{figure} Third, we attempt to prove entailment relations between sentence pairs. For this purpose, we use Coq~\cite{opac-b1101046}, which can be used for efficient theorem-proving for natural language inference using both first-order and higher-order logic~\cite{D16-1242}. Coq's proof calculus is based on natural deduction~\cite{prawitz1965natural}, a proof system based on inference rules called introduction and elimination rules for logical connectives. The inference system implemented in ccg2lambda using Coq achieves efficient automatic inference by feeding a set of predefined tactics and user-defined proof-search tactics to its interactive mode. The natural deduction system is particularly suitable for injecting external axioms during the theorem-proving process~\cite{EACL2017}. Finally, features for learning textual similarity are extracted from the proofs produced by ccg2lambda during the theorem-proving process. In this study, we experimented with logistic regression, support vector regression and random forest regression, finding that random forest regression was the most effective. We therefore chose random forest regression for learning textual similarity, with its hyperparameters being optimized by grid search. The mean squared error (MSE) was used to measure the prediction performance of our system. \section{Proof Strategy for Learning Textual Similarity} \subsection{Overview of the proof strategy} Sentence similarity depends on complex elements, such as word overlaps and semantic relations. We capture the similarity between the sentence pair $(A, B)$ as a function of the provability of bidirectional entailment relations for $(A, B)$ and combine it with shallow features. After obtaining logical formulas $A'$ and $B'$ from $A$ and $B$, we attempt to prove the bidirectional entailment relations, $A' \Rightarrow B' $ and $B' \Rightarrow A'$. If the initial natural deduction proofs fail, we re-run the proof, adding relevant external axioms or skipping unproved sub-goals until the proof is completed. After that, features for learning textual similarity are extracted by quantifying the provability of the bidirectional entailment relations. The details of the procedure are as follows. First, we attempt a natural deduction proof without using external axioms, aiming to prove entailment relations, $A' \Rightarrow B'$ and $B' \Rightarrow A'$. If both fail, then we check whether $A'$ contradicts $B'$, which amounts to proving the negation of the original conclusion, namely $A' \Rightarrow \neg B'$ and $B' \Rightarrow \neg A'$. The similarity of a sentence pair tends to be higher when the negation of the conclusion can be proved, compared with the case where neither the conclusion nor its negation can be proved. In the SICK (Sentences Involving Compositional Knowledge) dataset~\cite{MARELLI14.363} (see Section 6.1 for details), 70\% of the sentence pairs annotated as contradictory are assigned a relatedness score in [$3, 5$). Next, if we fail to prove entailment or contradiction, that is, we cannot prove the conclusion or its negation, we identify an unproved sub-goal which is not matched by any predicate in the premise. 
We then attempt to prove $A' \Rightarrow B'$ and $B' \Rightarrow A'$ using axiom injection, following the method introduced in \citet{EACL2017}. In axiom injection, unproved sub-goals are candidates to form axioms. We focus only on predicates that share at least one argument with both the premise and the conclusion. This means that an axiom can be generated only if there is a predicate $p$ in the pool of premises and a predicate $q$ in a sub-goal and $p$ and $q$ share a variable in an argument position, possibly with the same case (e.g., Subject or Object). In generating axioms, the semantic relationships between the predicates in the premise and those in the conclusion are checked using lexical knowledge. In this study, we use WordNet~\cite{Miller:1995:WLD:219717.219748} as the source of lexical knowledge. Linguistic relations between predicates are checked in the following order: inflections, derivationally related forms, synonyms, antonyms, hypernyms, similarities, and hyponyms. If any one of these relations is found in the lexical knowledge, an axiom can be generated. Again, if the proof fails, we attempt to prove the negation of the conclusion using the axiom injection mechanism. If the proof by axiom injection fails because of a lack of lexical knowledge, we obtain sentence similarity information from partial proofs by simply accepting the unproved sub-goals and forcibly completing the proof. After the proof is completed, information about the generated axioms and skipped sub-goals is used to create features. \begin{figure}[t] \scriptsize \centering \InferenceRule{G: \mathcal{A} \wedge \mathcal{B}}{$\wedge$-Intro}{ \SeqFormulas{G_1: \mathcal{A}}{G_2: \mathcal{B}}} \hspace{4em} \InferenceRule{P: \mathcal{A}_1 \wedge \mathcal{A}_2 \wedge \cdots \wedge \mathcal{A}_n}{$\wedge$-Elim}{P_1: \mathcal{A}_1, \, P_2: \mathcal{A}_2, \, \ldots, P_n : \mathcal{A}_n} \medskip \InferenceRule{G: \mathcal{A} \to \mathcal{B}}{$\to$-Intro}{\SeqFormulas{P: \mathcal{A}}{G: \mathcal{B}}} \hspace{3em} \InferenceRule{\SeqFormulas{P_1: \mathcal{A} \to \mathcal{B}}{P_2: \mathcal{A}}}{$\to$-Elim}{P: \mathcal{B}} \medskip \InferenceRule{G: \exists x \mathcal{A}(x)}{$\exists$-Intro}{G_1: \mathcal{A}(x)} \hspace{3em} \InferenceRule{P: \exists x \mathcal{A}(x)}{$\exists$-Elim}{P_1: \mathcal{A}(x)} \hspace{3em} \InferenceRule{\SeqFormulas{P_1: \mathcal{A}(t)}{P_2: t = u}}{$=$-Elim}{P: \mathcal{A}(u)} \caption{Example of the inference rules used in natural deduction. $P, P_1, \ldots P_n$ are formulas in the premise, while $G, G_1, G_2$ are formulas in the goal. The initial formulas are at the top, with the formulas obtained by applying the inference rules shown below.} \label{InferenceRules} \end{figure} \subsection{Proving entailment relations} As an illustration of how our natural deduction proof works, consider the case of proving entailment for the following sentence pair: \hspace{2em} $A$: A man is singing in a bar. \hspace{2em} $B$: A man is singing. The sentences $A$ and $B$ are mapped onto logical formulas $A'$ and $B'$ based on event semantics via CCG-based semantic composition, as follows. 
\begin{align*} & \scalebox{0.9}{$A': \exists e_1 x_1 x_2(\LF{man}(x_1) \wedge \LF{sing}(e_1) \wedge (\LF{subj}(e_1) = x_1)$} \\ & \qquad \scalebox{0.9}{$\wedge \ \LF{bar}(x_2) \wedge \LF{in}(e_1, x_2))$} \\ & \scalebox{0.9}{$B': \exists e_1 x_1(\LF{man}(x_1) \wedge \LF{sing}(e_1) \wedge (\LF{subj}(e_1) = x_1))$} \end{align*}

First, we attempt a natural deduction proof of $A' \Rightarrow B'$, setting $A'$ as the premise and $B'$ as the goal of the proof. Then $A'$ and $B'$ are decomposed according to the inference rules. Figure \ref{InferenceRules} shows the major inference rules we use in the proofs. Inference rules in natural deduction are divided into two types: introduction rules and elimination rules. Introduction rules specify how to prove a formula in the goal, decomposing a goal formula into smaller sub-goals. Elimination rules specify how to use a premise, decomposing a formula in the pool of premises into smaller ones.

\begin{figure}[t] \footnotesize \InferenceRuleThree{ \SeqFormulasThree{P_0: \ \exists e_1 x_1 x_2 (\LF{man}(x_1) \wedge \LF{sing}(e_1) \wedge (\LF{subj}(e_1) = x_1)}{ \hspace{3em} \wedge \ \LF{bar}(x_2) \wedge \LF{in}(e_1, x_2))}{ G_0: \exists e_1 x_1(\LF{man}(x_1) \wedge \LF{sing}(e_1) \wedge (\LF{subj}(e_1) = x_1))}}{ $\exists$-Elim ($P_0$), $\exists$-Intro ($G_0$)}{ \SeqFormulasThree{P_1: \ \LF{man}(x_1) \wedge \LF{sing}(e_1) \wedge (\LF{subj}(e_1) = x_1)}{ \hspace{3em} \wedge \ \LF{bar}(x_2) \wedge \LF{in}(e_1, x_2)}{ G_1: \LF{man}(x_1) \wedge \LF{sing}(e_1) \wedge (\LF{subj}(e_1) = x_1)}}{ $\wedge$-Elim ($P_1$), $\wedge$-Intro ($G_1$)}{ \SeqFormulasThree{P_2: \ \LF{man}(x_1), \, P_3: \LF{sing}(e_1), \, P_4: \LF{subj}(e_1) = x_1,}{ \hspace{3em} P_5: \LF{bar}(x_2), \, P_6: \LF{in}(e_1, x_2)}{ G_2: \LF{man}(x_1), \, G_3: \LF{sing}(e_1), \, G_4: \LF{subj}(e_1) = x_1}} \caption{The proof process for the example entailment relation.} \label{ProcessEntail} \end{figure}

The proof process for $A' \Rightarrow B'$ is shown in Figure \ref{ProcessEntail}. Here $A'$ is initially set to the premise $P_0$ and $B'$ to the goal $G_0$. $P_0$ and $G_0$ are then decomposed using elimination rules ({\small$\wedge$-\textsc{Elim}, $\exists$-\textsc{Elim}}) and introduction rules ({\small$\wedge$-\textsc{Intro}, $\exists$-\textsc{Intro}}). Then we obtain a set of premise formulas $\mathcal{P} = \setof{P_2, P_3, P_4, P_5, P_6}$, and a set of sub-goals $\mathcal{G} = \setof{G_2, G_3, G_4}$. The proof is performed by searching for a premise $P_i$ whose predicate and arguments match those of a given sub-goal $G_j$. If such a premise is found, the sub-goal is removed. In this example, the sub-goals $G_2$, $G_3$, and $G_4$ match the premises $P_2$, $P_3$, and $P_4$, respectively. Thus, $A' \Rightarrow B'$ can be proved without introducing axioms.

Second, we attempt the proof in the opposite direction, $B' \Rightarrow A'$, by switching $P_0$ and $G_0$ in Figure \ref{ProcessEntail}. Again, by applying inference rules, we obtain the following sets of premises $\mathcal{P}$ and sub-goals $\mathcal{G}$:

\smallskip \small \begin{tabular}{l} $\mathcal{P} = \{P_2: \LF{man}(x_1), \, P_3: \LF{sing}(e_1),$ \\ \hspace{2.4em} $P_4: \LF{subj}(e_1) = x_1\}$ \\ $\mathcal{G} = \{G_2: \LF{man}(x_1),\, G_3: \LF{sing}(e_1),$ \\ \hspace{2.4em} $G_4: \LF{subj}(e_1) = x_1,$ \\ \hspace{2.4em} $G_5: \LF{bar}(x_2), G_6: \LF{in}(e_1, x_2)\}$ \\ \end{tabular} \normalsize

\noindent Here, the two sub-goals $G_5$ and $G_6$ do not match any of the premises, so the attempted proof of $B' \Rightarrow A'$ fails.
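The matching step that decides success or failure here can be reproduced directly (a sketch under the simplifying assumption that premises and sub-goals are ground atoms, with the equality $\LF{subj}(e_1) = x_1$ flattened to an atom):

\begin{verbatim}
# Premise/sub-goal matching for the worked example.  Atoms are
# (predicate, arguments) pairs; equalities are flattened here.
A_atoms = {("man", ("x1",)), ("sing", ("e1",)),
           ("subj", ("e1", "x1")),
           ("bar", ("x2",)), ("in", ("e1", "x2"))}
B_atoms = {("man", ("x1",)), ("sing", ("e1",)),
           ("subj", ("e1", "x1"))}

def unproved_subgoals(premises, goals):
    # A sub-goal is proved when some premise has the same
    # predicate and the same arguments.
    return goals - premises

print(unproved_subgoals(A_atoms, B_atoms))  # set(): A' => B' holds
print(unproved_subgoals(B_atoms, A_atoms))  # bar(x2), in(e1,x2) remain
\end{verbatim}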
We therefore attempt to inject additional axioms, but in this case no predicate in $\mathcal{P}$ shares the argument $x_2$ of the predicates $\LF{bar}(x_2)$ and $\LF{in}(e_1,x_2)$ in $\mathcal{G}$. Thus, no axiom can be generated. To obtain information from a partial proof, we forcibly complete the proof of $B' \Rightarrow A'$ by skipping the unproved sub-goals $\LF{bar}(x_2)$ and $\LF{in}(e_1,x_2)$.

\subsection{Proving the contradiction}
The proof strategy illustrated here can be straightforwardly applied to proving the contradiction. In natural deduction, a negative formula of the form $\neg A$ can be defined as $A \to \LF{False}$ (``the formula $A$ implies the contradiction''), by using a propositional constant \LF{False} to encode the contradiction. Thus, the inference rules for negation can be taken as special cases of implication rules, as shown in Figure~\ref{NegationRule}. As an illustration, let us consider the following sentence pair:

\hspace{2em} $A$: No man is singing.

\hspace{2em} $B$: There is a man singing loudly.

\noindent Figure \ref{ProveContra} shows the proof process. The sentences $A$ and $B$ are mapped to $P_0$ and $P_1$, respectively, via compositional semantics, and the goal $G_0$ is set to \LF{False}. By decomposing $P_1$ using elimination rules and then combining $P_2, P_3$, and $P_4$, we can obtain $P_6$. From $P_0$ and $P_6$ we can then derive the contradiction. These proofs are performed by an automated prover implemented in Coq, using tactics for first-order theorem proving. When a proof is successful, Coq outputs the resulting proof (a proof term), from which we can extract detailed information such as the number of proof steps and the types of inference rules used. In addition to the entailment/contradiction result, information about the proof process is used to create features.

\section{Description of the Features}
To maximize accuracy when learning textual similarity, we adopt a hybrid approach that uses both logic-based features extracted from the natural deduction proof and other, non-logic-based features. All features are scaled to the [$0, 1$] range.

\subsection{Logic-based Features}
We propose 15 features drawn from nine different types of logic-based features. Six of these feature types are computed from the bidirectional natural deduction proofs, yielding six features from the direct proof ($A' \Rightarrow B'$) and another six from the reverse proof ($B' \Rightarrow A'$). The remaining three feature types are derived from the semantic representations of the sentence pairs and yield one feature each. The feature types are as follows.
\begin{figure}[!t] \centering \footnotesize \InferenceRule{G: \neg \mathcal{A}}{$\neg$-Intro}{\SeqFormulas{P: \mathcal{A}}{G: \LF{False}}} \hspace{2em} \InferenceRule{\SeqFormulas{P_1: \neg \mathcal{A}}{P_2: \mathcal{A}}}{$\neg$-Elim}{P: \LF{False}} \caption{Inference rules of negation.} \label{NegationRule} \end{figure}

\begin{figure}[t] \footnotesize \InferenceRuleThree{\SeqFormulasFour{ P_0: \ \neg \exists e_1 \exists x_1(\LF{man}(x_1) \wedge \LF{sing}(e_1) \wedge (\LF{subj}(e_1) = x_1))}{ P_1: \ \exists e_1 \exists x_1(\LF{man}(x_1) \wedge \LF{sing}(e_1) \wedge (\LF{subj}(e_1) = x_1)}{ \hspace{3em} \wedge \ \LF{loudly}(e_1))}{ G_0: \LF{False}}}{ $\exists$-Elim, $\wedge$-Elim ($P_2$) }{ \SeqFormulas{ P_2: \ \LF{man}(x_1), \, P_3: \LF{sing}(e_1), \, P_4: \LF{subj}(e_1) = x_1,}{ P_5: \LF{loudly}(e_1)}}{ $\exists$-Intro, $\wedge$-Intro ($P_2$)}{ P_6: \ \exists e_1 \exists x_1 (\LF{man}(x_1) \wedge \LF{sing}(e_1) \wedge (\LF{subj}(e_1) = x_1))} \caption{\label{ProveContra}Proof process for the contradiction example.} \vspace{-0.3cm} \end{figure}

\noindent{\bf Logical inference result.} As stated in Section 4, we include features to distinguish the case where either the conclusion or its negation can be proved from the one where neither can be proved. If the conclusion can be proved, the feature is set to 1.0. If the negation of the conclusion can be proved, the feature is set to 0.5. If neither can be proved, the feature is set to 0.0.

\noindent{\bf Axiom probabilities.} The probability of an axiom and the number of axioms appearing in the proof are used to create features. The probability of an axiom is defined as the inverse of the length of the shortest path that connects the senses in the is-a (hypernym/hyponym) taxonomy in WordNet. When multiple axioms are used in the proof, the average of the probabilities of the axioms is extracted as a feature. If the proof can be completed without using axioms, the feature is set to 1.0.

\noindent{\bf Proved sub-goals.} Given that proofs can be obtained either by proving all the sub-goals or by skipping unproved sub-goals, we use the proportion of proved sub-goals as a feature. Our assumption is that if there are more unproved sub-goals, then the sentence pair is less similar. When there are $m$ sub-goals and $n$ of them are proved, we set the feature to $n/m$. If the theorem can be proved without skipping any sub-goals, the feature is set to 1.0. It may be the case that the number of sub-goals is so large that some sub-goals remain unproved even after axiom injection. Since the proportion of unproved sub-goals is decreased by axiom injection, we use the proportion of unproved sub-goals both with and without axiom injection as features.

\noindent{\bf Cases in unproved sub-goals.} Subject or object words can affect the similarity of sentence pairs. Therefore, the number of occurrences of each case in unproved sub-goals, like $\LF{subj}(e_1)$ in Figures \ref{ProcessEntail} and \ref{ProveContra}, is used as a feature. Here, we count subjective, objective, and dative cases.

\noindent{\bf Proof steps.} In general, complex theorems are difficult to prove, and in such cases the sentence pairs are considered to be less similar. We therefore use the number of Coq's proof steps, namely the number of inference rule applications in a given proof, as a feature.

\noindent{\bf Inference rules.} The complexity of a natural deduction proof can be measured in terms of the inference rules used for each proof step.
We therefore extract the relative frequency with which each inference rule is used in the proof as a feature. We check seven inference rules for natural deduction using Coq (cf. Figure \ref{InferenceRules}): introduction and elimination rules for conjunction ({\small$\wedge$-\textsc{Intro}, $\wedge$-\textsc{Elim}}), implication ({\small$\to$-\textsc{Intro}, $\to$-\textsc{Elim}}), and existential quantification ({\small$\exists$-\textsc{Intro}, $\exists$-\textsc{Elim}}), and the elimination rule for equality ({\small$=$-\textsc{Elim}}).

\noindent{\bf Predicate overlap.} Intuitively, the more predicates overlap between the premise and the conclusion, the more likely it is that the inference can be proved. We therefore use the proportion of predicates that overlap between the premise and the conclusion as a feature.

\noindent{\bf Semantic type overlap.} Each semantic representation in higher-order logic has a semantic type, such as \textsf{Entity} for entities and \textsf{Prop} for propositions. As with predicates, we use the degree of semantic type overlap between the premise and the conclusion as a feature.

\noindent{\bf Existence of negative clauses.} Whether or not the premise or the conclusion contains negative clauses is an effective measure of similarity. In semantic representations, negative clauses are represented by the negation operator $\neg$, so we check for negation operators in the premise and the conclusion and set this feature to 1.0 if either contains one.

\begin{table*}[!t] \begin{center} \scalebox{0.97}{ \begin{tabular}{ccccc} \hline ID & Sentence1 & Sentence2 & Entailment & Score\\ \hline \hline 23 & There is no biker jumping in the air. & A lone biker is jumping in the air. & \textit{no} & 4.2 \\ \hline 1412 & Men are sawing logs. & Men are cutting wood. & \textit{yes} & 4.5 \\ \hline 9963 & The animal is grazing on the grass. & The cop is sitting on a police bike. & \textit{unknown} & 1 \\ \hline \end{tabular} } \vspace{-0.3cm} \caption{ \label{tab:examples} Examples in the SICK dataset with different entailment labels and similarity scores.} \vspace{-0.5cm} \end{center} \end{table*}

\subsection{Non-logic-based Features}
We also use the following eight non-logic-based features.

\noindent{\bf Noun/verb overlap.} We extract and lemmatize all nouns and verbs from the sentence pairs and use the degrees of overlap of the noun and verb lemmas as features.

\noindent{\bf Part-of-speech overlap.} We obtain part-of-speech (POS) tags for all words in the sentence pairs by first tokenizing them with the Penn Treebank Project tokenizer\footnote{ftp://ftp.cis.upenn.edu/pub/treebank/public\_html/\\ tokenization.html} and then POS tagging them with the C\&C POS tagger~\cite{curran2003investigating}. The degree of overlap between the sentences' POS tags is used as a feature.

\noindent{\bf Synset overlap.} For each sentence in the pair, we obtain the set containing all the synonym lemmas (the synset) for the words in the sentence. The degree of overlap between the sentences' synsets is used as a feature.

\noindent{\bf Synset distance.} For each word in the first sentence, we compute the maximum path similarity between its synset and the synset of any word in the second sentence. Then, we use the average of these maximum path similarities as a feature.

\noindent{\bf Sentence length.} If the conclusion sentence is long, there will possibly be many sub-goals in the proof.
We therefore use the average of the sentence lengths and the difference in length between the premise and the conclusion sentences as features.

\noindent{\bf String similarity.} We use the similarity of the sequences of characters within the sentence pairs as a feature. The Python {\sl Difflib}\footnote{https://docs.python.org/3.5/library/difflib.html} function returns the similarity between two sequences as a floating-point value in [$0, 1$]. This measure is given by $2M/T$, where $T$ is the total number of elements in both sequences and $M$ is the number of matches. This feature is 1.0 if the sequences are identical and 0.0 if they have nothing in common.

\noindent{\bf Sentence similarity from vector space models.} We calculate sentence similarity by using three major vector space models: TF-IDF, latent semantic analysis (LSA)~\cite{LSA}, and latent Dirichlet allocation (LDA)~\cite{LDA}. We use these cosine similarities as features.

\noindent{\bf Existence of passive clauses.} Passive clauses have an influence on similarity. In CCG trees, passive clauses are represented using the syntactic category $S_{pss} \backslash NP$. We check for the occurrence of passive clauses in the premise and conclusion, and if either of them contains a passive clause then the feature is set to 1.0.

\section{Experiments and Evaluation}
\subsection{Experimental Conditions}
We evaluated our system\footnote{Available at https://github.com/mynlp/ccg2lambda.} using two datasets: the SemEval-2014 version of the SICK dataset~\cite{MARELLI14.363} and the SemEval-2012 version of the MSR-paraphrase video corpus dataset (MSR-vid)~\cite{semeval2012}. The experimental conditions were as follows.

\begin{table}[!t] \begin{center} \scalebox{0.93}{ \begin{tabular}{lccc} \hline & $\gamma$ & $\rho$ & MSE \\ \hline \hline Mueller et al. (2016) & $0.882$ & $0.835$ & $0.229$ \\ \hline \hline Our system & $0.838$ & $0.796$ & $0.561$ \\ \hline SemEval2014 Best Score & $0.828$ & $0.769$ & $0.325$ \\ \hline The Meaning Factory & $0.827$ & $0.772$ & $0.322$ \\ \hline UTexas & $0.714$ & $0.674$ & $0.499$ \\ \hline Baseline & $0.653$ & $0.745$ & $0.808$ \\ \hline \end{tabular} } \vspace{-0.3cm} \caption{\label{tab:results_sick} Results on the test split of the SICK dataset.} \vspace{-0.5cm} \end{center} \end{table}

\subsubsection{The SICK dataset}
The SICK dataset is a dataset for studying STS as well as for recognizing textual entailment (RTE). It was originally developed for evaluating compositional distributional semantics, so it contains logically challenging expressions such as quantifiers, negations, conjunctions and disjunctions. The dataset contains $9927$ sentence pairs with a $5000$/$4927$ training/test split. These sentence pairs are manually annotated with one of three entailment labels, \textit{yes} (entailment), \textit{no} (contradiction), or \textit{unknown} (neutral), as well as with a semantic relatedness score in [$1, 5$] (see Table~\ref{tab:examples} for a sample). In this dataset, sentence pairs whose gold entailment labels are \textit{no} tend to be scored a little more highly than the average, whereas those whose labels are \textit{unknown} have a wide range of scores. Thus, we set the baseline of the relatedness score to 5 when the gold entailment label was \textit{yes} and to 3 when the label was \textit{no} or \textit{unknown}.
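This baseline, together with the evaluation metrics used below, can be stated compactly (a sketch only; the three gold scores are those of Table~\ref{tab:examples}, so this is an illustration rather than a real evaluation):

\begin{verbatim}
import math

def baseline(label):
    # Relatedness baseline described above.
    return 5.0 if label == "yes" else 3.0

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (math.sqrt(sum((x - mx) ** 2 for x in xs))
           * math.sqrt(sum((y - my) ** 2 for y in ys)))
    return num / den

def mse(xs, ys):
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

gold = [4.2, 4.5, 1.0]                    # scores from Table 1
pred = [baseline(l) for l in ("no", "yes", "unknown")]
print(pearson(pred, gold), mse(pred, gold))
\end{verbatim}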
We compared our system with the following systems: the state-of-the-art neural network-based system~\cite{MuellerAAAI2016}; the best system~\cite{zhao:semeval14} from SemEval-2014; and two of the logic-based systems stated in Section 2: namely The Meaning Factory~\cite{bjerva:semeval14} and UTexas~\cite{beltagy:semeval14}. The Pearson correlation coefficient $\gamma$, Spearman's rank correlation coefficient $\rho$, and the MSE were used as the evaluation metrics. \medskip \subsubsection{The MSR-vid dataset} The MSR-vid dataset is our second dataset for the STS task and contains $1500$ sentence pairs with a $750$/$750$ training/test split. All sentence pairs are annotated with semantic relatedness scores in the range [0, 5]. We used this dataset to compare our system with the best system from SemEval-2012~\cite{bar:semeval12} and the logic-based UTexas system~\cite{beltagy:acl14}. We used the Pearson correlation coefficient $\gamma$ as the evaluation metric. \subsection{Results} Table~\ref{tab:results_sick} shows the results of our experiments with the SICK dataset. Although the state-of-the-art neural network-based system yielded the best results overall, our system achieved higher scores than SemEval-2014 submissions, including the two logic-based systems (The Meaning Factory and UTexas), in terms of Pearson correlation and Spearman's correlation. The main reason for our system's lower performance in terms of MSE is that some theorems could not be proved because of a lack of lexical knowledge. In the current work, we only consider word-level knowledge (word-for-word paraphrasing); we may expand the knowledge base in the future by using more external resources. As we mentioned above, the sentence pairs annotated as \textit{unknown} produced a wide range of scores. The Pearson correlation of the \textit{unknown} portion of the SICK dataset was 0.766, which suggests that our logic-based system can also be applied to neutral sentence pairs. \begin{table}[!t] \begin{center} \begin{tabular}{lc} \hline & $\gamma$ \\ \hline \hline SemEval2012 Best Score & $0.873$ \\ \hline \hline Our system & $0.853$ \\ \hline Beltagy et al. 
(2014) & $0.830$ \\ \hline \end{tabular} \vspace{-0.3cm} \caption{\label{tab:results_msrvid} Results on the test split of MSR-vid.} \vspace{-0.3cm} \end{center} \end{table}

\begin{table}[!htbp] \begin{center} \scalebox{0.89}{ \begin{tabular}{lccc} \hline & $\gamma$ & $\rho$ & MSE \\ \hline \hline Predicate overlap & $\mathbf{0.691}$ & $0.609$ & $\mathbf{0.734}$ \\ \hline Inference rules & $0.632$ & $\mathbf{0.619}$ & $0.794$ \\ \hline Probability of axioms & $0.543$ & $0.540$ & $0.865$ \\ \hline Proof steps & $0.458$ & $0.494$ & $0.915$ \\ \hline Proved sub-goals & $0.432$ & $0.443$ & $0.926$ \\ \hline Logical inference result & $0.386$ & $0.399$ & $0.939$ \\ \hline Unproved sub-goals' case & $0.301$ & $0.307$ & $0.973$ \\ \hline Semantic type overlap & $0.245$ & $0.219$ & $0.987$ \\ \hline Negative clauses & $0.163$ & $0.323$ & $1.004$ \\ \hline \hline Noun/verb overlap & $0.661$ & $0.554$ & $0.763$ \\ \hline Vector space model & $0.594$ & $0.510$ & $0.857$ \\ \hline String similarity & $0.414$ & $0.418$ & $0.977$ \\ \hline Synset overlap & $0.382$ & $0.341$ & $0.978$ \\ \hline Synset distance & $0.352$ & $0.330$ & $0.999$ \\ \hline Part-of-speech overlap & $0.349$ & $0.346$ & $0.954$ \\ \hline Sentence length & $0.231$ & $0.240$ & $0.993$ \\ \hline Passive clauses & $0.023$ & $0.046$ & $1.017$ \\ \hline \hline Only logic-based & $0.798$ & $0.760$ & $0.613$ \\ \hline Only non-logic-based & $0.793$ & $0.732$ & $0.621$ \\ \hline \hline All & $\mathbf{0.838}$ & $\mathbf{0.796}$ & $\mathbf{0.561}$ \\ \hline \end{tabular} } \caption{\label{tab:results_feats} Results when training our regressor with each feature group in isolation.} \end{center} \end{table}

Table~\ref{tab:results_msrvid} shows the results of our experiments with the MSR-vid dataset. These results also indicate that our logic-based system achieved higher accuracy than the other logic-based systems.

\begin{table*} \begin{center} \small \begin{tabular}{rlcccc} \hline & & & Pred & Pred & \\ ID & Sentence Pair & Gold & +logic & -logic & Entailment \\ \hline \hline \multirow{2}{*}{642} & A person is climbing a rock with a rope, which is pink. & \multirow{2}{*}{5.0} & \multirow{2}{*}{4.9} & \multirow{2}{*}{4.1} & \multirow{2}{*}{Yes} \\ & A rock is being climbed by a person with a rope, which is pink. & & & & \\ \hline \multirow{2}{*}{1360} & The machine is shaving the end of a pencil. & \multirow{2}{*}{4.7} & \multirow{2}{*}{4.6} & \multirow{2}{*}{3.8} & \multirow{2}{*}{Yes} \\ & A pencil is being shaved by the machine. & & & & \\ \hline \multirow{2}{*}{891} & There is no one on the shore. & \multirow{2}{*}{3.6} & \multirow{2}{*}{3.7} & \multirow{2}{*}{2.6} & \multirow{2}{*}{No} \\ & A bunch of people is on the shore. & & & & \\ \hline \multirow{2}{*}{1158} & A woman is removing ingredients from a bowl. & \multirow{2}{*}{3.3} & \multirow{2}{*}{3.5} & \multirow{2}{*}{4.1} & \multirow{2}{*}{No} \\ & A woman is adding ingredients to a bowl. & & & & \\ \hline \multirow{2}{*}{59} & Kids in red shirts are playing in the leaves. & \multirow{2}{*}{3.9} & \multirow{2}{*}{3.8} & \multirow{2}{*}{3.1} & \multirow{2}{*}{Unknown} \\ & Three kids are jumping in the leaves. & & & & \\ \hline \multirow{2}{*}{71} & There is no child lying in the snow and making snow angels. & \multirow{2}{*}{3.3} & \multirow{2}{*}{3.3} & \multirow{2}{*}{4.1} & \multirow{2}{*}{Unknown} \\ & Two people in snowsuits are lying in the snow and making snow angels.
& & & & \\ \hline \end{tabular} \caption{\label{tab:examples_pos} Examples for which our regressor trained only with logic-based features performs better than when using non-logic-based features. ``Gold'': correct score, ``Pred+logic'': prediction score with only logic-based features, ``Pred-logic'': prediction score with only non-logic-based features.} \end{center} \end{table*}

Table~\ref{tab:results_feats} shows evaluation results when training our regressor with each feature group in isolation; inference rules and predicate overlap are the most effective features. Compared with the non-logic-based features, the logic-based features achieved a slightly higher accuracy, a point that is analyzed in more detail in the next subsection. Overall, our results show that combining logic-based features with non-logic-based ones is an effective method for determining textual similarity.

\subsection{Positive examples and error analysis}

\begin{table*}[t] \begin{center} \small \begin{tabular}{rlccl} \hline ID&Sentence Pair&Gold&System&Axiom \\ \hline \hline \multirow{2}{*}{3974} & A girl is awakening. & \multirow{2}{*}{4.9} & \multirow{2}{*}{3.6} & $\forall x (\LF{awaken}(x) \rightarrow \LF{wake}(x))$ \\ & A girl is waking up. & & & $\forall x (\LF{awaken}(x) \rightarrow \LF{up}(x))$ \\ \hline \multirow{2}{*}{4833} & A girl is filing her nails. & \multirow{2}{*}{4.2} & \multirow{2}{*}{1.8} & $\forall x (\LF{nail}(x) \rightarrow \LF{manicure}(x))$ \\ & A girl is doing a manicure. & & & $\forall x (\LF{file}(x) \rightarrow \LF{do}(x))$ \\ \hline \multirow{3}{*}{1941} & \multirow{3}{*}{\begin{tabular}{@{}l@{}}A woman is putting the baby into a trash can. \\ A person is putting meat into a skillet. \end{tabular}} & \multirow{3}{*}{1.0} & \multirow{3}{*}{3.3} & $\forall x (\LF{woman}(x) \rightarrow \LF{person}(x))$ \\ & & & & $\forall x (\LF{trash}(x) \rightarrow \LF{skillet}(x))$ \\ & & & & $\forall x (\LF{baby}(x) \rightarrow \LF{meat}(x))$ \\ \hline \end{tabular} \caption{\label{tab:examples_neg} Error examples when training the regressor only with logic-based features.} \end{center} \end{table*}

Table~\ref{tab:examples_pos} shows some examples for which the prediction score was better when using logic-based features than when using non-logic-based ones. For IDs 642 and 1360, one sentence contains a passive clause while the other sentence does not. In such cases, the sentence pairs are not superficially similar. By using logical formulas based on event semantics we were able to interpret the sentence containing the passive clause correctly and judge that the passive and non-passive sentences are similar to each other. In ID 891, one sentence contains a negative clause while the other does not. With shallow features, the word overlap is small and the prediction score was much lower than the correct score. Our logic-based method, however, interpreted the first sentence as a negative existential formula of the form $\neg \exists x \mathcal{P}(x)$ and the second sentence as an existential formula $\exists x \mathcal{P'}(x)$. Thus, it could easily handle the semantic difference between the positive and negative sentences. In ID 1158, by contrast, the proportion of word overlap is so high that the prediction score with non-logic-based features was much higher than the correct score. Our method, however, was able to prove the contradiction using an antonym axiom of the form $\forall x (\LF{remove}(x) \rightarrow \neg \LF{add}(x))$ from WordNet and thus predict the score correctly.
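The WordNet side of such axioms can be sketched in a few lines. NLTK's WordNet interface is an assumption of this sketch, not a claim about the system's implementation, and the WordNet corpus must be installed first (\texttt{nltk.download('wordnet')}):

\begin{verbatim}
# Sketch of the WordNet lookups behind axiom generation.
from nltk.corpus import wordnet as wn

def antonym_axiom(p, q):
    # Yields "forall x (p(x) -> not q(x))" when q is listed
    # as an antonym of p in WordNet.
    for syn in wn.synsets(p):
        for lemma in syn.lemmas():
            if q in (a.name() for a in lemma.antonyms()):
                return "forall x ({}(x) -> not {}(x))".format(p, q)
    return None

def axiom_probability(p, q):
    # Inverse shortest is-a path length, as in the
    # axiom-probability feature; WordNet's path similarity is
    # one standard realization of this quantity.
    sims = [s1.path_similarity(s2) or 0.0
            for s1 in wn.synsets(p) for s2 in wn.synsets(q)]
    return max(sims, default=0.0)

print(antonym_axiom("remove", "add"))  # ID 1158 axiom, if linked
print(axiom_probability("log", "wood"))
\end{verbatim}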
In ID 59, the proportion of word overlap is low, so the prediction score with non-logic-based features was lower than the correct score. Our method, however, was able to prove the partial entailment relations for the sentence pair and thus predict the score correctly. Here the logic-based method captured the common meaning of the sentence pair: both sentences talk about the kids playing in the leaves. Finally, in ID 71, the prediction score with non-logic-based features was much higher than the correct score. There are two reasons for this phenomenon: negations tend to be ignored by non-logic-based features such as TF-IDF, and the proportion of word overlap is high. However, as logical formulas and proofs can handle negative clauses correctly, our method was able to predict the score correctly.

Table~\ref{tab:examples_neg} shows examples where using only logic-based features produced erroneous results. In ID 3974, the probability of the axiom $\forall x (\LF{awaken}(x) \!\rightarrow\! \LF{up}(x))$ was low (0.25) and thus the prediction score was lower than the correct score. Likewise, in ID 4833, the probability of the axiom $\forall x (\LF{file}(x) \!\rightarrow\! \LF{do}(x))$ was very low (0.09) and thus the prediction score was negatively affected. In these cases, we need to consider phrase-level axioms such as $\forall x (\LF{awaken}(x) \!\rightarrow\! \LF{wake}\_\LF{up}(x))$ and $\forall x (\LF{file}\_\LF{nail}(x) \!\rightarrow\! \LF{do}\_\LF{manicure}(x))$ using a paraphrase database. This, however, is an issue for future study. In ID 1941, the system wrongly proved the bidirectional entailment relations by adding external axioms, so the prediction score was much higher than the correct score. Setting a threshold on the probability of an axiom may be an effective way of improving our axiom-injection method.

\section{Conclusion}
We have developed a hybrid method for learning textual similarity by combining features based on logical proofs of bidirectional entailment relations with non-logic-based features. The results of our experiments on two datasets show that our system was able to outperform other logic-based systems. In addition, the results show that information about the natural deduction proof process can be used to create effective features for learning textual similarity. Since these logic-based features provide accuracy improvements that are largely additive with those provided by non-logic-based features, neural network-based systems may also benefit from using them. In future work, we will refine our system so that it can be applied to other tasks such as question answering. Compared with neural network-based systems, our natural deduction-based system can not only assess how similar sentence pairs are, but also explain the sources of similarity or dissimilarity by referring to information about the sub-goals in the proof. Given this interpretative ability, we believe that our logic-based system may also be of benefit to other natural language processing tasks, such as question answering and text summarization.

\section*{Acknowledgments}
We thank the three anonymous reviewers for their detailed comments. This work was supported by JST CREST Grant Number JPMJCR1301, Japan.
\section{Introduction}
\label{sec-intro}
\setcounter{equation}{0}

The evaluation of definite integrals has a long history, dating from the work of Eudoxus of Cnidus (408-355 BC) with the creation of the method of exhaustion. The history of this problem is reported in \cite{kallio-1966a}. A large variety of methods developed for the evaluation of integrals may be found in older Calculus textbooks, such as those by J.~Edwards \cite{edwardsj-1922a,edwardsj-1922b}. As the number of examples grew, they began to be collected in \textit{tables of integrals}. The table compiled by I.~S.~Gradshteyn and I.~M.~Ryzhik \cite{gradshteyn-2015a} is the most widely used one, now in its $8^{th}$-edition.

\medskip

The interest of the last author in this topic began with entry $3.248.5$ in \cite{gradshteyn-2000a} \begin{equation} I = \int_{0}^{\infty} (1+x^2)^{-3/2} \left[ \varphi(x) + \sqrt{\varphi(x)} \right]^{-1/2} \, dx \label{i1} \end{equation} \noindent where $\displaystyle{ \varphi(x) = 1 + \tfrac{4}{3}x^{2}(1+x^{2})^{-2}.} $ The value $\pi/2 \sqrt{6}$ given in the table is incorrect, as a direct numerical evaluation will confirm. Since an evaluation of the integral still eludes us, the editors of the table found an ingenious temporary solution to this problem: it does not appear in \cite{gradshteyn-2007a} nor in the latest edition \cite{gradshteyn-2015a}. This motivated an effort to present proofs of all entries in Gradshteyn-Ryzhik. It began with \cite{moll-2007a} and has continued with several short papers. These have appeared in \textit{Revista Scientia}, the latest one being \cite{amdeberhan-2016b}.

\medskip

The work presented here deals with the \textit{method of brackets}. This is a new method for integration developed in \cite{gonzalez-2007a,gonzalez-2008a,gonzalez-2009a} in the context of integrals arising from Feynman diagrams. It consists of a small number of rules that convert the integrand into a collection of series. These rules are reviewed in Section \ref{sec-method}; it is important to emphasize that \texttt{most of these rules are still not rigorously justified and currently should be considered a collection of heuristic rules}.

\medskip

The success of the method depends on the ability to give closed-form expressions for these series. Some of these heuristic rules are currently being placed on solid ground \cite{amdeberhan-2012b}. The reader will find in \cite{gonzalez-2014a,gonzalez-2010a,gonzalez-2010b} a large collection of examples that illustrate the power and flexibility of this method.

\medskip

The operational rules are described in Section \ref{sec-method}. The method applies to functions that can be expanded in a formal power series \begin{equation} f(x)=\sum_{n=0}^{\infty}a(n)x^{\alpha n+\beta-1}, \label{series-f} \end{equation} \noindent where $\alpha, \, \beta \in \mathbb{C}$ and the coefficients $a(n)\in \mathbb{C}$. (The extra $-1$ in the exponent is for a convenient formulation of the operational rules.) The adjective \textit{formal} refers to the fact that the expansion is used to integrate over $[0, \infty)$, even though it might be valid only on a proper subset of the half-line. \begin{Note} \label{note-required} There is no precise description of the complete class of functions $f$ for which the method can be applied. At the moment, it is a working assumption that the coefficients $a(n)$ in \eqref{series-f} are expressions that admit a unique meromorphic continuation to $n \in \mathbb{C}$.
This is required, since the method involves the evaluation of $a(n)$ for $n$ not a natural number, hence an extension is needed. For example, the Bessel function \begin{equation} I_{0}(x) = \sum_{n=0}^{\infty} \frac{1}{n!^{2}} \left( \frac{x}{2} \right)^{2n} \label{bessel-i0} \end{equation} \noindent has $\alpha = 2, \, \, \beta = 1$ and $\displaystyle{ a(n) = 1/2^{2n} n!^{2}}$, which can be written as $\displaystyle{ a(n) = 1/2^{2n} \Gamma^{2}(n+1)}$; the evaluation, say at $n = \tfrac{1}{2}$, is now possible. The same observation holds for the Bessel function \begin{equation} J_{0}(x) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!^{2}} \left( \frac{x}{2} \right)^{2n}. \label{bessel-j0a} \end{equation} \end{Note}

The goal of the present work is to produce \textit{non-classical series representations} for functions $f$ which do not have expansions like \eqref{series-f}. These representations are formally of the type \eqref{series-f}, but some of the coefficients $a(n)$ might be null or divergent. The examples show how to use these representations in conjunction with the method of brackets to evaluate definite integrals. The examples presented here come from the table \cite{gradshteyn-2015a}. This process is, up to now, completely heuristic. These non-classical series are classified according to the following types:

\medskip

\noindent $1) \, $ \texttt{Totally (partially) divergent series}. Each term (some of the terms) in the series is divergent. For example, \begin{equation} \sum_{n=0}^{\infty} \Gamma(-n) x^{n} \text{ and } \sum_{n=0}^{\infty} \frac{ \Gamma(n-3)}{n!} x^{n}. \end{equation}

\smallskip

\noindent $2) \, $ \texttt{Totally (partially) null series}. Each term (some of the terms) in the series vanishes. For example, \begin{equation} \sum_{n=0}^{\infty} \frac{1}{\Gamma(-n)} x^{n} \text{ and } \sum_{n=0}^{\infty} \frac{1}{\Gamma(3-n)} x^{n}. \end{equation} \noindent This type includes series where all but finitely many terms vanish. These are polynomials in the corresponding variable.

\smallskip

\noindent $3) \, $ \texttt{Formally divergent series}. This is a classical divergent series: the terms are finite, but the sum of the series diverges. For example, \begin{equation} \sum_{n=0}^{\infty} \frac{ n! ^{2} }{(n+1) \, (2n)!} 5^{n}. \end{equation}

In spite of the divergence of these series, they will be used in combination with the method of brackets to evaluate a variety of definite integrals.

\smallskip

Some examples of functions that admit non-classical series representations are given next.

\smallskip

\noindent $\bullet$ The \textit{exponential integral} with the partially divergent series \begin{equation} \text{Ei}(-x) = - \int_{1}^{\infty} t^{-1} e^{-xt} \, dt = \sum_{n=0}^{\infty} (-1)^{n} \frac{x^{n}}{n \, n!}. \label{ei-null} \end{equation}

\smallskip

\noindent $\bullet$ The \textit{Bessel $K_{0}$-function} \begin{equation} K_{0}(x) = \int_{0}^{\infty} \frac{ \cos x t \, dt}{(t^{2}+1)^{1/2}} \end{equation} \noindent with totally null representation \begin{equation} K_{0}(x) = \frac{1}{x} \sum_{n=0}^{\infty} (-1)^{n} \frac{\Gamma^{2}(n + \tfrac{1}{2} )}{n! \, \Gamma(-n)} \left( \frac{4}{x^{2}} \right)^{n} \label{k0-null} \end{equation} \noindent and the totally divergent one \begin{equation} K_{0}(x) = \frac{1}{2} \sum_{n=0}^{\infty} (-1)^{n} \frac{\Gamma(-n)}{n!} \left( \frac{x^{2}}{4} \right)^{n}. \label{k0-divergent} \end{equation}

Section \ref{sec-method} presents the rules of the method of brackets.
Section \ref{sec-independence} shows that the bracket series associated to an integral is independent of the presentation of the integrand. The remaining sections use the method of brackets and non-classical series to evaluate definite integrals. Section \ref{sec-expi} considers integrands containing the exponential integral $\text{Ei}(-x)$, Section \ref{sec-tricomi} has the Tricomi function $U(a,b;x)$ (as an example of the confluent hypergeometric function), Section \ref{sec-airy} is dedicated to integrals with the Airy function $\text{Ai}(x)$ and then Section \ref{sec-bessel-nu} has the Bessel function $K_{\nu}(x)$, with special emphasis on $K_{0}(x)$. Section \ref{sec-producing} gives examples of definite integrals whose values contain the Bessel function $K_{\nu}(x)$. The final section has a new approach to the evaluation of bracket series, based on a differential equation involving parameters. The examples presented in the current work have appeared in the literature, where the reader will find proofs of these formulas by classical methods. One of the goals of this work is to illustrate the flexibility of the method of brackets to evaluate these integrals.

\section{The method of brackets}
\label{sec-method}
\setcounter{equation}{0}

The method of brackets evaluates integrals over the half line $[0, \, \infty)$. It is based on a small number of rules reviewed in this section.

\begin{definition} For $a \in \mathbb{C}$, the symbol \begin{equation} \langle a \rangle = \int_{0}^{\infty} x^{a-1} \, dx \end{equation} is the {\em bracket} associated to the (divergent) integral on the right. The symbol \begin{equation} \phi_{n} = \frac{(-1)^{n}}{\Gamma(n+1)} \end{equation} \noindent is called the {\em indicator} associated to the index $n$. The notation $\phi_{n_{1}n_{2}\cdots n_{r}}$, or simply $\phi_{12 \cdots r}$, denotes the product $\phi_{n_{1}} \phi_{n_{2}} \cdots \phi_{n_{r}}$. \end{definition}

\begin{Note} The indicator $\phi_{n}$ will appear in the series expressions used in the method of brackets. For instance \eqref{ei-null} is written as \begin{equation} \text{Ei}(-x) = \sum_{n} \phi_{n} \frac{x^{n}}{n} \label{ei-null1} \end{equation} \noindent and \eqref{k0-divergent} as \begin{equation} K_{0}(x) = \frac{1}{2} \sum_{n} \phi_{n} \Gamma(-n) \left( \frac{x^{2}}{4} \right)^{n}. \label{k0-divergent1} \end{equation} \noindent In the process of implementing the method of brackets, these series will be evaluated for $n \in \mathbb{C}$, not necessarily positive integers. Thus the notation for the indices does not include their range of values. \end{Note}

\medskip

\noindent {\bf {\em Rules for the production of bracket series}}

The first part of the method is to associate to the integral \begin{equation} I (f) = \int_{0}^{\infty} f(x) \, dx \end{equation} \noindent a bracket series. This is done following two rules:

\smallskip

\noindent ${\mathbf{Rule \, \, P_{1}}}$. Assume $f$ has the expansion \begin{equation} f(x)=\sum_{n=0}^{\infty}\phi_{n} a(n)x^{\alpha n+\beta-1}. \end{equation} Then $I(f)$ is assigned the \emph{bracket series } \begin{equation} I(f) =\sum_{n}\phi_{n}a(n)\left\langle \alpha n+\beta\right\rangle . \end{equation} \begin{Note} The series including the indicator $\phi_{n}$ have indices \textit{without} limits, since their evaluation requires taking $n$ outside $\mathbb{N}$. \end{Note}

\smallskip

\noindent ${\mathbf{Rule \, \, P_{2}}}$.
For $\alpha \in \mathbb{C}$, the multinomial power $(u_{1} + u_{2} + \cdots + u_{r})^{\alpha}$ is assigned the $r$-dimension bracket series \begin{equation} \sum_{n_{1},n_{2},\ldots, n_{r}} \phi_{n_{1}\, n_{2} \, \cdots n_{r}} u_{1}^{n_{1}} \cdots u_{r}^{n_{r}} \frac{\langle -\alpha + n_{1} + \cdots + n_{r} \rangle}{\Gamma(-\alpha)}. \end{equation} \noindent The integer $r$ is called \textit{the dimension of the bracket series}.

\medskip

\noindent {\bf {\em Rules for the evaluation of a bracket series}}

The next set of rules associates a complex number to a bracket series.

\smallskip

\noindent ${\mathbf{Rule \, \, E_{1}}}$. The one-dimensional bracket series is assigned the value \begin{equation} \sum_{n} \phi_{n} a(n) \langle \alpha n + b \rangle = \frac{1}{|\alpha|} a(n^{*}) \Gamma(-n^{*}), \end{equation} \noindent where $n^{*}$ is obtained from the vanishing of the bracket; that is, $n^{*}$ solves $an+b = 0$. \begin{Note} The rule $E_{1}$ is a version of \textit{Ramanujan's Master Theorem}. This theorem requires an extension of the coefficients $a(n)$ from $n \in \mathbb{N}$ to $n \in \mathbb{C}$. The assumptions imposed on the function $f$ are precisely those needed for the application of this result. A complete justification of this rule is provided in \cite{amdeberhan-2012b}. \textit{Making the remaining rules rigorous is the subject of active research. } \end{Note}

\smallskip

The next rule provides a value for multi-dimensional bracket series where the number of sums is equal to the number of brackets.

\smallskip

\noindent ${\mathbf{Rule \, \, E_{2}}}$. Assume the matrix $B = (b_{ij})$ is non-singular, then the assignment is \begin{equation} \sum_{n_{1},n_{2}, \cdots,n_{r}} \phi_{n_{1} \cdots n_{r}} a(n_{1},\cdots,n_{r}) \langle b_{11}n_{1} + \cdots + b_{1r}n_{r} + c_{1} \rangle \cdots \langle b_{r1}n_{1} + \cdots + b_{rr}n_{r} + c_{r} \rangle \nonumber \end{equation} \begin{equation} = \frac{1}{| \text{det}(B) |} a(n_{1}^{*}, \cdots n_{r}^{*}) \Gamma(-n_{1}^{*}) \cdots \Gamma(-n_{r}^{*}) \nonumber \end{equation} \noindent where $\{ n_{i}^{*} \}$ is the (unique) solution of the linear system obtained from the vanishing of the brackets. There is no assignment if $B$ is singular.

\smallskip

\noindent ${\mathbf{Rule \, \, E_{3}}}$. Each representation of an integral by a bracket series has associated an {\em index of the representation} via \begin{equation} \text{index } = \text{number of sums } - \text{ number of brackets}. \end{equation} \noindent In the case of a multi-dimensional bracket series of positive index, the linear system obtained from the vanishing of the brackets has free parameters. The value is obtained by computing all the contributions of maximal rank in the system, one for each choice of free parameters. Series expressed in the same variable (or argument) are added.

\begin{example} A generic bracket series of index $1$ has the form \begin{equation} \sum_{n_{1}, \, n_{2}} \phi_{n_{1},n_{2}} C(n_{1},n_{2}) A^{n_{1}} B^{n_{2}} \langle a_{11}n_{1} + a_{12}n_{2} + c_{1} \rangle, \end{equation} \noindent where $a_{11}, \, a_{12}, \, c_{1}$ are fixed coefficients, $A, \, B$ are parameters and $C(n_{1},n_{2})$ is a function of the indices. The Rule $E_{3}$ is used to generate two series by leaving first $n_{1}$ and then $n_{2}$ as free parameters.
The Rule $E_{1}$ is used to assign a value to the corresponding series:

\smallskip

\noindent $n_{1}$ as a free parameter produces \begin{equation*} T_{1} = \frac{B^{-c_{1}/a_{12}}}{|a_{12}|} \sum_{n_{1}=0}^{\infty} \phi_{n_{1}} \Gamma \left( \frac{a_{11}n_{1}+c_{1}}{a_{12}} \right) C\left( n_{1}, - \frac{a_{11}n_{1} + c_{1}}{a_{12}} \right) \left(AB^{-a_{11}/a_{12}} \right)^{n_{1}}; \end{equation*}

\smallskip

\noindent $n_{2}$ as a free parameter produces \begin{equation*} T_{2} = \frac{A^{-c_{1}/a_{11}}}{|a_{11}|} \sum_{n_{2}=0}^{\infty} \phi_{n_{2}} \Gamma \left( \frac{a_{12}n_{2}+c_{1}}{a_{11}} \right) C\left( - \frac{a_{12}n_{2} + c_{1}}{a_{11}}, n_{2} \right) \left( BA^{-a_{12}/a_{11}} \right)^{n_{2}}. \end{equation*}

\smallskip

The series $T_{1}$ and $T_{2}$ are expansions of the solution in terms of different parameters \begin{equation} x_{1} = AB^{-a_{11}/a_{12}} \,\,\, {\rm{ and }} \,\,\, x_{2} = BA^{-a_{12}/a_{11}}. \end{equation} \noindent Observe that $x_{2} = x_{1}^{-a_{12}/a_{11}}$. Therefore the bracket series is assigned the value $T_{1}$ \textit{or} $T_{2}$. If one of the series is a null-series or divergent, it is discarded. If \textit{both} series are discarded, the method of brackets does not produce a value for the integral that generates the bracket series.

\smallskip

Some special cases will clarify the rules to follow in the use of the series $T_{1}$ and $T_{2}$. Suppose $a_{12} = -a_{11}$; then \begin{equation} T_{1} = \frac{B^{c_{1}/a_{11}}}{|a_{11}|} \sum_{n_{1}=0}^{\infty} \phi_{n_{1}} \Gamma \left( -n_{1} - \frac{c_{1}}{a_{11}} \right) C \left( n_{1}, n_{1} + \frac{c_{1}}{a_{11}} \right) (AB)^{n_{1}} \end{equation} \noindent and \begin{equation} T_{2} = \frac{A^{-c_{1}/a_{11}}}{|a_{11}|} \sum_{n_{2}=0}^{\infty} \phi_{n_{2}} \Gamma \left( \frac{c_{1}}{a_{11}} - n_{2} \right) C \left( n_{2} - \frac{c_{1}}{a_{11}}, n_{2} \right) (AB)^{n_{2}} \end{equation} \noindent and since both series are expansions in the same parameter $( AB )$, \textit{their values must be added} to compute the value associated to the bracket series. On the other hand, if $a_{12} = -2a_{11}$, then \begin{equation*} T_{1} = \frac{B^{c_{1}/2a_{11}}}{2 |a_{11}|} \sum_{n_{1}=0}^{\infty} \phi_{n_{1}} \Gamma \left( - \frac{1}{2} n_{1} - \frac{c_{1}}{2a_{11}} \right) C \left( n_{1}, \frac{1}{2} n_{1} + \frac{c_{1}}{2a_{11}} \right) \left( A B^{1/2} \right)^{n_{1}} \end{equation*} \noindent and \begin{equation*} T_{2} = \frac{A^{-c_{1}/a_{11}}}{ |a_{11}|} \sum_{n_{2}=0}^{\infty} \phi_{n_{2}} \Gamma \left( -2 n_{2} + \frac{c_{1}}{a_{11}} \right) C \left( 2 n_{2} - \frac{c_{1}}{a_{11}}, n_{2} \right) \left( A^{2} B \right)^{n_{2}}. \end{equation*} \noindent Splitting the sum in $T_{1}$ according to the parity of the indices produces a power series in $A^{2}B$ when $n_{1} = 2 n_{3}$ is even and, for $n_{1}$ odd, a second power series in the same argument $A^{2}B$ times an extra factor $AB^{1/2}$. Since these are expansions in the same argument, they have to be added to count their contribution to the bracket series. \end{example}

\smallskip

\begin{Note} It is important to observe that the index is attached to a specific representation of the integral and not just to the integral itself. The experience obtained by the authors using this method suggests that, among all representations of an integral as a bracket series, the one with {\em minimal index} should be chosen. \end{Note}

\begin{Note} The extension presented in this work shows how to use these divergent series in the evaluation of definite integrals.
Example \ref{example-6-222} illustrates this procedure. \end{Note}

\smallskip

\noindent ${\mathbf{Rule \,\, E_{4}}}$. In the evaluation of a bracket series, repeated series are counted only once. For instance, a convergent series that appears repeated in the same region of convergence should be counted only once. The same treatment should be given to null and divergent series.

\begin{Note} Example \ref{ex-rule-e4} in Section \ref{sec-expi} illustrates the use of this rule. \end{Note}

\begin{Note} A systematic procedure in the simplification of the series has been used throughout the literature: express factorials in terms of the gamma function and then transform quotients of gamma factors into Pochhammer symbols, defined by \begin{equation} (a)_{k} = a(a+1) \cdots (a+k-1) = \frac{\Gamma(a+k)}{\Gamma(a)}. \label{gamma-poch} \end{equation} \noindent Any presence of a Pochhammer symbol with a negative index $k$ is transformed by the rule \begin{equation} (a)_{-k} = \frac{(-1)^{k}}{(1-a)_{k}}, \quad \text{ for } k \in \mathbb{N}. \label{rule-11} \end{equation} \noindent In the special case when $a$ \textit{is also} a negative integer, the rule \begin{equation} (-km)_{-m} = \frac{k}{k+1} \cdot \frac{(-1)^{m} (km)!}{((k+1)m)!} \end{equation} \noindent holds. This value is justified in \cite{gonzalez-2016a}. The duplication formula \begin{equation} (a)_{2n} = 2^{2n} \left( \frac{a}{2} \right)_{n} \left( \frac{a+1}{2} \right)_{n} \label{poch-dupl} \end{equation} \noindent is also used in the simplifications. Many of the evaluations are given as values of the hypergeometric functions \begin{equation} _{p}F_{q}\left(\genfrac{}{}{0pt}{}{a_{1},\ldots,a_{p}}{b_{1},\ldots,b_{q}}\bigg{|}z\right) = \sum_{n=0}^{\infty} \frac{(a_{1})_{n} \cdots (a_{p})_{n}}{(b_{1})_{n} \cdots (b_{q})_{n} } \frac{z^{n}}{n!}, \label{hyper-def} \end{equation} \noindent with $(a)_{n}$ as in \eqref{gamma-poch}. It is often the case that the value of $_{2}F_{1}$ at $z=1$ is required. This is given by the classical formula of Gauss: \begin{equation} \label{gauss-value} \pFq21{a \,\,\, b}{c}{1} = \frac{\Gamma(c) \Gamma(c-a-b)}{\Gamma(c-a) \, \Gamma(c-b)}. \end{equation} \end{Note}

\begin{Note} The extension considered here is to use the method of brackets for functions that do not admit a series representation as described in Rule $P_{1}$. For example, the Bessel function $K_{0}(x)$ has a singular expansion of the form \begin{equation} \label{exp-k0} K_{0}(x) = - \left( \gamma - \ln 2 + \ln x \right) I_{0}(x) + \sum_{j=0}^{\infty} \frac{H_{j}}{j!^{2}} \frac{x^{2j}}{2^{2j}} \end{equation} \noindent (see \cite[10.31.2]{olver-2010a}). Here $I_{0}(x)$ is the Bessel function given in \eqref{bessel-i0}, ${\displaystyle H_{j} = \sum_{k=1}^{j} \frac{1}{k}}$ is the harmonic number and $\gamma = \lim\limits_{j \to \infty} \left( H_{j} - \ln j \right)$ is Euler's constant. The presence of the logarithm term in \eqref{exp-k0} does not permit a direct application of the method of brackets. An alternative is presented in Section \ref{sec-bessel-nu}. \end{Note}

\section{Independence of the factorization}
\label{sec-independence}
\setcounter{equation}{0}

The evaluation of a definite integral by the method of brackets begins with the association of a bracket series to the integral. It is common that the integrand contains several factors from which the bracket series is generated. This representation is not unique.
For example, the integral \begin{equation} \label{int-j01} I = \int_{0}^{\infty} e^{-ax} J_{0}(x) \, dx \end{equation} \noindent is associated with the bracket series \begin{equation} \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{a^{n_{1}}}{2^{2n_{2}} \Gamma(n_{2}+1)} \langle n_{1} + 2n_{2} + 1 \rangle, \end{equation} \noindent and rewriting \eqref{int-j01} as \begin{equation} I = \int_{0}^{\infty} e^{-ax/2} e^{-ax/2} J_{0}(x) \, dx, \end{equation} \noindent provides a second bracket series \begin{equation} \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{a^{n_{1}+n_{2}}}{2^{n_{1}+n_{2}+2n_{3}} \Gamma(n_{3}+1)} \langle n_{1} + n_{2} + 2n_{3} + 1 \rangle \end{equation} \noindent associated to \eqref{int-j01}. It is shown next that all such bracket series representations of an integral produce the same value.

\begin{theorem} \label{thm:TwoFactors} Assume $f(x)=g(x)h(x)$, where $f, \, g \text{ and }h$ have expansions as in \eqref{series-f}. Then, the method of brackets assigns the same value to the integrals \begin{equation} I_{1}=\int_{0}^{\infty}f(x) \, dx\text{ and }I_{2}=\int_{0}^{\infty}g(x)h(x) \, dx. \end{equation} \end{theorem}

\begin{proof} Suppose that \begin{eqnarray} f(x) & = &\displaystyle{ \sum_{n} \phi_{n}a(n)x^{\alpha n+\beta} } \label{expan-f}\\ g(x)& = &\displaystyle{ \sum_{n_{1}} \phi_{n_{1}}b\left(n_{1}\right)x^{\alpha n_{1}+\beta_{1}}} \nonumber \\ h(x)& = &\displaystyle{ \sum_{n_{2}}\phi_{n_{2}}c\left(n_{2}\right)x^{\alpha n_{2}+\beta_{2}} } \nonumber. \end{eqnarray} Then \begin{equation} \label{value-I1} I_{1}=\int_{0}^{\infty}f(x) dx=\sum_{n}\phi_{n}a\left(n\right)\left\langle \alpha n+\beta + 1 \right\rangle = \frac{1}{|\alpha|} a(-s)\Gamma(s), \end{equation} \noindent with $s = (1+\beta)/\alpha$.

\medskip

To evaluate the second integral, observe that \begin{eqnarray*} g(x)h(x) & = & x^{\beta_{1}+\beta_{2}} \left( \sum_{n_{1}=0}^{\infty} \frac{(-1)^{n_{1}}}{n_{1}!} b(n_{1}) x^{\alpha n_{1}} \right) \left( \sum_{n_{2}=0}^{\infty} \frac{(-1)^{n_{2}}}{n_{2}!} c(n_{2}) x^{\alpha n_{2}} \right) \\ & = & x^{\beta_{1}+\beta_{2}} \sum_{n=0}^{\infty} F(n) x^{\alpha n}, \nonumber \end{eqnarray*} \noindent with \begin{eqnarray} F(n) & = & \sum_{k=0}^{n} \frac{(-1)^{k}}{k!} b(k) \frac{(-1)^{n-k}}{(n-k)!} c(n-k) \\ & = & \frac{(-1)^{n}}{n!} \sum_{k=0}^{n} \binom{n}{k} b(k) c(n-k). \nonumber \end{eqnarray} \noindent This yields \begin{equation} f(x) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!}\left[ \sum_{k=0}^{n} \binom{n}{k} b(k) c(n-k) \right] x^{\alpha n + \beta_{1} + \beta_{2}} \end{equation} \noindent and matching this with \eqref{expan-f} gives $\beta = \beta_{1}+\beta_{2}$ and \begin{equation} a(n) = \sum_{k=0}^{n} \binom{n}{k} b(k) c(n-k) = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{k!} \, (-n)_{k} b(k)c(n-k). \label{identity-2} \end{equation} Now, the method of brackets gives \begin{equation} I_{2} = \int_{0}^{\infty} g(x)h(x) \, dx = \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} b(n_{1}) c(n_{2}) \langle \alpha n_{1} + \alpha n_{2} + \beta + 1 \rangle \end{equation} \noindent and it yields two series as solutions \begin{eqnarray} T_{1} & = & \frac{1}{| \alpha |} \sum_{n} \phi_{n} \Gamma \left( n + s \right) b(n) c(-n - s) \\ T_{2} & = & \frac{1}{| \alpha |} \sum_{n} \phi_{n} \Gamma \left( n + s \right) b(-n-s) c(n), \nonumber \end{eqnarray} \noindent with $s = (\beta+1)/\alpha$.
Comparing with \eqref{value-I1} shows that $I_{1} = I_{2}$ is equivalent to \begin{equation} \Gamma(s) a( - s) = \sum_{n} \phi_{n} \Gamma(n+s) b(n) c(-s-n), \end{equation} \noindent that is, \begin{equation} a(-s) = \sum_{n} \phi_{n} (s)_{n} b(n) c(-s-n). \label{required-1} \end{equation} \noindent The identity \eqref{required-1} is the extension of \eqref{identity-2} from $n \in \mathbb{N}$ to $s \in \mathbb{C}$. This extension is part of the requirements on the functions $f$ explained in Note \ref{note-required}. The proof is complete. \end{proof}

\smallskip{}

It is direct to extend the result to the case of a finite number of factors.

\begin{theorem} \label{thm:Independence} Assume $f$ admits a representation of the form $f\left(x\right)=\overset{r}{\underset{i=1}{\prod}}f_{i}\left(x\right)$. Then the value of the integral, obtained by the method of brackets, is the same for all such series representations. \end{theorem}

\section{The exponential integral}
\label{sec-expi}
\setcounter{equation}{0}

The \textit{exponential integral function} is defined by the integral formula \begin{equation} \text{Ei}(-x) = - \int_{1}^{\infty} \frac{\exp(- x t )}{t} \, dt, \text{ for } x > 0. \end{equation} (See \cite[$8.211.1$]{gradshteyn-2015a}). The method of brackets is now used to produce a non-classical series for this function. Start by replacing the exponential function by its power series to obtain \begin{equation} \label{ei-series1} \text{Ei}(-x) = - \sum_{n_{1}} \phi_{n_{1}} x^{n_{1}} \int_{1}^{\infty} t^{n_{1}-1} \, dt \end{equation} \noindent and then use the method of brackets to produce \begin{equation*} \int_{1}^{\infty} t^{n_{1}-1} \, dt = \int_{0}^{\infty} (y+1)^{n_{1}-1} \, dy = \sum_{n_{2}, \, n_{3}} \phi_{n_{2}n_{3}} \frac{ \langle - n_{1} + 1 + n_{2} + n_{3} \rangle \, \langle n_{2} +1 \rangle } {\Gamma(-n_{1}+1)}. \end{equation*} \noindent Replace this in \eqref{ei-series1} to obtain \begin{equation} \text{Ei}(-x) = - \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1}n_{2}n_{3}} x^{n_{1}} \frac{\langle -n_{1}+1+n_{2}+n_{3} \rangle \,\, \langle n_{2} +1 \rangle }{\Gamma(-n_{1}+1)}. \end{equation} The evaluation of this series by the method of brackets generates two identical terms for $\text{Ei}(-x)$: \begin{equation} \text{Ei}(-x) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n \, \Gamma(n+1)} x^{n}. \label{divergent-ei} \end{equation} \noindent Only one of them is kept, according to Rule $E_{4}$. This is a partially divergent series (from the value at $n=0$), written as \begin{equation} \text{Ei}(-x) = \sum_{n} \phi_{n} \frac{x^{n}}{n}. \label{pds-ei1} \end{equation} The next example illustrates how to use this partially divergent series in the evaluation of an integral.

\begin{example} Entry $6.223$ of \cite{gradshteyn-2015a} gives the Mellin transform of the exponential integral as \begin{equation} \int_{0}^{\infty} x^{\mu-1} \text{Ei}(- b x) \, dx = - \frac{b^{-\mu}}{\mu} \Gamma(\mu). \end{equation} \noindent To verify this, use the partially divergent series \eqref{pds-ei1} and the method of brackets to obtain \begin{eqnarray} \int_{0}^{\infty} x^{\mu-1} \text{Ei}(-bx) \, dx & = & \sum_{n} \phi_{n} \frac{b^{n}}{n} \int_{0}^{\infty} x^{\mu+n-1} \, dx \\ & = & \sum_{n} \phi_{n} \frac{b^{n}}{n} \langle \mu + n \rangle \nonumber \\ & = & - \frac{b^{-\mu}}{\mu} \Gamma(\mu), \nonumber \end{eqnarray} \noindent as claimed.
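The entry can also be checked numerically. The following \texttt{mpmath} sketch, with arbitrarily chosen parameter values, compares both sides; it is a sanity check of the result, not part of the method itself:

\begin{verbatim}
# Numerical check of entry 6.223 (mu and b chosen arbitrarily).
from mpmath import mp, mpf, quad, ei, gamma, inf

mp.dps = 30
mu, b = mpf("0.75"), mpf("2.5")
lhs = quad(lambda x: x**(mu - 1) * ei(-b * x), [0, 1, inf])
rhs = -b**(-mu) * gamma(mu) / mu
print(lhs)   # both printouts agree to the working precision
print(rhs)
\end{verbatim}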
\end{example}

\begin{example} Entry $6.228.2$ in \cite{gradshteyn-2015a} is \begin{equation} \label{formG} G(\nu,\mu,\beta) = \int_{0}^{\infty} x^{\nu-1} e^{-\mu x} \text{Ei}(-\beta x) \, dx = - \frac{\Gamma(\nu)}{\nu (\beta + \mu)^{\nu}} \pFq21{1 \,\, \,\, \nu}{\nu+1}{\frac{\mu}{\beta+\mu}}. \end{equation} \noindent The partially divergent series \eqref{pds-ei1} is now used to establish this formula. First form the bracket series \begin{equation} G(\nu, \mu, \beta) = \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{\beta^{n_{1}} \mu^{n_{2}}}{n_{1}} \langle n_{1} + n_{2} + \nu \rangle. \end{equation} \noindent Rule $E_{1}$ yields two cases from the equation $n_{1}+n_{2}+\nu=0$:

\smallskip

\noindent \textit{Case 1}: $n_{2} = - n_{1} - \nu$ produces \begin{equation} T_{1} = \mu^{-\nu} \sum_{n_{1}=0}^{\infty} \frac{(-1)^{n_{1}}}{n_{1}!} \frac{\Gamma(n_{1}+\nu)}{n_{1}} \left( \frac{\beta}{\mu} \right)^{n_{1}}, \end{equation} \noindent which is discarded since it is partially divergent (due to the term $n_{1}=0$).

\smallskip

\noindent \textit{Case 2}: $n_{1} = -n_{2}-\nu$ gives \begin{equation} \label{4-11} T_{2} = - \beta^{-\nu} \sum_{n_{2}=0}^{\infty} \frac{(-1)^{n_{2}}}{n_{2}!} \left( \frac{\mu}{\beta} \right)^{n_{2}} \frac{\Gamma(n_{2}+\nu)}{n_{2}+\nu}, \end{equation} \noindent and using \begin{equation} \Gamma(n_{2}+\nu) = (\nu)_{n_{2}} \Gamma(\nu) \text{ and } n_{2}+\nu = \frac{\Gamma(n_{2}+\nu+1)}{\Gamma(n_{2}+\nu)} = \frac{(\nu+1)_{n_{2}} \Gamma(\nu+1)}{(\nu)_{n_{2}} \Gamma(\nu)} \end{equation} \noindent equation \eqref{4-11} becomes \begin{eqnarray} T_{2} & = & - \frac{\Gamma(\nu)}{\nu \, \beta^{\nu}} \sum_{n_{2}=0}^{\infty} \frac{(\nu)_{n_{2}} (\nu)_{n_{2}}}{n_{2}! \, (\nu+1)_{n_{2}}} \left( - \frac{\mu}{\beta} \right)^{n_{2}} \\ & = & - \frac{\Gamma(\nu)}{\nu \, \beta^{\nu}} \pFq21{\nu \,\,\,\,\, \nu}{\nu+1}{ - \frac{\mu}{\beta}}. \nonumber \end{eqnarray} \noindent The condition $|\mu| < |\beta|$ is imposed to guarantee the convergence of the series. Finally, the transformation rule (see entry $9.131.1$ in \cite{gradshteyn-2015a}) \begin{equation} \pFq21{\alpha \,\,\,\, \beta}{\gamma}{z} = (1-z)^{-\alpha} \pFq21{\alpha \,\,\,\, \gamma - \beta}{\gamma} {\frac{z}{z-1}} \end{equation} \noindent with $\alpha = \beta = \nu, \, \gamma = \nu+1$ and $z = - \mu/\beta$ yields \eqref{formG}. \end{example}

\begin{example} The next evaluation is entry $6.232.2$ in \cite{gradshteyn-2015a}: \begin{equation} G(a,b) = \int_{0}^{\infty} \text{Ei}(- a x) \cos bx \, dx = - \frac{1}{b} \tan^{-1} \left( \frac{b}{a} \right). \label{G-form1} \end{equation} \noindent A direct application of the method of brackets using \begin{equation} \cos x = \pFq01{-}{\tfrac{1}{2}}{- \frac{x^{2}}{4}} \label{cosine-hyper} \end{equation} gives \begin{equation} G(a,b) = \sqrt{\pi} \sum_{n_{1}, n_{2}} \phi_{n_{1},n_{2}} \frac{b^{2n_{1}}a^{n_{2}}}{2^{2n_{1}} \Gamma( n_{1} + \tfrac{1}{2}) \, n_{2}} \langle 2n_{1} + n_{2} + 1 \rangle. \end{equation} \noindent This produces two series for $G(a,b)$: \begin{equation} T_{1} = \frac{\sqrt{\pi}}{b} \sum_{n_{2}=0}^{\infty} \frac{(-1)^{n_{2}}}{n_{2}!} \frac{\Gamma(\tfrac{1}{2}(n_{2}+1))} {n_{2} \, \Gamma( - \frac{1}{2}n_{2})} \left( \frac{2a}{b} \right)^{n_{2}}, \label{form-T1} \end{equation} \noindent and \begin{equation} T_{2} = - \frac{\sqrt{\pi}}{a} \sum_{n_{1}=0}^{\infty} \frac{(-1)^{n_{1}}}{n_{1}!} \frac{\Gamma(2n_{1}+1)}{(2n_{1}+1) \Gamma(n_{1} + \tfrac{1}{2}) } \left( \frac{b^{2}}{4a^{2}} \right)^{n_{1}}. \end{equation} \noindent The analysis begins with a simplification of $T_{2}$.
Use the duplication formula for the gamma function
\begin{equation}
\frac{\Gamma(2u)}{\Gamma(u)} = \frac{2^{2u-1}}{\sqrt{\pi}} \Gamma(u + \tfrac{1}{2})
\end{equation}
\noindent
and write
\begin{equation}
\frac{1}{2n_{1}+1 }= \frac{(1)_{n_{1}} \left( \tfrac{1}{2} \right)_{n_{1}}}{n_{1}! \, \left( \tfrac{3}{2} \right)_{n_{1}}}
\end{equation}
\noindent
to obtain
\begin{equation}
T_{2} = - \frac{1}{a} \pFq21{1 \,\,\, \tfrac{1}{2}}{\tfrac{3}{2}}{ - \frac{b^{2}}{a^{2}}},
\end{equation}
\noindent
provided $|b|<|a|$ to guarantee convergence. The form \eqref{G-form1} comes from the identity
\begin{equation}
\pFq21{ \tfrac{1}{2} \,\,\, 1}{\tfrac{3}{2}}{-z^{2}} = \frac{\tan^{-1} z}{z}
\end{equation}
(see $9.121.27$ in \cite{gradshteyn-2015a}).

\smallskip

The next step is the evaluation of $T_{1}$. Separating the sum \eqref{form-T1} into even and odd indices yields
\begin{eqnarray}
T_{1} & = & \frac{\sqrt{\pi}}{2 b} \sum_{n=0}^{\infty} \frac{1}{(2n)!} \frac{\Gamma \left( n + \tfrac{1}{2} \right) }{n \Gamma(-n)} \left( \frac{4a^{2}}{b^{2}} \right)^{n} \\
& & - \frac{\sqrt{\pi}}{b} \sum_{n=0}^{\infty} \frac{1}{(2n+1)!} \frac{\Gamma(n+1)}{(2n+1) \Gamma \left( -n - \tfrac{1}{2} \right)} \left( \frac{2a}{b} \right)^{2n+1}, \nonumber
\end{eqnarray}
\noindent
and in hypergeometric form
\begin{eqnarray}
T_{1} & = & - \frac{\pi}{2b} \,\, \pFq21{0 \,\,\, \tfrac{1}{2}}{\tfrac{1}{2}}{- \frac{a^{2}}{b^{2}}} + \frac{a}{b^{2}} \,\, \pFq21{\tfrac{1}{2} \,\,\, 1 }{\tfrac{3}{2}}{ - \frac{a^{2}}{b^{2}}} \\
& = & - \frac{\pi}{2b} + \frac{1}{b} \tan^{-1} \left( \frac{a}{b} \right). \nonumber
\end{eqnarray}
\noindent
Since $\tan^{-1}(a/b) + \tan^{-1}(b/a) = \pi/2$, this is the same as \eqref{G-form1}.

\smallskip

The evaluation of entry $6.232.1$ in \cite{gradshteyn-2015a}
\begin{equation}
\int_{0}^{\infty} \text{Ei}(- a x) \sin bx \, dx = - \frac{1}{2b} \ln \left( 1 + \frac{b^{2}}{a^{2}} \right)
\end{equation}
\noindent
is obtained in a similar manner.
\end{example}

\begin{example}
Entry $6.782.1$ in \cite{gradshteyn-2015a} is
\begin{equation}
B(z) = \int_{0}^{\infty} \text{Ei}(-x) J_{0}(2 \sqrt{zx}) \, dx = \frac{e^{-z}-1}{z}.
\end{equation}
\noindent
Here
\begin{equation}
J_{0}(x) = \pFq01{-}{1}{-\frac{x^{2}}{4}}
\end{equation}
\noindent
is the classical Bessel function defined in \eqref{bessel-j0a}. Therefore
\begin{equation}
J_{0}(2 \sqrt{z x }) = \sum_{n_{2}} \phi_{n_{2}} \frac{z^{n_{2}}}{\Gamma(n_{2}+1)} x^{n_{2}}.
\end{equation}
\noindent
The standard procedure using the partially divergent series \eqref{divergent-ei} now gives
\begin{equation}
B(z) = \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{1}{n_{1}} \frac{z^{n_{2}}}{\Gamma(n_{2}+1)} \langle n_{1}+n_{2}+1 \rangle,
\end{equation}
\noindent
which gives the convergent series
\begin{equation}
T_{1} = - \sum_{n_{1}=0}^{\infty} \frac{(-1)^{n_{1}}}{n_{1}!} \frac{(1)_{n_{1}}}{(2)_{n_{1}}} z^{n_{1}} = - \pFq11{1}{2}{-z} = \frac{e^{-z}-1}{z},
\end{equation}
\noindent
and the series
\begin{equation}
T_{2} = - \frac{1}{z} \sum_{n_{2}=0}^{\infty} \frac{(-z)^{-n_{2}}}{\Gamma(1- n_{2})}.
\end{equation}
\noindent
Observe that the expression $T_{2}$ contains a single non-vanishing term, so it is of the partially null type.
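\smallskip

Before analyzing the alternative form of $T_{2}$, the closed form just established is easy to test numerically. The following is a minimal sketch, again assuming \texttt{mpmath}; the value of $z$ is an arbitrary choice.
\begin{verbatim}
# Numerical check of entry 6.782.1; a sketch, mpmath assumed.
from mpmath import mp, quad, ei, besselj, sqrt, exp, inf

mp.dps = 25
z = mp.mpf('1.7')
lhs = quad(lambda x: ei(-x)*besselj(0, 2*sqrt(z*x)), [0, inf])
rhs = (exp(-z) - 1)/z
print(lhs - rhs)                 # consistent with zero
\end{verbatim}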
An alternative form of $T_{2}$ is to write
\begin{eqnarray}
T_{2} & = & - \frac{1}{z} \sum_{n_{2}=0}^{\infty} \frac{(-z^{-1})^{n_{2}}}{\Gamma(1-n_{2})} \\
& = & - \frac{1}{z} \sum_{n_{2}=0}^{\infty} \frac{(-z^{-1})^{n_{2}} }{\Gamma(1) \, (1)_{-n_{2}}} \nonumber \\
& = & -\frac{1}{z} \sum_{n_{2}=0}^{\infty} (0)_{n_{2}} \,\, (1)_{n_{2}} \,\, \frac{(z^{-1})^{n_{2}}}{n_{2}!} \nonumber \\
& = & - \frac{1}{z} \pFq20{0 \,\,\, 1}{-}{\frac{1}{z}}. \nonumber
\end{eqnarray}
\noindent
The series $\begin{displaystyle}\pFq20{a \,\, b}{-}{z} \end{displaystyle}$ diverges, unless one of the parameters $a$ or $b$ is a non-positive integer, in which case the series terminates and it reduces to a polynomial. This is precisely what happens here: only the term for $n_{2}=0$ is non-vanishing and $T_{2}$ reduces to
\begin{equation}
T_{2} = - \frac{1}{z}.
\end{equation}
\noindent
This gives the asymptotic behavior $B(z) \sim - 1/z$, consistent with the value of $T_{1}$ for large $z$. This phenomenon occurs every time one obtains a series of the form ${_{p}F_{q}}(z)$ with $p \geq q+2$, where the series diverges. The truncation represents an asymptotic approximation of the solution.
\end{example}

\section{The Tricomi function}
\label{sec-tricomi}
\setcounter{equation}{0}

The confluent hypergeometric function, denoted by $\pFq11{a}{c}{z}$, defined in \eqref{hyper-def}, arises when two of the regular singular points of the differential equation for the Gauss hypergeometric function $\pFq21{a \,\, b}{c}{z}$, given by
\begin{equation}
z(1-z)y''+(c-(a+b+1)z)y'-aby=0,
\end{equation}
are allowed to merge into one singular point. More specifically, if we replace $z$ by $z/b$ in $\pFq21{a \,\, b }{c}{z}$, then the corresponding differential equation has singular points at $0$, $b$ and $\infty$. Now let $b\to\infty$ so as to have infinity as a confluence of two singularities. This results in the function $\pFq11{a}{c}{z}$ so that
\begin{equation}
\pFq11{a}{c}{z} =\lim_{b\to\infty} \pFq21{a \,\,\, b}{c}{\frac{z}{b}},
\end{equation}
and the corresponding differential equation
\begin{equation}\label{che}
zy''+(c-z)y'-ay=0,
\end{equation}
known as the confluent hypergeometric equation. Evaluations of integrals connected to this equation are provided in \cite{dixit-2015f}. The equation \eqref{che} has two linearly independent solutions:
\begin{equation}
M(a,b;x) = \pFq11{a}{b}{x},
\end{equation}
\noindent
known as the Kummer function, and the \textit{Tricomi function}, with integral representation
\begin{equation}
U(a,b;x) = \frac{1}{\Gamma(a)} \int_{0}^{\infty} t^{a-1} \exp(-xt) (1+t)^{b-a-1} \, dt,
\end{equation}
\noindent
and hypergeometric form
\begin{equation}
\label{hyper-tricomi}
U(a,b;x) = \frac{\Gamma(b-1)}{\Gamma(a)} x^{1-b} \pFq11{1+a-b}{2-b}{x} + \frac{\Gamma(1-b)}{\Gamma(1+a-b)} \pFq11{a}{b}{x}.
\end{equation}

\medskip

A direct application of the method of brackets gives
\begin{eqnarray*}
U(a,b;x) & = & \frac{1}{\Gamma(a)} \int_{0}^{\infty} t^{a-1} \left( \sum_{n_{1}} \phi_{n_{1}} x^{n_{1}} t^{n_{1}} \right) \left( \sum_{n_{2},n_{3}} \phi_{n_{2},n_{3}} t^{n_{3}} \frac{ \langle 1+ a - b + n_{2} + n_{3} \rangle }{\Gamma(1+a-b)} \right) \, dt \\
& = & \frac{1}{\Gamma(a)} \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} x^{n_{1}} \frac{ \langle 1 + a -b +n_{2} + n_{3} \rangle }{\Gamma(1+a-b)} \langle a + n_{1} + n_{3} \rangle.
\end{eqnarray*}
This is a bracket series of index $1$ and its evaluation produces three terms:
\begin{eqnarray*}
U_{1}(a,b;x) & = & \frac{\Gamma(1-b)}{\Gamma(1+a-b)} \pFq11{a}{b}{x}, \\
U_{2}(a,b;x) & = & \frac{\Gamma(b-1)}{\Gamma(a)} x^{1-b} \pFq11{1+a-b}{2-b}{x}, \\
U_{3}(a,b;x) & = & x^{-a} \, \pFq20{a \,\,\,\, 1+a-b}{-}{- \frac{1}{x}}.
\end{eqnarray*}
The first two series converge for every $x$ and their sum yields \eqref{hyper-tricomi}. The series $U_{3}$ is formally divergent: its terms are finite, but the series itself diverges.

\begin{example}
The Mellin transform of the Tricomi function is given by
\begin{equation}
I(a,b;\beta) = \int_{0}^{\infty} x^{\beta-1} U(a,b,x) \, dx.
\label{mellin-tricomi}
\end{equation}
\noindent
Entry $7.612.1$ of \cite{gradshteyn-2015a}
\begin{equation}
\label{hyper-11}
\int_{0}^{\infty} x^{\beta-1} \pFq11{a}{b}{-x} \, dx = \frac{\Gamma(\beta) \Gamma(a - \beta) \Gamma(b)}{\Gamma(b-\beta) \Gamma(a)}
\end{equation}
\noindent
is used in the evaluation of $I(a,b;\beta)$. A proof of \eqref{hyper-11} appears in \cite{dixit-2015f}.

\smallskip

The first evaluation of \eqref{mellin-tricomi} uses the hypergeometric representation \eqref{hyper-tricomi} and the formula \eqref{hyper-11}. This is a traditional computation. Direct substitution gives
\begin{eqnarray*}
I(a,b;\beta) & = & \frac{\Gamma(b-1)}{\Gamma(a)} \int_{0}^{\infty} x^{\beta - b} \pFq11{1+a-b}{2-b}{x} \, dx + \\
& & \quad \quad \frac{\Gamma(1-b)}{\Gamma(1+a-b)} \int_{0}^{\infty} x^{\beta-1} \pFq11{a}{b}{x} \, dx \\
& = & -(-1)^{-\beta+b} \frac{\Gamma(b-1)}{\Gamma(a)} \frac{\Gamma(\beta-b+1) \Gamma(a- \beta) \Gamma(2-b)}{\Gamma(1+a-b) \Gamma(1- \beta)} \\
& & \quad + (-1)^{-\beta} \frac{\Gamma(1-b) }{\Gamma(1+a-b)} \frac{\Gamma(\beta) \Gamma(a - \beta) \Gamma(b)}{\Gamma(b- \beta) \Gamma(a)}.
\end{eqnarray*}
\noindent
The result
\begin{equation}
\label{mellin-U}
\int_{0}^{\infty} x^{\beta-1} U(a,b,x) \, dx = \frac{\Gamma(a - \beta) \Gamma(\beta - b + 1) \Gamma(\beta)}{\Gamma(a) \Gamma(a-b+1)}
\end{equation}
\noindent
follows from simplification of the previous expression.

\smallskip

The second evaluation of \eqref{mellin-tricomi} uses the method of brackets and the divergent series $U_{3}$. It produces the result directly. Start with
\begin{eqnarray*}
I(a,b;\beta) & = & \int_{0}^{\infty} x^{\beta-1} U(a,b,x) \, dx \\
& = & \int_{0}^{\infty} x^{\beta - a -1} \pFq20{a \,\, \, 1 + a - b }{-}{- \frac{1}{x}} \, dx \\
& = & \sum_{n} \phi_{n} (a)_{n} (1+a-b)_{n} \langle \beta - a - n \rangle.
\end{eqnarray*}
\noindent
A standard evaluation by the method of brackets now reproduces \eqref{mellin-U}.
\end{example}

\begin{example}
The evaluation of
\begin{equation}
J(a,b;\mu) = \int_{0}^{\infty} e^{- \mu x } U(a,b,x) \, dx
\end{equation}
\noindent
is given next. Start with the expansions
\begin{equation}
\exp(- \mu x) = \sum_{n_{1}} \phi_{n_{1}} \mu^{n_{1}} x^{n_{1}}
\label{exp-bracket}
\end{equation}
\noindent
and
\begin{eqnarray*}
U(a,b,x) & = & x^{-a} \, \pFq20{a \,\,\, 1+a-b}{-}{- \frac{1}{x}} \\
& = & \frac{x^{-a}}{\Gamma(a) \Gamma(1+a-b)} \sum_{n_{2}} \phi_{n_{2}} \Gamma(a+ n_{2}) \Gamma(1+a-b+n_{2}) x^{-n_{2}}, \nonumber
\end{eqnarray*}
\noindent
to write
\begin{equation*}
J(a,b; \mu) = \frac{1}{\Gamma(a) \Gamma(1+a-b)} \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}}\mu^{n_{1} }\Gamma(a+ n_{2}) \Gamma(1+a-b+n_{2}) \langle n_{1}-a -n_{2} + 1 \rangle.
\end{equation*}
\noindent
This yields the two series
\begin{eqnarray*}
J_{1}(a,b;\mu) & = & \frac{1}{\Gamma(a) \Gamma(1+a-b)} \sum_{n} \phi_{n} \Gamma(a-1-n) \Gamma(n+1) \Gamma(2-b+n) \mu^{n} \\
& = & \frac{\Gamma(2-b)}{(a-1) \Gamma(1+a-b)} \pFq21{1 \,\,\, 2-b}{2-a}{\mu}, \nonumber
\end{eqnarray*}
\noindent
and
\begin{eqnarray*}
J_{2}(a,b;\mu) & = & \frac{\mu^{a-1}}{\Gamma(a) \Gamma(1+a-b)} \sum_{n} \phi_{n} \Gamma(-a+1-n) \Gamma(a+n) \Gamma(1+a-b+n) \mu^{n} \\
& = & \mu^{a-1} \Gamma(1-a) \pFq10{1+a-b}{-}{\mu} \\
& = & \frac{\mu^{a-1} \, \Gamma(1-a)}{(1-\mu)^{1+a-b}}.
\end{eqnarray*}

\smallskip

In the case $| \mu | < 1$, both $J_{1}$ and $J_{2}$ are convergent. Therefore
\begin{equation*}
\int_{0}^{\infty} \exp(- \mu x) U(a,b,x) \, dx = \frac{\Gamma(2-b)}{(a-1) \Gamma(1+a-b)} \pFq21{1 \,\,\, 2-b}{2-a}{\mu} + \frac{\mu^{a-1} \, \Gamma(1-a)}{(1-\mu)^{1+a-b}}.
\end{equation*}

\smallskip

In the case $\mu=1$, the series $J_{2}$ diverges, so it is discarded. This produces
\begin{equation}
\int_{0}^{\infty} e^{-x} U(a,b,x) \, dx = \frac{\Gamma(2-b)}{(a-1) \Gamma(1+a-b)} \pFq21{1 \,\,\, 2-b}{2-a}{1}.
\end{equation}
\noindent
Gauss' value \eqref{gauss-value} gives
\begin{equation}
\int_{0}^{\infty} e^{-x} U(a,b,x) \, dx = \frac{\Gamma(2-b)}{\Gamma(2-b+a)}.
\end{equation}
\noindent
In particular, if $a$ is a positive integer, say $a= k$, then
\begin{equation}
\int_{0}^{\infty} e^{-x} U(k,b,x) \, dx = \frac{1}{(2-b)_{k}}.
\end{equation}
\noindent
This result is summarized next.
\begin{proposition}
Let
\begin{equation}
J(a,b;\mu) = \int_{0}^{\infty} e^{-\mu x} U(a,b,x) \, dx.
\end{equation}
\noindent
Then, for $| \mu |< 1$,
\begin{equation}
J(a,b;\mu) = \frac{\Gamma(2-b)}{(a-1) \Gamma(1+a-b)} \pFq21{1 \,\,\, 2-b}{2-a}{\mu} + \frac{\mu^{a-1} \, \Gamma(1-a)}{(1-\mu)^{1+a-b}},
\end{equation}
\noindent
and for $\mu=1$,
\begin{equation}
J(a,b;1) = \frac{\Gamma(2-b)}{\Gamma(2-b+a)}.
\end{equation}
\noindent
In the special case $a=k \in \mathbb{N}$,
\begin{equation}
J(k,b;1) = \frac{1}{(2-b)_{k}}.
\end{equation}
\end{proposition}
\end{example}

\section{The Airy function}
\label{sec-airy}
\setcounter{equation}{0}

The Airy function, defined by the integral representation
\begin{equation}
\text{Ai}(x) = \frac{1}{\pi} \int_{0}^{\infty} \cos \left( \frac{t^{3}}{3} + x t \right) \, dt
\end{equation}
\noindent
satisfies the equation
\begin{equation}
\label{airy-ode}
\frac{d^{2}y}{dx^{2}} - x y = 0,
\end{equation}
\noindent
and the condition $y \to 0$ as $x \to \infty$. A second linearly independent solution of \eqref{airy-ode} is usually taken to be
\begin{equation}
\text{Bi}(x) = \frac{1}{\pi} \int_{0}^{\infty} \left[ \exp\left( - \frac{t^{3}}{3} + x t \right) + \sin \left( \frac{t^{3}}{3} + x t \right) \right]\, dt.
\end{equation}
Using \eqref{cosine-hyper} produces
\begin{eqnarray*}
\text{Ai}(x) & = & \frac{1}{\pi} \sum_{n_{1}} \phi_{n_{1}} \frac{1}{\left( \tfrac{1}{2} \right)_{n_{1}} \, 2^{2n_{1}}} \int_{0}^{\infty} \left( \frac{t^{3}}{3} + x t \right)^{2n_{1}} \, dt \\
& = & \frac{1}{\pi} \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{x^{n_{2}} \langle -2n_{1}+n_{2} + n_{3} \rangle } { \left( \tfrac{1}{2} \right)_{n_{1}} \, 2^{2n_{1}} \, \Gamma(-2n_{1}) 3^{n_{3}}} \int_{0}^{\infty} t^{3 n_{3} + n_{2}} \, dt \nonumber \\
& = & \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{x^{n_{2}}}{\sqrt{\pi} \Gamma(-2n_{1}) \Gamma \left( \tfrac{1}{2} + n_{1} \right) 2^{2n_{1}} 3^{n_{3}} } \langle -2n_{1} + n_{2} + n_{3} \rangle \, \langle 3 n_{3} + n_{2} + 1 \rangle.
\nonumber
\end{eqnarray*}
The usual resolution of this bracket series gives three cases:
\noindent
\begin{equation}
T_{1} = \frac{1}{2} \sqrt{ \frac{3}{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \frac{\Gamma(- \tfrac{1}{2} - 3n) } {\Gamma(-2n)} \left( \frac{3}{4} \right)^{n} x^{3n+ 1/2}
\end{equation}
\noindent
a totally null series,
\begin{equation}
T_{2} = \frac{1}{6^{2/3} \, \sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \frac{\Gamma( \tfrac{1}{6} - \tfrac{n}{3} ) }{\Gamma( \tfrac{1}{3} - \tfrac{2n}{3} )} \left( \frac{3}{4} \right)^{n/3} x^{n}
\end{equation}
\noindent
a convergent series (the terms with $n \equiv 2 \pmod{3}$ vanish), and

\smallskip

\begin{equation}
T_{3} = \frac{1}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \frac{\Gamma(3n+1) \Gamma(n+ \tfrac{1}{2})}{\Gamma(-n) \Gamma(2n+1)} \left( \frac{4}{3} \right)^{n} x^{-3n-1}
\end{equation}
\noindent
a totally null series, as $T_{1}$ was.
\begin{example}
The series for $\text{Ai}(x)$ are now used to evaluate the Mellin transform
\begin{equation}
I(s) = \int_{0}^{\infty} x^{s -1 } \text{Ai}(x) \, dx.
\end{equation}
\noindent
This integral is now computed using the three series $T_{j}$ given above. Using first the value of $T_{1}$ and the formulas
\begin{equation}
\Gamma(2u) = \frac{2^{2u-1}}{\sqrt{\pi}} \Gamma(u) \Gamma(u + \tfrac{1}{2} ) \text{ and } \Gamma(3u) = \frac{3^{3u-\tfrac{1}{2}}}{2 \pi} \Gamma(u) \Gamma(u + \tfrac{1}{3}) \Gamma( u + \tfrac{2}{3})
\end{equation}
\noindent
(these appear as $8.335.1$ and $8.335.2$ in \cite{gradshteyn-2015a}, respectively), gives
\begin{eqnarray}
I(s) & = & \frac{1}{2} \sqrt{ \frac{3}{\pi}} \sum_{n} \phi_{n} \left( \frac{3}{4} \right)^{n} \frac{\Gamma( - \tfrac{1}{2} - 3n)}{\Gamma(-2n)} \langle s+ 3 n + \tfrac{1}{2} \rangle \\
& = & \frac{1}{6} \sqrt{\frac{3}{\pi}} \left( \frac{3}{4} \right)^{-s/3 - 1/6} \frac{ \Gamma \left( \tfrac{2s + 1}{6} \right) \Gamma(s) }{\Gamma \left( \frac{2 s+1}{3} \right) } \nonumber \\
& = & 3^{-(s+2)/3} \frac{\Gamma(s)}{\Gamma( \frac{s+2}{3})} \nonumber \\
& = & \frac{3^{(4 s - 7)/6} }{2 \pi} \Gamma \left( \frac{s +1}{3} \right) \Gamma\left( \frac{s}{3} \right). \nonumber
\end{eqnarray}
\noindent
Similar calculations, using $T_{2}$ or $T_{3}$, give the same result. This result is stated next.
\begin{lemma}
The Mellin transform of the Airy function is given by
\begin{equation}
\int_{0}^{\infty} x^{s -1 } {\rm{Ai}}(x) \, dx = \frac{1}{2 \pi} 3^{(4 s - 7)/6}\Gamma \left( \frac{s +1}{3} \right) \Gamma\left( \frac{s}{3} \right).
\end{equation}
\end{lemma}
\end{example}

\section{The Bessel function $K_{\nu}$}
\label{sec-bessel-nu}
\setcounter{equation}{0}

This section presents series representations for the Bessel function $K_{\nu}(x)$ defined by the integral representation
\begin{equation}
K_{\nu}(x) = \frac{2^{\nu} \Gamma(\nu+ \tfrac{1}{2})}{\Gamma(\tfrac{1}{2})} x^{\nu} \int_{0}^{\infty} \frac{\cos t \, dt } {(x^{2}+t^{2})^{\nu+ \tfrac{1}{2}}},
\end{equation}
\noindent
given as entry $8.432.5$ in \cite{gradshteyn-2015a}. Using the representation \eqref{cosine-hyper} of $\cos t$ as $\begin{displaystyle} \pFq01{-}{\tfrac{1}{2}}{- \frac{t^{2}}{4}} \end{displaystyle}$ and using Rule $P_{2}$ in Section \ref{sec-method} to expand the binomial in the integrand as a bracket series gives
\begin{equation}
K_{\nu}(x) = 2^{\nu} \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{x^{2n_{3}+\nu}}{2^{2n_{1}} \Gamma(n_{1} + \tfrac{1}{2}) } \langle \nu+ \tfrac{1}{2} + n_{2} + n_{3} \rangle \langle 2 n_{1} + 2n_{2} + 1 \rangle.
\end{equation}
\noindent
The usual procedure to evaluate this bracket series gives three expressions:
\begin{eqnarray}
T_{1} & = & 2^{\nu-1} x^{-\nu} \sum_{n} \phi_{n} \Gamma(\nu - n) \left( \frac{x^{2}}{4} \right)^{n}, \\
T_{2} & = & 2^{-1-\nu} x^{\nu} \sum_{n} \phi_{n} \Gamma(-\nu -n) \left( \frac{x^{2}}{4} \right)^{n}, \nonumber \\
T_{3} & = & 2^{\nu} \sum_{n} \phi_{n} \frac{2^{2n}}{\Gamma(-n)} \Gamma(n+ \nu + \tfrac{1}{2}) \Gamma(n + \tfrac{1}{2}) x^{-2n-\nu-1}. \nonumber
\end{eqnarray}
The series $T_{3}$ is a totally null series for $K_{\nu}$. In the case $\nu \not \in \mathbb{N}$, the series $T_{1}$ and $T_{2}$ are convergent and $K_{\nu}(x) = T_{1}+T_{2}$ gives the usual expression in terms of the Bessel $I_{\nu}$ function
\begin{equation}
K_{\nu}(x) = \frac{\pi}{2} \frac{I_{- \nu}(x) - I_{\nu}(x)}{\sin \pi \nu},
\end{equation}
\noindent
as given in entry $8.485$ in \cite{gradshteyn-2015a}. In the case $\nu = k \in \mathbb{N}$, the series $T_{1}$ is partially divergent (the terms with $n \geq k$ have divergent coefficients) and the series $T_{2}$ is totally divergent (every coefficient is divergent). In the case $\nu=0$, both the series $T_{1}$ and $T_{2}$ become
\begin{equation}
\label{divergent-k0}
\text{Totally divergent series for } K_{0}(x) = \frac{1}{2} \sum_{n} \phi_{n} \Gamma(- n) \left( \frac{x^{2}}{4} \right)^{n},
\end{equation}
\noindent
using Rule $E_{4}$ to keep a single copy of the divergent series. This complements the
\begin{equation}
\label{null-k0}
\text{Totally null series for } K_{0}(x) = \sum_{n} \phi_{n} \frac{2^{2n}}{\Gamma(-n)} \Gamma^{2}(n+ \tfrac{1}{2}) x^{-2n-1}.
\end{equation}

\medskip

The examples presented below illustrate the use of these divergent series in the computation of definite integrals with the Bessel function $K_{0}$ in the integrand. Entries in \cite{gradshteyn-2015a} with $K_{0}$ as the result of an integral have been discussed in \cite{glasser-2012a}.

\begin{example}
\label{ex-k0-1}
Entry $6.511.12$ of \cite{gradshteyn-2015a} states that
\begin{equation}
\int_{0}^{\infty} K_{0}(x) \, dx = \frac{\pi}{2}.
\label{value-k0-1}
\end{equation}
\noindent
To verify this result, use the totally null representation \eqref{null-k0} to obtain
\begin{eqnarray}
\int_{0}^{\infty} K_{0}(x) \, dx & = & \sum_{n} \phi_{n} \frac{\Gamma \left( n + \tfrac{1}{2} \right)^{2}}{\Gamma(-n)} 4^{n} \int_{0}^{\infty} x^{-2n-1} \, dx \\
& = & \sum_{n} \phi_{n} \frac{\Gamma \left( n + \tfrac{1}{2} \right)^{2}}{\Gamma(-n)} 4^{n} \langle -2n \rangle. \nonumber
\end{eqnarray}
\noindent
The value of the bracket series is
\begin{eqnarray}
\int_{0}^{\infty} K_{0}(x) \, dx & = & \frac{1}{2} \Gamma \left(n + \tfrac{1}{2} \right)^{2} 4^{n} \Big{|}_{n=0} \\
& = & \frac{\pi}{2}. \nonumber
\end{eqnarray}
\end{example}

\begin{example}
The Mellin transform
\begin{equation}
G(\beta,s) = \int_{0}^{\infty} x^{s-1} K_{0}(\beta x) \, dx
\end{equation}
\noindent
is evaluated next. Example \ref{ex-k0-1} corresponds to the special case $s=\beta = 1$. The totally divergent series \eqref{divergent-k0} yields
\begin{equation}
G(\beta,s) = \frac{1}{2} \sum_{n} \phi_{n} \Gamma(-n) \frac{\beta^{2n}}{2^{2n}} \langle 2n + s \rangle
\end{equation}
\noindent
and a direct evaluation of the bracket series using Rule $E_{1}$ gives
\begin{equation}
G(\beta, s) = \frac{2^{s-2}}{\beta^{s}} \Gamma^{2} \left( \frac{s}{2} \right).
\label{mellin-k0}
\end{equation}
Now using the totally null representation \eqref{null-k0} gives the bracket series
\begin{equation}
G(\beta, s) = \sum_{n} \phi_{n} \frac{2^{2n} \Gamma^{2}(n + \tfrac{1}{2})}{\beta^{2n+1} \, \Gamma(-n)} \langle s -1-2n \rangle.
\end{equation}
\noindent
One more application of Rule $E_{1}$ gives \eqref{mellin-k0} again.
\end{example}

\begin{example}
Entry $6.611.9$ of \cite{gradshteyn-2015a} is
\begin{equation}
\label{formula-6-611-9}
\int_{0}^{\infty} e^{-ax} K_{0}(bx) \, dx = \frac{1}{\sqrt{b^{2}-a^{2}}} \cos^{-1}\left( \frac{a}{b} \right),
\end{equation}
\noindent
for $\mathop{\rm Re}\nolimits{(a+b)} > 0$. This is a generalization of Example \ref{ex-k0-1}. The totally divergent representation \eqref{divergent-k0} and the series for the exponential function \eqref{exp-bracket} give the bracket series
\begin{equation}
\int_{0}^{\infty} e^{-ax} K_{0}(bx) \, dx = \frac{1}{2} \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \Gamma(-n_{2}) \frac{a^{n_{1}}b^{2n_{2}}}{2^{2n_{2}}} \langle n_{1} + 2n_{2} + 1 \rangle.
\end{equation}
The usual procedure gives two expressions:
\noindent
\begin{equation}
T_{1} = \frac{1}{2a} \sum_{n} \phi_{n} \Gamma(2n+1) \Gamma(-n) \left( \frac{b^{2}}{4a^{2}} \right)^{n},
\end{equation}
\noindent
which is discarded since it is divergent, and
\begin{equation}
T_{2} = \frac{1}{2b} \sum_{n=0}^{\infty} \frac{\Gamma \left( \frac{n+1}{2} \right)^{2}}{n!} \left( - \frac{2a}{b} \right)^{n}.
\end{equation}
\noindent
Separating the series according to the parity of the index $n$ yields
\begin{equation}
T_{2} = \frac{1}{2b} \left[ \pi \sum_{n=0}^{\infty} \frac{ \left( \tfrac{1}{2} \right)_{n}}{n! }\left( \frac{a^{2}}{b^{2}} \right)^{n} - \frac{2a}{b} \sum_{n=0}^{\infty} \frac{(1)_{n}^{2}}{n! \, \left( \frac{3}{2} \right)_{n}} \left( \frac{a^{2}}{b^{2}} \right)^{n} \right].
\end{equation}
\noindent
The identity \cite[$9.121.1$]{gradshteyn-2015a}
\begin{equation}
\pFq21{-n,b}{b}{-z} = (1+z)^{n},
\end{equation}
\noindent
with $n = -\tfrac{1}{2}$ gives
\begin{equation}
\frac{\pi}{2b} \sum_{n=0}^{\infty} \frac{ \left( \tfrac{1}{2} \right)_{n}}{n!} \left( \frac{a^{2}}{b^{2}} \right)^{n} = \frac{\pi}{2} \frac{1}{\sqrt{b^{2}-a^{2}}}.
\end{equation}
\noindent
The identity
\begin{equation}
- \frac{a}{b^{2}} \sum_{n=0}^{\infty} \frac{(1)_{n}^{2}}{n! \, \left( \tfrac{3}{2} \right)_{n}} \left( \frac{a}{b} \right)^{2n} = - \frac{1}{\sqrt{b^{2}-a^{2}}} \sin^{-1} \left( \frac{a}{b} \right)
\end{equation}
\noindent
comes from the Taylor series
\begin{equation}
\frac{2x \sin^{-1}x}{\sqrt{1-x^{2}}} = \sum_{n=1}^{\infty} \frac{2^{2n}x^{2n}}{n \, \binom{2n}{n}}.
\end{equation}
(See Theorem $7.6.2$ in \cite{moll-2012a} for a proof). The usual argument now gives
\begin{equation}
T_{2} = \int_{0}^{\infty} e^{-ax} K_{0}(bx) \, dx = \frac{1}{\sqrt{b^{2}-a^{2}}} \left[ \frac{\pi}{2} - \sin^{-1}\left( \frac{a}{b} \right) \right],
\end{equation}
\noindent
an equivalent form of \eqref{formula-6-611-9}.
\end{example}

\begin{example}
The next example,
\begin{equation}
\int_{0}^{\infty} x \sin(bx) K_{0}(ax) \, dx = \frac{\pi b}{2} (a^{2}+b^{2})^{-3/2},
\end{equation}
\noindent
appears as entry $6.691$ in \cite{gradshteyn-2015a}.
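\smallskip

Before the factors in the integrand are expanded, the entry can be tested numerically. The following is a minimal sketch, assuming \texttt{mpmath}; the parameter values are arbitrary.
\begin{verbatim}
# Numerical check of entry 6.691; a sketch, mpmath assumed.
from mpmath import mp, quad, sin, besselk, pi, inf

mp.dps = 25
a, b = mp.mpf('2.0'), mp.mpf('1.5')
lhs = quad(lambda x: x*sin(b*x)*besselk(0, a*x), [0, inf])
rhs = pi*b/2 * (a**2 + b**2)**mp.mpf('-1.5')
print(lhs - rhs)                 # consistent with zero
\end{verbatim}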
The factor $\sin bx$ in the integrand is expressed as a series:
\begin{eqnarray}
\sin(bx) & =& b \, x \, \pFq01{-}{\tfrac{3}{2}}{- \frac{b^{2}x^{2}}{4}} \label{sin-1} \\
& = & b \Gamma \left( \tfrac{3}{2} \right) \sum_{n_{2}} \phi_{n_{2}} \frac{ \left( \frac{b^{2}}{4} \right)^{n_{2}}}{\Gamma \left( n_{2} + \tfrac{3}{2} \right)} x^{2n_{2}+1} \nonumber
\end{eqnarray}
\noindent
and the Bessel factor is replaced by its totally null representation \eqref{null-k0}
\begin{equation}
K_{0}(ax) = \frac{1}{a} \sum_{n_{1}} \phi_{n_{1}} \frac{\Gamma \left( n_{1} + \tfrac{1}{2} \right)^{2}}{\Gamma(-n_{1})} \left( \frac{4}{a^{2}} \right)^{n_{1}} x^{-2n_{1}-1}.
\end{equation}
\noindent
This yields
\begin{multline}
\int_{0}^{\infty} x \sin(bx) K_{0}(ax) \, dx = \\
\Gamma \left( \frac{3}{2} \right) \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{\Gamma \left( n_{1} + \tfrac{1}{2} \right)^{2}}{\Gamma \left( n_{2} + \tfrac{3}{2} \right) \Gamma(-n_{1})} \frac{4^{n_{1}-n_{2}} b^{2n_{2}+1}}{a^{2n_{1}+1}} \langle 2+ 2n_{2}-2n_{1} \rangle.
\end{multline}
\noindent
This representation produces two series $S_{1}$ and $S_{2}$, one per free index, that \textit{are identical}. The rules of the method of brackets state that only one of them should be taken. This is:
\begin{equation}
S_{1} = \frac{\sqrt{\pi} \, b}{a^{3}} \sum_{k=0}^{\infty} \frac{ \Gamma \left( k+ \tfrac{3}{2} \right) (-1)^k b^{2k} }{k! \, a^{2k}}.
\end{equation}
\noindent
The result now follows from the identity
\begin{equation}
\sum_{k=0}^{\infty} \frac{ \left( \tfrac{3}{2} \right)_{k}}{k!} \left( - \frac{b^{2}}{a^{2}} \right)^{k} = \pFq10{\tfrac{3}{2}}{-}{- \frac{b^{2}}{a^{2}}}
\end{equation}
\noindent
and the binomial theorem
\begin{equation}
\pFq10{\tfrac{3}{2}}{-}{x} = \frac{1}{(1-x)^{3/2}}.
\end{equation}
\end{example}

\begin{example}
The next example in this section evaluates
\begin{equation}
G(a,b) = \int_{0}^{\infty} J_{0}(ax) K_{0}(bx) \, dx.
\end{equation}
\noindent
From the representation
\begin{equation}
J_{0}(ax) = \sum_{n_{1}} \phi_{n_{1}} \frac{a^{2n_{1}} x^{2n_{1}}}{2^{2n_{1}} \Gamma(n_{1}+1)}
\end{equation}
\noindent
and the null series \eqref{null-k0} it follows that
\begin{equation}
G(a,b) = \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{a^{2n_{1}} 2^{2(n_{2}-n_{1})} \Gamma^{2}(n_{2}+ \tfrac{1}{2})}{\Gamma(n_{1}+1) \Gamma(-n_{2}) b^{2n_{2}+1}} \langle 2n_{1} - 2n_{2} \rangle.
\end{equation}
\noindent
This bracket series generates two identical series, so only one is kept to produce
\begin{eqnarray}
G(a,b) & = & \frac{1}{2b} \sum_{n} \phi_{n} \frac{\Gamma^{2}(n+ \tfrac{1}{2})}{\Gamma(n+1)} \left( \frac{a^{2}}{b^{2}} \right)^{n} \\
& = & \frac{\pi}{2b} \pFq21{\frac{1}{2} \,\,\, \frac{1}{2}}{1}{ - \frac{a^{2}}{b^{2}} } \nonumber \\
& = & \frac{1}{b} \mathbf{K} \left( \frac{i a}{b} \right). \nonumber
\end{eqnarray}
Here $\mathbf{K}(z)$ is the complete elliptic integral of the first kind. Using the identity
\begin{equation}
\mathbf{K}(i z) = \frac{1}{\sqrt{z^{2}+1}} \mathbf{K} \left( \frac{z}{\sqrt{z^{2}+1}} \right)
\end{equation}
\noindent
yields
\begin{equation}
G(a,b) = \frac{1}{\sqrt{a^{2}+b^{2}}} \mathbf{K} \left( \frac{a}{\sqrt{a^{2}+b^{2}}} \right).
\end{equation}
\end{example}

\begin{example}
The next example evaluates
\begin{equation}
H(a) = \int_{0}^{\infty} K_{0}^{2}(ax) \, dx.
\end{equation}
\noindent
Naturally $H(a) = H(1)/a$, but it is convenient to keep $a$ as a parameter. The problem is generalized to
\begin{equation}
H_{1}(a,b) = \int_{0}^{\infty} K_{0}(ax)K_{0}(bx) \, dx,
\end{equation}
\noindent
and $H(a) = H_{1}(a,a)$.
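\smallskip

Parenthetically, the closed form for $G(a,b)$ obtained in the previous example admits a direct numerical confirmation. The sketch below assumes \texttt{mpmath}; note that its \texttt{ellipk} takes the parameter $m = k^{2}$, not the modulus $k$.
\begin{verbatim}
# Numerical check of G(a,b); a sketch, mpmath assumed.
from mpmath import mp, quad, besselj, besselk, ellipk, sqrt, inf

mp.dps = 25
a, b = mp.mpf('1.0'), mp.mpf('2.0')
lhs = quad(lambda x: besselj(0, a*x)*besselk(0, b*x), [0, inf])
rhs = ellipk(a**2/(a**2 + b**2))/sqrt(a**2 + b**2)   # argument m = k**2
print(lhs - rhs)                 # consistent with zero
\end{verbatim}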
The evaluation uses the totally divergent series \eqref{divergent-k0}
\begin{equation}
K_{0}(ax) = \sum_{n_{1}} \phi_{n_{1}} \frac{a^{2n_{1}} \Gamma(-n_{1})}{2^{2n_{1}+1}} x^{2n_{1}}
\end{equation}
\noindent
as well as the integral representation (see $8.432.6$ in \cite{gradshteyn-2015a}) and the corresponding bracket series
\begin{eqnarray}
K_{0}(bx) & = & \frac{1}{2} \int_{0}^{\infty} \exp \left( -t - \frac{b^{2}x^{2}}{4t} \right) \, \frac{dt}{t} \\
& = & \sum_{n_{2},n_{3}} \phi_{n_{2},n_{3}} \frac{b^{2n_{3}} x^{2n_{3}}}{2^{2n_{3}+1}} \langle n_{2} - n_{3} \rangle. \nonumber
\end{eqnarray}
\noindent
Then
\begin{equation}
H_{1}(a,b) = \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{a^{2n_{1}} b^{2n_{3}} \Gamma(-n_{1})}{2^{2n_{1}+2n_{3}+2}} \langle n_{2} - n_{3} \rangle \, \langle 2n_{1} + 2n_{3} + 1 \rangle.
\end{equation}
\noindent
The evaluation of this bracket series requires the introduction of an extra parameter $\varepsilon$; consider
\begin{equation}
H_{2}(a,b,\varepsilon) = \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{a^{2n_{1}} b^{2n_{3}} \Gamma(-n_{1})}{2^{2n_{1}+2n_{3}+2}} \langle n_{2} - n_{3} + \varepsilon \rangle \, \langle 2n_{1} + 2n_{3} + 1 \rangle.
\end{equation}
\noindent
Evaluating this bracket series produces three values, one divergent, which is discarded, and two others:
\begin{eqnarray}
T_{2} & = & \frac{1}{4a} c^{\varepsilon} \sum_{n} \phi_{n} \Gamma(-n- \varepsilon) \Gamma^{2}(\varepsilon + n + \tfrac{1}{2})c^{n} \\
T_{3} & = & \frac{1}{4a} \sum_{n} \phi_{n} \Gamma(-n + \varepsilon) \Gamma^{2}(n + \tfrac{1}{2}) c^{n}, \nonumber
\end{eqnarray}
\noindent
with $c = b^{2}/a^{2}$. Converting the $\Gamma$-factors into Pochhammer symbols produces
\begin{eqnarray}
T_{2} & = & \frac{1}{4a} c^{\varepsilon} \Gamma(- \varepsilon) \Gamma^{2} \left( \tfrac{1}{2} + \varepsilon \right) \pFq21{\tfrac{1}{2} + \varepsilon \,\,\, \tfrac{1}{2} + \varepsilon }{1+ \varepsilon}{\,\,\, c} \\
T_{3} & = & \frac{\pi}{4a} \Gamma(\varepsilon) \,\, \pFq21{\tfrac{1}{2} \,\,\, \tfrac{1}{2} }{1 -\varepsilon}{\,\,\, c}. \nonumber
\end{eqnarray}
\noindent
This yields
\begin{equation*}
H_{2}(a,b, \varepsilon) = \frac{\pi}{4a} \left[ \Gamma(\varepsilon) \pFq21{\tfrac{1}{2} \,\,\, \tfrac{1}{2} }{1 - \varepsilon}{\,\,\, c} - c^{\varepsilon} \frac{\Gamma^{2}( \tfrac{1}{2} + \varepsilon)}{\varepsilon \,\Gamma(\varepsilon) \sin \pi \varepsilon} \pFq21{\tfrac{1}{2} + \varepsilon \,\,\, \tfrac{1}{2} + \varepsilon }{1+ \varepsilon}{\,\,\, c} \right].
\end{equation*}
Let $c \to 1 \,\, ( b \to a)$ and use Gauss' formula \eqref{gauss-value} to obtain
\begin{equation*}
\pFq21{\frac{1}{2} \,\,\, \frac{1}{2} }{1- \varepsilon}{1} = \frac{\Gamma(1 - \varepsilon) \Gamma(- \varepsilon)} {\Gamma^{2} \left( \frac{1}{2} - \varepsilon \right)} \,\,\, {\rm and } \,\,\, \pFq21{\frac{1}{2} + \varepsilon \,\,\, \frac{1}{2} + \varepsilon }{1+ \varepsilon}{1} = \frac{\Gamma(1 + \varepsilon) \Gamma(- \varepsilon)} {\Gamma^{2} \left( \frac{1}{2} \right)},
\end{equation*}
\noindent
and this produces
\begin{eqnarray*}
H_{2}(a,a,\varepsilon) & = & \frac{\Gamma(- \varepsilon)^{2} \Gamma^{2} \left( \varepsilon + \tfrac{1}{2} \right) \Gamma( \varepsilon + 1)}{4 \pi a} + \frac{\pi \Gamma(1 - \varepsilon) \Gamma(- \varepsilon) \Gamma(\varepsilon)}{4 a \, \Gamma^{2} \left( \tfrac{1}{2} - \varepsilon \right)} \\
& = & \frac{\pi}{4a} \left[ \frac{\Gamma^{2}(- \varepsilon) \Gamma(\varepsilon+1) \Gamma^{2}(\varepsilon + \tfrac{1}{2})}{\pi^{2}} + \frac{\Gamma(1 - \varepsilon) \Gamma(- \varepsilon) \Gamma(\varepsilon)}{\Gamma^{2}( \tfrac{1}{2} - \varepsilon)} \right]. \nonumber
\end{eqnarray*}
\noindent
Expanding $H_{2}(a,a,\varepsilon)$ in powers of $\varepsilon$ gives
\begin{equation}
H_{2}(a,a,\varepsilon) = \frac{\pi^{2}}{4a} - \frac{\pi^{2}}{4a} ( \gamma + 4 \ln 2 ) \varepsilon + o(\varepsilon).
\end{equation}
\noindent
Letting $\varepsilon \to 0$ gives
\begin{equation}
\int_{0}^{\infty} K_{0}^{2}(ax) \, dx = \frac{\pi^{2}}{4a}.
\label{k0-squared}
\end{equation}
\end{example}

\begin{example}
The final example in this section is the general integral
\begin{equation}
I(a,b;\nu,\lambda;\rho) = \int_{0}^{\infty} x^{\rho-1} K_{\nu}(ax) K_{\lambda}(bx) \, dx.
\label{int-kgen}
\end{equation}
\noindent
The case $a=b$ appears in \cite{kolbig-1995a}. The evaluation uses the integral representation
\begin{equation}
K_{\nu}(ax) = \frac{(ax)^{\nu}}{2^{\nu+1}} \int_{0}^{\infty} \text{exp}\left( - t - \frac{a^{2}x^{2}}{4t} \right) \frac{dt}{t^{\nu+1}}
\end{equation}
\noindent
appearing in \cite[$8.432.6$]{gradshteyn-2015a}. This produces the bracket series representation
\begin{equation}
K_{\nu}(ax) = \frac{1}{2^{\nu+1}} \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{a^{2n_{2}+\nu}}{2^{2n_{2}}} x^{2n_{2}+\nu} \langle n_{1}-n_{2} - \nu \rangle.
\end{equation}
\noindent
The second factor uses the totally null representation corresponding to \eqref{null-k0}
\begin{equation}
K_{\lambda}(bx) = 2^{\lambda} \sum_{n_{3}} \phi_{n_{3}} \frac{2^{2n_{3}} \Gamma(n_{3} + \lambda + \tfrac{1}{2} ) \Gamma(n_{3} + \tfrac{1}{2}) }{\Gamma(-n_{3}) b^{2n_{3} + \lambda + 1} } \frac{1}{x^{2n_{3}+\lambda + 1}}.
\end{equation}
\noindent
Replacing these representations in \eqref{int-kgen} produces the bracket series
\begin{multline}
I(a,b;\nu,\lambda;\rho) = \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{a^{2n_{2}+\nu} 2^{\lambda - \nu -1 + 2n_{3} - 2n_{2}} \Gamma(n_{3}+ \lambda + \tfrac{1}{2}) \Gamma(n_{3} + \tfrac{1}{2} )}{b^{2n_{3} + \lambda +1} \, \Gamma(-n_{3})} \label{nice-brackets} \\
\times \langle n_{1}-n_{2} - \nu \rangle \langle \rho + \nu - \lambda + 2n_{2} - 2n_{3} - 1 \rangle.
\end{multline}
\noindent
The vanishing of the brackets gives the system of equations
\begin{eqnarray}
n_{1} - n_{2} & = & \nu \\
2n_{2} - 2n_{3} & = & -\rho - \nu + \lambda +1. \nonumber
\end{eqnarray}
\noindent
The matrix of coefficients is of rank $2$, so it produces three series as candidates for values of the integral, one per free index.

\smallskip

\noindent
\textit{Case 1}: $n_{1}$ free.
Then $n_{2} = n_{1} - \nu$ and $n_{3} = \frac{\rho - \nu - \lambda-1}{2} + n_{1}$. This gives
\begin{equation*}
T_{1} = 2^{\rho-3} \frac{b^{\nu- \rho} }{a^{\nu}} \Gamma \left( \frac{\rho - \nu + \lambda}{2} \right) \Gamma \left( \frac{\rho - \nu - \lambda}{2} \right) \Gamma(\nu) \, \pFq21{ \frac{\rho-\nu + \lambda}{2} \,\, \frac{\rho - \nu - \lambda}{2}}{1 - \nu}{ \frac{a^{2}}{b^{2}}}.
\end{equation*}

\smallskip

\noindent
\textit{Case 2}: $n_{2}$ free. Then $n_{1} = n_{2} + \nu$ and $n_{3} = \frac{\rho+\nu - \lambda - 1}{2} + n_{2}$. This gives
\begin{equation*}
T_{2} = 2^{\rho-3} \frac{a^{\nu}}{b^{\nu+\rho}} \Gamma(- \nu) \Gamma \left( \frac{\rho +\nu + \lambda}{2} \right) \Gamma \left( \frac{\rho + \nu - \lambda}{2} \right) \, \pFq21{ \frac{\rho + \nu + \lambda}{2} \,\, \frac{\rho + \nu - \lambda}{2}}{1 + \nu}{ \frac{a^{2}}{b^{2}}}.
\end{equation*}

\smallskip

\noindent
\textit{Case 3}: $n_{3}$ free. Then $n_{2} = n_{3} + \frac{\lambda - \rho - \nu +1}{2}$ and $n_{1} = n_{3} + \frac{\lambda - \rho + \nu +1}{2}$. This produces
\begin{multline*}
T_{3} = 2^{\rho - 3} \frac{a^{-\rho + \lambda +1}}{b^{\lambda + 1}} \sum_{n} \frac{\phi_{n} }{\Gamma(-n)} \Gamma \left( \frac{\rho + \nu - \lambda -1}{2} - n \right) \Gamma \left( \frac{\rho - \nu - \lambda -1}{2} - n \right) \\
\Gamma \left( n + \lambda + \tfrac{1}{2} \right) \Gamma(n+ \tfrac{1}{2} ) \left( \frac{a^{2}}{b^{2}} \right)^{n}.
\end{multline*}
\noindent
This series is totally null, so its value is zero. This proves the next statement:
\begin{proposition}
The integral
\begin{equation}
I(a,b;\nu,\lambda; \rho) = \int_{0}^{\infty} x^{\rho-1} K_{\nu}(ax) K_{\lambda}(bx) \, dx
\end{equation}
\noindent
is given by
\begin{multline*}
I(a,b;\nu,\lambda; \rho) = \\
2^{\rho-3} \frac{b^{\nu- \rho} }{a^{\nu}} \Gamma(\nu) \Gamma \left( \frac{\rho - \nu + \lambda}{2} \right) \Gamma \left( \frac{\rho - \nu - \lambda}{2} \right) \, \pFq21{ \frac{\rho-\nu + \lambda}{2} \,\, \frac{\rho - \nu - \lambda}{2}}{1 - \nu}{ \frac{a^{2}}{b^{2}}} \\
\hspace{0.25in} + 2^{\rho-3} \frac{a^{\nu}}{b^{\nu+\rho}} \Gamma(- \nu) \Gamma \left( \frac{\rho +\nu + \lambda}{2} \right) \Gamma \left( \frac{\rho + \nu - \lambda}{2} \right) \, \pFq21{ \frac{\rho + \nu + \lambda}{2} \,\, \frac{\rho + \nu - \lambda}{2}}{1 + \nu}{ \frac{a^{2}}{b^{2}}}.
\end{multline*}
\end{proposition}

Some special cases of this evaluation are interesting in their own right. Consider first the case $a=b$. Using Gauss' theorem \eqref{gauss-value} it follows that
\begin{equation}
T_{1} = \frac{2^{\rho - 3}\, \Gamma(\nu) \Gamma \left( \frac{\rho + \lambda - \nu}{2} \right) \Gamma \left( \frac{\rho - \lambda - \nu}{2} \right) \Gamma(1- \nu) \Gamma(1- \rho)} { a^{\rho} \,\Gamma \left( 1 - \frac{\rho +\nu + \lambda }{2} \right) \Gamma \left( 1 - \frac{\rho+\nu - \lambda }{2} \right) }
\end{equation}
\noindent
and
\begin{equation}
T_{2} = \frac{2^{\rho - 3}\, \Gamma(-\nu) \Gamma \left( \frac{\rho + \lambda + \nu}{2} \right) \Gamma \left( \frac{\nu + \rho - \lambda }{2} \right) \Gamma(\nu+1) \Gamma(1- \rho)} { a^{\rho} \,\Gamma \left( 1 - \frac{\rho -\nu - \lambda }{2} \right) \Gamma \left( 1 - \frac{\rho - \nu + \lambda }{2} \right) }.
\end{equation}
\begin{proposition}
The integral
\begin{equation}
J(a;\nu,\lambda; \rho) = \int_{0}^{\infty} x^{\rho-1} K_{\nu}(ax) K_{\lambda}(ax) \, dx
\end{equation}
\noindent
is given by
\begin{multline*}
J(a;\nu,\lambda; \rho) = \frac{2^{\rho - 3}\, \Gamma(\nu) \Gamma \left( \frac{\rho + \lambda - \nu}{2} \right) \Gamma \left( \frac{\rho - \lambda - \nu}{2} \right) \Gamma(1- \nu) \Gamma(1- \rho)} { a^{\rho} \,\Gamma \left( 1 - \frac{\rho +\nu + \lambda }{2} \right) \Gamma \left( 1 - \frac{\rho+\nu - \lambda }{2} \right) } + \\
\frac{2^{\rho - 3}\, \Gamma(-\nu) \Gamma \left( \frac{\rho + \lambda + \nu}{2} \right) \Gamma \left( \frac{\nu + \rho - \lambda }{2} \right) \Gamma(\nu+1) \Gamma(1- \rho)} { a^{\rho} \,\Gamma \left( 1 - \frac{\rho -\nu - \lambda }{2} \right) \Gamma \left( 1 - \frac{\rho - \nu + \lambda }{2} \right) }.
\end{multline*}
\end{proposition}

The next special case is to take $a=b$ and $\lambda = \nu$. Then
\begin{equation}
T_{1} = \frac{2^{\rho-3}}{a^{\rho}} \frac{\Gamma(\nu) \Gamma \left(\frac{\rho}{2} \right) \Gamma \left( \frac{\rho}{2} - \nu \right) \Gamma(1- \nu) \Gamma(1 - \rho)} { \Gamma \left( 1 - \frac{\rho}{2} - \nu \right) \Gamma \left( 1 - \frac{\rho}{2} \right)}
\end{equation}
\noindent
and
\begin{equation}
T_{2} = \frac{2^{\rho-3}}{a^{\rho}} \frac{\Gamma(-\nu) \Gamma \left(\frac{\rho}{2} \right) \Gamma \left( \frac{\rho}{2} + \nu \right) \Gamma(\nu+1) \Gamma(1 - \rho)} { \Gamma \left( 1 - \frac{\rho}{2} + \nu \right) \Gamma \left( 1 - \frac{\rho}{2} \right)}.
\end{equation}
\noindent
This proves the next result:
\begin{proposition}
The integral
\begin{equation}
L(a; \nu, \rho) = \int_{0}^{\infty} x^{\rho-1} K_{\nu}^{2}(ax) \, dx
\end{equation}
\noindent
is given by
\begin{eqnarray*}
L(a; \nu, \rho) & = & \frac{2^{\rho-3}}{a^{\rho}} \left[ \frac{\Gamma(\nu) \Gamma(1- \nu) \Gamma \left( \frac{\rho}{2} - \nu \right) } {\Gamma \left( 1 - \frac{\rho}{2} - \nu \right)} + \frac{\Gamma(-\nu) \Gamma(1+\nu) \Gamma \left( \frac{\rho}{2} + \nu \right) } {\Gamma \left( 1 - \frac{\rho}{2} + \nu \right)} \right].
\end{eqnarray*}
\end{proposition}

\medskip

The last special case is $\rho=1$; that is, the integral
\begin{equation}
M(a,b;\nu,\lambda) = \int_{0}^{\infty} K_{\nu}(ax) K_{\lambda}(bx) \, dx.
\end{equation}
\noindent
It is shown that the usual application of the method of brackets yields only divergent series, so a new approach is required. The argument begins with converting the bracket series in \eqref{nice-brackets} to
\begin{multline}
M(a,b;\nu,\lambda) = \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{a^{2n_{2}+\nu} 2^{\lambda - \nu -1 + 2n_{3} - 2n_{2}} \Gamma(n_{3}+ \lambda + \tfrac{1}{2}) \Gamma(n_{3} + \tfrac{1}{2} )}{b^{2n_{3} + \lambda +1} \, \Gamma(-n_{3})} \label{nice-brackets1} \\
\times \langle n_{1}-n_{2} - \nu \rangle \langle \nu - \lambda + 2n_{2} - 2n_{3} \rangle.
\end{multline}
\noindent
A routine application of the method of brackets gives three series
\begin{eqnarray*}
T_{1} & = & \frac{b^{\nu-1}}{4a^{\nu}} \Gamma \left( \frac{1-\nu+\lambda}{2} \right) \Gamma \left( \frac{1- \nu - \lambda}{2} \right) \Gamma(\nu) \pFq21{ \frac{1 - \nu + \lambda}{2} \,\,\, \frac{1 - \nu - \lambda}{2}}{1-\nu}{ \frac{a^{2}}{b^{2}}} \nonumber \\
T_{2} & = & \frac{a^{\nu}}{4b^{\nu+1}} \Gamma \left( \frac{1+\nu+\lambda}{2} \right) \Gamma \left( \frac{1+\nu - \lambda}{2} \right) \Gamma(-\nu) \pFq21{ \frac{1 + \nu + \lambda}{2} \,\,\, \frac{\nu - \lambda+1}{2}}{1+\nu}{ \frac{a^{2}}{b^{2}}} \nonumber
\end{eqnarray*}
\noindent
and a totally null series $T_{3}$.
Gauss' value \eqref{gauss-value} shows that $T_{1}$ and $T_{2}$ diverge when $a \to b$. Therefore \eqref{nice-brackets1} is replaced by
\begin{multline}
M(a,b;\nu,\lambda) = \lim\limits_{\varepsilon \to 0} \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{a^{2n_{2}+\nu} 2^{\lambda - \nu -1 + 2n_{3} - 2n_{2}} \Gamma(n_{3}+ \lambda + \tfrac{1}{2}) \Gamma(n_{3} + \tfrac{1}{2} )}{b^{2n_{3} + \lambda +1} \, \Gamma(-n_{3})} \label{nice-brackets2} \\
\times \langle n_{1}-n_{2} - \nu + \varepsilon \rangle \langle \nu - \lambda + 2n_{2} - 2n_{3} \rangle.
\end{multline}
\noindent
Proceeding as before produces a null series that is discarded and also
\begin{eqnarray*}
T_{1} & = & \frac{a^{-\nu+2 \varepsilon}}{4 b^{1 - \nu + 2 \varepsilon}} \Gamma(\nu - \varepsilon) \Gamma \left( \frac{1 + \lambda - \nu}{2} + \varepsilon \right) \Gamma \left( \frac{1 - \lambda - \nu}{2} + \varepsilon \right) \\
& & \quad \times \pFq21{ \frac{1 - \nu + \lambda}{2} + \varepsilon \,\,\, \frac{1 - \nu - \lambda}{2} + \varepsilon}{1 - \nu + \varepsilon}{ \frac{a^{2}}{b^{2}}} \\
T_{2} & = & \frac{a^{\nu}}{4 b^{1 + \nu} } \Gamma(-\nu + \varepsilon) \Gamma \left( \frac{1 + \lambda + \nu}{2} \right) \Gamma \left( \frac{1 - \lambda + \nu}{2} \right) \\
& & \quad \times \pFq21{ \frac{1 + \nu - \lambda}{2} \,\,\, \frac{1 + \nu + \lambda}{2}}{1 + \nu - \varepsilon}{ \frac{a^{2}}{b^{2}}}.
\end{eqnarray*}
\noindent
In the limit as $b \to a$, these become
\begin{eqnarray*}
T_{1} & = & \frac{\Gamma(\nu- \varepsilon) \Gamma \left( \frac{1 + \lambda - \nu}{2} + \varepsilon \right) \Gamma \left( \frac{1 - \lambda - \nu}{2} + \varepsilon \right) \Gamma(1 - \nu + \varepsilon ) \Gamma( - \varepsilon) } { 4a \, \Gamma \left( \frac{1 - \nu - \lambda}{2} \right) \Gamma \left( \frac{1 - \nu + \lambda}{2} \right)} \\
T_{2} & = & \frac{\Gamma(-\nu+ \varepsilon) \Gamma \left( \frac{1 + \lambda +\nu}{2} \right) \Gamma \left( \frac{1 - \lambda + \nu}{2} \right) \Gamma(1 + \nu - \varepsilon ) \Gamma( - \varepsilon) } { 4a \, \Gamma \left( \frac{1 + \nu + \lambda}{2} - \varepsilon \right) \Gamma \left( \frac{1 + \nu - \lambda}{2} - \varepsilon \right)}.
\end{eqnarray*}
\noindent
Passing to the limit as $\varepsilon \to 0$ gives
\begin{equation}
\int_{0}^{\infty} K_{\nu}(ax) K_{\lambda}(ax) \, dx = \frac{\pi^{2}}{4 a \sin \pi \nu} \left[ \tan \left( \frac{\pi}{2}(\lambda + \nu) \right) - \tan \left( \frac{\pi}{2}( \lambda - \nu) \right) \right].
\end{equation}
\noindent
In the special case $\lambda = \nu$, it follows that
\begin{equation}
\int_{0}^{\infty} K_{\nu}^{2}(ax) \, dx = \frac{\pi^{2}}{4a \cos \pi \nu }, \text{ valid for } | \nu | < \tfrac{1}{2}.
\end{equation}
\noindent
This value generalizes \eqref{k0-squared}. It appears in Prudnikov et al. \cite{prudnikov-1986a} as entries $2.16.28.3$ and $2.16.33.2$.
\end{example}

\section{An example with an integral producing the Bessel function}
\label{sec-producing}

The integrals evaluated in Section \ref{sec-bessel-nu} contain the Bessel function $K_{\nu}$ in the integrand. This section uses the method developed in the current work to evaluate some entries in \cite{gradshteyn-2015a} where the answer involves $K_{0}$.

\begin{example}
\label{ex-rule-e4}
The first example is entry $6.532.4$ in \cite{gradshteyn-2015a}
\begin{equation}
\label{gr-65324}
\int_{0}^{\infty} \frac{x J_{0}(ax)}{x^{2}+b^{2}} \, dx = K_{0}(ab).
\end{equation}
The analysis begins with the series
\begin{eqnarray}
J_{0}(ax) & = & \sum_{n=0}^{\infty} \frac{1}{n!^{2}} \left( - \frac{a^{2}x^{2}}{4} \right)^{n} \\
& = & \sum_{n_{1}=0}^{\infty} \phi_{n_{1}} \frac{ a^{2n_{1}}} {2^{2n_{1}} \, \Gamma(n_{1}+ 1)} x^{2n_{1}}. \nonumber
\end{eqnarray}
Rule $P_{2}$ gives
\begin{equation}
\frac{1}{x^{2}+b^{2}} = \sum_{n_{2},n_{3}} \phi_{n_{2},n_{3}} x^{2n_{2}} b^{2n_{3}} \langle 1 + n_{2} + n_{3} \rangle.
\end{equation}
\noindent
Therefore
\begin{equation}
\int_{0}^{\infty} \frac{x J_{0}(ax) }{x^{2}+b^{2}} \, dx = \sum_{n_{1},n_{2},n_{3}} \phi_{n_{1},n_{2},n_{3}} \frac{a^{2n_{1}} b^{2n_{3}}}{2^{2n_{1}} \, \Gamma(n_{1}+1)} \langle 1+n_{2}+n_{3} \rangle \langle 2 + 2n_{1} + 2n_{2} \rangle.
\end{equation}
\noindent
The method of brackets produces three series as candidates for solutions, one per free index $n_{1}, \, n_{2}, \, n_{3}$:
\begin{eqnarray}
T_{1} & = & \frac{1}{2} \sum_{n=0}^{\infty} \phi_{n} \Gamma(-n) \left( \frac{a^{2}b^{2}}{4} \right)^{n} \\
T_{2} & = & \frac{2}{a^{2}b^{2}} \sum_{n=0}^{\infty} \phi_{n} \frac{\Gamma^{2}(n+1) }{\Gamma(-n)} \left( \frac{4}{a^{2}b^{2}} \right)^{n} \nonumber \\
T_{3} & = & \frac{1}{2} \sum_{n=0}^{\infty} \phi_{n} \Gamma(-n) \left( \frac{a^{2}b^{2}}{4} \right)^{n}. \nonumber
\end{eqnarray}
\noindent
Since $T_{1} = T_{3}$, Rule $E_{4}$ shows that only one copy of this series has to be counted. Since $T_{1}$ and $T_{2}$ are non-classical series of \textit{distinct variables}, both are representations of the value of the integral. Observe that $T_{1}$ is the totally divergent representation of $K_{0}(ab)$ given in \eqref{divergent-k0}. This confirms \eqref{gr-65324}. The fact that $T_{2}$ is also a value for the integral gives another, totally null, representation for $K_{0}$:
\begin{equation}
\label{new-k0}
K_{0}(x) = \frac{2}{x^{2}} \sum_{n=0}^{\infty} \phi_{n} \frac{\Gamma^{2}(n+1)}{\Gamma(-n)} \left( \frac{4}{x^{2}} \right)^{n}.
\end{equation}
\noindent
To test its validity, the integral in Example \ref{ex-k0-1} is evaluated again, this time using \eqref{new-k0}:
\begin{eqnarray}
\int_{0}^{\infty} K_{0}(x) dx & = & \int_{0}^{\infty} \frac{2}{x^{2}} \sum_{n} \phi_{n} \frac{\Gamma^{2}(n+1)}{\Gamma(-n)} 2^{2n} x^{-2n} \, dx \\
& = & \sum_{n} \phi_{n} 2^{2n+1} \frac{\Gamma^{2}(n+1)}{\Gamma(-n)} \int_{0}^{\infty} x^{-2n-2} \, dx \nonumber \\
& = & \sum_{n} \phi_{n} 2^{2n+1} \frac{\Gamma^{2}(n+1)}{\Gamma(-n)} \langle -2n-1 \rangle. \nonumber
\end{eqnarray}
\noindent
The bracket series is evaluated using Rule $E_{1}$ to confirm \eqref{value-k0-1}.
\end{example}

\begin{example}
Entry $6.226.2$ in \cite{gradshteyn-2015a} is
\begin{equation}
\label{62262}
\int_{0}^{\infty} {\rm{Ei}}\left(- \frac{a^{2}}{4x} \right) e^{-\mu x} \, dx = - \frac{2}{\mu} K_{0}(a \sqrt{\mu}).
\end{equation}
\noindent
The evaluation starts with the partially divergent series \eqref{pds-ei1}
\begin{equation}
{\rm{Ei}}\left( - \frac{a^{2}}{4x} \right) = \sum_{n_{1}=0}^{\infty} \phi_{n_{1}} \frac{a^{2n_{1}}}{n_{1} 2^{2n_{1}}} \frac{1}{x^{n_{1}}}
\end{equation}
\noindent
and this yields
\begin{equation}
\int_{0}^{\infty} {\rm{Ei}}\left(- \frac{a^{2}}{4x} \right) e^{-\mu x} \, dx = \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{a^{2n_{1}} \mu^{n_{2}}}{n_{1} 2^{2n_{1}}} \langle n_{2} - n_{1} + 1 \rangle.
\end{equation}
\noindent
The method of brackets gives two series.
The first one is
\begin{eqnarray}
T_{1} & = & \frac{1}{\mu} \sum_{n_{1}} \phi_{n_{1}} \frac{\Gamma(1-n_{1})}{n_{1}2^{2n_{1}}} (a^{2} \mu)^{n_{1}} \label{form-t1} \\
& = & - \frac{1}{\mu} \sum_{n_{1}} \phi_{n_{1}} \frac{\Gamma(-n_{1}) }{2^{2n_{1}}} (a^{2} \mu)^{n_{1}} \nonumber \\
& = & - \frac{2}{\mu} K_{0}( a \sqrt{\mu}), \nonumber
\end{eqnarray}
\noindent
using \eqref{divergent-k0}. The second series is
\begin{equation}
T_{2} = \sum_{n_{2}}\phi_{n_{2}} \frac{a^{2n_{2}+2} \mu^{n_{2}}}{(n_{2}+1) 2^{2(n_{2}+1)}} \Gamma(-n_{2}-1).
\end{equation}
\noindent
Now shift the index by $m = n_{2}+1$ to obtain
\begin{eqnarray}
T_{2} & = & \sum_{m} \phi_{m-1} \frac{a^{2m} \mu^{m-1}}{m2^{2m}} \Gamma(-m) \nonumber \\
& = & - \frac{1}{\mu} \sum_{m} \phi_{m} \Gamma(-m) \frac{a^{2m} \mu^{m}}{2^{2m}}. \nonumber
\end{eqnarray}
\noindent
This is the same sum as $T_{1}$ in the second line of \eqref{form-t1}. Recall that the shift of the summation index is carried out after the indicator $\phi_{n_{2}}$ has been converted to its expression in terms of the gamma function. According to Rule $E_{4}$, the sum $T_{2}$ is discarded. This establishes \eqref{62262}.
\end{example}

\section{A new use of the method of brackets}
\label{sec-new-use}
\setcounter{equation}{0}

This section introduces a procedure to evaluate integrals of the form
\begin{equation}
I(a_{1},a_{2}) = \int_{0}^{\infty} f_{1}(a_{1}x) f_{2}(a_{2}x) \, dx.
\end{equation}
Differentiating with respect to the parameters leads to
\begin{equation}
a_{1} \frac{\partial I(a_{1},a_{2})}{\partial a_{1}} + a_{2} \frac{\partial I(a_{1},a_{2})}{\partial a_{2}} = \int_{0}^{\infty} x \frac{d}{dx} \left[ f_{1}(a_{1}x) f_{2}(a_{2} x) \right] \, dx.
\end{equation}
\noindent
Integration by parts produces
\begin{equation}
\label{formula-ode1}
I(a_{1},a_{2}) = x f_{1}(a_{1}x)f_{2}(a_{2}x)\Big{|}_{0}^{\infty} - \left( a_{1} \frac{\partial I(a_{1},a_{2})}{\partial a_{1}} + a_{2} \frac{\partial I(a_{1},a_{2})}{\partial a_{2}} \right).
\end{equation}
\noindent
A direct extension to many parameters leads to the following result.
\begin{theorem}
\label{nice-1}
Let
\begin{equation}
I(a_{1},\cdots,a_{n}) = \int_{0}^{\infty} \prod_{j=1}^{n} f_{j}(a_{j}x) \, dx.
\end{equation}
\noindent
Then
\begin{equation}
\label{nice-form1}
I(a_{1},\cdots, a_{n}) = x \prod_{j=1}^{n} f_{j}(a_{j}x)\Big{|}_{0}^{\infty} - \sum_{j=1}^{n} a_{j} \frac{\partial I(a_{1},\cdots, a_{n})}{\partial a_{j}}.
\end{equation}
\end{theorem}

\begin{example}
The integral
\begin{equation}
I(a,b) = \int_{0}^{\infty} e^{-ax} J_{0}(bx) \, dx
\end{equation}
\noindent
is evaluated first by a direct application of the method of brackets and then using Theorem \ref{nice-1}.

\smallskip

The bracket series for $I(a,b)$
\begin{equation}
I(a,b) = \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{a^{n_{1}} b^{2n_{2}}}{2^{2n_{2}} \Gamma(n_{2}+1)} \langle n_{1} + 2n_{2}+1 \rangle
\end{equation}
\noindent
is obtained directly from \eqref{exp-bracket}
\begin{equation}
e^{-ax} = \sum_{n_{1}} \phi_{n_{1}} a^{n_{1}} x^{n_{1}}
\end{equation}
\noindent
and
\begin{equation}
J_{0}(bx) = \pFq01{-}{1}{- \frac{(bx)^{2}}{4}} = \sum_{n_{2}} \phi_{n_{2}} \frac{b^{2n_{2}}}{\Gamma(n_{2}+1) 2^{2n_{2}}} x^{2n_{2}}.
\end{equation}
\noindent
Solving for $n_{1}$ in the equation coming from the vanishing of the bracket gives $n_{1} = -2n_{2}-1$, which yields
\begin{equation}
T_{1} = \sum_{n_{2}=0}^{\infty} \frac{(-1)^{n_{2}}}{n_{2}!} \frac{a^{-2n_{2}-1} b^{2n_{2}}}{2^{2n_{2}}} \frac{\Gamma(2n_{2}+1)}{\Gamma(n_{2}+1)}.
\end{equation}
\noindent
To simplify this sum, transform the gamma factors via \eqref{gamma-poch} and use the duplication formula \eqref{poch-dupl} to produce
\begin{equation}
T_{1} = \frac{1}{a} \sum_{n_{2}=0}^{\infty} \frac{ \left(\tfrac{1}{2} \right)_{n_{2}}}{n_{2}!} \left( - \frac{b^{2}}{a^{2}} \right)^{n_{2}} = \frac{1}{a} \pFq10{\frac{1}{2}}{-}{- \frac{b^{2}}{a^{2}}}.
\end{equation}
\noindent
The identity $\displaystyle{\pFq10{c}{-}{z} = (1-z)^{-c}}$ gives $\displaystyle{T_{1} = \frac{1}{\sqrt{a^{2}+b^{2}}}.}$ A direct calculation shows that the series obtained from solving for $n_{2}$ yields the same solution, so it is discarded. Therefore
\begin{equation}
\int_{0}^{\infty} e^{-ax} J_{0}(bx) \, dx = \frac{1}{\sqrt{a^{2}+b^{2}}}.
\label{bessel-j0}
\end{equation}

\smallskip

The evaluation of this integral using Theorem \ref{nice-1} begins with checking that the boundary terms vanish. This comes from the asymptotic behavior $J_{0}(x) \sim 1$ as $x \to 0$ and $\displaystyle{J_{0}(x) \sim \sqrt{\frac{2}{\pi x}} \cos x }$ as $x \to \infty$. The first term is
\begin{equation}
a \frac{\partial I(a,b)}{\partial a} = \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{n_{1} a^{n_{1}} b^{2n_{2}} }{2^{2n_{2}} \Gamma(n_{2}+1)} \langle n_{1} + 2n_{2} + 1 \rangle.
\end{equation}
\noindent
This generates two series
\begin{equation}
T_{1} = \frac{1}{b} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \frac{n \Gamma\left( \tfrac{1+n}{2} \right)}{\Gamma\left( \tfrac{1-n}{2} \right)} \left( \frac{2a}{b} \right)^{n}
\end{equation}
\noindent
and
\begin{equation}
T_{2} = - \frac{1}{a} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \frac{\Gamma(2n+2)}{\Gamma(n+1)} \left( \frac{b^{2}}{4a^{2}} \right)^{n}.
\end{equation}
Similarly
\begin{equation}
b \frac{\partial I(a,b)}{\partial b} = 2 \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{n_{2} a^{n_{1}} b^{2n_{2}}}{2^{2n_{2}} \Gamma(n_{2}+1)} \langle n_{1} + 2n_{2} + 1 \rangle
\end{equation}
\noindent
which yields the two series
\begin{eqnarray*}
\tilde{T}_{1} & = & \frac{2}{b} \sum_{n=0}^{\infty} \phi_{n} \frac{\Gamma \left( \frac{n+1}{2} \right) \Gamma \left( - \frac{n+1}{2} \right) } { \Gamma \left( \frac{1-n}{2} \right)} \left( \frac{2a}{b} \right)^{n} \\
\tilde{T}_{2} & = & \frac{2}{a} \sum_{n=0}^{\infty} \phi_{n} \frac{n \Gamma(2n+1)}{\Gamma(n+1)} \left( \frac{b^{2}}{4a^{2}} \right)^{n}.
\end{eqnarray*}
\noindent
Since the boundary terms vanish, the relation \eqref{formula-ode1} gives
\begin{equation}
I(a,b) = \begin{cases} -T_{1} - \tilde{T}_{1}, & \quad |4a^{2}| < |b^{2}| \\ -T_{2} - \tilde{T}_{2}, & \quad |b^{2}| < |4a^{2}|. \end{cases}
\end{equation}
The sum $T_{2}+\tilde{T}_{2}$ is simplified by converting each series to hypergeometric form:
\begin{eqnarray}
T_{2} & = & - \frac{1}{a} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} \frac{\Gamma(2n+2)}{\Gamma(n+1)} \left( \frac{b^{2}}{4a^{2}} \right)^{n} = - \frac{a^{2}}{(a^{2}+b^{2})^{3/2}} \\
\tilde{T}_{2} & = & \frac{2}{a} \sum_{n=0}^{\infty} \phi_{n} \frac{n \Gamma(2n+1)}{\Gamma(n+1)} \left( \frac{b^{2}}{4a^{2}} \right)^{n} = - \frac{b^{2}}{(a^{2}+b^{2})^{3/2}}. \nonumber
\end{eqnarray}
\noindent
Then
\begin{equation}
I(a,b) = - T_{2} - \tilde{T}_{2} = \frac{a^{2}}{(a^{2}+b^{2})^{3/2}} + \frac{b^{2}}{(a^{2}+b^{2})^{3/2}} = \frac{1}{\sqrt{a^{2}+b^{2}}}.
\end{equation}
\noindent
This gives
\begin{equation}
I(a,b) = \int_{0}^{\infty} e^{-ax} J_{0}(bx) \, dx = \frac{1}{\sqrt{a^{2}+b^{2}}}.
\end{equation}
\noindent
The option $T_{1}+ \tilde{T}_{1}$ gives the same result.
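\smallskip

As with the previous entries, \eqref{bessel-j0} is easily confirmed by direct numerical integration; a minimal sketch assuming \texttt{mpmath}:
\begin{verbatim}
# Numerical check of the evaluation of I(a,b); a sketch, mpmath assumed.
from mpmath import mp, quad, exp, besselj, sqrt, inf

mp.dps = 25
a, b = mp.mpf('0.8'), mp.mpf('1.1')
lhs = quad(lambda x: exp(-a*x)*besselj(0, b*x), [0, inf])
print(lhs - 1/sqrt(a**2 + b**2))   # consistent with zero
\end{verbatim}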
\end{example}

\begin{example}
\label{example-6-222}
Entry $6.222$ in \cite{gradshteyn-2015a} is
\begin{eqnarray}
\label{ei-double}
I(a_{1},a_{2}) & = & \int_{0}^{\infty} \text{Ei}(-a_{1}x) \text{Ei}(-a_{2}x) \, dx \\
& = & \left( \frac{1}{a_{1}} + \frac{1}{a_{2}} \right) \ln(a_{1}+a_{2}) - \frac{\ln a_{1}}{a_{2}} - \frac{\ln a_{2}}{a_{1}}. \nonumber
\end{eqnarray}
\noindent
In particular
\begin{equation}
\int_{0}^{\infty} \text{Ei}^{2}(-ax) \, dx = \frac{2 \ln 2 }{a}.
\end{equation}
The evaluation of this integral by the method of brackets begins with the partially divergent series \eqref{pds-ei1} (the same series as \eqref{ei-null1}) for $\text{Ei}(-x)$, which yields
\begin{equation}
\label{sum-1a}
I(a_{1},a_{2}) = \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{a_{1}^{n_{1}} a_{2}^{n_{2}}}{n_{1}n_{2}} \langle n_{1}+n_{2} + 1\rangle.
\end{equation}
\noindent
The usual procedure requires the relation $n_{1}+n_{2}+1 = 0$. Taking $n_{1}$ as the free parameter gives
\begin{equation}
I_{1}(a_{1},a_{2}) = - \frac{1}{a_{2}} \sum_{n_{1}=0}^{\infty} \frac{(-1)^{n_{1}}}{n_{1}(n_{1}+1)} \left( \frac{a_{1}}{a_{2}} \right)^{n_{1}},
\end{equation}
\noindent
and when $n_{2}$ is taken as the free parameter one obtains the series
\begin{equation}
I_{2}(a_{1},a_{2}) = - \frac{1}{a_{1}} \sum_{n_{2}=0}^{\infty} \frac{(-1)^{n_{2}}}{n_{2}(n_{2}+1)} \left( \frac{a_{2}}{a_{1}} \right)^{n_{2}}.
\end{equation}
\noindent
These two series correspond to different expansions: the first one in $x = a_{1}/a_{2}$ and the second one in $x^{-1}= a_{2}/a_{1}$. Both series are partially divergent, so Rule $E_{3}$ states that these sums must be discarded. The usual method of brackets fails for this problem.

The solution using Theorem \ref{nice-1} is described next. An elementary argument shows that $x {\rm{Ei}}(-x) \to 0 $ as $x \to 0$ or $\infty$. Then \eqref{formula-ode1} becomes
\begin{eqnarray}
I(a_{1},a_{2}) & = & - a_{1} \frac{\partial I}{\partial a_{1}} - a_{2} \frac{\partial I}{\partial a_{2}} \nonumber \\
& = & - \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{a_{1}^{n_{1}} a_{2}^{n_{2}}}{n_{2}} \langle n_{1} + n_{2} + 1 \rangle - \sum_{n_{1},n_{2}} \phi_{n_{1},n_{2}} \frac{a_{1}^{n_{1}} a_{2}^{n_{2}}}{n_{1}} \langle n_{1} + n_{2} + 1 \rangle, \nonumber \\
& \equiv & S_{1} + S_{2}, \nonumber
\end{eqnarray}
\noindent
using \eqref{sum-1a} to compute the partial derivatives. The method of brackets gives two series for each of the sums $S_{1}$ and $S_{2}$:
\begin{eqnarray}
T_{1,1} & = & \frac{1}{a_{2}} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n+1} \left( \frac{a_{1}}{a_{2}} \right)^{n} \\
T_{1,2} & = & - \frac{1}{a_{1}} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n} \left( \frac{a_{2}}{a_{1}} \right)^{n} \\
T_{2,1} & = & - \frac{1}{a_{2}} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n} \left( \frac{a_{1}}{a_{2}} \right)^{n} \\
T_{2,2} & = & \frac{1}{a_{1}} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n+1} \left( \frac{a_{2}}{a_{1}} \right)^{n}.
\end{eqnarray}
\noindent
Here the series $T_{1,1}$ and $T_{1,2}$ come from the first sum $S_{1}$, and $T_{2,1}$, $T_{2,2}$ from $S_{2}$. Rule $E_{3}$ indicates that the value of the integral is either
\begin{equation}
\label{mess-1}
I(a_{1},a_{2}) = T_{1,1} + T_{2,1} \quad \text{ or } \quad I(a_{1},a_{2}) = T_{1,2} + T_{2,2};
\end{equation}
the first form is an expression in $a_{1}/a_{2}$ and the second one in $a_{2}/a_{1}$.
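\smallskip

Before the series in \eqref{mess-1} are analyzed, the target value \eqref{ei-double} can itself be confirmed numerically; a minimal sketch assuming \texttt{mpmath}:
\begin{verbatim}
# Numerical check of entry 6.222; a sketch, mpmath assumed.
from mpmath import mp, quad, ei, log, inf

mp.dps = 25
a1, a2 = mp.mpf('1.0'), mp.mpf('3.0')
lhs = quad(lambda x: ei(-a1*x)*ei(-a2*x), [0, inf])
rhs = (1/a1 + 1/a2)*log(a1 + a2) - log(a1)/a2 - log(a2)/a1
print(lhs - rhs)                 # consistent with zero
\end{verbatim}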
The series $T_{1,1}$ is convergent when $|a_{1} | < |a_{2}|$ and it produces the function
\begin{equation}
f(a_{1},a_{2}) = \frac{1}{a_{1}} \log \left( 1 + \frac{a_{1}}{a_{2}} \right)
\end{equation}
\noindent
and $T_{2,2}$ is also convergent and gives
\begin{equation}
g(a_{1},a_{2}) = \frac{1}{a_{2}} \log \left( 1 + \frac{a_{2}}{a_{1}} \right).
\end{equation}
\noindent
Observe that, according to \eqref{mess-1}, completing the evaluation of $I(a_{1},a_{2})$ requires some partially divergent series. The question is how to make sense of these divergent series. The solution proposed here is to interpret $T_{2,1}$ as a partially divergent series attached to the function $g(a_{1},a_{2})$. Therefore, in the sum in \eqref{mess-1}, the term $T_{2,1}$ is replaced by $g(a_{1},a_{2})$ to produce
\begin{eqnarray}
I(a_{1},a_{2}) & = & f(a_{1},a_{2}) + g(a_{1},a_{2}) \\
& = & \frac{1}{a_{1}} \log \left( 1 + \frac{a_{1}}{a_{2}} \right) + \frac{1}{a_{2}} \log \left( 1 + \frac{a_{2}}{a_{1}} \right), \nonumber
\end{eqnarray}
and this confirms \eqref{ei-double}. A similar interpretation of $T_{1,2} + T_{2,2}$ gives the same result.
\end{example}

\section{Conclusions}
\label{sec-conclusions}

The method of brackets consists of a small number of heuristic rules used for the evaluation of definite integrals on $[0, \, + \infty)$. The original formulation of the method applied to functions that admit an expansion of the form $\begin{displaystyle} \sum_{n=0}^{\infty} a(n) x^{\alpha n + \beta - 1} \end{displaystyle}$. The results presented here extend this method to functions, like the Bessel function $K_{\nu}$ and the exponential integral $\text{Ei}$, whose expansions are of the form $\begin{displaystyle} \sum_{n=0}^{\infty} \Gamma(-n) x^{n} \end{displaystyle}$ (where all the coefficients are divergent) or $\begin{displaystyle} \sum_{n=0}^{\infty} \frac{1}{\Gamma(-n)} x^{n} \end{displaystyle}$ (where all the coefficients vanish). A variety of examples illustrates the validity of this formal procedure.

\medskip

\noindent
\textbf{Acknowledgments.} The authors wish to thank a referee for a careful reading of the original version of the paper. The first author acknowledges the support of the Centro de Astrof\'{i}sica de Valparaiso. The last author acknowledges the partial support of NSF-DMS 1112656.

\bigskip
\section{The phonons of pyrite chromium dioxide at ambient pressure} The pyrite CrO$_2$ phase has been demonstrated to be a stable ferromagnetic (FM) half-metallic state occurring at a critical pressure of $\sim$45 GPa \cite{Li2012}. Here, we use the phonon spectrum, which is a useful way to investigate stability and structural rigidity. The method of force constants has been used to calculate the phonon frequencies as implemented in the PHONOPY package~\cite{Togo1,Togo2,Togo3}. We employ a $3 \times3 \times 3$ supercell with 108 Cr atoms and 216 O atoms to obtain the real-space force constants. Our result for the phonon dispersions at ambient pressure is shown in Fig.~\ref{figS1}. We find no imaginary frequencies over the entire BZ, demonstrating that pyrite CrO$_{2}$ is dynamically stable. \section{The other Weyl points close to Fermi level between $(N$-$1)$th and $N$th bands } Pyrite CrO$_2$ exhibits FM metallic rather than semimetallic features. Due to its complex topological electronic band structure, some other Weyl points between the $(N-1)$th and $N$th bands also appear close to the Fermi level. We find that there are three pairs of Weyl points (their energies relative to the Fermi level are lower than 0.3 eV). Their positions in momentum space are $\frac{2\pi}{a}$(0.0011, 0.1414, 0.0842), $\frac{2\pi}{a}$(0.1419, 0.0844, 0.0010), and $\frac{2\pi}{a}$(0.0907, 0.0006, 0.1613), which lie only 0.111, 0.137, and 0.141 eV below the Fermi level, respectively, and are thus the most relevant. Although we found a plethora of topological features formed by the $(N-1)$th and $N$th bands, these additional Weyl points and their associated Fermi arcs may overlap with the bulk states when projected onto a surface, such as the (001) or (110) surface. Hence, these Weyl points cannot contribute visible spectroscopic signatures of surface Fermi arcs. \begin{figure} \centering \includegraphics[scale=0.3]{PHONON.pdf} \caption{The phonon dispersions of pyrite CrO$_2$.\label{figS1}} \end{figure} \section{The topological features with magnetization along [001] direction} Our first-principles calculations suggest that there are only tiny energy differences among all magnetic configurations in pyrite CrO$_2$, implying that an applied magnetic field can easily manipulate the spin-polarization direction. Therefore, we also perform the calculations for magnetization along the [001] direction. When the FM magnetization is parallel to the [001] direction, the symmetry reduces to the magnetic space group $D_{4h}(C_{4h})$ and the three-fold rotational symmetry $C_3$ is broken. Hence, the Weyl points arising from the splitting of the triply-degenerate points need not lie on the $\Gamma$-R axis. In this case, we only pay attention to the Weyl points between the $N$th and $(N+1)$th bands. There are five pairs of Weyl points formed at the boundary of electron and hole pockets. Furthermore, the presence of an odd number of pairs of Weyl points between the $N$th and $(N+1)$th bands can be clarified by the product of the inversion eigenvalues of the $N$ occupied bands at the eight time-reversal invariant momenta $k_{\mathrm{inv}}$ \cite{Hughes2011}, as \begin{equation} \chi_{P}=\prod\limits_{k_{\mathrm{inv}};i\in \mathrm{occ}} \zeta_i (k_{\mathrm{inv}}). \end{equation} Our calculations show that the value of $\chi_{P}$ is $-1$, implying that the system may be a WSM with an odd number of pairs of Weyl points.
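As a minimal illustration of how this parity criterion is evaluated in practice, the following Python sketch computes $\chi_P$ from a table of inversion eigenvalues $\zeta_i(k_{\mathrm{inv}})=\pm 1$; the eigenvalue table below is a randomly generated placeholder (our actual eigenvalues come from the first-principles calculations), so only the bookkeeping is illustrated:

\begin{verbatim}
# Sketch: chi_P = product, over the 8 TRIM points k_inv and the N occupied
# bands, of the inversion eigenvalues zeta_i(k_inv) = +/-1.
# The eigenvalue table here is a random placeholder, NOT our DFT data.
import numpy as np

n_occ = 4                                      # illustrative occupied-band count
rng = np.random.default_rng(0)
zeta = rng.choice([+1, -1], size=(8, n_occ))   # 8 TRIM points x n_occ parities

chi_P = int(np.prod(zeta))
print("chi_P =", chi_P)
# chi_P = -1 would signal an odd number of pairs of Weyl points between
# the N-th and (N+1)-th bands, as discussed in the text.
\end{verbatim}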
In pyrite CrO$_2$ with magnetization along the [001] direction, five pairs of Weyl points between the $N$th and $(N+1)$th bands are present. Their precise positions in momentum space, Chern numbers, and energies relative to the Fermi level $E_F$ are listed in Table \ref{tableS}. \begin{table} \caption{ The Weyl points between the $N$th and $(N+1)$th bands with magnetization along the [001] direction. The positions (in reduced coordinates $k_x$, $k_y$, and $k_z$), Chern numbers, and the energies relative to $E_F$ are listed. The coordinates of the other WPs are related to the ones listed by the $I$ symmetry.} \begin{tabular}{p{1.0 cm}|*{1}{p{4.0cm}}*{2}{p{1.4cm}}} \hline \hline Weyl & \centering Coordinates [$k_x(2\pi/a)$, &\centering Chern & $E-E_F$ \\ points & \centering $k_y(2\pi/a)$, $k_z(2\pi/a)$] &\centering number & (meV) \\ \hline 1 &\centering (0.0821, 0.0, -0.0549) &\centering $-1$ &{\centering 19} \\ 2 &\centering (0.0, 0.0, 0.0159) &\centering $+1$ & 45 \\ 3 &\centering (0.0821, 0.0, 0.0549) &\centering $+1$ &{\centering 19} \\ 4 &\centering (0.0, 0.0549, 0.0821) &\centering $-1$ &{\centering 19} \\ 5 &\centering (0.0, 0.0549, -0.0821) &\centering $+1$ &{\centering 19} \\ \hline \hline \end{tabular} \label{tableS} \end{table} \section{The triply-degenerate points in the absence of spin-orbit coupling} In the absence of spin-orbit coupling, the symmetry group is $T_h$, which contains four three-fold rotational symmetry $C_3$ axes [111], $[1\bar{1}1]$, $[11\bar{1}]$, and $[\bar{1}11]$, inversion $I$, and three mirror symmetries $M_x$, $M_{y}$, and $M_z$. The mirror symmetries send \begin{equation} \begin{split} M_x: (x, y, z) \rightarrow (-x, y, z),\\ M_y: (x, y, z) \rightarrow (x, -y, z),\\ M_z: (x, y, z) \rightarrow (x, y, -z),\\ \end{split} \end{equation} and $C_3 ^{111}$ and the product $IM_x M_y M_z$ of inversion $I$ and the mirror reflection symmetries leave every momentum point along the $\Gamma$-R (or $\mathbf{k}\parallel [111]$) axis invariant. Hence, at each point along the $\Gamma$-R axis, the Bloch states that form a possibly degenerate eigenspace (band) of the Hamiltonian must be invariant under $C_3 ^{111}$ and $IM_x M_y M_z$. Without SOC, there are three eigenvalues of the $C_3$ rotational symmetry, namely, $e^{-i \frac{2\pi}{3}}$, $e^{i \frac{2\pi}{3}}$, and 1, and we denote the corresponding eigenstates as $\psi_{1}$, $\psi_{2}$, and $\psi_{3}$, respectively. Using the basis ($\psi_{1}$, $\psi_{2}$, $\psi_{3}$), the representation of an operator $O$ can be determined as \begin{equation} O_{ij}=\langle \psi_{i}|O|\psi_{j}\rangle, \end{equation} so $C_3 ^{111}$ and the mirror symmetries $M_x$, $M_y$, and $M_z$ can be expressed as \begin{equation} C_3 ^{111}=\mathrm{diag}\{e^{-i \frac{2\pi}{3}}, e^{i \frac{2\pi}{3}}, 1\}, \end{equation} \\ \begin{equation} M_x=\left( \begin{array}{ccc} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right), \end{equation} \begin{equation} M_y=\left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right), \end{equation} \begin{equation} M_z=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ \end{array} \right). \end{equation} It can be seen that $C_3 ^{111}$ and $M_i$ ($i=x$, $y$, $z$) do not commute with each other, so $C_3 ^{111}$ and $M_i$ cannot be simultaneously diagonalized.
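This non-commutation can be verified directly from the matrix representations written above; a short sketch (added here for illustration, assuming Python with NumPy):

\begin{verbatim}
# Verify [C3^{111}, M_x] != 0 using the representations given above.
import numpy as np

w = np.exp(2j*np.pi/3)
C3 = np.diag([np.conj(w), w, 1.0])        # diag(e^{-i 2pi/3}, e^{i 2pi/3}, 1)
Mx = np.array([[ 0, -1, 0],
               [-1,  0, 0],
               [ 0,  0, 1]], dtype=complex)

comm = C3 @ Mx - Mx @ C3
print(np.allclose(comm, 0))               # False: the two do not commute
# Since C3 and Mx share no common eigenbasis, the bands along Gamma-R split
# into a C3 singlet (psi_3) plus a doublet (psi_1, psi_2), as stated next.
\end{verbatim}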
Therefore, in the absence of SOC, along the $\Gamma$-R (or $\mathbf{k}\parallel [111]$) axis, the three bands with the three different eigenvalues of $C_3 ^{111}$ always appear as a singly-degenerate band ($\psi_{3}$) and a doubly-degenerate band ($\psi_{1}$ and $\psi_{2}$). If the singly-degenerate and doubly-degenerate bands cross each other accidentally, a triply-degenerate node will form, because their different $C_3 ^{111}$ eigenvalues prohibit hybridization \cite{Changarxiv}. When spin-orbit coupling is considered, the triply-degenerate node tends to split into Weyl points, in a manner depending on the magnetic space group.
\section{Introduction} The performance of fork-join queues has been an intensively studied research topic for many years, owing to the ubiquity of fork-join queues in both real-life workflows and computing systems. In a fork-join queueing system, a job is \textit{forked} into $n$ sub-tasks when it arrives at a control node, and each sub-task is sent to a single node to be processed. Results of finished sub-tasks are summarized at a central \textit{join} node. When the job arrival rate $\lambda$ is high, a sub-task may have to wait for service in the sub-queue of its hosting node in first-come-first-served order. A basic fork-join queue considers a job done after all results of the job have been received at the join node (see Fig. \ref{queues} (a)). \begin{figure*} \centering \subfloat[Basic $(3,3)$ Fork-Join Queues] {\includegraphics[width=0.7\columnwidth]{pics/fj-basic-queue-model.pdf}}\hfill \subfloat[Non-Purging $(3,2)$ Fork-Join Queues]{\includegraphics[width=0.64\columnwidth]{pics/fj-non-purging-model.pdf}}\hfill \subfloat[Purging $(3,2)$ Fork-Join Queues]{\includegraphics[width=0.64\columnwidth]{pics/fj-purging-model.pdf}} \caption{Fork-Join Queues: $A^{(1)}$ Is a Sub-Task of Job $A$} \label{queues} \end{figure*} In the Big Data era, more and more mainstream computing infrastructures are deployed distributively, and inevitably recruit fork-join queues to facilitate the storing and processing of large-scale datasets. For example: 1) Cassandra\cite{Lakshman:2010ef} and Dynamo\cite{DeCandia:2007cn}, two popular key-value data stores, use fork-join queues to concurrently perform read and write operations on all the replicas of the target key-value pairs; 2) The client of an $(n,k)$ MDS (maximum distance separable) erasure coding based cloud storage system only needs to retrieve any $k$ out of all $n$ blocks of a file to reconstruct the file; 3) The data transmission process of multipath routing protocols can generally be simplified as a multi-stage fork-join queueing process. Latency is commonly a critical concern in building and optimizing Big Data clusters. For example, in Amazon's cloud platform, services commonly have latency requirements which are in general measured at the $99.9^{th}$ percentile of the distribution \cite{DeCandia:2007cn}. The Dynamo storage system must be capable of meeting such stringent SLAs. In this scenario, basic fork-join queues may cause serious performance issues when the number of data replicas is large, since they require all the sub-tasks of a job to be finished before the job's response can be made. By contrast, $(n,k)$ fork-join queues, as named in \cite{Joshi:2012cv}, only require any $k$ out of a job's $n$ sub-tasks to be finished, and thus have performance advantages in such scenarios. For example, a write request in Cassandra can be answered either when a quorum of replicas have been successfully written, or, when there is a need to pursue high throughput, as soon as the fastest answer from all touched replicas is acknowledged. As depicted in Fig. \ref{queues}, there are mainly two versions of $(n,k)$ fork-join queues: The purging one removes all the remaining sub-tasks of a job from both sub-queues and service stations once it receives the job's $k^{th}$ answer. The file retrieval process from an MDS coded cloud storage system is such an example. By contrast, the non-purging one keeps queuing and executing the remaining sub-tasks.
For example, a write operation in Cassandra needs to update all the replicas of the target key-value pair, but can respond to the user as soon as a quorum of replicas have been successfully written. \paragraph{The State-of-the-Art Research on Basic Fork-Join Queues} The popularity of fork-join systems has drawn great attention from the database/OS/networking communities to the performance analysis of fork-join queues for a rather long period of time. Unfortunately, there is still no exact closed-form solution of the sojourn time of the job in $n\geq 3$ basic fork-join queues. The difficulty lies in the fact that the sojourn times of a job's sub-tasks are not independent, as their hosting sub-queues share the same sub-task arrival process. Since most existing exact analysis techniques are developed for independent and identically distributed (iid) random variables, it is very hard to trace the sojourn time distribution for fork-join queues. For $n\geq 3$ fork-join queues under a Poisson job arrival process and with iid exponential service time distributions, Nelson et al. \cite{Nelson:1988jk} proposed an innovative approximation technique which is based on the fact that the sojourn times $X_{1\mathrel{{.}\,{.}}\nobreak k}$ of sub-tasks $1,2,...,k$ are associated variables, whose maximum can be bounded by the maximum of their iid equivalents \cite{Esary:1967eo}: $P(X_{1\mathrel{{.}\,{.}}\nobreak n} \leq t)\geq\prod_{i=1}^{n}P(X^{IID}_i\leq t)$. Based on this, upper bounds and closed-form approximations of the sojourn time were given in that work. Simulation experiments in \cite{Lebrecht:2007tm} showed that Nelson's approximation is still the most reliable one, compared to subsequent works such as \cite{Varma:1994gm} and \cite{Varki:2001wc}. \paragraph{The State-of-the-Art Research and Open Challenges on $(n,k)$ Fork-Join Queues} Despite the popularity of $(n,k)$ fork-join queues in Big Data systems and many other fields, there are no practical approximations of the sojourn time of $(n,k)$ fork-join queues: Unlike the maximum, the $k^{th}$ order statistic cannot be bounded by using the property of associated variables, which makes the sojourn time of $(n,k)$ fork-join queues harder to analyze than that of basic fork-join queues. \begin{figure}[h!] \centering \includegraphics[width=0.8\columnwidth]{pics/split-merge-model.pdf} \caption{A $(3,2)$ Split-Merge Queue} \label{sm-queue} \end{figure} Currently, there are exact quantitative analyses only for purging $(n,1)$ fork-join queues \cite{Gardner:2015kb,Lee:2017gi}, because such a queue is equivalent to a single queue with $n$ times the sub-queue's service rate. For general purging $(n,k)$ fork-join queues, only rough bounds have been given: Joshi et al. \cite{Joshi:2012cv,Joshi:2017bj} resort to the split-merge queue model (see Fig. \ref{sm-queue}) to find proper upper and lower bounds. Compared to purging $(n,k)$ fork-join queues, all empty sub-queues in the split-merge model are blocked and cannot serve subsequent tasks until $k$ sub-tasks of the current job are completed, which makes the split-merge model much easier to trace. However, these split-merge based bounds tend to be extremely loose when increasing $k$ or the load factor $\rho$, as we depict in Section \ref{bounds}. Since non-purging $(n,k)$ fork-join queues cannot be reduced to the split-merge model, they are more difficult to analyze, even for $(n,1)$ queues. Recently, Fidler et al.
\cite{Fidler:2016tw} gave non-asymptotic statistical bounds on the sojourn times of non-purging fork-join queues. However, no reasonable approximations have been proposed. \paragraph{Methodology and Contributions} This paper aims at fixing the lack of proper approximations for non-purging $(n,k)$ fork-join queues and tackling the uncontrollability of bounds for purging $(n,k)$ fork-join queues. To achieve these objectives, we trace fork-join queues in a fundamental way: The linear relationship between $(n,k)$ fork-join queues and their basic $(k,k), (k+1,k+1),...,(n,n)$ equivalents is depicted for the first time; this relationship is then used to bridge the existing approximations for basic fork-join queues to the approximations and bounds for $(n,k)$ fork-join queues. Our innovations and contributions are highlighted as follows: \begin{itemize} \item A brand-new closed-form \texttt{linear transformation technique} for jointly-identical random variables, by which order statistics can be transformed into a closed-form linear combination of maxima. Besides, there is no need to assume the independence of the variables. \item The first reasonable and practical method to approximate the expected sojourn time of non-purging $(n,k)$ fork-join queues with general service time distributions. This method relies on the cooperation between the linear transformation technique and the existing approximations for basic fork-join queues. \item Improvements over the upper bounds on the expected sojourn time of purging $(n,k)$ fork-join queues, which are gained by relating those bounds to the bounds of the equivalent non-purging $(n,k)$ fork-join queues. \end{itemize} This paper is organized as follows: The linear transformation technique is developed in Section \ref{pre}; this technique is then employed in Section \ref{app} to find proper approximations for non-purging $(n,k)$ fork-join queues; the flaws of existing bounds for purging $(n,k)$ fork-join queues and our improvements over the upper bounds are depicted in Section \ref{bounds}; in Section \ref{discuss}, we discuss the limitation of this linear transformation technique; related works are reviewed in Section \ref{review}; we conclude this work and point out some promising future research directions in Section \ref{con}. \section{Preliminaries: Linear Transformations of Order Statistics}\label{pre} In this section, we consider a family of rvs (random variables) $X_1, X_2,...,X_n$ (denoted as $X_{1\mathrel{{.}\,{.}}\nobreak n}$) defined on a probability space, and let $X_{(n,k)}$ denote their $k^{th}$ order statistic, $P_k$ denote the probability $P(X_1\leq t, X_2\leq t,...,X_k\leq t)$, and $P_{n,k}$ denote the probability $P(X_1\leq t, X_2\leq t,...,X_k\leq t, X_{k+1}>t, X_{k+2}>t,...,X_n>t)$. Obviously, $P_k$ is the distribution of the maximum of $X_{1\mathrel{{.}\,{.}}\nobreak k}$. \begin{definition}[Jointly-Identical] For $n$ identically distributed rvs $X_{1\mathrel{{.}\,{.}}\nobreak n}$ and $\forall k\in [1\mathrel{{.}\,{.}}\nobreak n]$, if any $k$ arbitrarily chosen rvs share the same joint probability distribution, these $n$ identical rvs are called jointly-identical rvs. \end{definition} \begin{lemma}{For $n$ jointly-identical rvs $X_{1\mathrel{{.}\,{.}}\nobreak n}$, $$P_{n,k}=\sum_{i=k}^{n}A^{n,k}_i{P_{i}}, 1\leq \forall k \leq n,$$ where the const coefficient $A^{n,k}_i$ can be calculated by the following recurrence: $$A^{n,k}_i=\left\{ \begin{array}{lc} 1&{i=k},\\ -\sum_{j=1}^{i-k}{n-i+j \choose j}A^{n,k}_{i-j}&{k+1\leq i\leq n}.
\end{array} \right.$$ }\label{th-mp} \begin{proof} Let $P_{\overline{n-k}|k}$ denote $P(X_{k+1}>t,X_{k+2}>t,...,X_n > t|X_1\leq t,X_2\leq t,...,X_k\leq t)$ and $P_{\overline{n-i},i-k|k}$ denote $P(X_{i+1}>t,X_{i+2}>t,...,X_n > t, X_{k+1}\leq t, X_{k+2}\leq t,..., X_{i}\leq t|X_{1}\leq t,X_{2}\leq t,...,X_k \leq t)$, $k+1\leq i \leq n$. Certainly, we have \begin{gather} P_{\overline{n-k}|k}=\frac{P_{n,k}}{P_{k}}\text{ , }P_{\overline{n-i},i-k|k}=\frac{P_{n,i}}{P_{k}}\label{eq1}. \end{gather} As $X_{1\mathrel{{.}\,{.}}\nobreak n}$ are jointly-identical rvs, the following equation holds: \begin{gather} P_{\overline{n-k}|k}=1-\sum_{i=k+1}^{n}{n-k \choose i-k}{P_{\overline{n-i},i-k|k}}\label{eq2}. \end{gather} By inserting Eq. \ref{eq1} into Eq. \ref{eq2}, the following recurrence holds: \begin{equation}\label{eq4} P_{n,k}=P_{k}-\sum_{i=k+1}^{n}{n-k \choose i-k}P_{n,i} \end{equation} Expanding Eq. \ref{eq4}, we complete the proof of Lemma \ref{th-mp}. \end{proof} \end{lemma} \subsection{Linear Transformation of Order Statistics} \begin{theorem}[LT of Order Statistics]{For $n$ jointly-identical rvs $X_{1\mathrel{{.}\,{.}}\nobreak n}$, there exists a linear transformation from maxima to order statistics: $$ \begin{bmatrix} X_{(n,1)}\\ X_{(n,2)}\\ \vdots \\ X_{(n,n)} \end{bmatrix} = \begin{bmatrix} W_1^{n,1} & W_2^{n,1} & \dots & W_n^{n,1} \\ 0 & W_2^{n,2} & \dots & W_n^{n,2} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & W_n^{n,n} \\ \end{bmatrix} \begin{bmatrix} X_{(1,1)}\\ X_{(2,2)}\\ \vdots \\ X_{(n,n)} \end{bmatrix}. $$ Namely, $X_{(n,k)}=\sum_{i=k}^{n}W_i^{n,k}X_{(i,i)}, 1\leq \forall k \leq n,$ where the const coefficient \begin{equation} W_i^{n,k}=\sum_{j=k}^{i}{n \choose j}A^{n,j}_i \label{eq-w-coe} \end{equation}}\label{the2} \begin{proof} Let $F_{n,k}\equiv P(X_{(n,k)}\leq t)$ be the probability distribution of the $k^{th}$ order statistic. Equivalently, we need to prove \begin{equation} F_{n,k}=\sum_{i=k}^{n}W_i^{n,k}P(X_{(i,i)}\leq t)=\sum_{i=k}^{n}W_i^{n,k}P_{i}. \label{eq-fnk-proof} \end{equation} As $F_{n,k}=\sum_{i=k}^{n} {n \choose i}P_{n,i}$ and $P_{n,k}=\sum_{i=k}^{n}A^{n,k}_i{P_{i}}$ (Lemma \ref{th-mp}), we derive the following recurrence: \begin{equation}\label{eqf} F_{n,k}=\left\{ \begin{array}{lc} P_n&{k=n},\\ F_{n,k+1}+{n \choose k}\sum_{i=k}^{n}{A^{n,k}_iP_{i}}&{1\leq k\leq n-1}. \end{array} \right. \end{equation} Expanding Eq. \ref{eqf}, we complete the proof of Eq. \ref{eq-fnk-proof}. \end{proof} \begin{remark} There is no need to assume the independence of the identical rvs $X_{1\mathrel{{.}\,{.}}\nobreak n}$. There may exist other linear transformations of order statistics than the one given by Theorem \ref{the2}. \end{remark} \end{theorem} \begin{definition}[$W$ Coefficient]{For any possible linear transformation from maxima to order statistics, the corresponding const coefficient $W_i^{n,k}$ of $X_{(i,i)}$ is called a $W$ coefficient.} \begin{remark} The calculation of the $W$ coefficient given by Eq. \ref{eq-w-coe} is not straightforward, as it consists of many terms. We use a simple solver (see Appendix) to give $W$ coefficient values. \end{remark} \end{definition} \subsection{Linear Transformation of Expectations} Let $\mathbb E_{n,k}\equiv E[X_{(n,k)}]$ be the expectation of the $k^{th}$ order statistic of rvs $X_{1\mathrel{{.}\,{.}}\nobreak n}$. In particular, we use $\mathbb E_{n}$ to denote $\mathbb E_{n,n}$. Then the following theorem holds.
\begin{theorem}[LT of Expectations]{For $n$ jointly-identical rvs $X_{1\mathrel{{.}\,{.}}\nobreak n}$, there exists a linear transformation from the expectations of maxima to the expectations of order statistics: \begin{equation} \mathbb E_{n,k}=\sum_{i=k}^{n}W_i^{n,k}\mathbb E_{i}, 1\leq \forall k \leq n. \end{equation}}\label{the3} \begin{proof} $E[X_{(n,k)}]=E[\sum_{i=k}^{n}W_i^{n,k}X_{(i,i)}]=\sum_{i=k}^{n}W_i^{n,k}\mathbb E_{i}$ \end{proof} \end{theorem} \section{Approximations for Non-Purging $(n,k)$ Fork-Join Queues}\label{app} We consider a homogeneous cluster consisting of $n$ nodes, where each node $i$ ($1\leq i\leq n$) has the same service time distribution when processing sub-tasks of the same job. Each node owns a first-come-first-served sub-queue $q_i$ ($1\leq i\leq n$) with the assumption of unlimited queue capacity. These $n$ sub-queues constitute a homogeneous fork-join queue. Each incoming job consists of $n$ tasks. Let $t_i^j$ be the sojourn time of the $j^{th}$ job's sub-task assigned to node $i$. Then, the stable sojourn time of a sub-task in the sub-queue $q_i$ is $t_i\equiv \lim_{j\to\infty}t_i^j$, and consequently the sojourn time of a job in the $(n,k)$ fork-join queue is the $k^{th}$ order statistic $t_{(n,k)}$. \begin{lemma}{For an $(n,n)$ fork-join queue, the sub-queues' stable sojourn times $t_{1\mathrel{{.}\,{.}}\nobreak n}$ constitute a family of jointly-identical rvs.}\label{the4} \begin{proof} Recall that all the sub-queues have an unlimited capacity, so the sub-task sojourn time distribution of a sub-queue depends only on the job arrival process and the sub-queue's service time distribution. As these sub-queues are under the same job arrival process and have the same service time distribution, the sub-queues' stable sojourn times $t_{1\mathrel{{.}\,{.}}\nobreak n}$ are identical rvs. By the constitution methodology of fork-join queues, an $(n,n)$ fork-join queue is symmetrical, which means its sub-queues are interchangeable and thus any $k$ arbitrarily chosen sub-queues keep the same joint probability distribution. By definition, the stable sojourn times $t_{1\mathrel{{.}\,{.}}\nobreak n}$ are jointly-identical rvs. \end{proof} \end{lemma} \begin{definition}[$(\lambda,\mu)$-Equivalent Queues]{Those basic fork-join queues, purging/non-purging $(n,k)$ fork-join queues and split-merge queues that are under the same job arrival process and with the same sub-queue's service time distribution are called $(\lambda,\mu)$-equivalent queues to each other, where $\lambda$ and $\mu$ are the job arrival rate and the sub-queue's service rate respectively.} \end{definition} \subsection{Approximations for General Fork-Join Queues} This paper uses the term \texttt{general queues} to denote the fork-join queues with identically and generally distributed sub-queues' service times and job inter-arrival times. \begin{theorem}[LT of Sojourn Time]{ The sojourn time of a general non-purging $(n,k)$ fork-join queue can be represented by a linear combination of the sojourn times of the $(\lambda,\mu)$-equivalent basic fork-join queues: \begin{equation} t_{(n,k)}=\sum_{i=k}^{n}W_i^{n,k}t_{(i,i)} \end{equation} where $W_i^{n,k}$ is the corresponding $W$ coefficient. }\label{the5} \begin{proof} The only difference between the $(\lambda,\mu)$-equivalent non-purging $(n,k)$ queue and the basic $(n,n)$ queue lies in the job departure process, which has no influence on the job arrival process, the sub-queue's service time distribution, and therefore the distribution of the sub-queue length.
Consequently, the target non-purging $(n,k)$ queue and its $(\lambda,\mu)$-equivalent $(n,n)$ fork-join queue have the same family of jointly-identical rvs $t_{1\mathrel{{.}\,{.}}\nobreak n}$, and therefore the same order statistic $t_{(n,k)}$. According to Lemma \ref{the4} and Theorem \ref{the2}, $t_{(n,k)}=\sum_{i=k}^{n}W_i^{n,k}t^{sub}_{(i,i)}$, where $t^{sub}_{(i,i)}$ is the maximum stable sojourn time of $i$ arbitrarily chosen sub-queues from the target non-purging $(n,k)$ queue. By the constitution methodology of fork-join queues, the $i$ chosen sub-queues can constitute an $(i,i)$ queue which is $(\lambda,\mu)$-equivalent to the target non-purging $(n,k)$ queue. Therefore, $t_{(i,i)}=t^{sub}_{(i,i)}$ and $t_{(n,k)}=\sum_{i=k}^{n}W_i^{n,k}t^{sub}_{(i,i)}=\sum_{i=k}^{n}W_i^{n,k}t_{(i,i)}$ hold. \end{proof} \end{theorem} \begin{theorem}[LT of Expected Sojourn Time]{The expected sojourn time $\mathbb {NT}_{n,k}\equiv E[t_{(n,k)}]$ of a general non-purging $(n,k)$ fork-join queue can be represented by a linear combination of the expected sojourn times of the $(\lambda,\mu)$-equivalent basic fork-join queues: \begin{equation} \mathbb {NT}_{n,k}=\sum_{i=k}^{n}{W_i^{n,k}\mathbb T_{i}} \end{equation} where $\mathbb T_{i}$ is the expected sojourn time of the $(\lambda,\mu)$-equivalent basic $(i,i)$ fork-join queue and $W_i^{n,k}$ is the corresponding $W$ coefficient. }\label{the6} \begin{proof} $ E[t_{(n,k)}]=E[\sum_{i=k}^{n}W_i^{n,k}t_{(i,i)}]=\sum_{i=k}^{n}W_i^{n,k}\mathbb T_{i} $ \end{proof} \end{theorem} \begin{remark} The independence of the service times of the sub-queues is not required for Theorems \ref{the5} and \ref{the6} to hold. To put Theorem \ref{the6} into practice, we need existing methods for computing $\mathbb T_{i}$. \end{remark} \subsection{Approximations for Exponential Fork-Join Queues} This paper uses the term \texttt{exponential queues} to denote the fork-join queues under a Poisson job arrival process and with iid exponential sub-queues' service time distributions. For exponential $(n,n)$ fork-join queues, there exist two reliable approximation methods: Nelson's approximation \cite{Nelson:1988jk} and Varma's approximation \cite{Varma:1994gm}. Accordingly, we can propose two approximation methods for exponential non-purging $(n,k)$ fork-join queues: the \texttt{Nelson-LT approximation}, based on Nelson's approximation, and the \texttt{Varma-LT approximation}, based on Varma's approximation. \subsubsection{The Nelson-LT Approximation} \begin{figure*}[ht!] \centering \includegraphics{pics/Linear-with-Nelson.pdf} \caption{The Nelson-LT Approximations for Exponential Non-Purging $(n,k)$ Fork-Join Queues ($\mu=1$, SIM: Simulation, APP: Approximation)} \label{linear-nelson} \end{figure*} For exponential $(n,n)$ fork-join queues, Nelson et al. \cite{Nelson:1988jk} proposed the following approximations: \begin{gather} \mathbb T_1=\frac{1}{\mu(1-\rho)}, \quad\quad \mathbb T_2=\frac{12-\rho}{8}\mathbb T_1\notag \\ \mathbb T_n\simeq\left[\frac{H_n}{H_2}+\frac{4}{11}\left(1-\frac{H_n}{H_2}\right)\rho\right]\mathbb T_2, n\geq 2,\notag \end{gather} where $\lambda$ and $\mu$ are respectively the job arrival rate and the sub-queue's service rate, $\rho\equiv \frac{\lambda}{\mu}$ is called the load factor of the queue, $\mathbb T_n$ is the expected sojourn time of the $(n,n)$ basic fork-join queue, and $H_n=\sum_{i=1}^n\frac{1}{i}$ is the $n$th harmonic number. Consequently, our approximations can be specified in the following theorem.
\begin{theorem}[Nelson-LT Approximation]{According to Nelson's approximation, the expected sojourn time $\mathbb {NT}_{n,k}$ of an exponential non-purging $(n,k)$ fork-join queue can be approximated as follows: \begin{align}\label{exp-app} \mathbb {NT}_{n,k}\simeq \left\{ \begin{array}{ll} \begin{array}{c} \frac{n}{\mu(1-\rho)}+\frac{12-\rho}{88\mu(1-\rho)}\times \\ \sum_{i=2}^{n}{W_i^{n,1}\left[\frac{11H_i+4\rho(H_2-H_i)}{H_2}\right]}\end{array}&k=1,\\ \frac{12-\rho}{88\mu(1-\rho)}\sum_{i=k}^{n}{W_i^{n,k}\left[\frac{11H_i+4\rho(H_2-H_i)}{H_2}\right]}&k\geq 2. \end{array} \right. \end{align} where $\lambda$ and $\mu$ are respectively the job arrival rate and the sub-queue's service rate, and $\rho\equiv \frac{\lambda}{\mu}$ is the load factor of the target queue. In particular, we replace any negatively approximated $\mathbb {NT}_{n,k}$ with 0. }\label{non-e} \end{theorem} We examine the above linear-transformation approximations against the mean sojourn times of jobs sampled from various simulated exponential non-purging fork-join queues (details of the simulations in this paper can be found in the Appendix). The values of the $W$ coefficients used in Eq. \ref{exp-app} are given by Eq. \ref{eq-w-coe}. The results depicted in Fig. \ref{linear-nelson} confirm the validity of our technique under a moderate value of $n$ ($n\leq 50$) and a relatively large value of $k$ (compared to $n$). The relative errors are calculated as $\frac{APP}{SIM}-1$. We notice that when $k$ is relatively small, the approximation tends to be uncontrollable, which can be due to the fact that the smaller $k$ is, the more terms in Eq. \ref{exp-app} are summed, and consequently the more relative error introduced by Nelson's approximations is accumulated. These results also confirm the high-precision merit of Nelson's approximations, since $W$ coefficients tend to be very large with the increase of $n$, for example $W^{25,9}_{16}=13146544125$. As a result, the relative error introduced by Nelson's approximation has to be amplified by the large value of the corresponding $W$ coefficient. \subsubsection{The Varma-LT Approximation} \begin{figure*}[ht!] \centering \includegraphics[width=\textwidth]{pics/Linear-with-Varma.pdf} \caption{The Relative Errors of the Varma-LT Approximations for Exponential Non-Purging $(n,k)$ Fork-Join Queues} \label{linear-varma} \end{figure*} For exponential non-purging $(n,n)$ fork-join queues, Varma et al. \cite{Varma:1994gm} gave another well-known approximation method based on the so-called light traffic interpolation technique. The expected sojourn time is approximated as $$ \mathbb T_n\simeq\left[H_n+(V_n-H_n)\frac{\lambda}{\mu}\right]\frac{1}{\mu-\lambda}, 0\leq \lambda<\mu $$ where $V_n= \sum_{r=1}^{n} {n \choose r}(-1)^{r-1}\sum_{m=1}^{r}{r\choose m}\frac{(m-1)!}{r^{m+1}}$. (Note that the denominator must be $\mu-\lambda$, so that for $n=1$ the formula recovers the M/M/1 sojourn time $\frac{1}{\mu-\lambda}$.) As our linear transformation technique is orthogonal to the concrete approximation methods for basic fork-join queues, we replace Nelson's approximation with the above Varma's approximation to try to avoid the uncontrollability of the approximations that appeared in Theorem \ref{non-e}. Consequently, the new approximations can be specified in the following theorem.
\begin{theorem}[Varma-LT Approximation]{According to Varma's approximation, the expected sojourn time of an exponential non-purging $(n,k)$ fork-join queue can be approximated as follows: \begin{equation}\label{eq-lt-varma} \mathbb {NT}_{n,k}\simeq\sum_{i=k}^{n}{W_i^{n,k}\left[H_i+(V_i-H_i)\frac{\lambda}{\mu}\right]\frac{1}{\mu-\lambda}} \end{equation} where $\lambda$ and $\mu$ are respectively the job arrival rate and the sub-queue's service rate. }\label{non-e-varma} \end{theorem} We examine Eq. \ref{eq-lt-varma} against the mean sojourn times of jobs sampled from various simulated non-purging fork-join queues. The employed $W$ coefficients are calculated from Eq. \ref{eq-w-coe}. The results depicted in Fig. \ref{linear-varma} show that the new approximations are fairly good when $n\leq 10$ and $\rho$ is not too extreme ($\rho\leq 0.9$): The relative error is generally less than 10\%, which is much more controllable than the approximations given by Theorem \ref{non-e}. However, as Varma's approximation itself becomes uncontrollable when $n\geq 55$, Theorem \ref{non-e} is more valuable in general cases. \section{Bounds for Purging $(n,k)$ Fork-Join Queues}\label{bounds} Unlike in non-purging $(n,k)$ fork-join queues, the sojourn time distribution of a sub-task in purging $(n,k)$ fork-join queues changes when either $n$ or $k$ varies, and thus differs from the sojourn time distribution of a sub-task in the $(\lambda,\mu)$-equivalent basic fork-join queues. As a result, we cannot build similar linear-transformation approximations for purging queues. However, the expected sojourn times of non-purging queues can serve as upper bounds on the expected sojourn times of the $(\lambda,\mu)$-equivalent purging queues. \subsection{The Naive Upper Bounds} \begin{theorem}[Naive Upper Bounds]{The expected sojourn time $\mathbb {PT}_{n,k}$ of a purging $(n,k)$ fork-join queue can be upper bounded as follows: \begin{equation} \mathbb {PT}_{n,k}\leq \sum_{i=k}^{n}{W_i^{n,k}\mathbb T_{i}}\label{uppeq} \end{equation} }\label{Upper-bounds} where $\mathbb T_{i}$ is the expected sojourn time of the $(\lambda,\mu)$-equivalent basic fork-join queue. \begin{proof} The right side of Eq. \ref{uppeq} is the expected sojourn time of the $(\lambda,\mu)$-equivalent non-purging $(n,k)$ fork-join queue. As the expected sub-queue length of a stable purging $(n,k)$ fork-join queue is no longer than that of the $(\lambda,\mu)$-equivalent stable non-purging queue, the expected sojourn time of the purging $(n,k)$ fork-join queue is thus no larger than that of its non-purging $(\lambda,\mu)$-equivalent queue. \end{proof} \end{theorem} \paragraph{Comparing with Existing State-of-the-Art Upper Bounds} For purging $(n,k)$ fork-join queues with iid service time distributions and under a Poisson job arrival process, the existing state-of-the-art upper bounds on the expected sojourn time are the \textit{split-merge upper bounds} given by Joshi et al. \cite{Joshi:2017bj}: \begin{equation}\label{josh-upper-bound} \mathbb {PT}_{n,k}\leq E[X_{(n,k)}]+ \frac{\lambda E[X_{(n,k)}^2]}{2(1-\lambda E[X_{(n,k)}])} \end{equation} where $\lambda$ is the job arrival rate and $X_{(n,k)}$ is the $k^{th}$ order statistic of the iid service time rvs $X_{1\mathrel{{.}\,{.}}\nobreak n}$. The right side of Eq. \ref{josh-upper-bound} is the expected sojourn time of the $(\lambda,\mu)$-equivalent $(n,k)$ split-merge queue.
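For concreteness, the following Python sketch (an illustrative re-implementation, not the solver from the Appendix) computes the $W$ coefficients via Lemma \ref{th-mp} and Eq. \ref{eq-w-coe}, evaluates the naive upper bound of Theorem \ref{Upper-bounds} through the Nelson-LT approximation of Theorem \ref{non-e}, and instantiates the split-merge upper bound of Eq. \ref{josh-upper-bound} for exponential service times, so that the two bounds compared in the corollaries below can be reproduced numerically:

\begin{verbatim}
# Sketch: W coefficients, Nelson-LT naive upper bound, and split-merge upper
# bound for exponential purging (n,k) fork-join queues. Illustrative only.
from functools import lru_cache
from math import comb, inf

@lru_cache(maxsize=None)
def A(n, k, i):
    # Recurrence of Lemma th-mp: A^{n,k}_k = 1 and, for k < i <= n,
    # A^{n,k}_i = -sum_{j=1}^{i-k} C(n-i+j, j) * A^{n,k}_{i-j}.
    if i == k:
        return 1
    return -sum(comb(n - i + j, j)*A(n, k, i - j) for j in range(1, i - k + 1))

def W(n, k, i):
    # Eq. (eq-w-coe): W^{n,k}_i = sum_{j=k}^{i} C(n, j) * A^{n,j}_i.
    return sum(comb(n, j)*A(n, j, i) for j in range(k, i + 1))

def H(n):                                  # harmonic number H_n
    return sum(1.0/i for i in range(1, n + 1))

def T_nelson(i, lam, mu):
    # Nelson's approximation of the basic (i,i) expected sojourn time.
    rho = lam/mu
    T1 = 1.0/(mu*(1.0 - rho))
    if i == 1:
        return T1
    T2 = (12.0 - rho)/8.0*T1
    return (H(i)/H(2) + 4.0/11.0*(1.0 - H(i)/H(2))*rho)*T2

def naive_upper(n, k, lam, mu):
    # Theorem Upper-bounds: the non-purging expected sojourn time NT_{n,k}.
    return sum(W(n, k, i)*T_nelson(i, lam, mu) for i in range(k, n + 1))

def split_merge_upper(n, k, lam, mu):
    # Eq. (josh-upper-bound) with exponential order-statistic moments:
    # E[X_(n,k)]   = (H_n - H_{n-k})/mu,
    # E[X_(n,k)^2] = (sum_{i=n-k+1}^{n} 1/i^2 + (H_n - H_{n-k})^2)/mu^2.
    EX = (H(n) - H(n - k))/mu
    EX2 = (sum(1.0/i**2 for i in range(n - k + 1, n + 1)) + (mu*EX)**2)/mu**2
    return EX + lam*EX2/(2.0*(1.0 - lam*EX)) if lam*EX < 1.0 else inf

n, lam, mu = 10, 0.7, 1.0
for k in (1, 5, 9):
    print(k, naive_upper(n, k, lam, mu), split_merge_upper(n, k, lam, mu))
\end{verbatim}

As a sanity check, $W^{n,k}_k={n \choose k}$ by Eq. \ref{eq-w-coe}; for instance, \texttt{W(40, 37, 37)} returns $9880={40 \choose 37}$, matching the value quoted in Section \ref{discuss}.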
\begin{corollary}{Split-merge upper bounds become much looser than naive upper bounds when $E[X_{(1,1)}]< E[X_{(n,k)}]$ and $\lambda \rightarrow [\frac{1}{E[X_{(n,k)}]}]^-$.}\label{torhocoro} \begin{proof} When $\lambda \rightarrow [\frac{1}{E[X_{(n,k)}]}]^-$, the bounds given by Eq. \ref{josh-upper-bound} approach $+\infty$. Moreover, the bounds become meaningless when $\lambda \geq \frac{1}{E[X_{(n,k)}]}$. By contrast, our naive bounds are finite, meaningful values as long as $\lambda < \mu\equiv\frac{1}{E[X_{(1,1)}]}$. Besides, $\mu>\frac{1}{E[X_{(n,k)}]}$. \end{proof} \begin{remark} Apparently there is a range of load factors $\rho \in [\frac{E[X_{(1,1)}]}{E[X_{(n,k)}]},1)$ that cannot be bounded by Eq. \ref{josh-upper-bound}, and the larger $k$ is, the smaller the boundable $\rho$ range becomes, while the naive bounds are applicable as long as $\rho<1$. \end{remark} \end{corollary} \begin{corollary}{Naive upper bounds become much looser than split-merge upper bounds when $k \rightarrow 1$.}\label{to1coro} \begin{proof} When $k \rightarrow 1$, more and more sub-tasks are purged from both the sub-queues and the service stations when the $k^{th}$ finished sub-task is acknowledged by the purging $(n,k)$ queue, as a result of which the expected sub-queue length becomes shorter and shorter than that of the $(\lambda,\mu)$-equivalent non-purging $(n,k)$ queue. On the contrary, the expected sub-queue length of the target purging $(n,k)$ fork-join queue becomes closer and closer to that of the $(\lambda,\mu)$-equivalent $(n,k)$ split-merge queue. In the limit, the purging $(n,1)$ fork-join queue equates to the $(n,1)$ split-merge queue, which gives us the following exact closed-form solution: $\mathbb {PT}_{n,1}= E[X_{(n,1)}]+ \frac{\lambda E[X_{(n,1)}^2]}{2(1-\lambda E[X_{(n,1)}])}$. \end{proof} \begin{remark} On the other side of Corollary \ref{to1coro}, when $k \rightarrow n$, the expected sub-queue length of the purging $(n,k)$ fork-join queue becomes closer and closer to that of the $(\lambda,\mu)$-equivalent non-purging queues. In the limit, the purging $(n,n)$ fork-join queue equates to the $(\lambda,\mu)$-equivalent non-purging $(n,n)$ fork-join queue. \end{remark} \end{corollary} \subsection{The Refined Upper Bounds} \begin{figure*}[ht!] \centering \subfloat[Naive Upper Bounds (Naive) v.s. Split-Merge Upper Bounds (SM)]{\includegraphics{pics/Upper-bounds.pdf}} \hfill \subfloat[Refined Bounds (RF) v.s. Simulations (SIM)]{\includegraphics{pics/refined-bounds.pdf}} \caption{Upper Bounds for Exponential Purging $(25,1),(25,2),...,(25,25)$ Fork-Join Queues ($\mu=1$)} \label{Up-fig} \end{figure*} \begin{theorem}[Refined Upper Bounds]{For a purging $(n,k)$ fork-join queue with iid service time distributions and under a Poisson job arrival process, the expected sojourn time $\mathbb {PT}_{n,k}$ can be upper bounded as follows: \begin{align} \mathbb {PT}_{n,k}\!\!\leq\!\!\left\{ \begin{array}{cl}\label{eq-r} \sum_{i=k}^{n}{W_i^{n,k}\mathbb T_{i}}&\!\!\!\!\!\!\!\!\!\!\!\lambda \geq \frac{1}{E[X_{(n,k)}]},\\ \!\!\!\!\min\!\left(\sum_{i=k}^{n}\!\!W_i^{n,k}\mathbb T_{i},\begin{array}{c}E[X_{(n,k)}]\!+\\ \!\frac{\lambda E[X_{(n,k)}^2]}{2(1-\lambda E[X_{(n,k)}])}\end{array}\!\right)& \!\!\!otherwise. \end{array} \right.
\end{align} }\label{r-Upper-bounds} where $\mathbb T_{i}$ is the expected sojourn time of the $(\lambda,\mu)$-equivalent $(i,i)$ fork-join queue, $\lambda$ is the job arrival rate, and $X_{(n,k)}$ is the $k^{th}$ order statistic of the iid service time rvs $X_{1\mathrel{{.}\,{.}}\nobreak n}$. \begin{proof} According to Corollaries \ref{torhocoro} and \ref{to1coro}, and excluding the split-merge bounds when $\lambda \geq \frac{1}{E[X_{(n,k)}]}$, we derive Eq. \ref{eq-r}. \end{proof} \begin{remark} Although Eq. \ref{eq-r} has extended the boundable range of $\rho$ from $[0,\frac{E[X_{(1,1)}]}{E[X_{(n,k)}]})$ to $[0,1)$, there is still an untamed range of $\rho$, since purging $(n,k)$ fork-join queues may remain stable even when $\rho\geq 1$. \end{remark} \end{theorem} \paragraph{Upper Bounds for Exponential Queues} Specifically, we give the refined upper bounds for exponential purging $(n,k)$ fork-join queues. \begin{theorem}[Refined Upper Bounds for Exponential Purging Queues]{For exponential purging $(n,k)$ fork-join queues, the expected sojourn time $\mathbb {PT}_{n,k}$ can be upper bounded as follows: \begin{align}\label{eq-eu} \mathbb {PT}_{n,k}\!\!\leq\!\!\left\{ \begin{array}{c} \underbrace{\!\!\!\!\!\begin{array}{c}\frac{12-\rho}{88\mu(1-\rho)} \sum_{i=k}^{n}{W_i^{n,k}\!\left[\frac{11H_i+4\rho(H_2-H_i)}{H_2}\right]}\end{array}}_{Naive}\\ \text{when }k\geq 2 \text{ and }\rho \geq \frac{1}{H_n-H_{n-k}},\\\\ \!\!\!\!\min\!\left(Naive,\overbrace{\begin{array}{c} \frac{H_n-H_{n-k}}{\mu}+\\ \frac{\rho[(H_{n^2}-H_{(n-k)^2})+(H_n-H_{(n-k)})^2]}{2\mu[1-\rho(H_n-H_{n-k})]} \end{array}}^{Split-Merge}\right)\\ otherwise. \end{array} \right. \end{align} }\label{e-Upper-bounds} where $\lambda$ and $\mu$ are respectively the job arrival rate and the sub-queue's service rate, $\rho\equiv \frac{\lambda}{\mu}$, and $H_{n^2}=\sum^{n}_{i=1}\frac{1}{i^2}$. \begin{proof} The split-merge part of Eq. \ref{eq-eu} is already given by Theorem 2 of \cite{Joshi:2012cv}. According to Theorems \ref{non-e} and \ref{r-Upper-bounds}, we derive Eq. \ref{eq-eu}. \end{proof} \end{theorem} We make numerical comparisons between the naive upper bounds and the split-merge upper bounds, and examine the refined upper bounds against the mean sojourn times of jobs sampled from various simulated purging fork-join queues. The values of the $W$ coefficients used in Eq. \ref{eq-eu} are given by Eq. \ref{eq-w-coe}. We find that: \begin{itemize} \item The split-merge upper bounds become extremely pessimistic with the increase of $k$, but tend to be much tighter than the naive bounds when $k$ is small (see Fig. \ref{Up-fig} (a)). These results are consistent with Corollaries \ref{torhocoro} and \ref{to1coro}. \item There is still plenty of room for improving the upper bounds when $k$ is relatively large (see Fig. \ref{Up-fig} (b)). \end{itemize} \subsection{Lower Bounds} To complete our work, we review and compare the state-of-the-art lower bounds for purging $(n,k)$ fork-join queues. For purging $(n,k)$ fork-join queues with iid service time distributions and under a Poisson job arrival process, the state-of-the-art lower bounds are the \textit{split-merge lower bounds} given in \cite{Joshi:2017bj}: \begin{equation}\label{eq-sm-b} \mathbb {PT}_{n,k}\geq E[X_{(n,k)}]+\frac{\lambda E[X_{(n,1)}^2]}{2(1-\lambda E[X_{(n,1)}])} \end{equation} where $\lambda$ is the job arrival rate and $X_{(n,k)}$ is the $k^{th}$ order statistic of the iid service time rvs $X_{1\mathrel{{.}\,{.}}\nobreak n}$.
For exponential purging $(n,k)$ fork-join queues, there is another staging-analysis-based lower bound \cite{Joshi:2012cv}: \begin{equation} \mathbb {PT}_{n,k}\geq\frac{H_n-H_{n-k}}{\mu}+\frac{\rho(H_{n(n-\rho)}-H_{(n-k)(n-k-\rho)})}{\mu} \end{equation} where $\lambda$ and $\mu$ are respectively the job arrival rate and the sub-queue's service rate, $\rho\equiv \frac{\lambda}{\mu}$, and $H_{n(n-\rho)}=\sum_{i=1}^{n}\frac{1}{i(i-\rho)}$. This \textit{staging lower bound} is adapted from the staging lower bound for basic fork-join queues proposed in \cite{Varki:2001wc}, which requires the memoryless property of the service time distribution. Accordingly, this bound cannot be applied to general purging queues. \begin{theorem}{\label{th-better-lower} The staging lower bounds are tighter than the split-merge lower bounds.} \begin{proof} The exact form of Eq. \ref{eq-sm-b} for exponential queues can be transformed into: $$\mathbb {PT}_{n,k}\geq\frac{H_n-H_{n-k}}{\mu}+\frac{\rho (H_{n(n-\rho)}-H_{(n-1)(n-1-\rho)})}{\mu}.$$ As $H_{(n-k)(n-k-\rho)}< H_{(n-1)(n-1-\rho)}$ when $k>1$, we derive Theorem \ref{th-better-lower}. \end{proof} \end{theorem} \begin{figure}[tp!] \centering \includegraphics{pics/lower-bound.pdf} \caption{Bounds v.s. Simulations for Exponential Purging $(25,1), (25,2),..., (25,25)$ Fork-Join Queues ($\mu=1, \rho=0.7$)} \label{lower-fig} \end{figure} We examine the bounds for exponential purging queues against simulations. Fig. \ref{lower-fig} depicts the large gap between the upper bounds and the lower bounds when $k$ is relatively large, because of which we can hardly find reasonable approximations of the expected sojourn time of purging $(n,k)$ fork-join queues. \section{Discussion}\label{discuss} Currently, there is a non-negligible limitation when putting the newly proposed linear transformation technique into practice: The value of the $W$ coefficient given by Eq. \ref{eq-w-coe} increases explosively with the increase of $n$, for example $W^{40,37}_{37}={40 \choose 37}=9880$, $W^{50,37}_{37}={50 \choose 37}=354860518600$, and $W^{100,37}_{37}={100 \choose 37}=3.42002954749393 \times 10^{27}$, as a result of which the originally negligible relative error of $\mathbb T_i$ in Theorem \ref{the6} will be amplified into a huge deviation of the approximated $\mathbb {NT}_{n,k}$. Consequently, the practicability of the linear transformation technique depends on whether we can find high-precision approximated or simulated $\mathbb T_i$. For example, when we use simulated $\mathbb T_i$ to estimate $\mathbb {NT}_{n,k}$, the results are far from acceptable (see Table \ref{simu-based}), and also far behind the Nelson-LT approximations (see Fig. \ref{linear-nelson}). These surprising results can be attributed to the fluctuation of the simulated $\mathbb T_i$ (see Fig. \ref{simu-nelson}). Fig. \ref{linear-nelson} shows that the accuracy of the Nelson-LT approximation is acceptable when $n\leq 50$ and $k$ is relatively large (for example, $k> 37$ when $n=50$). However, the approximations are similarly unacceptable when $k$ is relatively small (see Table \ref{nel-based}). \begin{figure}[tp!] \centering \includegraphics{pics/Simu-vs-Nelson.pdf} \caption{Simulations (SIM) v.s. Nelson's Approximations for Basic $(1,1),(2,2),...,(20,20)$ Fork-Join Queues ($\mu$=1)} \label{simu-nelson} \end{figure} \begin{table}[h!]
\centering \caption{Approximated $\mathbb {NT}_{n,k}$ Based on Simulated $\mathbb T_i$ ($\mu=1$)} \label{simu-based} \begin{tabular}{l|lll} $(n,k)$ & $\rho$: 0.05 & $\rho$: 0.4 & $\rho$: 0.8\\ \hline (20,17) & 25.17029838 & -37.93974618 & 70.76161205\\ (20,18) & 0.412468275 & 6.238904164 & 3.024177347\\ (20,19) & 2.772567927 & 3.796009747 & 11.47066833 \end{tabular} \end{table} \begin{table}[h!] \centering \caption{Approximated $\mathbb {NT}_{n,k}$ Based on Nelson's $\mathbb T_i$ ($\mu=1$)} \label{nel-based} \begin{tabular}{l|lll} $(n,k)$ & $\rho$: 0.2 & $\rho$: 0.4 & $\rho$: 0.8\\ \hline (50,34) & -317.7265625 & -203.25 & -580.0859375\\ (50,35) & 23 & -7.599609375 & -18.87890625\\ (50,36) & -4.269042969 & 4.208007813 & 9.935546875 \end{tabular} \end{table} The fundamental solution of this problem is to scale down the $W$ coefficients, a promising research direction that needs further effort. Nevertheless, this linear transformation technique is capable of estimating/bounding the performance of most practical non-purging/purging $(n,k)$ fork-join queueing systems, where the replication factor ($n$) rarely exceeds 10 as a result of cost-effectiveness tradeoffs. For example, the replication factor of either Dynamo or Cassandra is commonly 3. Under such configurations, a write operation in Dynamo/Cassandra will be forked into exactly 3 copies. For such $n\leq 10$ cases, we have proposed the fairly good Varma-LT approximations (Theorem \ref{non-e-varma}). From another perspective, the linear transformation technique can be used to check simulators' precision and to find better closed-form approximations for basic fork-join queues. \section{Related Works}\label{review} \paragraph{Order Statistics} Bertsimas et al. \cite{Bertsimas:2006ge} gave some tight bounds on the expectation of the $k^{th}$ order statistic given the first and second moment information on $n$ real-valued rvs. We gave exact linear transformations for the $k^{th}$ order statistic instead of bounds. Shi et al. \cite{Shi:2013gw} proposed a dynamic programming algorithm to compute the order statistics of a family of correlated rvs whose distributions are not required to be identical. This algorithm relies on the existence of computing methods for the distributions of both the minimum and the maximum of the target rvs. By contrast, our work is more formal and easier to put into practice, thanks to the closed-form linear transformation it reveals, and it relies only on the existence of computing methods for the maximum's distribution. \paragraph{Basic Fork-Join Queues} For $n=2$ fork-join queues under a Poisson job arrival process: 1) Flatto et al. \cite{Flatto1984} gave the queue length distribution for exponential queues in the stable state; 2) Baccelli \cite{baccelli1985two} extended Flatto's work to queues with general service time distributions; 3) Nelson et al. \cite{Nelson:1988jk} proposed the exact closed-form solution of the expected sojourn time for exponential queues in the stable state. For $n\geq 3$ exponential fork-join queues, the most influential approximation work was proposed by Nelson et al. in 1988 \cite{Nelson:1988jk}, which is based on the fact that the sojourn times $X_{1\mathrel{{.}\,{.}}\nobreak k}$ of sub-tasks $1, 2,...,k$ are associated rvs \cite{Esary:1967eo}, whose maximum can be bounded by the maximum of their iid equivalents. The lower bound is obtained by neglecting queueing effects. The approximation is a linear mixture of the upper and lower bounds.
Parameters of the mixture are learned from the mean sojourn times of jobs sampled from simulated basic fork-join queues. Varki et al. \cite{Varki:2001wc} improved the lower bound by using a staging analysis technique \cite{Trivedi:1982vp} based on the memoryless property of the exponential service time distribution, and used the mean value of Nelson's upper bound and the staging lower bound as the approximation. According to the experiments in \cite{Lebrecht:2007tm}, Nelson's approximation is still the most reliable one for exponential queues, compared to subsequent works including \cite{Varma:1994gm} and \cite{Varki:2001wc}. Varma et al. \cite{Varma:1994gm} extended Nelson's approximation to general service time distributions using a light traffic interpolation technique. Thomasian et al. \cite{Thomasian:1994kq} employed linear regression over the statistics of simulated fork-join jobs to find the parameters of their approximation equation for the expected sojourn time. However, any change in the service time distributions will require re-simulation and re-regression. Recently, Rizk et al. \cite{Rizk:2015tn} proposed the first computable bounds on the waiting and sojourn times of fork-join queues with general service time distributions by using martingales. However, the upper bound is looser than Nelson's when it comes to the exponential service time distribution. Fidler et al. \cite{Fidler:2016tw} considered the multi-stage nature of many fork-join queue networks, and proposed their end-to-end delay bounds. We refer readers to \cite{Thomasian:2014df} for a more comprehensive survey on fork-join queueing systems. To conclude, our work is orthogonal to existing approximation methods for basic fork-join queues. \paragraph{Purging $(n,k)$ Fork-Join Queues} There are some exact quantitative analyses \cite{Gardner:2015kb,Lee:2017gi} for purging $(n,1)$ fork-join queues, as such a queue is equivalent to a single queue with $n$ times the service rate. Gardner et al. \cite{Gardner:2015kb} presented comprehensive research on purging $(n,1)$ fork-join queues, with consideration of multi-class jobs, interference from un-forked jobs, and heterogeneous service time distributions. Lee et al. \cite{Lee:2017gi} took the purging overheads into consideration, since the cancellation of running jobs typically incurs non-negligible delays in practice. For purging $(n,k>1)$ fork-join queues, there are currently no applicable approximations. Joshi et al. \cite{Joshi:2012cv} extended the staging analysis to exponential $(n,k)$ fork-join queues to find lower bounds. Bounds for queues with general service time distributions are given by \cite{Joshi:2012cv} and \cite{Joshi:2017bj}, by reducing the fork-join queue to the split-merge queue model, where all empty sub-queues are blocked until any $k$ sub-tasks of the current job are completed. As depicted in Fig. \ref{Up-fig} (a), the proposed upper bounds tend to be very rough when increasing $k$ or the load factor $\rho$. \paragraph{Non-Purging $(n,k)$ Fork-Join Queues} A typical use case of non-purging $(n,k)$ fork-join queues is the writing process in Cassandra \cite{Huang:2014gq}. Fidler et al. \cite{Fidler:2016tw} gave non-asymptotic statistical bounds on the sojourn time of non-purging $(n,k)$ fork-join queues. By contrast, we give proper approximations instead of bounds. \section{Conclusion and Future Work}\label{con} Despite the popularity of $(n,k)$ fork-join queues, there were no practical approximations of their expected sojourn times.
Only some rough bounds have been given, which tend to be extremely loose when increasing $k$ or the load factor $\rho$. This paper gave the first applicable approximation method for non-purging $(n,k)$ fork-join queues and tackled the uncontrollability of the bounds for purging $(n,k)$ fork-join queues: \begin{itemize} \item A brand-new closed-form linear transformation technique is developed for jointly-identical rvs, which provides a bridge to reduce the sojourn time approximation problem of non-purging $(n,k)$ fork-join queues to that of basic fork-join queues. \item Improvements over the upper bounds on the expected sojourn time of purging $(n,k)$ fork-join queues are also gained by relating the purging queues to their non-purging $(\lambda,\mu)$-equivalents. \end{itemize} These innovations are examined by simulation experiments and numerically compared to the state of the art. Results show that this linear transformation approach performs well for exponential $(n,k)$ fork-join queues with moderate $n$ and relatively large $k$. However, as the currently found $W$ coefficients (coefficients of the linear combination) increase explosively with the increase of $n$, there is an uncontrollable deviation in the newly proposed approximations when $n$ is large and $k$ is relatively small. Fortunately, approximations for real-life fork-join systems are unlikely to be influenced by this problem. In the future, more effort should be put into scaling down the $W$ coefficients, improving the approximations for basic fork-join queues with the help of the linear transformation technique, and evaluating the performance of real-life $(n,k)$ fork-join systems as a complement to existing experimental methods \cite{Wang:2014tr,Kuhlenkamp:2014te}.
\section{Introduction}\label{sec:intro} Every student of mathematics has experienced uncertainty about conjectures for which there is ``quite a bit of evidence'', such as the Riemann hypothesis or the twin prime conjecture. Indeed, when Zhang \cite{zhang2014bounded} proved a bound on the gap between primes, we were tempted to increase our credence in the twin prime conjecture. But how much evidence does this bound provide for the twin prime conjecture? Can we quantify the degree to which it should increase our confidence? The natural impulse is to appeal to probability theory in general and Bayes' theorem in particular. Bayes' theorem gives rules for how to use observations to update empirical uncertainty about unknown events in the physical world. However, probability theory lacks the tools to manage logical non-omniscience: probability-theoretic reasoners cannot possess uncertainty about logical facts so long as their beliefs respect basic logical constraints. For example, let $\phi$ stand for the claim that the 87,653rd digit of $\pi$ is a 7. If this claim is true, then $(1+1=2) \Rightarrow \phi$. But the laws of probability theory say that if $A \Rightarrow B$ then $\mathrm{Pr}(A) \le \mathrm{Pr}(B)$. Thus, a perfect Bayesian must be at least as sure of $\phi$ as they are that $1+1=2$! Recognition of this problem dates at least back to \cite{Good:1950:weighing}. Many have proposed methods for relaxing the criterion $\mathrm{Pr}(A) \le \mathrm{Pr}(B)$ until such a time as the implication has been proven (see, e.g., the work of \cite{Hacking:1967,Christiano:2014:omniscience}). But this leaves open the question of how probabilities should be assigned before the implication is proven, and this brings us back to the search for a principled method for managing uncertainty about logical facts when relationships between them are suspected but unproven. In this paper we describe what we call the \emph{logical induction criterion} for reasoning under logical uncertainty. Our solution works, more or less, by treating a reasoner's beliefs as prices in a market that fluctuate over time, and requiring that those prices not be exploitable indefinitely by any sequence of trades constructed by an efficient (polynomial-time) algorithm. The logical induction criterion can be seen as a weakening of the ``no Dutch book'' criteria that Ramsey \cite{Ramsey:1931}, de Finetti \cite{DeFinetti:1937:foresight}, Teller \cite{teller1973conditionalization}, and Lewis \cite{lewis1999papers} used to support standard probability theory, which is analogous to the ``no Dutch book'' criteria that von Neumann and Morgenstern \cite{Von-Neumann:1944} and Joyce \cite{Joyce:1999} used to support expected utility theory. Because of the analogy, and the variety of desirable properties that follow immediately from this one criterion, we believe that the logical induction criterion captures a portion of what it means to do good reasoning about logical facts in the face of deductive limitations. \Sec{desiderata} lists desiderata for reasoning under logical uncertainty. \Sec{relatedwork} lists further related work. \Sec{framework} presents an overview of the logical induction framework. \Sec{properties} discusses a collection of properties satisfied by logical inductors. \Sec{discussion} gives concluding remarks. Note on abridgement: Due to space considerations, this paper does not include proofs of claims, and describes some results only at a high level. 
The formal details of our definitions and theorems, additional properties of logical inductors, proofs of properties, a construction of a logical inductor, and further discussion can be found in \cite{Garrabrant:2016:li}. \section{Desiderata for Reasoning under Logical Uncertainty}\label{sec:desiderata} For historical context, and to further reify the problem, we now review a number of desiderata that have been proposed in the literature as desirable features of ``good reasoning'' in the face of logical uncertainty. \begin{desideratum}[Computable Approximability]\label{des:computable}\label{des:first} The method for assigning probabilities to logical claims (and refining them over time) should be computable. \end{desideratum} \begin{desideratum}[Coherence in the Limit]\label{des:coherent}\label{des:second} The belief state that the reasoner is approximating better and better over time should be logically consistent. \par\rparenthetical{Discussed in \Sec{limitprops}.} \end{desideratum} \begin{desideratum}[Approximate Coherence]\label{des:ic} The belief states of the reasoner over time should be approximately logically consistent. \par\rparenthetical{Discussed in \Sec{timelylearning}.} \end{desideratum} \noindent \Des{ic} dates back at least to Good \cite{Good:1950:weighing}, who proposes a weakening of the condition of coherence that could apply to the belief states of limited reasoners. Hacking \cite{Hacking:1967} proposes an alternative weakening, as do Garrabrant et al. \cite{Garrabrant:2016:ic}. \begin{desideratum}[Learning of Statistical Patterns]\label{des:stats} In lieu of knowledge that bears on a logical fact, a good reasoner should assign probabilities to that fact in accordance with the rate at which similar claims are true. \end{desideratum} \noindent For example, a good reasoner should assign probability $\approx 10\%$ to the claim ``the $n$th digit of $\pi$ is a 7'' for large $n$ (assuming there is no efficient way for a reasoner to guess the digits of $\pi$ for large $n$); see \cite{Savage:1967:personal}. \begin{desideratum}[Calibration]\label{des:calibration} Good reasoners should be well-calibrated. That is, among events that a reasoner says should occur with probability $p$, they should in fact occur about $p$ proportion of the time. \end{desideratum} \begin{desideratum}[Non-Dogmatism]\label{des:nondogmatism} A good reasoner should not have extreme beliefs about mathematical facts, unless those beliefs have a basis in proof. \par\rparenthetical{Discussed in \Sec{limitprops}.} \end{desideratum} \noindent In the domain of logical uncertainty, \Des{nondogmatism} can be traced back to Carnap \cite[Sec. 53]{Carnap:1962:LogicalProbability}, and has been demanded by many, including Gaifman \cite{Gaifman:1982:RichProbabilities} and Hutter \cite{Hutter:2013}. \begin{desideratum}[Uniform Non-Dogmatism]\label{des:pa} A good reasoner should assign a non-zero probability to any computably enumerable consistent theory (viewed as a limit of finite conjunctions). \par\rparenthetical{Discussed in \Sec{limitprops}.} \end{desideratum} \noindent The first formal statement of \Des{pa} that we know of is given by Demski \cite{Demski:2012a}, though it is implicitly assumed whenever asking for a set of beliefs that can reason accurately about arbitrary arithmetical claims (as is done by, e.g., Savage \cite{Savage:1967:personal} and Hacking \cite{Hacking:1967}).
\begin{desideratum}[Universal Inductivity]\label{des:solomonoff} Given enough time to think, the beliefs of a good reasoner should dominate any (lower semicomputable) semimeasure. \par\rparenthetical{Discussed in \Sec{limitprops}.} \end{desideratum} \begin{desideratum}[Approximate Bayesianism]\label{des:bayes} The reasoner's beliefs should admit of some notion of conditional probabilities, which approximately satisfy both Bayes' theorem and the other desiderata listed here. \end{desideratum} \begin{desideratum}[Self-knowledge]\label{des:introspection} If a good reasoner knows something, she should also know that she knows it. \rparenthetical{Discussed in \Sec{introspection}.} \end{desideratum} \noindent Proposed by Hintikka \cite{Hintikka:1962:knowledge}, \Des{introspection} is popular among epistemic logicians. This desideratum has been formalized in many different ways; see \cite{Christiano:2013:definability,Campbell:2015:SelfReference} for a sample. \begin{desideratum}[Self-Trust]\label{des:lob}\label{des:penult} A good reasoner thinking about a hard problem should expect that, in the future, her beliefs about the problem will be more accurate than her current beliefs. \rparenthetical{Discussed in \Sec{selftrust}.} \end{desideratum} \begin{desideratum}[Approximate Inexploitability]\label{des:inexp}\label{des:last} It should not be possible to run a Dutch book against a good reasoner in practice. \rparenthetical{See \Sec{criterion} for our proposal.} \end{desideratum} \noindent As noted by Eells \cite{Eells:1990:OldEvidence}, the Dutch book constraints used by von Neumann and Morgenstern \cite{Von-Neumann:1944} and de Finetti \cite{DeFinetti:1937:foresight} are implausibly strong: all it takes to run a Dutch book according to de Finetti's formulation is for the bookie to know a logical fact that the reasoner does not know. Thus, to avoid being Dutch booked by de Finetti's formulation, a reasoner must be logically omniscient. Hacking \cite{Hacking:1967} and Eells \cite{Eells:1990:OldEvidence} call for weakenings of the Dutch book constraints, in the hopes that reasoners that are approximately inexploitable would do good approximate reasoning. This idea is the cornerstone of our framework---we consider reasoners that cannot be exploited by betting strategies that can be constructed by a polynomial-time machine. Logical inductors satisfy desiderata~\ref{des:first} through~\ref{des:last}. In fact, logical inductors are designed to meet only~\Des{computable} (computable approximability) and~\Des{inexp} (approximate inexploitability), from which desiderata~\ref{des:second}--\ref{des:penult} all follow (see \cite{Garrabrant:2016:li}). \section{Additional Related Work}\label{sec:relatedwork} The study of logical uncertainty is an old topic. It can be traced all the way back to Bernoulli, who laid the foundations of statistics, and later Boole \cite{boole1854investigation}, who was interested in the unification of logic with probability from the start. Refer to \cite{hailperin1996sentential} for a historical account. Our algorithm assigns probabilities to sentences of logic directly; this thread can be traced back through {\L}o{\'s} \cite{Los:1955} and later Gaifman \cite{Gaifman:1964}, who developed the notion of coherence that we use in this paper.
When it comes to the problem of developing formal tools for manipulating uncertainty, our methods are heavily inspired by Bayesian probability theory, and so can be traced back to Pascal, who was followed by Bayes, Laplace, Kolmogorov \cite{kolmogorov1950foundations}, Savage \cite{savage1954foundations}, Carnap \cite{Carnap:1962:LogicalProbability}, Jaynes \cite{Jaynes:2003}, and many others. Polya \cite{polya1990mathematics} was among the first in the literature to explicitly study the way that mathematicians engage in plausible reasoning, which is tightly related to the object of our study. In addition to Good \cite{Good:1950:weighing}, Savage \cite{Savage:1967:personal}, and Hacking \cite{Hacking:1967}, the flaw in Bayesian probability theory was also highlighted by Glymour \cite{Glymour:1980:OldEvidence}, and dubbed the ``problem of old evidence'' by Garber \cite{Garber:1983:OldEvidence} in response to Glymour's criticism. Eells \cite{Eells:1990:OldEvidence} gave a lucid discussion of the problem, revealed flaws in Garber's arguments and in Hacking's solution, and named a number of other desiderata which our algorithm manages to satisfy; see \cite{zynda1995old} and \cite{sprenger2015novel}. Adams \cite{adams1996primer} uses logical deduction to reason about an unknown probability distribution that satisfies certain logical axioms. Our approach works in precisely the opposite direction: we use probabilistic methods to create an approximate distribution where logical facts are the subject. Some work in epistemic logic has been directed at modeling the dynamics of belief updating in non-omniscient agents; see for example \cite{konolige1983deductive,velazquez2014dynamic,balbiani2016logical}. Our approach differs in that we use first-order logic, and therefore use the recursion theorem to make reflective statements instead of using explicit knowledge or belief operators; the potential paradoxes of self-reference are circumvented by allowing beliefs to be probabilistic. The mechanism used by our logical inductor to update its beliefs is not very transparent, leaving open the possibility of a more principled understanding of the local mechanics of updating probabilities on logical or inductive inferences. Straddling the boundary between philosophy and computer science, Aaronson \cite{Aaronson:2013:PhilosophersComplexity} has made a compelling case that computational complexity must play a role in answering questions about logical uncertainty. Fagin and Halpern \cite{fagin1987belief} also straddled this boundary with early discussions of algorithms that manage uncertainty in the face of resource limitations. (See also their discussions of uncertainty and knowledge \cite{Fagin:1995:knowledge,Halpern:2003}.) \section{The Logical Induction Criterion}\label{sec:framework}\label{sec:criterion} We propose a partial solution to the problem of logical non-omniscience, which we call \emph{logical induction}. Roughly speaking, a \emph{logical inductor} is a computable reasoning process that is not exploitable by any polynomial-time computable strategy for making trades against it, using its probabilities as the prices of shares. In this section we give a high-level overview of the criterion and the main result (details are in \cite{Garrabrant:2016:li}), before giving precise statements in \Sec{properties} of some of the properties satisfied by logical inductors. Very roughly, our setup works as follows.
We consider reasoners that assign probabilities to sentences $\mathcal{S}$ written in some formal language $\mathcal{L}$. \begin{definition}[Pricing]\label{def:pricing} A \textbf{pricing} is a computable rational function $\mathbb{P} : \mathcal{S} \to \QQ \cap [0, 1]$. \end{definition} \noindent Here $\mathbb{P}(\phi)$ is interpreted as the probability of~$\phi$. We can visualize a pricing as a list of $(\phi, p)$ pairs, where the $\phi$ are unique sentences and the $p$ are rational-number probabilities, and $\mathbb{P}(\phi)$ is defined to be $p$ if $(\phi, p)$ occurs in the list, and $0$ otherwise. (In this way we can represent belief states of reasoners that can be written down explicitly in a finite amount of space.) The output of a reasoner is then nothing but a sequence of pricings: \begin{definition}[Market]\label{def:marketprocess} A \textbf{market} $\seq{\mathbb{P}}=(\mathbb{P}_1,\mathbb{P}_2,\ldots)$ is a computable sequence of pricings $\mathbb{P}_i : \mathcal{S} \to \QQ \cap [0,1]$. \end{definition} \noindent The pricings $(\mathbb{P}_1,\mathbb{P}_2,\ldots)$ represent the belief states of a reasoner progressively refining their opinions about the logical statements in $\mathcal{S}$. In the background, there is some process producing progressively larger sets of trusted statements: \begin{definition}[Deductive Process]\label{def:dedproc} A \textbf{deductive process} $\seq{\dt} : \NN^+ \to \operatorname{Fin}(\mathcal{S})$ is a computable nested sequence $D_1 \subseteq D_2 \subseteq D_3 \ldots$ of finite sets of sentences. \end{definition} \noindent The deductive process $\seq{\dt}$ can be thought of as a theorem prover for some trusted logical theory $\Gamma$ in the language $\mathcal{L}$. Indeed, we will henceforth assume that $\Gamma = \bigcup_n D_n$. Thus the goal of our reasoner $\seq{\mathbb{P}}$ is to anticipate which statements will be proven or disproven by $\Gamma$, well before the rote proof-search $\seq{\dt}$ decides those statements. As in classical Dutch book arguments for probability theory, in addition to seeing $\mathbb{P}(\phi)=p$ as an assignment of subjective credence to $\phi$, we also view $\mathbb{P}(\phi)$ as a stance with respect to which bets are desirable or not. That is, we interpret $\mathbb{P}(\phi)=p$ to mean that the price of a $\phi$-share according to $\mathbb{P}$ is $p$, where (roughly speaking) a $\phi$-share is worth \$1 if $\phi$ is true. This allows us to set up Dutch book arguments against a reasoner using computable bookies: \begin{definition}[Trader]\label{def:trader} A \textbf{trader} is a sequence $(\trade_1, \trade_2, \ldots)$ where each $\trade_n$ is a trading strategy for day $n$. \end{definition} \noindent Without belaboring the details, a trading strategy for day $n$ is a strategy for responding to the day's market prices $\mathbb{P}_n$ with buy orders and sell orders for shares in sentences from $\mathcal{S}$. (Formally, it is a continuous function from pricings to linear combinations of sentences, expressed in some computable language.) Over time, a trader accumulates cash and stock holdings from the trades it makes against $\seq{\pt}$. 
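\noindent To make these definitions concrete, consider the following toy sketch in Python (entirely illustrative and of our own devising: the sentence label, prices, and thresholds are arbitrary, and in the formalism a trading strategy is a continuous function of the pricing rather than a threshold rule). It represents a pricing as a finite map from sentences to rational prices and implements a simple ``buy low, sell high'' strategy:
\begin{verbatim}
from fractions import Fraction

# A pricing maps sentences (represented here as strings) to rational
# probabilities in [0, 1]; sentences absent from the dict have price 0.
def price(pricing, sentence):
    """Price of one share in `sentence`; a share pays $1 if it is true."""
    return pricing.get(sentence, Fraction(0))

# A market is the sequence of daily pricings P_1, P_2, ...
market = [
    {"phi": Fraction(3, 10)},  # day 1: P_1(phi) = 0.3
    {"phi": Fraction(7, 10)},  # day 2: P_2(phi) = 0.7
    {"phi": Fraction(3, 10)},  # day 3: P_3(phi) = 0.3
]

def trading_strategy(pricing):
    """Toy strategy: buy one phi-share when cheap, sell one when dear.
    Returns the number of shares to buy (negative means sell)."""
    p = price(pricing, "phi")
    if p < Fraction(4, 10):
        return 1
    if p > Fraction(6, 10):
        return -1
    return 0

# Run the trader against the market, tracking cash and stock holdings.
cash = Fraction(0)
shares = 0
for pricing in market:
    n = trading_strategy(pricing)
    cash -= n * price(pricing, "phi")
    shares += n
print(f"cash = {cash}, phi-shares = {shares}")  # cash = 1/10, phi-shares = 1
\end{verbatim}
If the price of $\phi$ kept oscillating like this forever, the trader's holdings would grow without bound; the criterion below is designed to rule out exactly this kind of exploitation.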
The logical induction criterion then demands of market prices $\seq{\pt}$ that no efficiently computable trader can reliably make money by trading against the market prices $(\mathbb{P}_1,\mathbb{P}_2,\ldots)$: \begin{key} \begin{restatable}[The Logical Induction Criterion]{definition}{criterion}\label{def:lic} A market $\seq{\pt}$ is said to satisfy the \textbf{logical induction criterion{}} relative to a deductive process $\seq{\dt}$ if there is no efficiently computable trader that exploits $\seq{\pt}$ relative to $\seq{\dt}$. A market $\seq{\pt}$ meeting this criterion is called a \textbf{logical inductor{} over $\bm{\seq{\dt}}$}. \end{restatable} \end{key} \noindent Again glossing over details, a trader is said to exploit $\seq{\pt}$ relative to $\seq{\dt}$ if the possible values of the trader's holdings from trading against $\seq{\pt}$ are unboundedly high over time, without being unboundedly low, where holdings are evaluated by what truth assignments to $\mathcal{S}$ are propositionally consistent with $D_n$ at time $n$. Here, ``efficiently computable'' (abbreviated e.c.) can be taken to mean computable in time polynomial in $n$, but this is not crucial to the definition. Given the assumption that $\Gamma = \bigcup_n D_n$, we also say that $\seq{\pt}$ is a logical inductor over $\Gamma$. Our key theorem is that this criterion, while gratifyingly strong, is also feasible: \begin{key} \begin{restatable}{theorem}{logindcri}\label{thm:li} For any deductive process $\seq{\dt}$, there exists a computable belief sequence $\seq{\pt}$ satisfying the logical induction criterion{} relative to $\seq{\dt}$. \end{restatable} \end{key} \section{Properties of Logical Inductor{}s}\label{sec:properties} Here is an intuitive argument that logical inductors perform good reasoning under logical uncertainty: \begin{quote} Consider any polynomial-time method for efficiently identifying patterns in logic. If the market prices don't learn to reflect that pattern, a clever trader can use that pattern to exploit the market. Thus, a logical inductor must learn to identify those patterns. \end{quote} \noindent This section will substantiate this argument by stating a number of properties satisfied by logical inductors, corresponding to some of the desiderata discussed in \Sec{desiderata}. Proofs of \Thm{li} and the theorems in this section can be found in \cite{Garrabrant:2016:li}. \subsection{Notation} Throughout, we assume that $\seq{\pt}$ is a logical inductor over the theory $\Gamma$. We also assume that $\Gamma$ represents computations in the technical sense, i.e. we can write terms in $\mathcal{L}$ that stand for computations, and $\Gamma$ proves that those terms evaluate to their correct value (and no other value). We will enclose sentences in quotation marks when they are used as syntactic objects. An underlined symbol should be replaced by the expression it stands for. For example, $\enc{f}(\enc{n})$ stands for a program that computes the function $f$ given input $n$, whereas $\enc{f(n)}$ stands for the numeral $f(n)$ evaluates to. We use an overline to denote sequences of sentences, probabilities, and other objects, as in $\seq{\pt}$ and $\seq{\dt}$; for example, $\seq{\phi}$ is the sequence of sentences $(\phi_1,\phi_2, \dots)$. A sequence $\seq{x}$ is efficiently computable (e.c.) if and only if there exists a computable function $n \mapsto x_n$ with runtime polynomial in $n$. 
Given any sequences $\seq x$ and $\seq y$, we write \begin{align*} x_n \eqsim_n y_n & \quad\text{for}\quad \lim_{n \to\infty} (x_n - y_n) = 0,\text{~and}\\ x_n \gtrsim_n y_n & \quad\text{for}\quad \liminf_{n \to\infty} (x_n - y_n) \ge 0 \; . \end{align*} \subsection{Properties of the limit}\label{sec:limitprops} Firstly, the market prices of a logical inductor{} converge: \begin{restatable}[Convergence]{theorem}{convergence}\label{thm:con}\label{thm:first} The limit ${\BelState_\infty:\mathcal{S}\rightarrow[0,1]}$ defined by \[\BelState_\infty(\phi) := \lim_{n\rightarrow\infty} \BelState_n(\phi)\] exists for all $\phi$. \end{restatable} \begin{restatable}{sketch}{sketchcon} Roughly speaking, if $\seq{\pt}$ never makes up its mind about $\phi$, then it can be exploited by a trader arbitraging shares of $\phi$ across different days. That is, suppose by way of contradiction that $\BelState_n(\phi)$ never settles down, but rather oscillates by a substantial amount infinitely often. Then there is a trader that repeatedly buys a share in $\phi$ when the price is low, and sells it back when the price is high. This trader accumulates unbounded wealth, thereby exploiting $\seq{\pt}$, which contradicts that $\seq{\pt}$ is a logical inductor; therefore the limit $\BelState_\infty(\phi)$ must in fact exist. \end{restatable} \noindent This sketch showcases the main intuition for the convergence of $\seq{\pt}$, but elides a number of crucial details; see \cite{Garrabrant:2016:li}. Next, the limiting beliefs of a logical inductor{} represent a coherent probability distribution: \begin{restatable}[Limit Coherence]{theorem}{limitcoherence}\label{thm:lc} $\BelState_\infty$ is coherent, i.e., it gives rise to an internally consistent probability measure $\mathrm{Pr}$ on the set of all consistent completions $\Gamma':\mathcal{S}\to\BB$ of~$\Gamma$, defined by the formula \[\mathrm{Pr}(\Gamma'(\phi)=1):=\BelState_\infty(\phi).\] \end{restatable} \noindent First formalized by Gaifman \cite{Gaifman:1964}, coherence says that beliefs should satisfy probabilistic versions of logical consistency; for example, the reasoner should assign $\mathrm{Pr}(\phi) \leq \mathrm{Pr}(\psi)$ if $\phi \Rightarrow \psi$, etc. This theorem is proven using methods analogous to standard Dutch book arguments for coherent beliefs, translated into the language of traders. Convergence and coherence together justify that a logical inductor $\seq{\pt}$ approximates a belief state that is consistent with the background theory $\Gamma$. What else is there to say about the limiting beliefs $\BelState_\infty$ of a logical inductor? For starters, $\seq{\pt}$ learns not to assign extreme probabilities to sentences that are independent from $\Gamma$: \begin{restatable}[Non-Dogmatism]{theorem}{restatenondog}\label{thm:nd} If $\Gamma \nvdash \phi$ then $\BelState_\infty(\phi)<1$, and if $\Gamma \nvdash \neg\phi$ then $\BelState_\infty(\phi)>0$. \end{restatable} \noindent Non-dogmatism can be viewed as an inductive property: non-dogmatic beliefs can be easily conditioned on events (sentences) that haven't already been observed (proved or disproved), producing a coherent conditional belief state, whereas conditioning dogmatic beliefs can cause problems. We can push the idea of inductive reasoning much further, following the work of Solomonoff \cite{Solomonoff:1964,Solomonoff:1964a}, Zvonkin and Levin \cite{zvonkin1970complexity}, and Li and Vit\'anyi \cite{Li:1993} on empirical sequence prediction.
They describe an inductive process (known as a universal semimeasure) that predicts as well as or better than any computable predictor, modulo a constant amount of error. Although universal semimeasures are uncomputable, we can ask logically uncertain reasoners to copy those successes given enough time to think: \begin{restatable}[Domination of the Universal Semimeasure]{theorem}{restatedus}\label{thm:dus} Let $(b_1, b_2, \ldots)$ be a sequence of zero-arity predicate symbols in $\mathcal{L}$ not mentioned in $\Gamma$, and let $\sigma_{\le n}=(\sigma_1,\ldots,\sigma_n)$ be any finite bitstring. Define \[ \BelState_\infty(\sigma_{\le n}) := \BelState_\infty(\quot{(b_1 \iff \enc{\sigma_1}=1) \land \ldots \land (b_n \iff \enc{\sigma_n}=1)}), \] such that, for example, $\BelState_\infty(01101) = \BelState_\infty(\quot{\lnot b_1 \land b_2 \land b_3 \land \lnot b_4 \land b_5})$. Let $M$ be a universal continuous semimeasure. Then there is some positive constant $C$ such that for any finite bitstring $\sigma_{\le n}$, \[ \BelState_\infty(\sigma_{\le n}) \ge C \cdot M(\sigma_{\le n}). \] \proofin{\ref{app:dus}} \end{restatable} \noindent In other words, logical inductors are a computable approximation to a normalized probability distribution that dominates any lower semicomputable semimeasure. In fact, this dominance is strict: $\BelState_\infty$ will, e.g., assign positive probability to sequences that encode completions of Peano arithmetic, which the universal semimeasure does not do.\footnote{This does not contradict the universality of $M$, as $\BelState_\infty$ is higher in the arithmetical hierarchy than $M$.} \subsection{Outpacing deduction}\label{sec:timelylearning} It is not too difficult to define a reasoner that assigns probability~1 to all (and only) the provable sentences, in the limit: simply assign probability 0 to all sentences, and then enumerate all logical proofs, and assign probability~1 to the proven sentences. The real trick is to recognize patterns in a timely manner, well before the sentences can be proven by slow deduction. \begin{restatable}[Provability Induction]{theorem}{restatepi}\label{thm:provind}\label{thm:patfirst} Let $\seq{\phi}$ be an \ec sequence of theorems. Then \[ \BelState_n(\phi_n) \eqsim_n 1. \] Furthermore, let $\seq{\psi}$ be an \ec sequence of disprovable sentences. Then \[ \BelState_n(\psi_n) \eqsim_n 0. \] \end{restatable} \begin{sketch}[\ref{sec:provind} or~\ref{app:provind}] Suppose not. Then there is a trader that buys a share in $\phi_n$ whenever the price is too far below \$1, and then waits for $\phi_n$ to appear in the deductive process $\seq{\dt}$, repeating this process indefinitely. This trader would exploit $\seq{\pt}$, a contradiction. \end{sketch} \noindent In other words, $\seq{\pt}$ will learn to start believing $\phi_n$ by day $n$ at the latest, despite the fact that $\phi_n$ won't be deductively confirmed by $\seq{\dt}$ until some day $f(n)$, which is potentially much later. In colloquial terms, if $\seq{\phi}$ is a sequence of facts that can be generated efficiently, then $\seq{\pt}$ inductively learns the pattern, and its belief in $\seq{\phi}$ becomes accurate faster than $\seq{\dt}$ can computationally verify the individual sentences. \begin{quote} \textbf{Analogy: Ramanujan and Hardy.} Imagine that the statements $\seq{\phi}$ are being output by an algorithm that uses heuristics to generate mathematical facts without proofs, playing a role similar to the famously brilliant, often-unrigorous mathematician Srinivasa Ramanujan.
Then $\seq{\pt}$ plays the historical role of the beliefs of the rigorous G.H.\ Hardy who tries to verify those results according to a slow deductive process ($\smash{\seq{\dt}}$). After Hardy ($\seq{\pt}$) verifies enough of Ramanujan's claims ($\phi_{\le n}$), he begins to trust Ramanujan, even if the proofs of Ramanujan's later conjectures are incredibly long, putting them ever-further beyond Hardy's current abilities to rigorously verify them. In this story, Hardy's inductive reasoning (and Ramanujan's also) outpaces his deductive reasoning. \end{quote} \noindent To further emphasize the meaning of \Theorem{provind}, consider the famous halting problem of Turing \cite{turing1936computable}. Turing proved that there is no general algorithm for determining whether or not an arbitrary computation halts. Let's examine what happens when we confront logical inductors with the halting problem. \begin{restatable}[Learning of Halting Patterns]{theorem}{restatehalts}\label{thm:halts} Let $\seq{m}$ be an \ec sequence of Turing machines, and $\seq{x}$ be an \ec sequence of bitstrings, such that $m_n$ halts on input $x_n$ for all $n$. Then \[ \BelState_n(\quot{\text{$\enc{m_n}$ halts on input $\enc{x_n}$}}) \eqsim_n 1. \] \proofin{\ref{app:halts}} \end{restatable} \noindent Of course, this is not so hard on its own---a function that assigns probability~1 to everything also satisfies this property. The real trick is separating the halting machines from the non-halting ones. By undecidability, there are Turing machines~$q$ that fail to halt on input~$y$, but such that $\Gamma$ is not strong enough to prove this fact. In this case, $\BelState_\infty$'s probability of~$q$ halting on input~$y$ is positive, by \Theorem{nd}. Nevertheless, $\seq{\pt}$ still learns to stop expecting that those machines will halt after any reasonable amount of time: \begin{restatable}[Learning not to Anticipate Halting]{theorem}{restatedontwait}\label{thm:dontwait} Let $\seq{q}$ be an \ec sequence of Turing machines, and let $\seq{y}$ be an \ec sequence of bitstrings, such that $q_n$ does not halt on input $y_n$ for any $n$. Let $f$ be any computable function. Then \[ \BelState_n(\quot{\text{$\enc{q_n}$ halts on input $\enc{y_n}$ within $\enc{f}(\enc{n})$ steps}}) \eqsim_n 0. \] \proofin{\ref{app:dontwait}} \end{restatable} \noindent These theorems can be interpreted as justifying the intuitions that many computer scientists have long held towards the halting problem: it is impossible to tell whether or not a Turing machine halts in full generality, but for large classes of well-behaved computer programs (such as \ec sequences of halting programs and provably non-halting programs) it's quite possible to develop reasonable and accurate beliefs. The boundary between machines that compute fast-growing functions and machines that never halt is difficult to discern, but even in those cases, it's easy to learn to stop expecting those machines to halt within any reasonable amount of time. As a consequence of \Theorem{dontwait}, a logical inductor will trust its (computable) underlying deductive process $\seq{\dt}$ to remain consistent for arbitrarily long specified periods of time, if in fact $\seq{\dt}$ is consistent. In other words, a logical inductor over the theory $\Gamma$ learns to trust the finitary consistency of $\Gamma$.
One possible objection here is that the crux of the halting problem (and of the $\Gamma$-trust problem) is not about making good predictions; it is about handling diagonalization and paradoxes of self-reference. So let us turn to the topic of $\seq{\pt}$'s beliefs about $\seq{\pt}$ itself. \subsection{Self-knowledge}\label{sec:introspection} Because we're assuming $\Gamma$ can represent computable functions, we can write sentences describing the beliefs of $\seq{\pt}$ at different times. What happens when we ask $\seq{\pt}$ about sentences that refer to itself? \begin{restatable}[Self-knowledge]{theorem}{restateref}\label{thm:ref} Let $\seq{\phi}$ be an \ec sequence of sentences, let $\seq{a}$, $\seq{b}$ be \ec sequences of probabilities. Then, for any \ec sequence of positive rationals $\seq{\delta} \to 0$, there exists a sequence of positive rationals $\seq{{\varepsilon}} \to 0$ such that for all $n$: \begin{enumerate} \item if $\BelState_n(\phi_n)\in(a_n+\delta_n,b_n-\delta_n)$, then \[ \BelState_n(\quot{\enc{a_n} < \enc{\BelState}_\enc{n}(\enc{\phi_n}) < \enc{b_n}}) > 1-\varepsilon_n, \] \item if $\BelState_n(\phi_n)\notin(a_n-\delta_n,b_n+\delta_n)$, then \[ \BelState_n(\quot{\enc{a_n} < \enc{\BelState}_\enc{n}(\enc{\phi_n}) < \enc{b_n}}) < \varepsilon_n. \] \end{enumerate} \proofin{\ref{app:ref}} \end{restatable} \noindent In other words, for any pattern in $\seq{\pt}$'s beliefs that can be efficiently written down (such as ``$\seq{\pt}$'s probabilities on $\seq{\phi}$ are between $a$ and $b$ on these days''), $\seq{\pt}$ learns to believe the pattern if it's true, and to disbelieve it if it's false (with vanishing error). (Recall that the underlines indicate that the underlined expression should be expanded to the appropriate logical formula or term, representing, e.g., the source code of an algorithm implementing $\seq{\pt}$.) At first glance, this sort of self-reflection may seem to make logical inductor{}s vulnerable to paradox. For example, consider the sequence of sentences $\seq{\chi^{0.5}}$ defined using G\"odel's diagonal lemma by \[ \chi^{0.5}_n := \quot{{\enc{\BelState}_{\enc{n}}}(\enc{\chi^{0.5}_n}) < 0.5} \] such that $\chi^{0.5}_n$ is true iff $\seq{\pt}$ assigns it a probability less than 50\% on day $n$. These sentences are probabilistic versions of the classic ``liar sentence'', which has caused quite a ruckus in the setting of formal logic \cite{grim1991incomplete,mcgee1990truth,glanzberg2001liar,gupta1993revision,eklund2002inconsistent}. Because our setting is probabilistic, it's perhaps most closely related to the ``unexpected hanging'' paradox---$\chi^{0.5}_n$ is true iff $\seq{\pt}$ thinks it is unlikely on day $n$. How do logical inductors handle this sort of paradox? \begin{restatable}[Paradox Resistance]{theorem}{restatelp}\label{thm:lp} Fix a rational $p\in(0,1)$, and define an \ec sequence of ``paradoxical sentences'' $\seq{\chi^p}$ satisfying \[ \Gamma \vdash{{\enc{\chi^p_n}} \iff \left( {\enc{\BelState}_{\enc{n}}}({\enc{\chi^p_n}}) < \enc{p} \right)} \] for all $n$. Then \[ \lim_{n\to\infty}\BelState_n(\chi^p_n)=p. \] \proofin{\ref{app:lp}} \end{restatable} \noindent In words, a logical inductor responds to paradoxical sentences $\seq{\chi^p}$ by assigning them probabilities that converge on $p$.
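As a toy numerical illustration (the update rule below is a naive running-average learner of our own devising, not the logical inductor construction), one can check that $p$ is the only stable point of the feedback loop that the paradoxical sentences set up:
\begin{verbatim}
# Toy dynamics for the paradoxical sentences chi^p_n: on day n the
# sentence is true iff the belief assigned to it is below p. A naive
# learner that sets its next belief to the running average of past truth
# values is pushed up whenever its belief is below p and pushed down
# whenever it is above p, so its beliefs converge to p.
def simulate(p, days):
    belief = 0.5                              # arbitrary initial belief
    for n in range(1, days + 1):
        truth = 1.0 if belief < p else 0.0    # chi^p_n true iff belief < p
        belief += (truth - belief) / (n + 1)  # running average of outcomes
    return belief

for p in (0.2, 0.5, 0.8):
    print(f"p = {p}: belief after 100000 days = {simulate(p, 100000):.3f}")
\end{verbatim}
\noindent To understand why this is desirable, imagine that your friend owns a high-precision brain-scanner and can read off your beliefs.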
Imagine they ask you what probability you assign to the claim ``you will assign probability $<$80\% to this claim at precisely 10am tomorrow''. As 10am approaches, what happens to your belief in this claim? If you become extremely confident that it's going to be true, then your confidence should drop. But if you become fairly confident it's going to be false, then your confidence should spike. Thus, your probabilities should oscillate, pushing your belief so close to 80\% that you're not quite sure which way the brain scanner will actually call the claim, and you think the scanner is roughly 80\% likely to call it true. In response to a paradoxical claim, this is exactly how $\seq{\pt}$ behaves, once it's learned how the paradoxical sentences work. \subsection{Self-Trust}\label{sec:selftrust} We've seen that logical inductors can, without paradox, have accurate beliefs about their own current beliefs. Next, we turn our attention to the question of what a logical inductor{} believes about its \emph{future} beliefs. The coherence conditions of classical probability theory guarantee that, though a probabilistic reasoner expects their future beliefs to change in response to new empirical observations, they don't, e.g., believe that their future credence in $\phi$ is, in net expectation, lower than their current credence in $\phi$. For example, if a reasoner $\mathrm{Pr}(-)$ knows that tomorrow they'll see some evidence $e$ that will convince them that Miss Scarlet was the murderer, then they already believe that she was the murderer today: \[ \mathrm{Pr}(\mathrm{Scarlet}) = \mathrm{Pr}(\mathrm{Scarlet}\mid e) \mathrm{Pr}(e) + \mathrm{Pr}(\mathrm{Scarlet}\mid \lnot e) \mathrm{Pr}(\lnot e). \] In colloquial terms, this says ``my current beliefs are \emph{already} a mixture of my expected future beliefs, weighted by the probability of the evidence that I expect to see.'' Logical inductors obey similar coherence conditions with respect to their future beliefs, with the difference being that a logical inductor updates its belief by gaining more knowledge about \emph{logical} facts, both by observing an ongoing process of deduction and by thinking for longer periods of time. To refer to $\seq{\pt}$'s \emph{expectations} about its future self, we need a notion of logically uncertain variables. To avoid needless detail, suffice it to say that logically determined quantities, such as the output of a given computer program, can be represented and manipulated analogously to random variables in probability theory. We can write these variables as terms representing their value; for example, the variable written $\quot{\enc{\BelState}_{\enc{n}}(\enc{\phi})}$ represents the probability assigned to $\phi$ by $\seq{\pt}$ on day $n$. Using the beliefs $\BelState_n$ of $\seq{\pt}$ on day $n$ about such a variable $X$, we can define the (approximate) expectation $\EE_n(X)$ of $X$. We also need to know which future self our logical inductor will defer to: \begin{definition}[Deferral Function]\label{def:deferralfunc} A function $f : \NN^+ \to \NN^+$ is called a \textbf{deferral function} if \begin{enumerate} \item $f(n) > n$ for all $n$, and \item as a function of $n$, $f(n)$ can be computed in time polynomial in $f(n)$. \end{enumerate} \end{definition}
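\noindent For instance (an illustrative choice of ours), the doubling function is a deferral function, since it exceeds $n$ and is easily computed within the required time bound:
\begin{verbatim}
# A minimal example of a deferral function: f(n) = 2n. It satisfies both
# conditions: f(n) > n for all n >= 1, and it is computable in time
# polynomial in f(n) (indeed, in time polynomial in n).
def f(n):
    return 2 * n

assert all(f(n) > n for n in range(1, 1000))
\end{verbatim}
\noindent Now we can state the sense in which logical inductors don't expect, on net, their future beliefs to be wrong in any particular direction.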
\begin{restatable}[No Expected Net Update]{theorem}{restateceu}\label{thm:ceu} Let $f$ be a deferral function, and let $\seq{\phi}$ be an \ec sequence of sentences. Then \[ \BelState_n(\phi_n) \eqsim_n \EE_n(\quot{\enc{\BelState}_{\enc{f}(\enc{n})}(\enc{\phi_n})}). \] \proofin{\ref{app:ceu}} \end{restatable} \noindent This theorem only says that $\BelState_n$ doesn't expect the beliefs of $\BelState_{f(n)}$ about $\seq{\phi}$ to err in a particular direction. A priori, it is possible that $\BelState_n$ nevertheless believes its future beliefs $\BelState_{f(n)}$ will be grossly misguided. For example, suppose that $\BelState_n$ is very confident that $\BelState_{f(n)}$ will have sufficient time to compute the truth of $\phi$, but will react insanely to this information: \[\BelState_n(\quot{\enc{\BelState}_{\enc{f}(\enc{n})}(\enc{\phi})=0} \mid \phi) = 1 \] \noindent and \[\BelState_n(\quot{\enc{\BelState}_{\enc{f}(\enc{n})}(\enc{\phi})=1} \mid \lnot\phi) = 1. \] This is a priori consistent with \Thm{ceu} so long as $\BelState_n$ assigns $\BelState_n(\phi) = 0.5$, but it clearly indicates that $\BelState_n$ does not trust its future beliefs. To instead formalize the idea of a reasoner $\mathrm{Pr}$ that trusts their own reasoning process, let us first consider a self-trust property in the setting of deductive logic: \[ \vdash \square \phi \to \phi. \] This property of deductive systems says that the system proves ``If I prove $\phi$ at some point, then it is true''. However, any sufficiently strong reasoner that satisfies this property for the statement $\phi=\bot$ is inconsistent by G\"{o}del's second incompleteness theorem! The search for logics that place confidence in their own machinery dates at least back to Hilbert \cite{Hilbert:1902}. While G\"{o}del et al. \cite{Godel:1934} dashed these hopes, it is still desirable for reasoners to trust their reasoning process relatively well, most of the time (which humans seem to do). As discussed in \Sec{timelylearning}, logical inductors trust their underlying deductive process $\seq{\dt}$ in a slightly weaker, finitary sense. More interestingly, it turns out that logical inductors also trust their own reasoning process as a whole, including their inductive conclusions, in a manner that we now formalize. Instead of $\vdash \square \phi \to \phi$, we can replace provability with high confidence, and then ask for something like \begin{equation*} \label{eq:st} \mathrm{Pr}_\mathrm{now}(\phi \mid \mathrm{Pr}_\mathrm{later}(\phi) > p) \gtrsim p. \end{equation*} Colloquially, this says that if we tell $\mathrm{Pr}$ that in the future they will place more than $p$ credence in $\phi$, then they update their current beliefs to place at least $p$ credence. In short, $\mathrm{Pr}$ trusts that the outputs of their own ongoing reasoning process will be accurate. Now, in fact property~\ref{eq:st} is not quite desirable as stated (and logical inductors do not satisfy it). Indeed, consider the liar sentence $\chi^p$ defined by \[ \chi^{p} := \quot{ \mathrm{Pr}_{\mathrm{later}} (\chi^{p}) < p}. \] A good reasoner will then satisfy \[ \mathrm{Pr}_\mathrm{now}(\chi^{p} \mid \mathrm{Pr}_\mathrm{later}(\chi^{p}) > p) \eqsim 0, \] contradicting equation~\ref{eq:st}. 
The issue is that if we give $\mathrm{Pr}_\mathrm{now}$ high-precision access to the probabilities assigned by $\mathrm{Pr}_\mathrm{later}$---for example by conditioning on them---then $\mathrm{Pr}_\mathrm{now}$ can outperform the (unconditioned) beliefs of $\mathrm{Pr}_\mathrm{later}$, in this case by having correct opinions about the liar sentence for $\mathrm{Pr}_\mathrm{later}$. Instead, we have the following self-trust property, which only gives $\BelState_n$ limited-precision access to the beliefs of $\BelState_{f(n)}$: \begin{restatable}[Self-Trust]{theorem}{restatest}\label{thm:st} Let $f$ be a deferral function, $\seq{\phi}$ be an \ec sequence of sentences, $\seq{\delta}$ be an \ec sequence of positive rational numbers, and $\seq{{\prob}}$ be an \ec sequence of rational probabilities. Then \begin{multline*} \EE_n\left(\quot{ \enc{\OneOperator(\phi_n)} \cdot \enc{\ctsind{\delta_n}}\mleft( \enc{\BelState}_{\enc{f}(\enc{n})}(\enc{\phi_n}) > \enc{p_n} \mright) }\right) \\ \gtrsim_n p_n \cdot \EE_n\left(\quot{ \enc{\ctsind{\delta_n}}\mleft( \enc{\BelState}_{\enc{f}(\enc{n})}(\enc{\phi_n}) > \enc{p_n} \mright) }\right). \end{multline*} \proofin{\ref{app:st}} \end{restatable} \noindent The indicator variable $\OneOperator(\phi)$ represents 1 if $\phi$ is true and 0 if $\phi$ is false. The continuous indicator variable $\ctsind{\delta}(X>p)$ is an ordinary indicator of the event $X>p$, except that instead of a discontinuity at $X=p$, the value is linear in $X$ on a region of length $\delta$. Thus the self-trust property gives $\BelState_n$ only continuous (limited precision) access to the beliefs of $\BelState_{f(n)}$; except for this subtlety, we could have written the more recognizable (but false and undesirable!) statement \[ \BelState_n\left(\quot{ \enc{\phi_n} \wedge \mleft( \enc{\BelState}_{\enc{f}(\enc{n})}(\enc{\phi_n}) > \enc{p_n} \mright) }\right) \gtrsim_n p_n \cdot \BelState_n\left(\quot{ \enc{\BelState}_{\enc{f}(\enc{n})}(\enc{\phi_n}) > \enc{p_n} }\right), \] where the conditional $\BelState_n\left(\quot{ \enc{\phi_n} \mid \enc{\BelState}_{\enc{f}(\enc{n})}(\enc{\phi_n}) > \enc{p_n} }\right)$ has been rearranged to avoid a potential division by 0. \renewcommand{\proofin}[1]{} \section{Discussion}\label{sec:discussion} We have proposed the \emph{logical induction criterion} as a criterion on the beliefs of deductively limited reasoners, and we have described how reasoners who satisfy this criterion (\emph{logical inductors}) possess many desirable properties when it comes to developing beliefs about logical statements (including statements about mathematical facts, long-running computations, and the reasoner themself). That said, there are clear drawbacks to the logical inductor we describe in \cite{Garrabrant:2016:li}: it does not use its resources efficiently; it is not a decision-making algorithm (i.e., it does not ``think about what to think about''); and the properties above hold either asymptotically (with poor convergence bounds) or in the limit. Further, it is unclear whether logical inductors have good beliefs about counterpossibilities, and whether they take advantage of old evidence. These are enticing directions for further research. \renewcommand{\rparenthetical}[1]{} The authors are particularly interested in tools that help AI scientists attain novel statistical guarantees in settings where robustness and reliability guarantees are currently difficult to come by. 
For example, consider the task of designing an AI system that reasons about the behavior of computer programs, or that reasons about its own beliefs and its own effects on the world. While practical algorithms for achieving these feats are sure to make use of heuristics and approximations, we believe scientists will have an easier time designing robust and reliable systems if they have some way to relate those approximations to theoretical algorithms that are known to behave well in principle. Modern models of rational behavior are not up to this task: formal logic is inadequate when it comes to modeling self-reference, and probability theory is inadequate when it comes to modeling logical uncertainty. We see logical induction as a first step towards models of rational behavior that work in settings where agents must reason about themselves, while deductively limited. \subsection{Acknowledgements} We acknowledge Abram Demski, Benya Fallenstein, Daniel Filan, Eliezer Yudkowsky, Jan Leike, J\'anos Kram\'ar, Nisan Stiennon, Patrick LaVictoire, Paul Christiano, Sam Eisenstat, Scott Aaronson, and Vadim Kosoy, for valuable comments and discussions. We also acknowledge contributions from attendees of the MIRI summer fellows program, the MIRIxLA group, and the MIRI$\chi$ group. This research was supported as part of the Future of Life Institute (futureoflife.org) FLI-RFP-AI1 program, grant~\#2015-144576. \sloppy \bibliographystyle{eptcs}
\section{Introduction} \label{sec:introduction} Embedded star clusters offer one of the best opportunities to understand star formation. Within these astrophysical laboratories hundreds of stars are formed in volumes below $1 \mbox{ pc}^3$. Overall, it is estimated that 80\%--90\% of young stellar objects (YSOs) form in embedded clusters (\citealp{2003ARA&A..41...57L}; although the exact fraction is somewhat sensitive to the definition of a cluster, see \citealp{2010MNRAS.409L..54B}). It is now established that the fate of these objects is directly linked to the evolution of the molecular gas, which is responsible for most of the gravitational potential that binds the stars in the cluster. Therefore, it is important to study these objects in their early phases, before they are subject to infant mortality \citep{1984ApJ...285..141L, 2001MNRAS.323..988G}. Operationally, clusters are often defined and identified as groups of stars whose stellar surface density exceeds that of field stars of the same physical type (see, e.g., \citealp{1985prpl.conf..297W, 1991ApJ...371..171L, 2000AJ....120.3139C}).\footnote{In this paper we focus on the optical identification of a cluster, and we explicitly ignore issues such as the gravitational stability of overdensities of stars. Therefore, we will call ``cluster'' any overdensity, irrespective of whether it is gravitationally bound.}\@ In this respect, embedded clusters pose special problems because they are buried in dust and gas, and therefore are often not even visible in optical images; moreover, their shape is often elongated and clumpy, reflecting the initial structure of the dense molecular gas \citep{2005ApJ...632..397G, 2016AJ....151....5M}. However, even with infrared observations, discovering deeply embedded clusters and measuring their basic parameters (such as surface density and size) can still be a challenge, since the associated dust extinguishes both the cluster members and the field stars behind the cloud (see, e.g., \citealp{2008ApJ...672..861R}). In fact, typical optical or near-infrared observations of stellar fields around molecular clouds show underdensities at the location of the clouds because of the effects of extinction on the density of background stars. In such a situation the observed cluster stellar surface density can be comparable to or even less than the unobscured field stellar surface density (see Fig.~\ref{fig:1}). \begin{figure} \centering \includegraphics[width=\hsize]{fig01} \caption{A one-dimensional sketch illustrating the difficulties encountered in detecting embedded clusters. The bottom black line shows the extinction profile of a molecular cloud. As a result of the extinction, background stars in the field show an underdensity. Even if an embedded cluster is present, the total observed star surface density does not reveal it, making it virtually impossible to detect the cluster without the use of the technique described in this paper.} \label{fig:1} \end{figure} Different authors have used different techniques to take into account the effects of extinction in the identification and characterization of star clusters. \citet{2000AJ....120.3139C} built density maps in the direction of nearby molecular clouds using the 2MASS Point Source Catalog, and corrected the effect of dust extinction using publicly available CO maps (converted into infrared extinction with a constant X-factor).
In his technique, the extinction correction is only applied to field stars: the cluster stellar density is obtained by subtracting an extinction-corrected model of the field stellar density from the observed stellar density. Moreover, the use of CO maps and of the largely uncertain X-factor represents a primary source of error. An opposite approach has been adopted by \citet{2013A&A...557A..29C}, who studied star clusters embedded in the Rosette nebula using 2MASS data. In this case, the local extinction was determined directly from the stellar near-infrared colors, and a correction for the effects of extinction was applied to each star. In practice, as noted by the authors themselves, the correction applied would only be correct for background stars, while it is used for all stars (including the foreground ones, which should not be corrected, and embedded ones, which should be partially corrected). Things are further complicated because molecular clouds are known to have steep gradients in their column densities, and these, if undetected (because of resolution effects, which is surely the case for the Rosette cloud at the 2MASS resolution), would introduce further biases in the extinction map and in the correction to the cluster richness. \citet{2006A&A...445..999C} use yet another technique (similar to \citealp{2002AJ....123.2559C}) to study the properties of IC~348 in Perseus. The basic idea is that two independent measurements of extinction in molecular clouds are available, one from the star color excess (here applied to the $H - K$ color), and one from the star number counts. In the absence of contaminants, both measurements should agree. The presence of a cluster, however, will affect both the color excess measurement (in a way that depends on the intrinsic color of the cluster members) and the star count method (in a way that depends on the location of the cluster within the cloud). The difference of the two extinction estimates is a proxy for the cluster density. The various assumptions made when applying this technique (in particular, the degree of embedding of the cluster within the cloud) need to be resolved by calibrating it using independent measurements of cluster richness: this clearly limits its application. In this paper we present a new methodology to identify and characterize clusters in wide-field or all-sky, multi-band, infrared imaging surveys. The method is based on the production of extinction-corrected maps of stellar surface density, and takes advantage of the different $H$-band luminosity function of embedded clusters with respect to that of the background population. Additionally, in contrast to the methods described above, it is able to correct for cluster members unidentified because of extinction in a way that takes into account the position of the cluster within the cloud along the line of sight. The technique is based on a rigorous mathematical framework; this provides a clear advantage, in that we can perform a detailed statistical analysis of the method. But the detailed mathematical derivation of this method might not be of as much interest to astronomers as its implementation. However, we stress that those readers not interested in the detailed mathematical aspects of the derivation can still benefit from the method proposed here, because its application is simple and straightforward: Eq.~\eqref{eq:30}, optionally together with a second expression for the noise estimate, Eq.~\eqref{eq:31}. We also provide pseudo-code for it in the appendix. This paper is organized as follows.
In Sect.~2 we present the general framework and discuss the standard technique generally employed to identify clusters. In Sect.~3 we present our new method and we discuss its practical implementation in Sect.~4. Simple numerical tests used to validate the method are described in Sect.~5, while the results of an application to the Orion molecular complex are discussed in Sect.~6. Finally, we summarize the results obtained in this paper in Sect.~7. \section{The standard technique} \label{sec:technique} Traditionally, star clusters are identified as overdensities in the distribution of stars. Although many different options are available to estimate the local angular density of stars $\sigma$ (see, e.g., \citealp{1963ZA.....57...47V, 1985ApJ...298...80C, 2009ApJS..184...18G}), we consider in this paper the simple ``moving average'' estimate\footnote{Throughout this paper we will use ``hats'' to denote measured quantities. Hence, $\sigma(\vec x)$ is the \textit{true} star density at the angular position $\vec x$, while $\hat\sigma(\vec x)$ is the \textit{measured} star density at the same position.} \begin{equation} \label{eq:1} \hat \sigma(\vec x) = \sum_{n=1}^N W\bigl(\vec x - \vec x_n \bigr) \; . \end{equation} Here $n$ runs over the individual stars, $\{ \vec x_n \}$ are the locations of the various stars, and $W$ is a window function, normalized to unity: \begin{equation} \label{eq:2} \int W(\vec x') \, \mathrm d^2 x' = 1 \; . \end{equation} So, by construction, $W$ has units of inverse area. As a simple example, suppose that $W$ is taken to be a top-hat function: \begin{equation} \label{eq:3} W(\vec x') = \begin{cases} 1 / (\pi r^2) & \text{if $|\vec x'| < r \; ,$} \\ 0 & \text{otherwise} \; . \end{cases} \end{equation} In this case, the sum of Eq.~\eqref{eq:1} runs just over the $N_r$ stars within an angular distance $r$ from the point of the estimate, and the density estimate reduces to \begin{equation} \label{eq:4} \hat\sigma = \frac{N_r}{\pi r^2} \; . \end{equation} For a constant density of stars, this quantity is unbiased, in the sense that the mean (ensemble average) $\E$ of $\hat\sigma$ is just the true density of stars: \begin{equation} \label{eq:5} \E [ \hat\sigma ] \equiv \langle \hat \sigma \rangle = \frac{\E[N_r]}{\pi r^2} = \sigma \; , \end{equation} and the associated variance is \begin{equation} \label{eq:6} \Var[\hat\sigma] = \frac{\E[N_r]}{(\pi r^2)^2} = \frac{\sigma}{\pi r^2} \; . \end{equation} This equation shows that the relative error associated with the measured star density decreases as $\E[N_r]^{-1/2}$: therefore, if it is known that the density of stars is constant, to determine its value it is sensible to use relatively large window functions $W$. More generally, if the star density is variable, the average measured density can be shown to be a convolution of the true density with the window function $W$: \begin{equation} \label{eq:7} \E [ \hat\sigma ] (\vec x) = \int W(\vec x - \vec x') \sigma(\vec x') \, \mathrm d^2 x' \; , \end{equation} and the associated two-point correlation function is \begin{align} \label{eq:8} \Cov & [\hat\sigma] (\vec x, \vec x') \notag\\ & {} \equiv \E \Bigl[ \bigl( \hat\sigma(\vec x) - \E [\hat\sigma] (\vec x) \bigr) \bigl( \hat\sigma(\vec x') - \E [ \hat\sigma ] (\vec x') \bigr) \Bigr] \notag\\ & {} = \int W(\vec x - \vec x'') W(\vec x' - \vec x'') \sigma(\vec x'') \, \mathrm d^2 x'' \; .
\end{align} Equation~\eqref{eq:7} shows that $W$ sets the scale for the resolution of the density map; similarly, Eq.~\eqref{eq:8} shows that points close enough in the density map will have correlated noise. Therefore, if one aims at finding changes in the star density (such as a star cluster), the window function should have a scale that is comparable to or smaller than the typical size of the variations of the star density. However, this is in tension with the noise properties of Eq.~\eqref{eq:8}, because a small window function implies large noise. In order to be able to detect a star cluster, the measured density of stars at the location of the cluster must differ from the average density by much more than the standard deviation of $\hat\sigma$. Hence, the quantity $\Cov[\hat\sigma](\vec x, \vec x)$ sets the sensitivity of the cluster-finding algorithm. In general the true density $\sigma = \sigma_\mathrm{field} + \sigma_\mathrm{cluster}$ is the sum of the field density and of the cluster density, and in some cases $\sigma_\mathrm{cluster} \ll \sigma_\mathrm{field}$. Under these conditions the error is dominated by the shot noise due to the field star population. In reality, many other effects can prevent the discovery and characterization of star clusters: \begin{itemize} \item Extinction by dark nebul\ae, which reduces the surface density of background sources; \item The galactic structure, which induces smooth large-scale variations; \item Differences in the sensitivity across a mosaic image due to changes in the observational conditions; \item Other systematic observational effects, such as halos produced by bright objects within the image. \end{itemize} \subsection{Extinction correction} \label{sec:extinct-corr} Among the effects listed above, the first one is particularly important for young clusters, since these objects tend to be deeply embedded and thus can escape detection; additionally, detected clusters are plagued by large uncertainties in their physical properties (number of stars, mass, and size). For this reason, \citet{2013A&A...557A..29C} developed a technique to perform a simple correction of the results produced by Eq.~\eqref{eq:1}. They noted that the density of background stars observed through a molecular cloud decreases by a factor $10^{-\alpha A}$, where $\alpha$ is the exponential slope of the number counts and $A$ is the extinction, both in the band used for the observation. Therefore, to account for the undetected stars one can just multiply the local density estimate $\hat \sigma(\vec x)$ by $10^{\alpha A(\vec x)}$, where $A(\vec x)$ is the local extinction estimate (i.e., the extinction derived from an extinction map at the position $\vec x$). The problem with this approach is that it uses the same correction factor for foreground stars, embedded objects, and background stars, which generally results in an overestimate of the local corrected star density. Additionally, the same correction is applied to young stellar objects (YSOs) and field stars, which, however, have very different number count distributions. This leaves a large uncertainty on the corrected $\hat \sigma(\vec x)$ and, ultimately, on the characterization of each cluster.
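For reference, the standard estimator of Eq.~\eqref{eq:4} and the global correction just described can be written in a few lines. The following sketch (in Python, with synthetic uniform star positions; the slope $\alpha = 0.33$ and all numerical values are purely illustrative, and angular separations are treated as Euclidean, which is adequate for small fields) also makes the bias explicit in a comment:
\begin{verbatim}
import numpy as np

def tophat_density(x, y, x0, y0, r):
    """Moving-average estimate of Eq. (4): the number N_r of stars within
    an angular distance r of (x0, y0), divided by the window area pi r^2."""
    n_r = np.count_nonzero((x - x0)**2 + (y - y0)**2 < r**2)
    return n_r / (np.pi * r**2)

def corrected_density(x, y, x0, y0, r, A, alpha=0.33):
    """Extinction-corrected estimate in the spirit of Sect. 2.1: multiply
    the local density by 10**(alpha * A), with A the local extinction.
    Note the bias discussed above: the factor is only appropriate for
    background stars, yet it is applied to foreground and embedded stars
    as well."""
    return tophat_density(x, y, x0, y0, r) * 10**(alpha * A)

# Example with synthetic positions (degrees) and an assumed extinction:
rng = np.random.default_rng(seed=1)
x, y = rng.uniform(0.0, 1.0, size=(2, 2000))  # uniform field of 2000 stars
print(corrected_density(x, y, 0.5, 0.5, r=0.1, A=1.0))
\end{verbatim}
\section{Maximum-likelihood approach} \label{sec:maxim-likel-appr} \subsection{Constant extinction, no weighting} \label{sec:constant-extinction} As discussed in the previous section, the error associated with $\hat\sigma$ is often dominated by shot noise due to the field star population.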
However, as mentioned, YSOs have photometric properties that differ significantly from those of field stars: the former have an $H$-band luminosity function (hereafter HLF) that can be approximated by a Gaussian distribution \citep{2002ApJ...573..366M}, while the latter have an HLF that follows, up to $H \sim \SI{18}{mag}$, an exponential with slope $\alpha \sim 0.33$ (see, e.g., \citealp{2016A&A...587A.153M}). This suggests that we might use suitable cuts in the $H$-band to reduce the number of field stars, while essentially retaining all YSOs, thus gaining in terms of noise. However, this naive procedure is difficult to apply, since both YSOs and field stars are also affected by extinction, which would change the shape and the peaks of the respective HLFs. Therefore, we are forced to use a more systematic approach. Let us first model the observed $H$-band luminosity function for a set of angularly close stars (i.e., in a patch of the sky). We suppose that the true, unextinguished HLF can be described as a mixture of $L$ different HLFs, corresponding to different stellar populations (in the situation considered later on in this paper we will use just two components, one for the field stars and one for the YSOs, but deeper observations might require the inclusion of a third component corresponding to galaxies and following exponential number counts with a slope of $0.6$; see \citealp{1926ApJ....64..321H}). The observed HLF differs from the true one because of extinction and because of incompleteness at the faint end; additionally, we will also have photometric errors, but generally these will be small compared to the typical width of the various HLF components, and for the sake of simplicity are therefore neglected here.\footnote{Photometric errors can be easily included in Eq.~\eqref{eq:9} by replacing $p_i(m - A)$ there with the convolution of this function with the photometric uncertainty. In the presence of different uncertainties for each star, one will need to specialize Eq.~\eqref{eq:9} to each star. A similar technique can be used to include the effects of errors on the extinction measurements.}\@ We can thus finally write the observed HLF, measured in units of stars per area per magnitude bin, as \begin{equation} \label{eq:9} \sigma(m) = c(m) \sum_{i=1}^L \sigma_i p_i(m - A) \; , \end{equation} where $A$ is the $H$-band extinction,\footnote{In this section we will take the extinction to be identical for a set of angularly close stars; we will then relax this assumption and use the individual extinction in the direction of each star.} $c(m)$ is the completeness function (i.e.\ the probability of detecting a star with apparent magnitude $m$), $p_i(m)$ is the probability distribution for the $i$-th component of the HLF, and $\sigma_i$ is a coefficient that indicates the predominance of the $i$-th component in the HLF. In order to give $\sigma_i$ a simple interpretation, it is useful to take $p_i$ suitably normalized. In particular, we assume that \begin{equation} \label{eq:10} \int c(m) p_i(m) \, \mathrm d m = 1 \; . \end{equation} With this choice, $\sigma_i$ can be identified as the observed angular density of stars for the $i$-th component in the absence of extinction. Our goal is to infer the parameters $\{ \sigma_i \}$ from a set of observed $H$-band magnitudes $\{ m_n \}$ in a patch of the sky of area $S$; we will then repeat this operation in different directions in order to build maps of densities for the various components.
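To fix ideas, the model of Eqs.~\eqref{eq:9} and~\eqref{eq:10} with $L = 2$ components can be sketched in a few lines of Python (an illustration only: the Gaussian parameters of the YSO HLF, the completeness law, and the integration limits are placeholders, not fitted values):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

ALPHA = 0.33                  # field number-count slope (dex per mag)
H_YSO, SIG_YSO = 11.0, 1.6    # assumed peak and width of the YSO HLF (mag)

def completeness(m, m50=18.0, width=0.3):
    """Completeness c(m): close to 1 for bright stars, 0 beyond the limit."""
    return 1.0 / (1.0 + np.exp((m - m50) / width))

def p_field(m):
    return 10.0**(ALPHA * m)          # exponential field number counts

def p_yso(m):
    return np.exp(-0.5 * ((m - H_YSO) / SIG_YSO)**2)   # Gaussian YSO HLF

def normalize(p):
    """Rescale p so that int c(m) p(m) dm = 1, as required by Eq. (10)."""
    norm, _ = quad(lambda m: completeness(m) * p(m), -5.0, 30.0)
    return lambda m: p(m) / norm

p_field, p_yso = normalize(p_field), normalize(p_yso)

def sigma_model(m, sigma_field, sigma_yso, A=0.0):
    """Observed HLF of Eq. (9), in stars per unit area per magnitude,
    for a patch with H-band extinction A (mag)."""
    return completeness(m) * (sigma_field * p_field(m - A)
                              + sigma_yso * p_yso(m - A))
\end{verbatim}
With this normalization, the coefficients passed as \texttt{sigma\_field} and \texttt{sigma\_yso} retain the interpretation given above: they are the observed angular densities of the two components in the absence of extinction.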
Building such maps will allow us to identify overdensities in each component, such as the YSO one. To this end, we use Eq.~\eqref{eq:9} to write the log-likelihood following the prescriptions of \citet{2013A&A...559A..90L}. The entire analysis is done in magnitude space, using $S \sigma$ as the probability density for the magnitudes:
\begin{align}
  \label{eq:11}
  \ln \mathcal{L} = {} & \sum_{n=1}^N \ln S \sigma(m_n) - \int S \sigma(m) \, \mathrm d m \notag\\
  {} = {} & \sum_{n=1}^N \biggl[ \ln S c(m_n) \sum_{i=1}^L \sigma_i p_i(m_n - A) \biggr] \notag\\
  & {} - \int S c(m) \sum_{i=1}^L \sigma_i p_i(m - A) \, \mathrm d m \; .
\end{align}
This likelihood can be used in Bayes' theorem to infer a posterior probability for the densities $\{ \sigma_i \}$. More simply, we can just find the most likely parameters $\{ \sigma_i \}$ using a maximum likelihood approach. To this end, we consider
\begin{equation}
  \label{eq:12}
  \frac{\partial \ln \mathcal{L}}{\partial \sigma_i} = \sum_{n=1}^N \frac{c(m_n) p_i(m_n - A)}{\sigma(m_n)} - S \! \int c(m) p_i(m-A) \, \mathrm d m \; .
\end{equation}
The maximum likelihood solution is given by the set of densities $\sigma_i$ that maximize $\mathcal{L}$ or, equivalently, $\ln \mathcal{L}$, i.e.\ by the zero of Eq.~\eqref{eq:12}; that is, we need to solve
\begin{equation}
  \label{eq:13}
  \sum_{n=1}^N \frac{c(m_n) p_i(m_n - A)}{\sigma(m_n)} = S \! \int c(m) p_i(m-A) \, \mathrm d m \; .
\end{equation}
Unfortunately, the solution of this equation for $L > 1$ cannot be provided in analytic form and must be obtained by numerical methods; simple ways to find it are discussed below in Sect.~\ref{sec:implementation}. We can obtain an estimate of the errors associated with the measured $\sigma_i$ from the Fisher information matrix (see, e.g., \citealp{ModStat}), which we recall is defined as
\begin{equation}
  \label{eq:14}
  I_{ij} = \E \left[ \frac{\partial \ln \mathcal{L}}{\partial \sigma_i} \frac{\partial \ln \mathcal{L}}{\partial \sigma_j} \right] = - \E \left[ \frac{\partial^2 \ln \mathcal{L}}{\partial \sigma_i \, \partial \sigma_j} \right] \; .
\end{equation}
The Fisher information matrix is related to the minimum covariance matrix that can be attained by an unbiased estimator, as provided by the Cram\'er-Rao bound:
\begin{equation}
  \label{eq:15}
  \Cov[\sigma] \ge I^{-1} \; .
\end{equation}
Since the maximum-likelihood estimator is asymptotically efficient (i.e.\ it attains the Cram\'er-Rao bound when the sample size tends to infinity) and the resulting errors on $\hat\sigma$ tend to a multivariate Gaussian distribution, it is interesting to obtain an analytic result for the information matrix. The Fisher information matrix can be readily evaluated from our data using Eq.~(12) of \citet{2013A&A...559A..90L}:
\begin{align}
  \label{eq:16}
  I_{ij} = S \! \int \frac{c^2(m) p_i(m-A) p_j(m-A)}{\sigma(m)} \, \mathrm d m \; .
\end{align}
This relation is interesting from several points of view. First, note that the Fisher matrix contains off-diagonal elements, unless the probability distributions of the various components do not overlap, i.e.\ have non-intersecting supports: that case would basically mean that we could immediately tell to which component a star belongs from its unextinguished magnitude.
Second, note that all elements of the matrix are non-negative and therefore, in the case of $L=2$ components, $\Cov[\sigma] \simeq I^{-1}$ will have non-positive off-diagonal elements: in other words, the measurements of the two densities $\sigma_1$ and $\sigma_2$ will be affected by a negative correlation. This is expected and corresponds to a classification error: if $p_1$ overlaps with $p_2$, we cannot uniquely associate stars with each population, and in general an overestimate of one population comes at the expense of an underestimate of the other. It is instructive to consider the form of the information matrix in the special case where $L=1$. When just one component is used, then $I$ is a scalar and reduces to
\begin{equation}
  \label{eq:17}
  I = \frac{S}{\sigma} \int c(m) p(m-A) \, \mathrm d m = \frac{S^2}{\E[N]} \; ,
\end{equation}
where $\E[N] = S \sigma$ is the average number of stars observed in the area $S$. Its inverse, $I^{-1}$, is therefore $\sigma^2 / \E[N]$, as expected from simple Poisson statistics. With $L \ge 2$, in principle we can encounter cases where the Fisher information matrix is singular. Given the analytic form of $I$, this happens in particular when two components have exactly the same probability distribution within the support of $c(m)$, i.e.\ $c(m) p_i(m) = c(m) p_j(m)$: in this case, the corresponding rows (or columns) of $I$ are identical. In such a situation, clearly, it is virtually impossible to classify objects as one of the two degenerate components, and therefore the uncertainties on the respective densities $\sigma_i$ and $\sigma_j$ are infinite. For completeness, we also report the expected maximum value of the log-likelihood
\begin{equation}
  \label{eq:18}
  \E[\ln \mathcal{L}] = \frac{L}{2} + S \! \int \sigma(m) \bigl[ \ln S \sigma(m) - 1 \bigr] \, \mathrm d m \; ,
\end{equation}
and the associated variance
\begin{equation}
  \label{eq:19}
  \Var[\ln \mathcal{L}] = S \! \int \sigma(m) \ln^2 S \sigma(m) \, \mathrm d m \; .
\end{equation}
These equations can be used to verify that the chosen model \eqref{eq:9} can account well for the data.
\subsection{Differential extinction and spatial weight}
\label{sec:diff-extinct-spat}
So far, we have assumed that all stars are subject to the same extinction $A$; moreover, we have not weighted stars depending on their angular position as done in Eq.~\eqref{eq:1}. In this section we intend to remove these two limitations and consider the full problem. The simplest approach to including the effects of differential extinction is to consider the \textit{joint} density in magnitudes and positions. In this framework, we can rewrite Eq.~\eqref{eq:11} for the log-likelihood as
\begin{align}
  \label{eq:20}
  \ln \mathcal{L} = {} & \sum_{n=1}^N \ln \sigma(m_n, \vec x_n) - \int \mathrm d m \int \mathrm d^2 x' \, \sigma(m, \vec x') \; .
\end{align}
In this equation the quantity $\sigma(m, \vec x')$ represents the predicted density of stars with magnitude $m$ at the location $\vec x'$. Similarly to Eq.~\eqref{eq:9}, we write this quantity as a mixture of different densities, corresponding to different stellar populations:
\begin{equation}
  \label{eq:21}
  \sigma(m, \vec x') = c(m) \sum_{i=1}^L \sigma_i(\vec x') p_i \bigl( m - A(\vec x') \bigr) \; ,
\end{equation}
where $\sigma_i(\vec x')$ represents the density of stars of class $i$ at the sky position $\vec x'$. In order to proceed, we need to model these densities.
A simple and consistent way of doing this is to suppose that $\ln \sigma_i(\vec x')$ can be written as the weighted sum of two terms: one, $\ln \sigma_i(\vec x)$, associated with the density at the point of interest $\vec x$ (the point where we intend to evaluate the local densities of stellar populations); and one, $\ln \tau_i(\vec x')$, which parametrizes the local changes of the densities. As a result, we write
\begin{equation}
  \label{eq:22}
  \sigma_i(\vec x') = \bigl( \sigma_i(\vec x) \bigr)^{\omega(\vec x - \vec x')} \bigl( \tau_i(\vec x') \bigr)^{1 - \omega(\vec x - \vec x')} \; .
\end{equation}
The function $\omega$ describes the spatial correlation between densities at different positions and plays a central role in identifying which stars contribute to the density estimate at $\vec x$. We can now insert Eqs.~\eqref{eq:22} and \eqref{eq:21} in Eq.~\eqref{eq:20} and find the maximum likelihood solution over $\sigma_i(\vec x)$. Calling $A_n \equiv A(\vec x_n)$ the extinction suffered by the $n$-th star, we find
\begin{align}
  \label{eq:23}
  & \frac{\partial \ln \mathcal{L}}{\partial \sigma_i(\vec x)} = \sum_{n=1}^N \omega(\vec x - \vec x_n) \frac{p_i (m_n - A_n) \sigma_i(\vec x_n) / \sigma_i(\vec x)}%
  {\sum_j \sigma_j(\vec x_n) p_j(m_n - A_n)} \notag\\
  & \quad - \int \mathrm d m \, c(m) \int \mathrm d^2 x' \, \omega(\vec x') p_i\bigl( m - A(\vec x') \bigr) \sigma_i(\vec x') / \sigma_i(\vec x) \; .
\end{align}
We now make an important assumption. Because of the form of the parametrization \eqref{eq:22}, a solution for the maximum likelihood can only be found if we specify the functional form of the functions $\tau_i(\vec x')$. However, these functions are truly unknown, since they parametrize the local changes of the various densities, which in turn depend on the local density map. Using a statistical approach, however, it is natural to assume that these functions have the same distribution as $\sigma_i(\vec x)$, the quantity we are interested in. As a result, if we take an average of Eq.~\eqref{eq:23}, terms such as $\sigma_i(\vec x') / \sigma_i(\vec x)$ cancel out:
\begin{align}
  \label{eq:24}
  & \frac{\partial \ln \mathcal{L}}{\partial \sigma_i(\vec x)} = \sum_{n=1}^N \omega(\vec x - \vec x_n) \frac{p_i (m_n - A_n)}{\sum_j \sigma_j(\vec x) p_j(m_n - A_n)} \notag\\
  & \quad - \int \mathrm d m \, c(m) \int \mathrm d^2 x' \, \omega(\vec x') p_i\bigl( m - A(\vec x') \bigr) \; .
\end{align}
Before proceeding, it is useful to consider the solution of the maximum likelihood approach in the simple case where there is a single population of stars and where the extinction vanishes, $A(\vec x') = 0$. We find in this case
\begin{equation}
  \label{eq:25}
  \frac{\partial \ln \mathcal{L}}{\partial \sigma(\vec x)} = \sum_{n=1}^N \frac{\omega(\vec x - \vec x_n)}{\sigma(\vec x)} - \int \mathrm d^2 x' \, \omega(\vec x') \; ,
\end{equation}
where we have used the normalization property \eqref{eq:10}. The solution of this equation is immediately found as
\begin{equation}
  \label{eq:26}
  \sigma(\vec x) = \sum_{n=1}^N W(\vec x - \vec x_n) \; ,
\end{equation}
where
\begin{equation}
  \label{eq:27}
  W(\vec x) = \frac{\omega(\vec x)}{\int \omega(\vec x') \, \mathrm d^2 x'} \; .
\end{equation}
We have therefore recovered Eq.~\eqref{eq:1}, with the correct normalization \eqref{eq:2} for the weight $W$. In the general case, the maximum-likelihood solution of Eq.~\eqref{eq:24} must be obtained numerically. The errors associated with the solutions can be estimated using the Fisher matrix.
In our case, we can obtain the Fisher matrix from
\begin{align}
  \label{eq:28}
  I_{ij}(\vec x) = {} & \iint \mathrm d m \, \mathrm d^2 x' \, \frac{1}{\sigma(m, \vec x')} \frac{\partial \sigma(m, \vec x')}{\partial \sigma_i(\vec x)} \frac{\partial \sigma(m, \vec x')}{\partial \sigma_j(\vec x)} \notag \\
  {} = {} & \iint \mathrm d m \, \mathrm d^2 x' \, c^2(m) \omega^2(\vec x - \vec x') \times \notag\\
  & \qquad \frac{p_i \bigl( m - A(\vec x') \bigr) p_j \bigl( m - A(\vec x') \bigr)}{\sigma(m, \vec x')} \frac{\sigma_i(\vec x') \sigma_j(\vec x')}{\sigma_i(\vec x) \sigma_j(\vec x)} \; .
\end{align}
As before, we can replace $\vec x'$ with $\vec x$ in all arguments of $\sigma_i$, with the justification that the statistical properties of $\sigma$ are invariant upon translation. We will discuss in the next section useful expressions to evaluate this quantity in practical cases.
\begin{table*}[bt]
  \centering
  \begin{tabular}{lcccc}
    \toprule
    Function & Support & $\omega(\vec x)$ & $W(\vec x)$ & Eff. area\\
    \midrule
    Top-hat & $|\vec x| \le s$ & $1$ & $1/(\pi s^2)$ & $\pi s^2$ \\
    Conic & $|\vec x| \le s$ & $2 (1 - |\vec x|/s)$ & $3 (1 - |\vec x|/s) / (\pi s^2)$ & $2 \pi s^2 / 3$ \\
    Parabolic & $|\vec x| \le s$ & $3 (1 - |\vec x|^2/s^2) / 2$ & $2 (1 - |\vec x|^2/s^2) / (\pi s^2)$ & $3 \pi s^2 / 4$ \\
    Gaussian & $\mathbb R^2$ & $2 \exp\bigl( - |\vec x|^2 / 2 s^2 \bigr)$ & $\exp\bigl( - |\vec x|^2 / 2 s^2 \bigr) / (2 \pi s^2)$ & $4 \pi s^2$ \\
    \bottomrule
  \end{tabular}
  \caption{Different two-dimensional spatial functions $\omega(\vec x)$ and corresponding weight functions $W(\vec x)$. All functions considered here are axisymmetric and include a spatial scale $s$.}
  \label{tab:1}
\end{table*}
\subsection{Implementation}
\label{sec:implementation}
The algorithm proposed in this paper is essentially a search for the solutions of Eq.~\eqref{eq:24} as a function of the star densities $\{ \sigma_i \}$ corresponding to the various components or star populations. The same procedure must be applied to different patches of the sky, so that maps of the star densities can be obtained. These, in turn, will allow us to identify and characterize star clusters, and in particular embedded ones. Although the practical implementation of the algorithm follows this scheme, a number of technical and theoretical aspects must be correctly addressed in order to optimize the detection and make the technique efficient. First, we note that a simple way to obtain the (positive) solutions of Eq.~\eqref{eq:24} is through the use of a recursive formula. To this end, multiply both sides of this equation by $\sigma_i(\vec x)$ and solve for this quantity, thus obtaining the expression
\begin{equation}
  \label{eq:29}
  \sigma_i(\vec x) \leftarrow \frac{\displaystyle \sum_{n=1}^N \omega(\vec x - \vec x_n) \frac{\sigma_i(\vec x) p_i(m_n - A_n)}{\sum_{j=1}^L \sigma_j(\vec x) p_j(m_n - A_n)}}{\displaystyle \int \mathrm d m \, c(m) \int \mathrm d^2 x' \, \omega(\vec x') p_i\bigl( m - A(\vec x') \bigr)} \; .
\end{equation}
Unfortunately, we are unable to use this equation because we only know the extinction at the locations of the stars, $A_n \equiv A(\vec x_n)$: this prevents us from evaluating the integral over $\mathrm d^2 x'$ in the denominator. We can, however, move the denominator inside the sum, and evaluate the second integral by replacing $A(\vec x')$ with $A(\vec x_n)$, the extinction in the direction of each star.
Additionally, using the same argument that has been employed in Eq.~\eqref{eq:23}, that is, the similarity between $\sigma$ and $\tau$, it is convenient to replace $\sigma_i(\vec x)$ with $\sigma_i(\vec x_n)$ in the numerator. This procedure leads to the iteration
\begin{equation}
  \label{eq:30}
  \sigma_i(\vec x) \leftarrow \sum_{n=1}^N \frac{\displaystyle W(\vec x - \vec x_n) \frac{\sigma_i(\vec x_n) p_i(m_n - A_n)}{\sum_{j=1}^L \sigma_j(\vec x_n) p_j(m_n - A_n)}}{\int \mathrm d m \, c(m) p_i(m - A_n)} \; .
\end{equation}
Equation~\eqref{eq:30} is the solution proposed in this paper to estimate the local density of stars. As indicated by the left arrow symbol, we can obtain the set of values $\{ \sigma_i \}$ by starting with some arbitrary (positive) values for these quantities, and then by calculating updated values of $\sigma_i$ by applying Eq.~\eqref{eq:30}. Convergence is usually reached within a few tens of iterations. Note that Eq.~\eqref{eq:30} has a simple interpretation. Let us ignore for a moment the weight $W$, i.e.\ let us assume that all stars have the same weight. The sum in Eq.~\eqref{eq:30} is carried out over all stars in the patch of the sky, but each star is counted only partially (i.e., contributes with a term between zero and unity in the sum): precisely, each star contributes the computed probability that it is associated with the $i$-th component. The way this probability is computed is actually a simple application of Bayes' theorem, where $p_i(m_n - A_n)$ plays the role of the likelihood, $\sigma_i(\vec x_n)$ is proportional to the prior that the star is of class $i$, and the sum over $j$ in the denominator is proportional to the evidence. The result of the sum of these terms is divided by the result of the integral: this is a correction factor that takes into account the fact that, because of extinction and incompleteness, we will miss a fraction of stars. Note also that Eq.~\eqref{eq:30} can also be considered a special case of a $K$-means soft clustering algorithm where the only unknown quantities are the class densities $\sigma_i$ \citep[see][]{MacKay}. Before proceeding, it is useful to recall the hypotheses of this algorithm and its strengths. First, we assume that we have some knowledge of the $H$-band luminosity function for the various populations of stars that are likely to be present in the field. In practice, we will generally use two probabilities, one for the field stars, and one for the YSOs. Second, we assume that we have measured the extinction $A_n$ of each star: note that this is \textit{not} the average extinction at the location of the star, which might be very different because of perspective effects; for example, a foreground star in front of a cloud would have $A_n \simeq 0$, while the average extinction would be significant. This way, the algorithm can directly account for foreground contamination: foreground stars will not be corrected in their counts, since the integral in the denominator of Eq.~\eqref{eq:30} will evaluate to unity. Similarly, stars within molecular clouds will be corrected only for the amount of material that is really in front of them. Finally, we stress that the iterative procedure proposed here only finds positive solutions for the values $\sigma_i$. Although reasonable, this choice inevitably introduces biases in the results: for example, in a region where no YSO is present, because of errors we will still measure small positive values for the density of YSOs.
However, numerical tests have shown that the amount of bias is limited; moreover, a reduction of the bias is associated with a large increase in the scatter. Therefore, we will force the $\sigma_i$ to be positive and use Eq.~\eqref{eq:30} for the solution. The uncertainties associated with Eq.~\eqref{eq:30} can be computed from the Fisher matrix. For practical applications, it is convenient to rewrite Eq.~\eqref{eq:28} by replacing the integrals over $\mathrm d m$ and $\mathrm d^2 x'$ with a sum over the observed objects. This leads to the approximate Fisher matrix expression
\begin{equation}
  \label{eq:31}
  I_{ij} = \sum_{n=1}^N \frac{\omega^2(\vec x - \vec x_n) c^2(m_n) p_i(m_n - A_n) p_j( m_n - A_n)}{\sigma^2(m_n, \vec x_n)} \; .
\end{equation}
In this equation, we take the spatial function $\omega$ to be normalized such that
\begin{equation}
  \label{eq:32}
  \int \omega(\vec x') \, \mathrm d^2 x' = \int \omega^2(\vec x') \, \mathrm d^2 x' \; ;
\end{equation}
that is, in terms of $W$,
\begin{equation}
  \label{eq:33}
  \omega(\vec x) = \frac{W(\vec x)}{\int W^2(\vec x') \, \mathrm d^2 x'} \; .
\end{equation}
Table~\ref{tab:1} reports a few common choices for the spatial function $\omega(\vec x)$ and the corresponding weight function $W(\vec x)$, both correctly normalized. As usual, the covariance matrix associated with the measurements of the densities $\sigma_i$ can be computed from the inverse of the Fisher matrix, $I^{-1}$.
\section{Simulations}
\label{sec:simulations}
\begin{table}[tb]
  \centering
  \begin{tabular}{ccccc}
    \toprule
    $\sigma_1$ & $\sigma_2$ & $\langle (I^{-1})_{11} \rangle$ & $\langle (I^{-1})_{12} \rangle$ & $\langle (I^{-1})_{22} \rangle$ \\
    $\langle \hat\sigma_1 \rangle$ & $\langle \hat\sigma_2 \rangle$ & $\Var[\sigma_1]$ & $\Cov[\sigma_1,\sigma_2]$ & $\Var[\sigma_2]$ \\
    \midrule
    $5.00$ & $20.00$ & $33.38$ & $-19.13$ & $35.27$ \\
    $5.30$ & $19.87$ & $21.37$ & $-12.09$ & $29.60$ \\
    \midrule
    $20.00$ & $ 5.00$ & $47.44$ & $-19.02$ & $22.34$ \\
    $19.79$ & $ 5.21$ & $43.31$ & $-13.83$ & $16.33$ \\
    \midrule
    $10.00$ & $10.00$ & $47.44$ & $-19.02$ & $22.34$ \\
    $10.05$ & $ 9.95$ & $30.63$ & $-13.97$ & $21.04$ \\
    \bottomrule
  \end{tabular}
  \caption{Summary of the simulation results. In each block, the first row reports the input densities $\sigma_1$ and $\sigma_2$ together with the average elements of the inverse Fisher matrix, while the second row reports the corresponding measured mean densities, variances, and covariance.}
  \label{tab:2}
\end{table}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig51a}
  \caption{The distributions of measured densities in a simulation with $\sigma_1 = 5$ and $\sigma_2 = 20$ (histogram), together with the predicted Gaussian distribution obtained according to the Fisher matrix evaluated from Eq.~\eqref{eq:31}. The excess of small values of $\hat\sigma_1$ is due to the constraint that $\hat\sigma \ge 0$ imposed by the algorithm.}
  \label{fig:3}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig51b}
  \caption{The distribution of measured total density $\hat\sigma_1 + \hat\sigma_2$ in a simulation with $\sigma_1 = 5$ and $\sigma_2 = 20$, together with the predicted Gaussian distribution (derived from the Fisher matrix).
The distribution is essentially unbiased; moreover, because of the anticorrelation between $\hat\sigma_1$ and $\hat\sigma_2$, the total density has significantly less scatter than either $\hat\sigma_1$ or $\hat\sigma_2$.}
  \label{fig:4}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig52a}
  \caption{The distributions of measured densities in a simulation with $\sigma_1 = 20$ and $\sigma_2 = 5$.}
  \label{fig:5}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig53a}
  \caption{The distributions of measured densities in a simulation with $\sigma_1 = 10$ and $\sigma_2 = 10$.}
  \label{fig:6}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig54a}
  \caption{The biases in the measured densities for $\sigma_1 = 10$ and $\sigma_2 \in [0, 20]$. Note how the bias essentially vanishes for $\sigma_2 > 5$.}
  \label{fig:7}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig54b}
  \caption{The scatter on $\hat\sigma_1$ and $\hat\sigma_2$ for $\sigma_1 = 10$ and $\sigma_2 \in [0, 20]$ (dots), together with the predictions obtained from Eq.~\eqref{eq:31} (lines).}
  \label{fig:8}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig54c}
  \caption{Violin plot showing the distribution of measured densities $\hat\sigma_1$ for $\sigma_1 = 10$ and $\sigma_2 \in [0, 20]$. Each elongated structure corresponds to a different value of $\sigma_2$; its width is proportional to the distribution of measured values of $\hat \sigma_1$, i.e.\ effectively it is a histogram displayed vertically. The small red dashes indicate the average values.}
  \label{fig:9}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig54d}
  \caption{Violin plot showing the distribution of measured densities $\hat\sigma_2$ for $\sigma_1 = 10$ and $\sigma_2 \in [0, 20]$. The small blue dashes indicate the average values.}
  \label{fig:10}
\end{figure}
In order to test our method and verify its robustness, we have performed a set of simulations. We have considered a small patch of the sky with two different stellar populations: field stars, with exponentially increasing number counts $p_1(m) \propto 10^{\alpha m}$, with $\alpha = \SI{0.33}{mag^{-1}}$; and YSOs, with Gaussian number counts $p_2(m) \propto \exp\bigl(-(m-m_0)^2 / 2 s^2 \bigr)$, with $m_0 = \SI{12}{mag}$ and $s = \SI{1.65}{mag}$. We have distributed both populations randomly in the small patch of the sky considered and we have added to each star a random extinction drawn from the probability distribution $p_A(A) \propto A^{-2}$, in the range $A \in [0.1, 2.0]\,\si{mag}$. This choice is meant to simulate the effects of differential extinction for objects within molecular clouds. Finally, we have assumed that our observations are complete up to $\SI{15}{mag}$, and that no stars can be observed beyond that magnitude: in other words, we have modeled the completeness function as a Heaviside function $c(m) = H(15 - m)$. This way, our final catalog contains, for each star, the angular position in the sky, the $H$-band magnitude, and the measured extinction. Note that the parameters used here are chosen to simulate a real situation corresponding to the sample application of Sect.~\ref{sec:sample-appl-orion}, that is, the Orion molecular cloud complex observed with 2MASS. We have used these data in our algorithm, represented by Eqs.~\eqref{eq:30} and \eqref{eq:31}.
As weight function $W(\vec x)$ we have used a Gaussian, and we have chosen the angular units so that
\begin{equation}
  \label{eq:34}
  \int W^2(\vec x') \, \mathrm d^2 x' = \left[ \int W(\vec x') \, \mathrm d^2 x' \right]^2 \; .
\end{equation}
This choice guarantees that the effective area of the weight function is unity, i.e.\ the effective number of stars used for the analysis, in the absence of extinction, would be just $\sigma_1 + \sigma_2$. In reality, the presence of extinction reduces this number by a factor that depends on the ratio between $\sigma_1$ and $\sigma_2$ (typically by $\sim 20\%$). We have then performed different simulations, with various choices for $\sigma_1$ and $\sigma_2$, to verify the ability of the algorithm to recover the input densities. Figures~\ref{fig:3}--\ref{fig:6} show the observed distributions of $\hat\sigma_1$ and $\hat\sigma_2$, together with the predicted ones (Gaussian distributions centered on the true values of $\sigma_1$ and $\sigma_2$, with the variances predicted from the Fisher matrix $I$). In general, we can note a few points:
\begin{itemize}
\item When one of the input densities is below $\sim 7$, there is a clear excess of small measured values for the corresponding quantity. This is a consequence of the fact that the proposed algorithm only returns positive values for the densities.
\item Except for the point above, the predicted distributions reproduce the measured data very well. The agreement is particularly good when the input densities are large.
\item Overall, the total density $\sigma_1 + \sigma_2$ is better constrained than the individual densities $\sigma_{1,2}$.
\end{itemize}
Figures~\ref{fig:7} and \ref{fig:8} show the biases and the errors on the measured densities $\hat \sigma_{1,2}$ for $\sigma_1 = 10$ and $\sigma_2$ varying in the interval $[0, 20]$. We can see that there is a measurable bias for $\sigma_2 < 5$, while the results are essentially unbiased above this value. Correspondingly, we observe in the same range $\sigma_2 \in [0, 5]$ a scatter in the measured densities that is significantly smaller than predicted by the Fisher matrix. For larger values of the input density the error estimate gets closer to the measured errors, but still remains slightly above. This is actually good, because it implies a small overestimate of the error, which will make the entire method more robust for cluster detection (that is, it will decrease the number of false positives). In summary, all these simulations confirm that the method works and that the error estimate is quite accurate.
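For reference, the following Python sketch reproduces, under simplifying assumptions, the essential steps of these simulations: it draws a mock catalogue from the two populations described above on a patch of unit effective area, applies a random extinction and the completeness cut, and recovers the densities by iterating Eq.~\eqref{eq:30} with unit weights and with the HLFs normalized as in Eq.~\eqref{eq:10}. All names and parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
ALPHA, M0, S0, M_LIM = 0.33, 12.0, 1.65, 15.0
A_MIN, A_MAX = 0.1, 2.0
a = np.log(10.0) * ALPHA          # exponential slope in ln units

def p_pdf(m, i):
    """HLFs normalized so that int c(m) p_i(m) dm = 1 (Eq. 10),
    with c(m) = H(M_LIM - m)."""
    if i == 0:                    # field: proportional to 10**(ALPHA m)
        return a * np.exp(a * (m - M_LIM))
    return norm.pdf(m, M0, S0) / norm.cdf(M_LIM, M0, S0)

def compl_int(i, A):
    """Denominator of Eq. (30): int c(m) p_i(m - A) dm."""
    if i == 0:
        return 10.0**(-ALPHA * A) # the 10**(-alpha A) suppression factor
    return norm.cdf(M_LIM - A, M0, S0) / norm.cdf(M_LIM, M0, S0)

# Mock catalogue on a patch of unit effective area.
sig_true = np.array([20.0, 5.0])  # field and YSO densities
n = rng.poisson(sig_true)
m = np.concatenate([M_LIM - rng.exponential(1.0 / a, n[0]),  # field
                    rng.normal(M0, S0, n[1])])               # YSOs
A = 1.0 / (1.0/A_MIN - rng.random(m.size) * (1.0/A_MIN - 1.0/A_MAX))
m_obs, A_obs = m + A, A           # extinguished magnitudes, p(A) ~ A^-2
keep = m_obs <= M_LIM             # completeness cut
m_obs, A_obs = m_obs[keep], A_obs[keep]

# Iteration of Eq. (30) with unit weights W = 1.
sig = np.ones(2)
for _ in range(50):
    w = np.array([sig[i] * p_pdf(m_obs - A_obs, i) for i in (0, 1)])
    w /= w.sum(axis=0)            # Bayes posterior of each star
    sig = np.array([(w[i] / compl_int(i, A_obs)).sum() for i in (0, 1)])
print("recovered:", sig, "true:", sig_true)
\end{verbatim}
A single realization will of course scatter around the input densities; averaging many such realizations yields distributions analogous to those shown in Figs.~\ref{fig:3}--\ref{fig:10}.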
\section{Sample application: Orion}
\label{sec:sample-appl-orion}
\begin{figure*}[t]
  \centering
  \includegraphics[width=\hsize]{fig30b1}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30b2off}{fig:30b2off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30b2on}{fig:30b2on}{1}%
  \includegraphics[width=\hsize]{fig30b2}%
  \end{ocg}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30b3off}{fig:30b3off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30b3on}{fig:30b3on}{1}%
  \includegraphics[width=\hsize]{fig30b3}%
  \end{ocg}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30b4off}{fig:30b4off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30b4on}{fig:30b4on}{1}%
  \includegraphics[width=\hsize]{fig30b4}%
  \end{ocg}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30b5off}{fig:30b5off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30b5on}{fig:30b5on}{1}%
  \includegraphics[width=\hsize]{fig30b5}%
  \end{ocg}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30b6off}{fig:30b6off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30b6on}{fig:30b6on}{1}%
  \includegraphics[width=\hsize]{fig30b99}%
  \end{ocg}%
  \caption{%
    The results of the cluster finding algorithm in Orion~A using the 2MASS Point Source Catalog. The red contours show all surface density detections $3\sigma$ above the background, while the black contour corresponds to $A_K = \SI{0.3}{mag}$. When displayed in Adobe Acrobat, it is possible to hide the \ToggleLayer{fig:30b2on,fig:30b2off}{\protect\cdbox{Grid}} lines, the black \ToggleLayer{fig:30b3on,fig:30b3off}{\protect\cdbox{Extinction}} contours, the red contours corresponding to the \ToggleLayer{fig:30b4on,fig:30b4off}{\protect\cdbox{Clusters}}, the red dots representing the \ToggleLayer{fig:30b5on,fig:30b5off}{\protect\cdbox{Megeath et al.\ YSOs}}, and the clusters' \ToggleLayer{fig:30b6on,fig:30b6off}{\protect\cdbox{Names}}.}
  \label{fig:11}
\end{figure*}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig30c1}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30c2off}{fig:30c2off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30c2on}{fig:30c2on}{1}%
  \includegraphics[width=\hsize]{fig30c2}%
  \end{ocg}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30c3off}{fig:30c3off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30c3on}{fig:30c3on}{1}%
  \includegraphics[width=\hsize]{fig30c3}%
  \end{ocg}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30c4off}{fig:30c4off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30c4on}{fig:30c4on}{1}%
  \includegraphics[width=\hsize]{fig30c4}%
  \end{ocg}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30c5off}{fig:30c5off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30c5on}{fig:30c5on}{1}%
  \includegraphics[width=\hsize]{fig30c5}%
  \end{ocg}%
  \hspace{-\hsize}%
  \begin{ocg}{fig:30c6off}{fig:30c6off}{0}%
  \end{ocg}%
  \begin{ocg}{fig:30c6on}{fig:30c6on}{1}%
  \includegraphics[width=\hsize]{fig30c99}%
  \end{ocg}%
  \caption{%
    The results of the cluster finding algorithm in Orion~B using the 2MASS Point Source Catalog (see caption of Fig.~\ref{fig:11} for the legend). Note that NGC~2023 is below the detection threshold and appears only as a weak smudge in this image.
When displayed in Adobe Acrobat, it is possible to hide the \ToggleLayer{fig:30c2on,fig:30c2off}{\protect\cdbox{Grid}}, the \ToggleLayer{fig:30c3on,fig:30c3off}{\protect\cdbox{Extinction}}, the \ToggleLayer{fig:30c4on,fig:30c4off}{\protect\cdbox{Clusters}}, the red dots representing the \ToggleLayer{fig:30b5on,fig:30b5off}{\protect\cdbox{Megeath et al.\ YSOs}}, or the \ToggleLayer{fig:30c6on,fig:30c6off}{\protect\cdbox{Names}}.}
  \label{fig:12}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig42a2} \\[1mm]
  \includegraphics[width=\hsize]{fig42a1}%
  \caption{%
    A Gaussian-kernel smoothed density map of the \citet{2016AJ....151....5M} YSO list in Orion~A (top) to compare with our density map (bottom).}
  \label{fig:122}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=0.49\hsize]{fig42b2}\hfill
  \includegraphics[width=0.49\hsize]{fig42b1}%
  \caption{A Gaussian-kernel smoothed density map of the \citet{2016AJ....151....5M} YSO list in Orion~B (left) to compare with our density map (right).}
  \label{fig:123}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\hsize]{fig40}%
  \caption{The distribution of $P_\mathrm{YSO}$, the probability that our algorithm assigns to each star of being a YSO, for the \citet{2016AJ....151....5M} YSO candidates and for all other objects in the 2MASS point source catalog.}
  \label{fig:124}
\end{figure}
{
  \setlength{\tabcolsep}{4pt}%
  \input{clusters.tex}
}
{
  \setlength{\tabcolsep}{4pt}%
  \input{clusters2.tex}
}
We have applied the method proposed in this paper to 2MASS data of the Orion~A and B molecular cloud complexes. The regions selected ensure that we can verify the reliability of the algorithm proposed here using some of the best-studied objects in the sky. In particular, the populations of embedded clusters for both clouds have been the targets of extensive observational campaigns using ground-based, near-infrared \citep{1991ApJ...371..171L} and millimeter-wave \citep{2016arXiv160906366L} surveys, as well as space-based, mid-infrared \textit{Spitzer Space Telescope} \citep{2012AJ....144..192M, 2016AJ....151....5M}, and \textit{Chandra} X-ray \citep{2008ApJ...677..401P} surveys. Additionally, the distance of these clouds ensures that the 2MASS $H$-band data are, in the absence of extinction, complete for YSOs: that is, the cluster HLF at the distance of Orion is essentially entirely within the 2MASS $H$-band limiting magnitude ($\sim \SI{15}{mag}$).
\begin{figure}[th]
  \centering
  \includegraphics[width=\hsize]{fig43}
  \caption{A low-resolution version of the YSO density in Orion. The grey area at $\ell \simeq 205^\circ$ is the Orion OB Ib association, while the lighter area to the right (around $\ell \simeq 201^\circ$) is OB Ia, containing the 25~Ori cluster (the grey spot at $\ell \simeq 201^\circ$ and $b \simeq -18^\circ$).}
  \label{fig:13}
\end{figure}
Figures~\ref{fig:11} and \ref{fig:12} show the density of fiducial YSOs measured in Orion~A and B. These maps have been produced by our pipeline together with \textsc{Nicer} \citep{2001A&A...377.1023L} and \textsc{Nicest} \citep{2009A&A...493..735L} extinction maps from the 2MASS Point Source Catalogue (see \citealp{2011A&A...535A..16L} for details on the data selection and extinction map construction).
The algorithm has been run in conditions similar to the simulations described above: that is, we have used two different stellar populations, one associated with the background and characterized by exponential number counts, and one associated with the YSOs and characterized by Gaussian number counts (with parameters consistent with the $H$-band luminosity function of \citealp{2002ApJ...573..366M}). Using an angularly close control field we also measured the distribution of intrinsic colors of stars and the shape of the completeness function: the latter has been modeled using a complementary error function (erfc), as described in Appendix~\ref{sec:k-band-probability}. This choice makes it possible to use the entire 2MASS catalogue without any magnitude cut (which would increase the noise in the final data products). The maps produced have a pixel size of \SI{1.5}{arcmin} and a weight function $W \propto \omega$, in turn proportional to a Gaussian with $\mathit{FWHM} = \SI{3}{arcmin}$. We have used a relatively large beam in these maps to increase the sensitivity of our algorithm and to minimize the effects of the biases shown in the simulations described in Sect.~\ref{sec:simulations}, while still retaining in most situations the ability to distinguish different clusters (i.e., avoid confusion effects at the distance of Orion). Since we have at our disposal the covariance map of these measurements, we have assessed the reliability of each density peak in these figures. The red contours in the figures show the areas (larger than 2 pixels) where the local YSO density exceeds \SI{1.5}{stars/pixel}, corresponding approximately to a signal-to-noise ratio larger than 3.5: note how some regions within the black extinction contours in Fig.~\ref{fig:11} do not reach the threshold because of the large error associated with them (mostly due to the high extinction values there). Table~\ref{tab:3} shows the YSO clusters identified in the Orion~A and B areas using our simple prescription, together with the most relevant parameters. In some cases we clearly see that angularly close clusters appear as a single contour in our maps: the simple procedure used here to define clusters, the relatively coarse resolution used, and the cluster morphology itself prevent us from deblending some close objects. An extreme case of this situation might be the ISF (Integral Shaped Filament) cluster, where the limitations due to angular resolution would make it difficult to resolve and separate smaller clusters if they exist in such a populous region. We note that the ISF cluster encompasses M42, the Trapezium and ONC clusters, as well as an extended population of YSOs along the ISF. The radius $R$ reported in the table corresponds to the radius of a circle that would occupy the same area as the identified cluster, i.e.\ to the connected region of the sky where the inferred density of YSOs exceeds the background by $3\sigma$. At the estimated distance of Orion, \SI{413}{pc} \citep{2009ApJ...700..137R}, $1'$ corresponds to \SI{0.12}{pc}: therefore, the clusters identified have radii spanning from $\sim \SI{0.15}{pc}$ to $\sim \SI{2.4}{pc}$. The well-known clusters in these clouds are correctly identified by our procedure. It is interesting to compare Table~\ref{tab:3} with clusters identified independently using much more secure data.
Among the ones at our disposal, the recent catalog of embedded YSOs obtained by \citet{2016AJ....151....5M} using the \textit{Spitzer Space Telescope} and the \textit{Chandra} observatory is probably the most secure and complete: we will therefore focus on this catalog. Since our definition of a cluster is based on somewhat arbitrary parameters (signal-to-noise threshold, minimum number of pixels, no correction for the stars missed at the boundaries), and since different, more or less arbitrary, parameters are also used by \citet{2016AJ....151....5M}, we find it more appropriate and fair to make a comparison after we first homogenize the data. Specifically, we take the \citet{2016AJ....151....5M} YSO list and build from it a density map using a Gaussian kernel of the same size as the one used for our map. Figures \ref{fig:122} and \ref{fig:123} show the results obtained for Orion~A and B, which clearly compare very well with our own maps, derived purely from the 2MASS point source catalog. The most relevant structures are present in both maps and have very similar shapes; the only differences are the noise present in our maps (however at a relatively low level), and the limited coverage of the \textit{Spitzer}-derived density maps. The qualitative similarity of these maps can be quantified if we compare clusters identified in both maps using the same criteria. Table~\ref{tab:4} shows a list of clusters identified in the smoothed \citet{2016AJ....151....5M} maps using a fixed density threshold (set to \SI{1.5}{stars/pixel}). In this table we compare the number of \textit{Spitzer} YSOs with the number of YSOs predicted from the integral of $\sigma_\mathrm{YSOs}$ over the area of each cluster as defined from the Megeath et al.\ density map, together with the computed $1$-$\sigma$ error. It is clear that in almost all cases we find an excellent agreement, although in many cases our estimates are slightly larger than the ones by Megeath et al. We can speculate that this is due to the presence of class~III YSOs, which would likely be missed by \textit{Spitzer}. Indeed, a comparison of the two panels of Fig.~\ref{fig:122} shows that the bottom panel, corresponding to our density map, has spatially more extended clusters than the top panel, corresponding to the Megeath et al.\ density map. As discussed earlier on, our algorithm is a statistical one and works best when it is applied to a sizeable number of stars. However, we can also push it and associate with each single star a probability of being a YSO: to this end, for the $n$-th star we can compute
\begin{equation}
  \label{eq:47}
  P_i = \frac{\sigma_i(\vec x_n) p_i(m_n - A_n)}{\sum_{j=1}^L \sigma_j(\vec x_n) p_j(m_n - A_n)} \; .
\end{equation}
Note how this quantity resembles the term within the outer sum of Eq.~\eqref{eq:30}. Figure~\ref{fig:124} shows the distribution of $P_\mathrm{YSO}$ (that is, the distribution of the probabilities assigned to each object of being a YSO) for the \citet{2016AJ....151....5M} YSO candidates and for all the other objects. It is clear how all the other objects have $P_\mathrm{YSO}$ concentrated around zero, while the YSO candidates have a much broader distribution that extends to unity. For these latter objects the distribution, in addition to a substantial peak at $P_\mathrm{YSO} = 1$, shows a long tail down to small values of $P_\mathrm{YSO}$: this is not unexpected, since our identification is only statistical (and therefore we cannot unambiguously identify YSOs).
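Operationally, Eq.~\eqref{eq:47} is a per-star application of Bayes' theorem. A minimal Python sketch, reusing the hypothetical \texttt{p\_pdf} function and the converged densities \texttt{sig} from the sketch at the end of Sect.~\ref{sec:simulations}, could read as follows.
\begin{verbatim}
import numpy as np

def p_class(m_n, A_n, sig_local, p_pdf, n_classes=2):
    """Eq. (47): posterior probability that a star with observed
    magnitude m_n and extinction A_n belongs to each class, given
    the local densities sig_local[i] = sigma_i(x_n)."""
    w = np.array([sig_local[i] * p_pdf(m_n - A_n, i)
                  for i in range(n_classes)])
    return w / w.sum()

# Hypothetical usage:
# P_field, P_yso = p_class(13.2, 0.4, sig, p_pdf)
\end{verbatim}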
Note also that the relatively low values of $P_\mathrm{YSO}$ assigned to some genuine YSOs are compensated by the small tail in the distribution of field stars (this works because there are many more field stars than YSOs, a fact that the algorithm takes into account).
\subsection{Sensitivity to the distributed population}
Recently, \citet{2016arXiv160904948K} have identified a rich and well-defined stellar population of about 2\,000 objects, mostly M stars without extinction or infrared excesses, as the low-mass counterpart to the Orion OB Ib subgroup (the Orion belt population). This low-mass population is not obviously clustered but instead appears to be distributed across $\sim 10$ square degrees, and the authors speculate that it could represent the evolved counterpart of an Orion Nebula-like cluster. While more data are needed to test this scenario, it is clear that much can be learned about the origin of stellar OB associations and the dispersal of clusters into the Galactic field if one is able to trace in a robust manner the distribution of the slightly older and more expanded populations surrounding star-forming molecular clouds. We now investigate the extent to which the technique proposed here is suitable for the detection of looser, more expanded distributions of young stars, in particular the low-mass counterpart to the Orion OB association presented by \citet{2016arXiv160904948K}. For this purpose, we have built a lower-resolution map of the region, employing a \textit{FWHM} of \SI{30}{arcmin}. Figure~\ref{fig:13} shows that, surprisingly, we are well able to recover the stellar over-density of the Ori~Ib population and, for the first time, the stellar over-density of the Ori~Ia population. These over-densities are likely to be created by low-mass stars, as 2MASS is still sensitive to the peak of the IMF for the putative distance and age of these subgroups. An analysis of the substructure seen in the distributed population visible in Figure~\ref{fig:13} above the noise pattern is beyond the scope of this paper, but will be best addressed once Gaia parallaxes are generally available. Of relevance for this paper is that the ability of the method to trace the dispersed population from existing all-sky data opens a new window on the unsolved problem of the origin of OB associations and the dispersal of clusters into the field.
\section{Conclusions}
\label{sec:conclusions}
The following items summarize the main results presented in this paper:
\begin{itemize}
\item We have developed a new method to discover and characterize deeply embedded star clusters.
\item The method is able to statistically classify objects as field stars or YSOs and corrects for the effects of differential extinction.
\item We have provided expressions for the covariance of the inferred densities and we have validated both the method and the analytic expression for the covariance with a set of simple but realistic simulations.
\item We have applied the new method to 2MASS point sources observed in the Orion~A and B clouds, and we have shown that we can reliably identify and characterize protostellar clusters in these regions, as well as detect much looser associations such as the OB Ia and Ib subgroups.
\end{itemize}
Finally, we note that the method proposed here can be easily extended to multi-band observations by using suitable probability distributions $p_i(\vec m)$ for the various populations as a function of the magnitude vector $\vec m$.
Its implementation would be essentially identical, with minor modifications to take into account the different effects of extinction in different bands. The natural use of such an extension would be in the context of techniques such as the one proposed by \citet{2016A&A...585A..78J}, which is able to recover the extinction from a complex analysis of multi-band data.
\begin{acknowledgements}
This research has made use of the 2MASS archive, provided by NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Additionally, this research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
\end{acknowledgements}
\subsection{Iterated Boolean Games}
We make use of CGS-EPC to introduce iterated boolean games with $\mathsf{LTL}$ goals as studied by Gutierrez \emph{et al.} \cite{GHW13,GutierrezEtAlIC20015}. An {\em iterated boolean game} is a tuple $\langle \mathcal G, \gamma_1, \ldots, \gamma_n \rangle$ such that ($i$) $\mathcal G$ is a CGS-EPC with a trivial protocol (i.e., for every $i \in N$, $s \in S$, $d(i,s) = \mathcal A_i$); and ($ii$) for every $i \in N$, the {\em goal} $\gamma_i$ is an $\mathsf{LTL}$-formula. We can generalise the above to {\em iterated boolean games with shared control} as follows:
\begin{definition}
An \emph{iterated boolean game with shared control} is a tuple $\langle \mathcal G, \gamma_1, \ldots, \gamma_n \rangle$ such that
\begin{itemize}
\item [(i)] $\mathcal G$ is a CGS-SPC;
\item [(ii)] for every $i \in N$, the {\em goal} $\gamma_i$ is an $\mathsf{LTL}$-formula.
\end{itemize}
\end{definition}
Observe that the function $\tau$ is thus no longer trivial. Just like CGS-SPC generalise CGS-EPC, iterated boolean games with shared control generalise standard iterated boolean games. In particular, the existence of a winning strategy can be checked via the satisfaction of an $\mathsf{ATL}^*$-formula:
\begin{proposition}
An agent $i$ in an iterated boolean game has a winning strategy for goal $\gamma_i$ and state $s$ if and only if formula $\atlop{\{ i \}}\gamma_i$ is satisfied in $(\mathcal G, s)$.
\end{proposition}
\begin{example}
Consider an iterated boolean game with shared control for agents $\{1,2\}$ and issues $\{p,q\}$, such that $\Phi_1 = \{p\}$ and $\Phi_2 =\{p,q\}$. Suppose that for all states $s$ the transition function is such that $\tau(s,\alpha)(q) = \alpha_2(q)$, agent $2$ being the only agent controlling $q$, while $\tau(s,\alpha)(p) = 1$ iff $\alpha_1(p) = \alpha_2(p) = 1$. We thus have that $(\mathcal G, s) \models \atlop{ \{1,2\} }\nextop p$ and $(\mathcal G, s) \models \lnot\atlop{ \{1\} }\nextop q$ for all $s$.
\end{example}
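As an illustration, the shared-control transition function of this example can be sketched in Python as follows; the encoding of states and actions as dictionaries is our own choice and is not part of the formal definition.
\begin{verbatim}
# Agent 1 controls {p}; agent 2 controls {p, q}.  States and actions
# are maps from issues to 0/1; the transition is state-independent
# in this example.

def tau(state, alpha1, alpha2):
    """Transition of the example: p becomes true iff both agents set
    it to 1, while q simply copies agent 2's choice."""
    return {"p": alpha1["p"] & alpha2["p"], "q": alpha2["q"]}

s = {"p": 0, "q": 0}
# The coalition {1, 2} can enforce Xp by jointly setting p to 1 ...
print(tau(s, {"p": 1}, {"p": 1, "q": 0}))   # -> {'p': 1, 'q': 0}
# ... whereas agent 1 alone cannot enforce Xq, as q only depends on
# agent 2's action.
print(tau(s, {"p": 0}, {"p": 0, "q": 1}))   # -> {'p': 0, 'q': 1}
\end{verbatim}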
\subsection{Influence Games}\label{sec:influencegames}
Influence games model strategic aspects of opinion diffusion on a social network. They are based on a set of variables $\op{i}{p}$ for ``agent $i$ has the opinion $p$'' and $\vis{i}{p}$ for ``agent $i$ uses her influence power over $p$''. Agents have binary opinions over all issues; hence $\lnot\op{i}{p}$ reads ``agent $i$ has the opinion $\lnot p$''. Goals are expressed in $\mathsf{LTL}$ with propositional variables $\{\op{i}{p},$ $\vis{i}{p} \mid i\in N, p\in \Phi \}$. We define an influence game in a compact way below, pointing to the work of Grandi \emph{et al.} \cite{GLNP17} for more details.
\begin{definition}\label{def:influence}
An \emph{influence game} is a tuple $ IG= \langle N, \Phi, E,$ $ S_0,$ $\{F_{i,\textit{Inf(i)}}\}_{i \in N}, \{\gamma_i\}_{i\in N}\rangle$ where:
\begin{itemize}
\item $N = \{1, \dots, n\}$ is a set of \emph{agents};
\item $\Phi = \{1, \dots, m\}$ is a set of \emph{issues};
\item $E \subseteq N \times N$ is a directed irreflexive graph representing the \emph{influence network};
\item $S_0 \in \mathcal{S}$ is the \emph{initial state}, where states in $ \mathcal{S}$ are tuples $(\boldsymbol{B}, \boldsymbol{V})$, where $\boldsymbol{B} = (B_1, \dots, B_n)$ is a profile of \emph{private opinions} $B_i : \Phi \to \{0,1\}$ indicating the opinion of agent $i$ on each variable $p$, and $\boldsymbol{V} = (V_1, \dots, V_n)$ is a profile of \emph{visibilities} $V_i : \Phi \to \{0,1\}$ indicating whether agent $i$ is using her influence power over $p$;
\item $F_{i, \textit{Inf(i)}}$ is the unanimous \emph{aggregation function} associating a new private opinion for agent $i$ based on agent $i$'s current opinion and the visible opinions of $i$'s influencers in \textit{Inf(i)};
\item $\gamma_i$ is agent $i$'s \emph{individual goal}, i.e., an $\mathsf{LTL}$ formula.
\end{itemize}
\end{definition}
Influence games are repeated games in which individuals decide whether to disclose their opinions (i.e., use their influence power over issues) or not. Once the disclosure has taken place, opinions are updated by aggregating the visible opinions of the influencers of each agent (i.e., the nodes having an outgoing edge terminating in the agent's node). We associate with $IG = \langle N, \Phi, E, S_0, \{F_{i,\textit{Inf(i)}}\}_{i \in N}, \{\gamma_i\}_{i\in N}\rangle$ a CGS-SPC $\mathcal G' = \langle N', \Phi'_0, \dots, \Phi'_n, S', d', \tau' \rangle$ by letting $N' = N$; $\Phi'_0 = \{\op{i}{p}\mid i \in \mathcal N, p \in \Phi\}$; $\Phi_i' = \{ \vis{i}{p} \mid p \in \Phi\}$ for $i \in N'$; $S' = 2^{\Phi'}$; $d'(i, s') = 2^{\Phi_i'}$ for $s' \in S'$; and finally for state $s'\in S'$ and action $\alpha'$ we let:
\[
\tau'(s', \alpha')(\varphi) \; = \; \left\{
\begin{array}{ll}
\alpha'_i(\vis{i}{p}) & \mbox{if } \varphi = \vis{i}{p} \\
F_{i,\textit{Inf(i)}}(\vec{a}, \vec{b})_{|p} & \mbox{if } \varphi = \op{i}{p}
\end{array}
\right.
\]
where vectors $\vec{a} = (a_1, \dots, a_{|\Phi|})$ and $\vec{b} = (b_1, \dots, b_{|\Phi|})$ are defined as follows, for $k \in \textit{Inf(i)}$:
\begin{align*}
a_p &=
\begin{cases}
1 & \mbox{if } \op{i}{p} \in s' \\
0 & \mbox{otherwise }
\end{cases}
\\
b_p & =
\begin{cases}
1 & \mbox{if } \alpha_k(\vis{k}{p}) = 1 \text{ and } \op{k}{p} \in s' \\
0 & \mbox{if } \alpha_k(\vis{k}{p}) = 1 \text{ and } \op{k}{p} \not\in s' \\
? & \mbox{if } \alpha_k(\vis{k}{p}) = 0
\end{cases}
\end{align*}
Vector $\vec{a}$ represents the opinion of agent $i$ over the issues at state $s'$, while vector $\vec{b}$ represents the opinions of $i$'s influencers over the issues, in case they are using their influence power. In particular, `?' indicates that influencer $k$ is not using her influence power.
\begin{proposition}\label{prop:infgam}
Agent $i$ in influence game IG has a winning strategy for goal $\gamma_i$ and state $S_0$ if and only if formula $\atlop{\{ i \}}\gamma_i$ is satisfied in the associated CGS-SPC and state $s'$ corresponding to $S_0$.
\end{proposition}
\begin{proof}[proof sketch]
Let $IG$ be an influence game and let $\mathcal G'$ be the CGS-SPC associated with it.
Consider now an arbitrary agent $i$ and suppose that $i$ has a winning strategy in $IG$ for her goal $\gamma_i$ in $S_0$. A memoryless strategy $\sigma_i$ for agent $i$ in an influence game maps each state to an action of the form $(\mathsf{reveal}(J), \mathsf{hide}(J'))$, where $J, J' \subseteq \Phi$ and $J \cap J' = \emptyset$. For any state $s$ in $IG$, consisting of a valuation of opinions and visibilities, consider the state $s'$ in $\mathcal G'$ where $B_i(p) = 1$ iff $\op{i}{p} \in s'$ and $V_i(p) = 1$ iff $\vis{i}{p} \in s'$. We now construct the following strategy for $\mathcal G'$:
\[
\sigma_i'(s') = \{\vis{i}{p} \mid p \in J \text{ for } \sigma_i(s) = (\mathsf{reveal}(J), \mathsf{hide}(J'))\}
\]
By the semantics of the $\atlop{\{i\}}$ operator provided in Section \ref{sec:logics}, and by the standard game-theoretic definition of winning strategy, the statement follows easily from our construction of $\mathcal G'$.
\end{proof}
The above translation allowed us to shed light on the control structure of the variables of type $\op{i}{p}$. In fact, we can now see that $\op{i}{p} \in \Phi'_0$ for all $i \in N$ and $p \in \Phi$.
\subsection{Aggregation Games}\label{sec:aggregationgames}
Individuals facing a collective decision, such as members of a hiring committee or a parliamentary body, are provided with individual goals specified on the outcome of the voting process --- an outcome that is jointly controlled by all individuals in the group. For instance, a vote on a single binary issue using the majority rule corresponds to a game with a single variable controlled by all individuals, the majority rule playing the role of the transition function. Similar situations have been modelled as one-shot games called \emph{aggregation games} \cite{GrandiEtAlIJCAI2015}, and we now extend this definition to the case of iterated decisions:
\begin{definition}\label{def:aggregation}
An \emph{iterated aggregation game} is a tuple $AG=\langle N, \Phi, F, \gamma_1, \dots, \gamma_n\rangle$ such that:
\begin{itemize}
\item $N$ is a set of \emph{agents};
\item $\Phi = \{p_1, \dots, p_m\}$ is a set of variables representing \emph{issues};
\item $F : \{0,1\}^{N \times \Phi} \to \{0,1\}^{\Phi}$ is an \emph{aggregation function}, that is, a boolean function associating a collective decision with the individual opinions of the agents on the issues;
\item $\gamma_i$ for $i \in N$ is an \emph{individual goal} for each agent, that is, a formula in the $\mathsf{LTL}$ language constructed over $\Phi$.
\end{itemize}
\end{definition}
Individuals at each stage of an aggregation game only have information about the current valuation of variables in $\Phi$, resulting from the aggregation of their individual opinions. Analogously to Proposition \ref{prop:infgam}, we can obtain the following result:
\begin{proposition}
An iterated aggregation game $AG$ is an instance of a CGS-SPC. More precisely, agent $i$ in $AG$ has a winning strategy for goal $\gamma_i$ in $s$ if and only if formula $\atlop{\{i\}}\gamma_i$ is satisfied in the associated CGS-SPC in the corresponding state $s'$.
\end{proposition}
\begin{proof}[proof sketch]
Starting from an iterated aggregation game $AG=\langle N, \Phi,$ $ F, \gamma_1, \dots, \gamma_n\rangle$, construct a CGS-SPC $\mathcal G' = \langle N', \Phi', S', d', \tau' \rangle$ as follows. Let $N'=N$; $\Phi'_i=\Phi$ for all $i=1,\dots,n$; and $\Phi'_0=\emptyset$. Hence, each agent controls all variables.
Let the set of actions available to each player be $d'(i,s)=2^{\Phi'}$ for all $i$ and $s$, and the transition function $\tau'$ be such that $\tau'(s,\alpha_1,\dots,\alpha_n)=F(\alpha_1,\dots,\alpha_n)$. The statement then follows easily.
\end{proof}
A notable example of an iterated aggregation game is the setting of iterative voting (see, e.g., \cite{MeirEtAl2010,LevRosenscheinAAMAS2012,ObraztsovaEtAlAAAI2015}). In this setting, individuals hold preferences about a set of candidates and iteratively manipulate the result of the election in their favour until convergence is reached. Similar situations can easily be modelled as iterated aggregation games, which have the advantage of allowing for a more refined specification of preferences via the use of more complex goals.
\subsection{CGS with Exclusive and Shared Control}
We first present concurrent game structures with exclusive propositional control (CGS-EPC), as introduced by Belardinelli and Herzig \cite{BH16}.\footnote{More precisely, the CGS-EPC we consider here as our basic framework corresponds to the ``weak'' version defined by Belardinelli and Herzig \cite{BH16}, as opposed to a strong version where $d(i,s) = \mathcal{A}_i$ for every $i \in N$ and $s \in S$.} We then generalise them by relaxing the assumption of exclusive control.
\begin{definition}[CGS-EPC]\label{def:CGS-PC}
A \emph{concurrent game structure with exclusive propositional control} is a tuple $\mathcal G=\langle N, \Phi_1, \dots,$ $\Phi_n, S, d, \tau \rangle$, where:
\begin{itemize}
\item $N = \{1, \dots, n \}$ is a set of \emph{agents};
\item $\Phi=\Phi_1\cup \dots \cup \Phi_n$ is a set of \emph{propositional variables} partitioned in $n$ disjoint subsets, one for each agent;
\item $S=2^\Phi$ is the set of \emph{states}, corresponding to all valuations over $\Phi$;
\item $d: N\times S \to (2^{\mathcal A} \setminus \emptyset)$, for $\mathcal A = 2^{\Phi}$, is the \emph{protocol function}, such that $d(i,s)\subseteq {\mathcal A_i}$ for $\mathcal A_i=2^{\Phi_i}$;
\item $\tau: S\times \mathcal A^n \to S$ is the \emph{transition function} such that $\tau(s, \alpha_1,\dots,\alpha_n) = \bigcup_{i \in N} \alpha_i $.
\end{itemize}
\end{definition}
Intuitively, a CGS-EPC describes the interactions of a group $N$ of agents, each one of them controlling (exclusively) a set $\Phi_i \subseteq \Phi$ of propositional atoms. Each state of the CGS is a valuation of the atoms in $\Phi$. In each such state the protocol function returns which actions an agent can execute. The intuitive meaning of action $\alpha_i \in d(i,s)$ is ``assign true to all atoms in $\alpha_i$, and false to all atoms in $\Phi_i \setminus \alpha_i$''. The $idle_s$ action can be introduced as $\{ p \in \Phi_i \mid s(p) = 1 \}$, for every $i \in N$, $s \in S$. With an abuse of notation we write $d(i,s) = \alpha$ whenever $d(i,s)$ is a singleton $\{ \alpha \}$. We equally see each state $s \in S$ as a function $s : \Phi \to \{0,1\}$ returning the truth value of a propositional variable in~$s$, so that $s(p) = 1$ iff $p \in s$. Given $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathcal A^n$, we equally see each $\alpha_i \subseteq \Phi_i$ as a function $\alpha_i : \Phi_i \to \{0,1\}$ returning the choice of agent $i$ for each $p \in \Phi_i$ under action $\alpha$. We now introduce a generalisation of concurrent game structures for propositional control.
Namely, we relax the exclusivity requirement on the control of propositional variables, thus introducing concurrent game structures with shared propositional control (CGS-SPC).
\begin{definition}[CGS-SPC]
A \emph{concurrent game structure with shared propositional control} is a tuple $\mathcal G = \langle N, \Phi_0, \dots, \Phi_n,$ $ S, d, \tau\rangle$ such that:
\begin{itemize}
\item $N$, $S$, and $d$ are defined as in Def.~\ref{def:CGS-PC} with $\mathcal A = 2^{\Phi \setminus \Phi_0}$;
\item $\Phi = \Phi_0\cup \Phi_1\cup \dots \cup \Phi_n$ is a set of \emph{propositional variables}, where $\{\Phi_1, \dots, \Phi_n\}$ is not necessarily a partition and $\Phi_0 = \Phi \setminus(\Phi_1\cup \dots \cup \Phi_n) $;
\item $\tau : S \times \mathcal A^n \to S$ is the \emph{transition function}.
\end{itemize}
\end{definition}
Observe that in a CGS-SPC the same atom can be controlled by multiple agents, and propositional control is not exhaustive. Additionally, the actions in $\mathcal A$ do not take into account propositional variables in $\Phi_0$ because the latter are not controlled by anyone (though their truth value might change according to the transition function). The transition function takes care of combining the various actions and producing a consistent successor state according to some rule. Simple examples of such rules include introducing a threshold $m_p \in \mathbb{N}$ for every variable $p$, thus setting $p \in \tau(s,\alpha)$ iff the number of agents $i$ with $p \in \alpha_i$ is greater than $m_p$. This generalises Gerbrandy's consensus games \cite{Gerbrandy06}.\footnote{The definition of $\tau$ as an arbitrary function might seem too general. Nonetheless, such a definition is needed to represent complex aggregation procedures such as those used in the games described in Sections~\ref{sec:influencegames} and \ref{sec:aggregationgames}.} Clearly, CGS-EPC can be seen as a special case of CGS-SPC in which every atom is controlled by exactly one agent, and therefore $\{\Phi_0, \dots, \Phi_n\}$ is a partition of $\Phi$. Moreover, $\tau$ is given in a specific form as per Definition \ref{def:CGS-PC}.
\subsection{Logics for Time and Strategies}\label{sec:logics}
To express relevant properties of CGS, we present the Linear-time Temporal Logic $\mathsf{LTL}$ \cite{P77} and the Alternating-time Temporal Logic $\mathsf{ATL}^*$ \cite{AHK02}. Firstly, state formulas $\varphi$ and path formulas $\psi$ in $\mathsf{ATL}^*$ are defined by the following BNF:
\begin{eqnarray*}
\varphi & ::= & p \mid \neg\varphi \mid \varphi \lor \varphi \mid \atlop{C}\psi\\
\psi & ::= & \varphi \mid \neg \psi \mid \psi \lor \psi \mid \nextop{\psi} \mid \until{\psi}{\psi}
\end{eqnarray*}
where $p \in \Phi$ and $C \in 2^{N}$. The intuitive reading of $\atlop{C}\psi$ is ``coalition $C$ has a strategy to enforce $\psi$'', that of $\nextop{\psi}$ is ``$\psi$ holds at the next state'', and that of $\until{\psi}{\varphi}$ is ``$\psi$ will hold until $\varphi$ holds''. The BNF for the language of $\mathsf{ATL}$ consists of all state formulas where $\psi$ is either $\nextop{\varphi}$ or $\until{\varphi}{\varphi}$. On the other hand, the language of $\mathsf{LTL}$ consists of all path formulas in $\mathsf{ATL}^*$, whose state formulas are propositional atoms only.
That is, formulas in $\mathsf{LTL}$ are defined by the following BNF: \begin{eqnarray*} \psi & ::= & p \mid \neg \psi \mid \psi \lor \psi \mid \nextop{\psi} \mid \until{\psi}{\psi} \end{eqnarray*} Truth conditions of $\mathsf{LTL}$ and $\mathsf{ATL}^*$ formulas are defined with respect to concurrent game structures, such as the CGS-EPC and CGS-SPC introduced above. In order to do so, we first provide some additional notation. The set of {\em enabled joint actions} at a state $s$ is defined as $Act(s) = \{ \alpha \in \mathcal A^n \mid \alpha_i \in d(i, s) \text{ for every } i$ $\in N \}$. Then, the set of {\em successors} of $s$ is given as $Succ(s) = \{ \tau(s, \alpha) \mid \alpha \in Act(s) \}$. Every $Succ(s)$ is non-empty because $d(i,s) \neq \emptyset$. An infinite sequence of states $\lambda = s_0s_1\dots$ is a \emph{computation} or a \emph{path} if $s_{k{+}1}\in Succ(s_k)$ for all $k \ge 0$. For every computation $\lambda$ and $k \ge 0$, $\lambda[k, \infty] = s_k, s_{k{+}1}, \dots$ denotes the suffix of $\lambda$ starting from $s_k$. Notice that $\lambda[k, \infty]$ is also a computation. When $\lambda$ is clear from the context, we denote with $\alpha[k]$ the action such that $\lambda[k{+}1]=\tau(\lambda[k], \alpha[k])$. A \emph{memoryless strategy} for agent $i \in N$ is a function $\sigma_i : S \to \mathcal A_i$ such that $\sigma_i(s) \in d(i,s)$, returning an action for each state. For simplicity, we will assume in the rest of the paper that agents use memoryless strategies. We let $\atlstr{C}$ be a {\em joint strategy} for coalition $C \subseteq N$, i.e., a function returning for each agent $i \in C$ the individual strategy $\sigma_i$. For notational convenience we write $\boldsymbol{\sigma}$ for $\boldsymbol{\sigma}_{N}$. The set $\outset{s}{\boldsymbol{\sigma}_C}$ includes all computations $\lambda = s_0s_1\dots$ such that ($a$) $s_0 = s$; and ($b$) for all $k \ge 0$, there is $\alpha \in Act(s_k)$ such that $\boldsymbol{\sigma}_C(i)(s_k) = \alpha_i$ for all $i \in C$, and $\tau(s_k, \alpha) = s_{k{+}1}$. Observe that $\outset{s}{\boldsymbol{\sigma}}$ is a singleton. We are now ready to define the truth conditions for $\mathsf{LTL}$ and $\mathsf{ATL}^*$ formulas with respect to a CGS-SPC $\mathcal G$. Formulas in $\mathsf{ATL}^*$ are interpreted on states, while formulas in $\mathsf{LTL}$ are interpreted on computations.
\begin{center} \begin{tabular}{lcl} $(\mathcal G, s) \models p$ & iff & $s(p) = 1$\\ $(\mathcal G, s) \models \lnot \varphi$ & iff & $(\mathcal G, s) \not\models \varphi$ \\ $(\mathcal G, s) \models \varphi_1 \lor \varphi_2$ & iff & $(\mathcal G, s) \models \varphi_1 \text{ or } (\mathcal G, s) \models \varphi_2$\\ $(\mathcal G,s) \models \atlop{C}\psi$ & iff & for some $\boldsymbol{\sigma}_C$, for all $\lambda \in \outset{s}{\boldsymbol{\sigma}_C}$, $(\mathcal G, \lambda) \models \psi$\\ $(\mathcal G, \lambda) \models \varphi$ & iff & $(\mathcal G, \lambda[0]) \models \varphi$\\ $(\mathcal G, \lambda) \models \lnot \psi$ & iff & $(\mathcal G, \lambda) \not\models \psi$\\ $(\mathcal G, \lambda) \models \psi_1 \lor \psi_2$ & iff & $(\mathcal G, \lambda) \models \psi_1 \text{ or } (\mathcal G, \lambda) \models \psi_2$\\ $(\mathcal G, \lambda) \models \nextop{\psi}$ & iff & $(\mathcal G, \lambda[1, \infty]) \models \psi$\\ $(\mathcal G, \lambda) \models \until{\psi_1}{\psi_2}$ & iff & for some $i \ge 0$, $(\mathcal G, \lambda[i, \infty]) \models \psi_2$ and $(\mathcal G, \lambda[j, \infty]) \models \psi_1$ for all $0 \le j < i$ \end{tabular} \end{center} We define below the model checking problem for this context. \begin{definition}[Model Checking Problem]\label{def:modelchecking} Given a CGS-SPC $\mathcal G$, a state $s \in S$, and an $\mathsf{ATL}^*$-formula $\varphi$, determine whether $(\mathcal G, s) \models \varphi$. \end{definition} It is well-known that model checking $\mathsf{ATL}^*$ on general concurrent game structures is 2EXPTIME-complete \cite{AHK02}. Belardinelli and Herzig proved that model checking $\mathsf{ATL}$ on CGS-EPC is $\Delta^P_3$-complete \cite{BH16}. Hereafter we consider the general case of CGS-SPC and $\mathsf{ATL}^*$. \section{Introduction}\label{sec:intro} \input{introduction} \section{Formal Framework}\label{sec:framework} \input{preliminaries} \section{Examples of Shared Control}\label{sec:games} \input{games} \section{Restoring Exclusive Control}\label{main_result} \input{mainresult} \section{Computational Complexity of Shared Control Structures} \label{applications} \input{applications} \section{Conclusion}\label{sec:conclusions} \input{conclusions} \section*{Acknowledgements} The authors are grateful to the three anonymous reviewers for their helpful comments. F.~Belardinelli acknowledges the support of the French ANR JCJC Project SVeDaS (ANR-16-CE40-0021).
\section{\textbf{Introduction}} \label{sec:intro} The authenticity and integrity of acquired \textit{finger vein} (FV) images play a significant role in the overall security of a finger-vein-based biometric system. With the advent of forgery techniques, it is important to link FV images to their corresponding acquisition devices. A FV sample image not linked to a proper sensor of the recognition system would raise an alarm and stop an eventual authentication process. Therefore, reliable and trustworthy algorithms to establish finger vein authenticity and integrity are vital. Many biometric modalities (e.g.\ face, fingerprint, palm, finger vein images) are vulnerable to attacks. Presentation attacks, which present spoof artifacts to the biometric sensor, and insertion attacks, which bypass the sensor by inserting biometric samples into the transmission process between sensor and feature extraction module, are the most important examples of attacks on the user interface of the biometric system (see Figure \ref{fig:biometric_attack}). Sensor/camera identification in general can be achieved at different levels: camera model level, brand level, and device level. In biometric systems, we often intend to work on device level to uniquely identify the sensor instance having captured a certain sample. Still, sensor model identification is of interest as well \cite{maser2021identifying}: it secures a finger vein recognition system against insertion attacks in case the attacker does not know the employed sensor model, and it enables device-selective processing of the image data. \begin{figure}[hbt!] \centering \includegraphics[width=6cm,height=3cm]{images/Attack_biometric.png} \caption{Points of insertion and presentation attack in a biometric system.} \label{fig:biometric_attack} \end{figure} \vspace*{-1mm} To identify the source device of an image, many algorithms have been proposed. The most prominent way to deduce sensor information from images is to exploit the image-inherent Photo Response Non-Uniformity (PRNU). The PRNU relies on intrinsic characteristics of an image caused by the different sensitivity of pixels to light, due to the inhomogeneity of the silicon wafer and imperfections during sensor fabrication. Lukas \etal in \cite{lukas2006digital} and Fridrich in \cite{fridrich2009digital} propose to compute the residual image noise as the difference between an image and a denoised version of it. The link between an image and the device in question is established by evaluating the similarity between the PRNU factor (also called the PRNU fingerprint) and the residual noise using NCC (Normalized Cross-Correlation). Another, more recent way of source camera identification is based on deep learning, specifically on CNN-based methods. Ahmed \etal \cite{ahmed2019comparative} proposed a CNN model (three convolutional layers with a Softmax classifier) and compared the CNN-based result to a result based on PRNU, showing the PRNU-based approach to perform better than the proposed CNN approach. Baroffio \etal \cite{baroffio2016camera} obtained good accuracy with a three convolutional layer CNN model on a larger dataset. Tuama \etal \cite{tuama2016camera} also obtained good results, looking into different CNN models, among them AlexNet, GoogleNet, and an architecture proposed in their work. Bondi \etal \cite{bondi2016first} propose a CNN model with four layers of convolution along with an SVM classifier instead of a fully connected layer for classification.
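To make the classical PRNU pipeline concrete, the following Python sketch (our own simplification; a Gaussian filter serves as a stand-in for the wavelet-based denoising and maximum-likelihood fingerprint estimation used in \cite{lukas2006digital,fridrich2009digital}) estimates a fingerprint from a set of images of one sensor and matches a query residual against it via NCC.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img, sigma=1.0):
    """Noise residual W = I - F(I); Gaussian denoiser as a stand-in."""
    return img - gaussian_filter(img, sigma)

def fingerprint(images):
    """Basic PRNU estimate K ~ sum(W_i * I_i) / sum(I_i**2),
    computed from float greyscale arrays of one sensor."""
    num = sum(residual(i) * i for i in images)
    den = sum(i * i for i in images) + 1e-12
    return num / den

def ncc(a, b):
    """Normalized cross-correlation between two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Decision rule: attribute img to the sensor with fingerprint K if
# ncc(residual(img), K * img) exceeds a calibrated threshold.
\end{verbatim}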
Note that so far, CNN-based techniques have been proven to be successful in device model identification, while PRNU-based techniques are able to support both model as well as device level identification. Moving to the biometric domain, Bartlow \etal \cite{5204312} investigated the identification of sensors from fingerprint images using a PRNU-based technique. They examined the effect and influence of sensor identification even when only a limited number of samples is available. Focusing on the iris subdomain, Kauba \etal \cite{Kauba18c} used PRNU- and image texture-based methods. For the texture-based method, the authors extracted texture descriptors by applying DSIFT, DMD, and LBP. The extracted features are represented by Fisher Encoding and discriminated by SVM. Banerjee \etal \cite{banerjee2017image} evaluate the applicability of different PRNU estimation schemes to deduce sensor information from NIR iris images. Moving from classical approaches to deep learning in iris sensor identification, Marra \etal \cite{marra2018deep} proposed a three-layer CNN architecture with a Softmax layer at the end. Shifting from the iris to the FV subdomain, Maser \etal \cite{maser2019prnu,maser2019prnuOAGM} and Söllinger \etal \cite{sollinger2019prnu} applied PRNU-based sensor identification methods to finger vein datasets. Maser \etal proposed a texture-based sensor model identification approach \cite{maser2021identifying}, in which features are extracted from several classical statistical properties of an image, such as the histogram, wavelet variance, entropy, and LBP; an SVM is applied to discriminate the sample image origins. In this work, we compare our results to theirs. So far, deep learning based techniques have not been used to identify the origin of vascular sample data at all. In this work, we focus on identifying the origin of finger vein sample images using a CNN-based approach. Besides the evaluation of existing CNN models, we also propose a custom one and show its beneficial properties. This work is structured as follows: In Section \ref{sec:cnn_models} we describe the six CNN models used. Section \ref{sec:datast_Exp_design} discusses the properties of the finger vein sample datasets considered and the setup of the conducted experiments. Next, we discuss and analyze the experimental results in Section \ref{sec:result}, and finally, we close with a conclusion in Section \ref{sec:conclution}. \section{\textbf{CNN models' structure}} \label{sec:cnn_models} In this section we briefly discuss five state-of-the-art CNN models and further introduce a novel CNN model adapted to the target application. In selecting models, we examine a full range of prominent CNN architectures with varying properties, from a simple variant of AlexNet to more complex architectures like the Xception model, which should give us a good understanding of which type of model is suitable to learn the patterns of finger vein samples. We also study the complexity of the introduced models in Table \ref{tbl:number-of-params-in-cnn-model}. \subsection{\textbf{Marra and Bondi Models}} \label{ssec:bondi_marra_models} These two models are simple stacked networks of the AlexNet family; both have been used in camera/sensor identification before. (i) \textbf{Bondi Model:} The CNN model proposed by Bondi \etal \cite{bondi2016first} is a stack of convolutional layers which ends with a fully-connected layer.
We adopted the model to preserve the given specification as closely as possible. The only change we made to their proposed model is that the SVM classifier is replaced by a Softmax layer. The detailed structure of the Bondi model is given in the above-mentioned paper. Referring to Table \ref{tbl:number-of-params-in-cnn-model}, the Bondi model is a relatively light stacked network. (ii) \textbf{Marra Model:} Marra \etal proposed a network \cite{marra2018deep} which is an AlexNet variant with a reduced number of layers compared to its predecessor. Due to its two fully-connected layers with 1024 and 2048 neurons, the number of trainable parameters is significantly increased (Table \ref{tbl:number-of-params-in-cnn-model}), which causes the high complexity of the model. Above, we discussed two relatively shallow CNN networks (Bondi and Marra). Goodfellow \etal \cite{goodfellow2013multi} showed that increasing the depth of a network leads to better performance; in other words, adding layers to a network yields richer feature maps. Due to this, we wanted to know how various deeper networks perform on our FV databases. \subsection{\textbf{VGG16 Model}} \label{ssec:vgg16_model} We employ the VGG16 model which has been introduced in \cite{simonyan2014very}. VGG16 is an example of a deep network, and, as a result, this model showed improvements in performance with respect to its predecessor models. Furthermore, VGG16 is designed to enrich the feature maps by adding layers, resulting in a deeper network compared to the simple stacked networks described in Section \ref{ssec:bondi_marra_models}. We adopted ConvNet configuration type B of the VGG16 network, which is represented in Table 1 of \cite{simonyan2014very}, with a slight modification in the input layer and an adapted number of classes in the $Softmax$ layer. Also, we reduced the number of convolutional layers with 512 channels from 4 to 2. Even though deeper networks have advantages with respect to shallower networks, higher network depth may lead to another problem, called \textit{degradation}. To avoid this potential problem, residual networks have been introduced. \subsection{\textbf{50-Layer ResNet Model (ResNet50)}} \label{ssec:resnet_1_model} The 50-layer residual network exploits the concepts of deep residual learning. One problem of a deep network is \textit{degradation}: when a deep network starts converging, accuracy often saturates. Accuracy saturation in a network implies that the model does not optimize well. In addition, a deep network can lead to higher training errors. He \etal addressed these problems by introducing a deep residual network (a.k.a.\ ResNet); we therefore select the residual network proposed by He \etal in \cite{he2016deep} as a further candidate. In summary, (i) a deep residual network is easily optimized (training error decreases) compared to its stacked counterparts, and (ii) it gains better accuracy with increasing network depth. Further, (iii) the complexity of a residual network is low compared to plain stacked CNN networks (referring to Table \ref{tbl:number-of-params-in-cnn-model}); e.g., a ResNet having 152 layers has fewer parameters than the VGG model discussed in Section \ref{ssec:vgg16_model}. Details of the ResNet architecture are given in \cite{he2016deep}.
\subsection{\textbf{Xception Model}} \label{ssec:xception_model} We selected a variation of the Inception model called Xception, proposed by Chollet \cite{chollet2017xception}. The Xception model is claimed to be capable of learning with fewer parameters. The philosophy behind this architecture is to decouple the mapping of cross-channel correlations and spatial correlations in the feature maps of CNNs. To achieve this decoupling, depth-wise separable convolution is applied, which works as follows: a spatial convolution is executed independently over each channel of an input, then a point-wise convolution ($1\times1$) is applied sequentially. The point-wise convolution projects the channel outputs of the depth-wise convolution onto a new channel space. It is important to mention that Xception applies a nonlinearity mapping after each operation in the depth-wise separable convolution process. In summary, the Xception model is a linear stack of depth-wise separable convolution layers with residual connections. The details of the Xception architecture are given in \cite{chollet2017xception}. \subsection{\textbf{6-layer CNN Model (FV2021)}} \label{ssec:resnet_2_model} To propose a novel network that has the advantage of being small and also exploits the strengths of the most prominent CNN models, we could think of many architectures, and most might also work. However, we propose a small model (potentially also well suited for a mobile device) and aim to achieve the same accuracy as the large CNN models. Thus, one of the advantages of the FV2021 model is having the lowest complexity (Table \ref{tbl:number-of-params-in-cnn-model}). We exploit separable convolution (SC, as used in the Xception net) instead of the classic convolution layer. As explained before, separable convolution performs a depth-wise spatial convolution (which acts on each input channel separately) followed by a point-wise convolution that mixes the resulting output channels. Thus, in developing FV2021, we take advantage of cross-channel correlations as well as spatial correlations, applying small receptive fields as well as $1 \times 1$ convolution filters, which can be seen as linear transformations of the input channels. The network architecture is composed of two sequential blocks: the first block has a skip connection, while the second block has a residual connection (a connection with a convolution operator). To reduce the computational complexity in the first layer, the number of filters is reduced to 32 and the kernel size is $7\times7$ with stride 2; each convolution layer is followed by batch normalization and a nonlinearity unit (ReLU). A complete scheme of the proposed architecture is given in Figure \ref{fig:macro_resnet_model}, with the parameters of each convolution block given as follows: Separable Conv. $<$number of filters$>$, $<$receptive field size $>$, $<$s=strides$>$. \vspace*{-1mm} \begin{figure}[hbt!] \centering \includegraphics[width=6cm,height=8cm]{images/ResNet2.png} \caption{Proposed model: FV2021 CNN Architecture, a fully connected layer can be added optionally. } \label{fig:macro_resnet_model} \end{figure}% \vspace*{-1mm}
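For concreteness, a minimal Keras sketch of an architecture in the spirit of FV2021 is given below. The stem (32 separable filters, $7\times7$, stride 2, each convolution followed by batch normalization and ReLU) and the two blocks with skip and residual connections follow the description above, and the sketch yields the six convolutional layers plus one fully-connected layer counted in Table~\ref{tbl:number-of-params-in-cnn-model}; the per-block filter counts and the global pooling are our assumptions, so the exact parameter count of the table is not reproduced.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

def sep_bn_relu(x, filters, kernel=3, strides=1):
    # SeparableConv -> BatchNorm -> ReLU, as described for FV2021
    x = layers.SeparableConv2D(filters, kernel,
                               strides=strides, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_fv2021(input_shape=(96, 96, 1), n_classes=8):
    inp = layers.Input(shape=input_shape)
    # Stem: 32 filters, 7x7, stride 2 (as in the paper)
    x = sep_bn_relu(inp, 32, kernel=7, strides=2)
    # Block 1: skip (identity) connection; filter count is our assumption
    y = sep_bn_relu(x, 32)
    y = sep_bn_relu(y, 32)
    x = layers.Add()([x, y])
    # Block 2: residual connection through a 1x1 convolution
    y = sep_bn_relu(x, 64, strides=2)
    y = sep_bn_relu(y, 64)
    s = layers.Conv2D(64, 1, strides=2, padding="same")(x)
    x = layers.Add()([s, y])
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return Model(inp, out)

model = build_fv2021()   # eight classes, one per FV database
\end{verbatim}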
\subsection{\textbf{Complexity of CNN Architectures}} \label{ssec:xception-or-fv2021} Complexity and memory consumption are vital criteria when selecting an algorithm in general. In particular, when selecting CNN models, the performance, resource consumption, and complexity of the model can be considered the Achilles heel for practical applications. One way to estimate the complexity and resource consumption of a CNN is to calculate the number of trainable parameters used by the architecture. We show the number of total and trainable parameters for each of the discussed CNN architectures in Table \ref{tbl:number-of-params-in-cnn-model}. In addition, the last column shows the number of weighted layers. Please note that in each model the last fully-connected layer (FC) includes the \textit{Softmax}. \begin{table}[!htbp] \centering \renewcommand\arraystretch{1.2} \resizebox{0.4\textwidth}{!}{% \begin{tabular}{l||rrr} \multicolumn{1}{c||}{CNN Model} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Total \\ params\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Trainable\\ params\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Number \\ of Layers\end{tabular} \\ \hline \hline Bondi & 2,681,368 & 2,681,304 & 4 Conv + 2 FC \\ Marra & 65,563,720 & 65,563,720 & 3 Conv + 2 FC \\ VGG16 & 55,097,288 & 55,077,064 & 8 Conv + 3 FC \\ ResNet50 & 23,597,832 & 23,544,712 & 50 Conv + 1 FC \\ Xception & 20,877,296 & 20,822,768 & 36 Conv + 1 FC \\ FV2021 & 314,632 & 314,376 & 6 Conv + 1 FC \\ \end{tabular} } \caption{\small Number of total and trainable parameters and weighted layers in the CNN models} \label{tbl:number-of-params-in-cnn-model} \end{table} Table \ref{tbl:number-of-params-in-cnn-model} reveals that FV2021 has the minimum number of trainable parameters, while the other models have an enormous number of parameters, ranging from about 2.7 to 65 million. Thus, the proposed CNN has the lowest complexity in comparison to the other models discussed in this section. We will discuss the sensor identification performance of the CNN architectures in Section \ref{sec:result}. \section{\textbf{ Finger Vein Sample Data \& Experimental Design}} \label{sec:datast_Exp_design} We consider eight different well-known and publicly accessible finger vein databases, acquired with distinct prototype near-infrared sensing devices. In this work we took 120 samples from each database. The databases are as follows: \begin{enumerate} \item SDUMLA-HMT,\item HKPU-FV,\item IDIAP,\item MMCBNU\_6000 (MMCBNU),\item PLUS-FV3-Laser-Palmar (Palmar),\item FV-USM, \item THU-FVFDT,\item UTFVP. \end{enumerate} Information on the size of the original samples and on how the samples have been withdrawn from the datasets is given in \cite{sollinger2019prnu,maser2021identifying}. \subsection{\textbf{Finger Vein Region of Interest (ROI)}} \label{ssec:roi} In finger vein recognition, features are typically not extracted from raw sample images but from a region of interest, that is, the portion of an image containing only finger vein texture. In addition, an insertion attack can also be mounted using ROI samples (in case the sensor does not deliver a raw sample to the recognition module but ROI data instead). Thus, we produced cropped image samples (ROI datasets) out of the original samples to be able to test our approach on these data as well. To produce the ROI datasets we follow the same approach as proposed by Maser \etal in \cite{maser2021identifying}. The original samples, as shown in Fig.
\ref{fig:histogram_datasamples}, can be discriminated easily: besides the differences in size (which can be adjusted by an attacker, of course), the sample images can probably be distinguished by the extent and luminance of the background. To illustrate this, we display the images' histograms beside each example in Figure \ref{fig:histogram_datasamples}, and those histograms clearly exhibit a very different structure. Thus, we have learned that even texture descriptors have an easy job identifying the origin of the respective original sample images. This is not necessarily the case for ROI data. \begin{figure}[h!] \centering \includegraphics[width=0.49\linewidth]{images/box_plot/Luminance_Uncropped.png} \includegraphics[width=0.49\linewidth]{images/box_plot/Luminance_ROI.png} \caption{Luminance distribution of original and ROI images across all datasets, respectively.} \label{fig: luminance_uncropped} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.49\linewidth]{images/box_plot/Var_Uncropped.png} \includegraphics[width=0.49\linewidth]{images/box_plot/Var_ROI.png} \caption{Variance distribution of original and ROI images across all datasets, respectively.} \label{fig:variance_uncropped} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{.35\textwidth} \vspace{2pt} \centering \raisebox{-\height}{\includegraphics[width=0.45\textwidth]{images/histogram/HKPU_FV_bins128.png}}% \hspace{1pt} \vspace{2pt} \raisebox{-\height}{\includegraphics[width=3.0cm,height=1.7cm]{images/histogram/HKPU-FV_7_1_f2_1.png}}% \hspace{1pt} \raisebox{-\height}{\includegraphics[width=0.45\textwidth]{images/histogram/Hist_UTFVP_bins128.png}}% \vspace{.1ex} \raisebox{-\height}{\includegraphics[width=3.0cm,height=1.7cm]{images/histogram/UTFVP_0001_1_1_120509-135315.png}}% \hspace{1pt} \raisebox{-\height}{\includegraphics[width=0.45\textwidth]{images/histogram/IDIAP_bins128.png}}% \hspace{1pt} \vspace{.5ex} \raisebox{-\height}{\includegraphics[width=3.0cm,height=1.7cm ]{images/histogram/IDIAP_005_L_1.png}} \end{subfigure} \caption{FV image samples and corresponding histograms of original sample images, from top: (a)\hspace{1pt}HKPU-FV dataset, (b)\hspace{1pt}UTFVP dataset, and (c)\hspace{1pt}IDIAP dataset.} \label{fig:histogram_datasamples} \end{figure} To investigate the differences between raw sample data and ROI data in more detail, we have investigated the range of luminance values and their variance across all datasets. Figures \ref{fig: luminance_uncropped} and \ref{fig:variance_uncropped} display the results in the form of box-plots, where the left box-plot corresponds to the original raw sample data and the right one to the ROI data, respectively. We can clearly see that the luminance distribution properties change dramatically once we move our focus from the original datasets to the ROI datasets. For example, original HKPU\_FV samples can be discriminated from FV\_USM, MMCBNU, PALMAR, UTFVP, and THU\_FVFDT ones by just considering the luminance value distribution. For the ROI data, the differences are not very pronounced any more. When looking at the variance value distributions, we observe no such strong discrepancy between original sample and ROI data; still, for some datasets variance can be used as a discrimination criterion (e.g.\ Palmar vs.\ HKPU\_FV in the original data, FV\_USM vs.\ HKPU\_FV in the ROI data). Consequently, we expect the discrimination of the considered datasets to be much more challenging when focusing on the ROI data only.
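The per-dataset statistics behind Figures \ref{fig: luminance_uncropped} and \ref{fig:variance_uncropped} can be reproduced along the following lines (a sketch under the assumption that each dataset is available as a list of 2-D greyscale numpy arrays).
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def luminance_and_variance(datasets):
    """datasets: dict mapping dataset name -> list of greyscale arrays.
    Returns per-image mean luminance and variance for each dataset."""
    lum = {n: [img.mean() for img in imgs] for n, imgs in datasets.items()}
    var = {n: [img.var() for img in imgs] for n, imgs in datasets.items()}
    return lum, var

def boxplot(stats, title):
    names = sorted(stats)
    plt.boxplot([stats[n] for n in names])
    plt.xticks(range(1, len(names) + 1), names, rotation=45)
    plt.title(title)
    plt.tight_layout()
    plt.show()

# lum, var = luminance_and_variance(datasets)
# boxplot(lum, "Luminance distribution")
# boxplot(var, "Variance distribution")
\end{verbatim}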
\subsection{\textbf{Pipeline setup and preprocessing}} \label{ssec:pipline} Each dataset consists of 120 images, 960 images in total. To enhance the image samples and improve the contrast, we applied Contrast Limited Adaptive Histogram Equalization \textit{(CLAHE)}. The entire data (960 images from eight datasets) are shuffled, and we then randomly take 70\% of the data for the training set, 10\% for the evaluation set, and 20\% for the test set. The splitting policy ensures that data samples used during training are never used during validation or testing. Thus, the reported performance is not biased, since the training, validation, and test sets have empty pairwise intersections. In addition, the \textit{Adam} optimizer is applied for all networks except Bondi and Marra (as per the authors' recommendation, SGD is applied as the optimizer in both models). Furthermore, the batch size is set to 64. To feed the input to the CNN models, we normalized the image samples to uniform width and height; the patches fed to the networks, for both uncropped samples and ROI, are of size $96\times96\times1$. To compare the results of the CNN-based approaches with a PRNU-based approach, we use the results given in \cite{sollinger2019prnu}. The authors worked on five patches taken from different locations of the image samples. For the comparison, we consider the results for patch size $320\times240$ to be comparable to our results on original image samples. Similarly, the results for patch size $320\times150$ should be comparable to our results on ROI samples. \subsection{\textbf{Evaluation metrics}} \label{subsec:evaluation_metrics} We use classical measures to rate our sensor identification task, which is basically a multi-class classification problem. We use the area under the curve of the receiver operating characteristic (AUC-ROC), which relates the true positive rate (TPR) to the false positive rate (FPR). The analysis of the AUC-ROC is significant, as it shows the ability of the proposed classifier to distinguish classes. In sensor identification, Precision is a further important metric, because it indicates the proportion of true positives among all positive predictions; a good result can thus be interpreted as high performance of a classifier. Also, in the field of biometrics, it is vital to verify the correct sensors (i.e.\ true positives). In contrast, it would be a catastrophe if the biometric system verified the wrong sensor (false positive). Therefore, Precision is more important than Recall and is consequently also used to assess our results. \section{\textbf{RESULTS}} \label{sec:result} In this section, we discuss the results of applying the six CNN models to sensor identification on original samples (uncropped datasets) as well as cropped samples (ROI data). \subsection{\textbf{Results of the six CNN models}} \label{ssec:results-of-the-six-state-of-the-art-CNN-models} In the following paragraphs, we analyze the outcomes of the six mentioned CNN models. The first and the second columns of Table \ref{tbl:auc-roc-precision-of-all-models-datasamples}\footnote{Results are rounded to five digits after the decimal point} exhibit the AUC-ROC scores of the six applied CNN models on original samples and their corresponding ROI.
\begin{table}[!htbp] \renewcommand\arraystretch{1.5} \begin{center} \resizebox{0.50\textwidth}{!}{% \begin{tabular}{ccccc} \multicolumn{1}{l}{\textbf{}} & \multicolumn{2}{c}{\textbf{AUC-ROC}} & \multicolumn{2}{c}{\textbf{Precision}} \\ \cline{2-5} \multicolumn{1}{c}{\textbf{}} & \multicolumn{1}{c}{\textbf{Uncropped}} & \multicolumn{1}{c}{\textbf{ROI}} & \multicolumn{1}{c}{\textbf{Uncropped}} & \multicolumn{1}{c}{\textbf{ROI}} \\ \hline \hline \multicolumn{1}{c}{\textit{\textbf{Bondi}}} & 0.99997 & 0.99773 & 0.9896 & 0.9873 \\ \multicolumn{1}{c}{\textit{\textbf{Marra}}} & 1.00000 & 0.99856 & 1.0 & 0.9914 \\ \multicolumn{1}{c}{\textit{\textbf{VGG16}}} & 1.00000 & 0.99945 & 1.0 & 0.9964 \\ \multicolumn{1}{c}{\textit{\textbf{ResNet50}}} & 0.99996 & 0.99949 & 0.9948 & 0.9971 \\ \multicolumn{1}{c}{\textit{\textbf{Xception}}} & 1.00000 & 0.99972 & 1.0 & 0.9982 \\ \multicolumn{1}{c}{\textit{\textbf{FV2021}}} & 1.00000 & 0.99970 & 1.0 & 0.9980 \\ \hline\hline \end{tabular}% } \caption{\small Results of the applied CNN models on ROI and original (uncropped) samples} \label{tbl:auc-roc-precision-of-all-models-datasamples} \end{center} \end{table}% As expected, the achieved AUC-ROC scores on original samples are excellent (first column). All CNN models demonstrated near-perfect results; only the scores of the modified Bondi model and of the 50-layer ResNet are slightly below 1.00, by less than 0.0001. We have a similar situation for the ROI datasets (second column): almost all models exhibit excellent results, with the Xception, FV2021, ResNet50, and VGG16 scores above $0.999$. Compared to these four models, the Marra and Bondi results are inferior. \begin{table}[!htbp] \renewcommand\arraystretch{1.5} \begin{center} \resizebox{0.50\textwidth}{!}{% \begin{tabular}{llllll} \textbf{} & \textbf{\begin{tabular}[c]{@{}l@{}}PRNU\\ NCC\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}PRNU\\ PCE\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Texture\\ Descriptor\\ (WMV)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Deep \\ Learning\\ (Xception)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Deep\\ Learning\\ (FV2021)\end{tabular}} \\ \hline\hline \textbf{\begin{tabular}[c]{@{}l@{}} Original \\ Sample\end{tabular}} & 0.992 & 0.991 & 0.999 & 1.0 & 1.0 \\ \textbf{} & & & & & \\ \textbf{ROI} & 0.998 & 0.997 & 0.994 & 0.99972 & 0.99970 \\ \hline\hline \end{tabular} } \caption{\small Comparing results of PRNU, texture descriptor, and deep learning (CNN) methods on original and ROI sample data.} \label{tbl:prnu_texture_descript_cnn} \end{center} \end{table} The third column of Table \ref{tbl:auc-roc-precision-of-all-models-datasamples} displays the Precision scores of all CNN models on original samples (uncropped datasets). The Xception, FV2021, VGG16, and Marra models are superior to the ResNet50 and Bondi models. The Precision scores are either $1.0$ or close to $1.0$; we can thus infer that the results obtained by these four models are highly reliable and accurate. Moving from the original (uncropped) samples to the ROI, the fourth column of Table \ref{tbl:auc-roc-precision-of-all-models-datasamples} exhibits the performance of the applied models on the region of interest. In terms of Precision scores, Xception $\simeq$ FV2021 $>$ ResNet50 $>$ VGG16 $>$ Marra $>$ Bondi.
Thus the Xception model and the proposed FV2021 model improve the results slightly as compared to the others. We would like to emphasize that the number of false positives (FP) for the ResNet50, VGG16, Marra, and Bondi models is relatively high, which causes their Precision scores to be lower than those of the Xception and FV2021 models. \subsection{\textbf{Comparison of various approaches}} \label{ssec:comparing_prnu_texture_descript_and_cnn} In this section, we compare the performance of various approaches for identifying the FV image origin. As explained in Section \ref{ssec:pipline}, we compare to the results of the PRNU-based approach proposed by Söllinger \etal \cite{sollinger2019prnu} and the texture-based approach by Maser \etal \cite{maser2021identifying}. Table \ref{tbl:prnu_texture_descript_cnn} shows the results of these three approaches. We observe the superiority of the deep learning (CNN) methods, using the proposed FV2021 and the Xception CNN models respectively, over the PRNU-based and the texture-based approach. The FV2021 and Xception models compete closely; their results are approximately equal, but due to the significantly lower complexity of FV2021 (confirmed by the analysis of the number of trainable parameters in Table \ref{tbl:number-of-params-in-cnn-model}), we take FV2021 as the superior CNN model. \subsection{\textbf{Single Sensor-based Results}} \label{ssec:Sensor-based-results} In this section, Table \ref{tbl:results-sensor-based-Uncropped} displays the results of the employed CNN models for all uncropped datasets (instead of the overall results shown before). All sensors are discriminated ideally except for MMCBNU; the Xception and Bondi models experienced some difficulties discriminating the MMCBNU sensor.\\ We observe the results of the employed CNN models for all ROI datasets in Table \ref{tbl:results-sensor-based-roi}. The excellent performance of the Xception, FV2021, and ResNet50 models on all sensors can be seen; among these results, only FV2021 achieved scores of at least $0.999$ on every sensor.
\begin{table}[!htbp] \renewcommand\arraystretch{1.5} \begin{center} \resizebox{0.50\textwidth}{!}{% \begin{tabular}{l||llllll} \textbf{Sensor} & \textbf{Bondi} & \textbf{Marra} & \textbf{VGG16} & \textbf{ResNet50} & \textbf{Xception} & \textbf{FV2021} \\ \hline \hline \textbf{UTFVP} & 1.0000 & 1.0 & 1.0 & 1.0 & 1.00 & 1.0 \\ \textbf{FV\_USM} & 1.0000 & 1.0 & 1.0 & 1.0 & 1.00 & 1.0 \\ \textbf{PALMAR} & 1.0000 & 1.0 & 1.0 & 1.0 & 1.00 & 1.0 \\ \textbf{SDUMLA} & 1.0000 & 1.0 & 1.0 & 1.0 & 1.00 & 1.0 \\ \textbf{THU\_FVFDT} & 1.0000 & 1.0 & 1.0 & 1.0 & 1.00 & 1.0 \\ \textbf{IDIAP} & 1.0000 & 1.0 & 1.0 & 1.0 & 1.00 & 1.0 \\ \textbf{MMCBNU} & 0.9997 & 1.0 & 1.0 & 1.0 & 0.96 & 1.0 \\ \textbf{HKPU-FV} & 1.0000 & 1.0 & 1.0 & 1.0 & 1.00 & 1.0 \\ \end{tabular} } \caption{ Results (AUC-ROC scores) of the applied CNN models on all uncropped FV databases.} \label{tbl:results-sensor-based-Uncropped} \end{center} \end{table} \begin{table}[!htbp] \renewcommand\arraystretch{1.5} \begin{center} \resizebox{0.50\textwidth}{!}{% \begin{tabular}{l||llllll} \textbf{Sensors} & \textbf{Bondi}& \textbf{Marra}& \textbf{VGG16} & \textbf{ResNet50} & \textbf{Xception}& \textbf{FV2021} \\ \hline \hline \textbf{UTFVP} & 0.9885 & 0.9985 & 0.9987 & 0.9997 & 0.9985 & 0.9992 \\ \textbf{FV\_USM} & 0.9997 & 0.9997 & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\ \textbf{PALMAR} & 1.0000 & 0.9997 & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\ \textbf{SDUMLA} & 0.9992 & 0.9994 & 1.0000 & 1.0000 & 0.9994 & 1.0000 \\ \textbf{THU\_FVFDT} & 0.9991 & 0.9964 & 0.9997 & 0.9997 & 1.0000 & 0.9993 \\ \textbf{IDIAP} & 0.9975 & 0.9987 & 0.9997 & 0.9982 & 1.0000 & 0.9990 \\ \textbf{MMCBNU} & 0.9995 & 1.0000 & 0.9976 & 0.9997 & 1.0000 & 0.9997 \\ \textbf{HKPU-FV} & 0.9980 & 0.9990 & 0.9992 & 1.0000 & 1.0000 & 1.0000 \\ \end{tabular} } \caption{ Results (AUC-ROC scores) of the applied CNN models on all ROI FV databases.} \label{tbl:results-sensor-based-roi} \end{center} \end{table} \section{\textbf{Conclusion}} \label{sec:conclution} In this research, we studied the results of using five state-of-the-art CNN models and a novel CNN model (FV2021) for sensor identification on the ROI as well as the original finger vein samples. The finger vein samples are taken from eight databases. The performance of the proposed FV2021 and of the Xception model is superior to that of the other CNN models. We then compared the CNN-based results with other results, including PRNU correlation-based and texture descriptor-based research; the CNN-based results show slightly better performance. The two top-performing CNN architectures perform very similarly in terms of sensor identification accuracy, but due to its much lower model complexity, we recommend the proposed FV2021. The results achieved by FV2021 are excellent, i.e., the AUC-ROC score for ROI data is 0.9997 and for original samples it is 1.0. {\small
\section{Introduction}\label{Introduction} The phenomenon of {\em revivals} in linear dispersive periodic problems, also called in the literature {\em Talbot effect} or {\em dispersive quantisation}, has been well studied and is by now well understood. It was discovered first experimentally in optics, then rediscovered several times in theoretical and experimental investigations. While the term has been used systematically and consistently by many authors, there is no consensus on a rigorous definition. Several authors have described the phenomenon by stating that a given periodic time-dependent boundary value problem exhibits {\em revival at rational times} if the solution evaluated at a certain dense subset of times, see \eqref{Rational Time} below, is given by finite superpositions of translated copies of the initial profile. We will call this the {\em periodic revival property}. In particular, when the initial condition has a jump discontinuity at time zero, these discontinuities are propagated and remain present in the solution at each rational time. This behaviour at rational times should be contrasted with the behaviour at generic times, when the solution is known to be continuous as soon as the initial condition is of bounded variation. Hence, while the dispersive propagation generically has a smoothing effect on any initial discontinuity, this smoothing does not occur at rational times. Moreover, at generic times and for appropriate initial data, the solution, while continuous, is nowhere differentiable. In fact its graph has a fractal dimension greater than 1 \cite{berry1996integer, rodnianski2000fractal}. There is therefore a dichotomy between generic times and the measure-zero set of rational times, as suggested also by the provocative title of \cite{kapitanski1999does}. In this paper we examine the role of boundary conditions in supporting some form of revival phenomenon. In order to illustrate the range of revival behaviour concretely, we focus on two specific linear PDEs of particular significance, both from the mathematical point of view and in terms of applications. Namely, we will consider the linear Schr\"odinger equation with zero potential \begin{align} & iu_t(x,t)+u_{xx}(x,t)=0, \label{linS} \tag{LS} \end{align} and the Airy equation, also known as the Stokes problem, \begin{align} &u_t(x,t) -u_{xxx}(x,t)=0. \label{airy} \tag{AI} \end{align} Both these PDEs will be posed on the interval $[0,2\pi]$, and we set specific boundary conditions either of pseudo-periodic or of Robin type. These represent two essentially different types of boundary conditions. Indeed, in the pseudo-periodic case the boundary conditions couple the ends of the interval, just as in the periodic case, while in the Robin\ case the boundary conditions are uncoupled. The type of revival property that we observe in the two cases strongly reflects this difference. Let $n$ denote the order of the spatial derivative in the PDE, hence $n=2$ for \eqref{linS} and $n=3$ for \eqref{airy}. In the first part of the paper, following the work of \cite{olver2018revivals}, we will consider specific types of {\em pseudo-periodic} boundary conditions of the form \begin{equation} \beta_k\partial_x^k u(0,t)=u(2\pi,t),\quad \beta_k\in\mathbb{C},\;\;k=0,1,...,n-1 . \label{ppbc} \tag{PP} \end{equation} Of particular interest will be the case when all $\beta_k$ are equal, that is, {\em quasi-periodic} boundary conditions of the form \begin{equation} \beta \partial_x^ku(0,t)=u(2\pi,t),\quad \beta\in\mathbb C,\;\;k=0,1,...,n-1 .
\label{qpbc} \tag{QP} \end{equation} In the second part of the paper, we will consider Schr\"{o}dinger's equation \eqref{linS} with the specific Robin boundary conditions given by \begin{equation} b u(x_{0},t) = (1-b) \partial_{x}u(x_{0},t),\ x_{0} = 0, \ \pi,\quad b \in [0,1]. \label{rbc} \tag{R} \end{equation} The case $b=0$ corresponds to Neumann and $b=1$ to Dirichlet boundary conditions. For these special cases, the solution of the boundary value problem is obtained by even or odd extensions from the solution of an associated periodic problem. However, for $0<b<1$ the boundary value problem behaves very differently from a periodic one. It is well established that the periodic problem for any linear dispersive equation exhibits periodic revival (see Theorem~\ref{ET Theorem} below). Moreover, subject to consistency conditions on the coefficients $\beta_k$, in \cite{olver2018revivals} it was shown that the periodic revival property holds in general for the equation \eqref{linS}-\eqref{ppbc}. Below, we give a new proof of the latter. Our arguments elucidate the mathematical reason for the persistence of the periodic revival property for the linear Schr{\"o}dinger equation \eqref{linS} when subject to the fairly general class of boundary conditions \eqref{ppbc}. In particular, we show that all pseudo-periodic boundary conditions can be solved in terms of certain associated {\em periodic} problems. This is the content of Proposition \ref{Non-Self-Adjoint Correspondence Theorem}, which also enables us to deduce from existing results for the periodic case that, at irrational times, any initial discontinuity is smoothed out. To be precise, even when the initial profile has jump discontinuities, the solution at irrational times becomes a continuous (though nowhere differentiable) function of the space variable. The spectral properties of pseudo-periodic and other non-periodic boundary value problems for \eqref{airy} were first examined in \cite{pelloni2005spectral}, where an explicit general formula for the solution was given. Below we show that, in stark contrast with \eqref{linS}, the quasi-periodic Airy equation, \eqref{airy}-\eqref{qpbc}, in general does not exhibit any form of revival at rational times. Indeed, the periodic revival property holds in this case {\em only for values of the quasi-periodicity parameter} $\beta=e^{2\pi i \theta}$ such that $\theta\in\mathbb Q$. Remarkably, the latter defies the na\"ive expectation that the revival property carries over to higher order PDEs when the boundary conditions support it in the second order case. It also suggests that the general pseudo-periodic case for third-order PDEs, generically, will not exhibit revivals. In Section~\ref{Airy's Quasi-Periodic Problem} we prove the following result. \begin{theorem} \label{Airy Correspondence Theorem} Fix $\theta \in [0,1)$ and consider Airy's equation \eqref{airy} with initial condition $u_{0}\in L^{2}(0,2\pi)$ and quasi-periodic boundary conditions \eqref{qpbc} where $\beta = e^{i2\pi\theta}$. Let $p$ and $q$ be co-prime and let \begin{equation} \label{Airy correspondence IC} v^{(p,q)}_{0}(x) = \mathcal{R}_{3}(p,q)\left[u_{0}(x) e^{-i\theta x}\right], \end{equation} where $\mathcal{R}_{3}(p,q)$ is the third order revival operator defined below in \eqref{Revival Operator}.
Then, the solution at rational time $t_{\mathrm{r}}= 2\pi\frac{p}{q}$ is given by \begin{equation} \label{Airy Correspondence} u(x,t_{\mathrm{r}}) = e^{-i t_{\mathrm{r}}\theta^{3}} e^{i\theta x} \mathcal{T}_{3\theta^{2}t_{\mathrm{r}}} v^{(p,q)}(x,3\theta t_{\mathrm{r}}). \end{equation} Here $\mathcal{T}_{s}$ is the translation operator (see \eqref{Periodic Translation Operator}) and $v^{(p,q)}(x,t)$ denotes the solution of the periodic problem for the Schr\"{o}dinger equation with initial condition $v^{(p,q)}_{0}$. \end{theorem} It is clear from the representation \eqref{Airy Correspondence} that we can expect revivals for the Airy quasi-periodic problem only when $\theta\in\mathbb{Q}$, see Corollary~\ref{Airy QP Revival}. Indeed, if $\theta\not\in\mathbb{Q}$, then the time $3 \theta t_{\mathrm{r}}$ is an irrational time for the solution of a periodic problem for the Schr\"{o}dinger equation, which is therefore a continuous function of $x$. We are not aware of any previous result in the literature concerning the failure of any form of revival to hold for linear dispersive PDEs with coupling boundary conditions. \smallskip We devote the final part of the paper to the linear Schr{\"o}dinger equation \eqref{linS} with the Robin type boundary conditions \eqref{rbc}. In this case the boundary conditions do not couple the ends of the interval, in contrast with all other situations considered here. We show that only a weaker form of revival holds, leading us to reconsider what constitutes revival for a linear dispersive evolution equation. Specifically, we show that, while the solution is not given by finitely many translates of the initial condition, the presence of a periodic term in the solution representation guarantees that the dichotomy between the persistence versus regularisation of discontinuities at rational versus generic times still holds. Our main statement can be formulated as follows. \begin{theorem} \label{Robin Revival Corollary} Consider the linear Schr\"{o}dinger equation \eqref{linS} with initial condition $u_{0} \in L^{2}(0,\pi)$ and Robin boundary conditions \eqref{rbc} with $b\not= 0, 1$. Let $p$ and $q$ be co-prime and $\mathcal{R}_2(p,q)$ be the second order revival operator defined below by \eqref{Revival Operator}. Let \begin{equation*} f_{1}(x) = \sqrt{\frac{\pi}{2}}\frac{b}{(1-b)(e^{\frac{2\pi b}{1-b}} - 1)} e^{\frac{b}{1-b}x}, \quad x\in (0,2\pi). \end{equation*} Then, the solution at rational time $t_{\mathrm{r}}=2\pi\frac{p}{q}$ is given by \begin{equation} \begin{aligned} \label{Robin Revival} u(x, t_{\mathrm{r}}) = &2\sqrt{\frac{2}{\pi}} \langle u_{0},e^{\frac{b}{1-b}(\cdot)}\rangle_{L^{2}(0,\pi)} e^{i\frac{b^{2}}{(1-b)^{2}}t_{\mathrm{r}}} f_{1}(x) + \mathcal{R}_2(p,q)\left[u_0^+(x)\right] \\ &+ \mathcal{R}_2(p,q)\left[2\left(f_1*(u_0^--u_0^+)\right)(x)\right], \quad x\in(0,\pi), \end{aligned} \end{equation} where $u_0^{\pm }(x)$ denote the even/odd extensions of the initial condition to $(0,2\pi)$, and $*$ denotes the $2\pi$-periodic convolution. \end{theorem} We conjecture that this weaker form of revival is generic in the case of boundary conditions that do not couple the interval endpoints. Our observations in this case complement those reported in \cite{boulton2020new}, illustrating a new kind of revival phenomenon. \subsection*{Periodic revival} The original terminology seems to have originated from the experimentally observed phenomenon of \emph{quantum revival}~\cite{berry2001quantum, vrakking, yeazell}.
This describes how an electron that is initially concentrated near a single location of its orbital shell is found concentrated again, at certain specific times, near a finite number of orbital locations. This led pure mathematicians to pose the question of whether a quantum particle {\em knows} the time \cite{kapitanski1999does}. A precursor of the phenomenon was observed as far back as 1834 in optical experiments performed by Talbot \cite{talbot1836lxxvi}. This motivated the pioneering work of Berry and collaborators \cite{berry1996quantum, berry1996integer, berry2001quantum} on what they called the {\em Talbot effect} in the context of the linear free space Schr\"odinger equation. The concept was later extended to a class of linear dispersive equations that includes the linearised Korteweg--de Vries equation, first by Oskolkov \cite{oskolkov1992class}, and subsequently rediscovered by Olver \cite{olver2010dispersive}, who called the effect \emph{dispersive quantisation}. It was later extended by Erdo\u{g}an and Tzirakis (see the monograph \cite{erdougan2016dispersive} and references therein). An exhaustive introduction to the history and context of this phenomenon can be found in the recent survey \cite{smith2020revival}. Questions have also been addressed on the fractal dimension of the solution profile at irrational times, hence almost everywhere in time, some of them resolved by Rodnianski in \cite{rodnianski2000fractal}. In a different direction, Olver and Chen in \cite{chen2013dispersion} and \cite{chen2014numerical} observed and confirmed numerically the revival and fractalisation effects in some nonlinear evolution problems, both integrable and non-integrable. A number of their observations have been rigorously confirmed in \cite{erdogan2013talbot}, \cite{chousionis2014fractal}, \cite{erdougan2019fractal} by Erdo\u{g}an, Tzirakis, Chousionis and Shakan, and more recently in \cite{boulton2020new} for linear integro-differential dispersive equations. The direct link between our present findings and this periodic framework can be put into perspective by following \cite[\S2.3]{erdougan2016dispersive}, as we briefly summarise. Consider general linear dispersive equations of the form \begin{equation} \label{Linear Evolution Problems} u_{t}(x,t) +i P(-i\partial_{x})u(x,t)=0,\quad x\in [0,2\pi],\ t>0, \end{equation} where $P(\cdot)$ is a polynomial of degree $n$ with integer coefficients. Consider purely periodic boundary conditions, \emph{i.e.} \eqref{qpbc} with $\beta=1$. For initial datum $u(x,0)=u_0(x)\in\,L^2(0,2\pi) = L^{2}$, the solution is given in terms of the eigenfunction expansion \begin{equation} \label{Linear Evolution Problems solution} u(x,t) =\sum_{m\in\mathbb Z}\widehat{u_{0}}(m)e^{-iP(m) t}e_m(x),\qquad \widehat{u_{0}}(m)=\langle u_0,e_m\rangle, \end{equation} where \begin{equation} \label{Periodic Eigenpairs 2} e_{m}(x) = \frac{e^{imx}}{\sqrt{2\pi}},\qquad \langle f,g\rangle=\int_0^{2\pi} f(x)\overline{g(x)} dx,\;\;f,g\in\,L^2. \end{equation} The family $\{e_m\}_{m\in\mathbb Z}$ is the orthonormal family of eigenfunctions of the self-adjoint periodic operator $P(-i\partial_{x})$. Note that the latter has a compact resolvent. If $u_0$ is continuous and periodic, the expression \eqref{Linear Evolution Problems solution} is also a continuous periodic function of $x$ and $t$.
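For later reference, we note that the expansion \eqref{Linear Evolution Problems solution} is straightforward to evaluate numerically. The following Python sketch (ours, based on the FFT; the grid size, truncation level and the step initial datum are illustrative choices) computes the periodic solution for \eqref{linS}, for which $P(m)=m^{2}$; the same routine applies to \eqref{airy} with $P(m)=m^{3}$.
\begin{verbatim}
import numpy as np

def periodic_solution(u0_vals, t, M=None, P=lambda m: m**2):
    """Evaluate u(x,t) = sum_m u0hat(m) e^{-iP(m)t} e^{imx} on the
    uniform grid x_j = 2*pi*j/N, via the FFT of the samples u0_vals."""
    N = len(u0_vals)
    c = np.fft.fft(u0_vals) / N          # coefficients of e^{imx}
    m = np.fft.fftfreq(N, d=1.0 / N)     # integer modes -N/2 .. N/2-1
    if M is not None:
        c[np.abs(m) > M] = 0.0           # truncate the series at |m| <= M
    return np.fft.ifft(c * np.exp(-1j * P(m) * t) * N)

# Step initial datum: u0 = 1 on (0,pi), 0 on (pi,2*pi)
x = 2 * np.pi * np.arange(2048) / 2048
u0 = (x < np.pi).astype(complex)
u_rational = periodic_solution(u0, 2 * np.pi / 4)  # rational time: revival
u_generic  = periodic_solution(u0, 1.0)            # generic time: continuous
\end{verbatim}
At the rational time the computed profile is piecewise constant, while at the generic time it is continuous but rough, in line with the dichotomy described above.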
The cases of equations \eqref{linS} and \eqref{airy}, corresponding to $P(k)=k^n$ with $n=2,3$, are among the simplest linear evolution equations, but they are significant as they also appear as the linear parts of important nonlinear PDEs of mathematical physics, namely the nonlinear Schr\"odinger and KdV equations respectively. We focus now on the countable set of rational times, defined as follows. \begin{definition} We say that $t>0$ is a \emph{rational time} for the evolution problem \eqref{Linear Evolution Problems} if there exist co-prime, positive integers $p,q\in \mathbb N$ such that \begin{equation}\label{Rational Time} t=2\pi \frac p q. \end{equation} \end{definition} A self-contained proof of the following general result can be found in \cite[Theorem~2.14]{erdougan2016dispersive}. This result says that, at these rational times, the solution of the periodic problem for \eqref{Linear Evolution Problems} has an explicit representation in terms of translates of $u_0$. \begin{theorem}[Periodic Revival] \label{ET Theorem} Consider equation \eqref{Linear Evolution Problems}, with initial condition $u(x,0)=u_0(x)\in L^{2}$ and purely periodic boundary conditions, \eqref{ppbc} with all $\beta_{k}=1$. At a rational time $t$ given by \eqref{Rational Time} the solution $u(x,t)$ is given by \begin{equation} \label{ET Formula} u\left(x,2\pi\frac{p}{q}\right) = \frac{1}{q}\sum_{k=0}^{q-1} G_{p,q}(k) u_{0}^{\ast} \left(x - 2\pi \frac{k}{q}\right), \end{equation} where $u_{0}^{*}$ is the $2\pi$-periodic extension of $u_{0}$, see \eqref{Periodic Extension} below. The coefficients $G_{p,q}(k)$ are given by \begin{equation} \label{ET Gauss Sum} G_{p,q} (k) = \sum_{m=0}^{q-1}e^{-2\pi iP(m)\frac{p}{q}} e^{2\pi i m\frac{k}{q}}. \end{equation} \end{theorem} Note that the functions $G_{p,q} (k)$ in \eqref{ET Gauss Sum} are periodic number-theoretic functions $(\operatorname{mod}q)$ of Gauss type, \emph{cf.}~\cite[\S27.10]{nist2010}, but they are not Gauss sums, as the coefficients $e^{-2\pi iP(m)\frac{p}{q}}$ are not Dirichlet characters. The representation given in Theorem \ref{ET Theorem} describes explicitly the ``revival'' of the initial condition at rational times: translated copies of it are the building blocks of the solution representation. This is in contrast with the behaviour at generic, irrational times. For such times, the solution is continuous and indeed can be shown to have fractal behaviour as soon as the initial condition is sufficiently rough. To be more precise, the following result and Theorem~\ref{ET Theorem} complement one another for the case of \eqref{linS}, see \cite{rodnianski2000fractal}. \begin{theorem}\label{Fractalisation LS} Let $P(k)=k^2$ and assume that the hypotheses of Theorem~\ref{ET Theorem} hold true. Assume, additionally, that $u_0(x)$ is of bounded variation. Then, the solution for any value of $t$ that is not of the form \eqref{Rational Time} is a continuous function of $x$. Moreover, if \[ u_0\notin \bigcup_{s>1/2} H^s(0,2\pi), \] where $H^s$ denotes the standard Sobolev space of order $s$, then for almost every $t$ the solution is nowhere differentiable, and the graph of the real part of the solution has fractal dimension $3/2$. \end{theorem} A similar result holds in general for equation \eqref{Linear Evolution Problems}, see \cite{erdougan2016dispersive}.
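The finite superposition \eqref{ET Formula} can be checked directly against the series solution. The sketch below (again ours; it reuses the routine periodic_solution from the previous listing and assumes the grid size is divisible by $q$, so that the translates fall on grid points) computes the coefficients $G_{p,q}(k)$ of \eqref{ET Gauss Sum} and assembles the right-hand side of \eqref{ET Formula}.
\begin{verbatim}
import numpy as np

def revival(u0_vals, p, q, P=lambda m: m**2):
    """(1/q) sum_k G_{p,q}(k) u0*(x - 2*pi*k/q) on the same grid as
    u0_vals; requires len(u0_vals) divisible by q."""
    N = len(u0_vals)
    assert N % q == 0, "grid must resolve the translates x - 2*pi*k/q"
    m = np.arange(q)
    u = np.zeros(N, dtype=complex)
    for k in range(q):
        G = np.sum(np.exp(-2j * np.pi * P(m) * p / q)
                   * np.exp(2j * np.pi * m * k / q))
        u += G * np.roll(u0_vals, k * (N // q))  # periodic translate
    return u / q

# Agreement with the series solution at the rational time t = 2*pi*p/q:
# np.allclose(revival(u0, 1, 4), periodic_solution(u0, 2 * np.pi / 4))
\end{verbatim}
At a generic (irrational) time no such finite superposition is available, in accordance with Theorem~\ref{Fractalisation LS}.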
\section{Revival operators} This section is devoted to the notion of \emph{revival operators}, which can be regarded as the basic building blocks of the revival formula \eqref{ET Formula} for solutions of linear dispersive PDEs whose dispersion polynomial is of the form $P(k)=k^{\ell}$, $\ell\in\mathbb N$. They provide a compact notation, e.g.\ for the statement of Theorem~\ref{ET Theorem}. More significantly, while it is straightforward to compute the corresponding Fourier representation, they give the crucial link for extending the revival results from known cases, such as the linear Schr\"odinger equation, to more general pseudo-periodic problems and to the higher order case, in particular the Airy equation. Here and everywhere below, we will denote by $f^{\ast}$ the $2\pi$-periodic extension to $\mathbb R$ of a function $f$ defined on $[0,2\pi]$. Explicitly, \begin{equation} \label{Periodic Extension} f^{\ast} (x) = f(x - 2\pi m), \qquad 2\pi m\leq x < 2\pi (m+1), \quad m\in\mathbb{Z}. \end{equation} Because of the role of specific translation operators in what follows, we set our notation with the following definition. \begin{definition}[Periodic translation operator] Let $s\in\mathbb{R}$. The \emph{periodic translation operator} $\mathcal{T}_{s}:L^{2} \rightarrow L^{2}$ is given by \begin{equation} \label{Periodic Translation Operator} \mathcal{T}_{s}f(x) = f^{\ast} (x-s), \qquad x\in[0,2\pi). \end{equation} \end{definition} Note that the $\mathcal{T}_s$ are isometries. In our scaling, the Fourier coefficients of $\mathcal{T}_{s}f$, $f\in L^2$, turn out to be \begin{equation} \label{Periodic Translation Operator FC} \widehat{\mathcal{T}_{s}f}(m) = \int_{0}^{2\pi} \mathcal{T}_{s}f(x) \overline{e_m(x)}dx = e^{-ims} \widehat{f}(m). \end{equation} Revival operators are formed as finite linear combinations of specific translation operators. \begin{definition}[Periodic revival operator] \label{Periodic Revival Operator} Let $p$ and $q$ be integers and co-prime. Let $\ell \in \mathbb{N}$. The \emph{periodic revival operator} $\mathcal{R}_{\ell}(p,q): L^{2}\rightarrow L^2$ of order $\ell$ at $(p,q)$ is given by \begin{equation} \label{Revival Operator} \mathcal{R}_{\ell}(p,q) f = \frac{\sqrt{2\pi}}{q}\sum_{k=0}^{q-1} G^{(\ell)}_{p,q}(k)\mathcal{T}_{\frac{2\pi k}{q}}f, \quad G^{(\ell)}_{p,q}(k)=\sum_{m=0}^{q-1} e^{-im^{\ell}\frac{2\pi p}{q}} e_{m}\left(\frac{2\pi k}{q}\right), \end{equation} where $e_{m}(x)$ are the normalised eigenfunctions of the $2\pi$-periodic problem given in \eqref{Periodic Eigenpairs 2}. \end{definition} As we shall see next, it follows from the Fourier representation that all periodic revival operators are isometries. \begin{lemma} \label{Revival Operator Lemma} Let $p$ and $q$ be integers and co-prime. Let $\ell \in \mathbb{N}$. Then, $\mathcal{R}_{\ell}(p,q)$ given by \eqref{Revival Operator} is an isometry of $L^2$. Moreover, for all $f \in L^2$ we have \begin{equation} \label{Revival Operator Lemma 1} \langle \mathcal{R}_{\ell}(p,q)f,e_{j}\rangle = e^{-ij^{\ell}\frac{2\pi p}{q}} \widehat{f}(j). \end{equation} \end{lemma} \begin{proof} In order to deduce that $\mathcal{R}_{\ell}(p,q)$ is an isometry on $L^2$, it is enough to prove \eqref{Revival Operator Lemma 1}.
From the left hand side of \eqref{Revival Operator} and from \eqref{Periodic Translation Operator FC}, it follows that \begin{equation*} \begin{aligned} \langle \mathcal{R}_{\ell}(p,q)f,e_{j}\rangle = \frac{\sqrt{2\pi}}{q} \sum_{k=0}^{q-1} G^{(\ell)}_{p,q}(k) \langle \mathcal{T}_{\frac{2\pi k}{q}}f,e_{j}\rangle = \widehat f(j)\frac{\sqrt{2\pi}}{q} \sum_{k=0}^{q-1}e^{-ij\frac{2\pi k}{q}}G^{(\ell)}_{p,q}(k).\end{aligned} \end{equation*} Substituting the right hand side of \eqref{Revival Operator}, we get \begin{equation*} \langle \mathcal{R}_{\ell}(p,q)f,e_{j}\rangle = \frac{\widehat{f}(j)}{q} \sum_{k=0}^{q-1} \sum_{m=0}^{q-1} e^{-im^{\ell}\frac{2\pi p}{q}} e^{i(m-j)\frac{2\pi k}{q}}=\frac{\widehat{f}(j)}{q} \sum_{m=0}^{q-1}e^{-im^{\ell}\frac{2\pi p}{q}} \sum_{k=0}^{q-1} e^{i(m-j)\frac{2\pi k}{q}}. \end{equation*} Now, if $m\not\equiv j$ $(\operatorname{mod} q)$, then there exists $z\in\mathbb{Z}$ not a multiple of $q$, such that $m-j = z_{1}q+z$ for $z_{1}\in\mathbb{Z}$. Hence, \begin{equation*} \sum_{k=0}^{q-1} e^{i(m-j)\frac{2\pi k}{q}} = \sum_{k=0}^{q-1} e^{iz_{1}q\frac{2\pi k}{q}} \ e^{iz\frac{2\pi k}{q}} = \sum_{k=0}^{q-1}\left(e^{i2\pi\frac{z}{q}}\right)^{k} = \frac{1 - \left(e^{i2\pi\frac{z}{q}}\right)^{q}}{1-e^{i2\pi\frac{z}{q}}} = 0. \end{equation*} On the other hand, whenever $m \equiv j$ $(\operatorname{mod}q)$, we have $m-j = z_2q$ for $z_2\in\mathbb{Z}$ and so \begin{equation*} \sum_{k=0}^{q-1} e^{i(m-j)\frac{2\pi k}{q}} = \sum_{k=0}^{q-1} e^{iz_2q\frac{2\pi k}{q}} = q. \end{equation*} Moreover, in this case we know that, for any $\ell\in\mathbb{N}$, $m^{\ell} \equiv j^{\ell}$ $(\operatorname{mod} q)$, and so $m^{\ell} = j^{\ell} + z_3q$ for some other $z_3\in\mathbb{Z}$, hence \begin{equation*} e^{-im^{\ell}\frac{2\pi p }{q}} = e^{-ij^{\ell}\frac{2\pi p }{q}} e^{-i z_3q \frac{2\pi p}{q}} = e^{-ij^{\ell}\frac{2\pi p }{q}}. \end{equation*} Therefore, as $m$ runs from $0$ to $q-1$, we find that \begin{equation*} \langle \mathcal{R}_{\ell}(p,q)f,e_{j}\rangle = \widehat{f}(j) e^{-ij^{\ell}\frac{2\pi p}{q}} , \end{equation*} as claimed. \end{proof} The proof of the lemma above relies on elementary arguments and depends on the specific form of the eigenfunctions $e_m(x)$ and their periodicity. This is in fact at the heart of the periodic revival phenomenon. It strongly suggests that such a phenomenon depends crucially on periodicity and will not survive if other boundary conditions are prescribed. The investigation of the validity of this statement is the motivation for this work. By immediate substitution, Theorem~\ref{ET Theorem} applied to the linear Schr\"{o}dinger and Airy equations can be reformulated in terms of revival operators. \begin{lemma} \label{Sch Airy Periodic Revivals Lemma} Let $u_{0}\in L^2$ and assume periodic boundary conditions, \eqref{qpbc} with $\beta=1$. At rational time $t=2\pi\frac p q$, the solution to the periodic problem for equation \eqref{linS} starting at $u_0$ is given by \begin{equation}\label{R2ls} u\left(x,2\pi\frac{p}{q}\right) = \mathcal{R}_{2}(p,q)u_{0}(x) \end{equation} and the solution to the periodic problem for equation \eqref{airy} starting at $u_0$ is given by \begin{equation}\label{R2ai} u\left(x,2\pi\frac{p}{q}\right) = \mathcal{R}_{3}(p,q)u_{0}(x).
\end{equation} \end{lemma} \section{Pseudo-periodic problems for the linear Schr\"odinger equation}\label{pseudo ls} In this section we give an alternative proof of the results reported in \cite{olver2018revivals}, by deriving a new representation of the solution of the problem \eqref{linS}-\eqref{ppbc}, namely \begin{equation} \label{Schrodinger Pseudo-periodic Problem} \begin{aligned} &iu_t+u_{xx}=0, \quad u(x,0) = u_{0}(x)\in\, L^2,\\ &\beta_{0} u(0,t) = u(2\pi,t), \quad \beta_{1} u_x(0,t) =u_x(2\pi,t), \end{aligned} \end{equation} where $\beta_{0}, \ \beta_{1}\in\mathbb{C}$ satisfy \[ \arccos\left(\frac{1+\beta_0\beta_1}{\beta_0+\beta_1}\right)\in\mathbb{R}. \] The latter condition ensures that all the eigenvalues of the underlying (closed) spatial operator are real and that this operator has a family of eigenfunctions which is complete in $L^2$, forming a bi-orthogonal basis. Moreover, this family reduces to an orthonormal basis, i.e. the operator is self-adjoint, if and only if $\overline{\beta_0}\beta_1=1$. For details, see \cite{olver2018revivals}. Our goal is to show that the solution of \eqref{Schrodinger Pseudo-periodic Problem} can be written as the sum of four terms, each obtained as the solution of a {\em periodic} problem. These four periodic problems start from an initial condition obtained by a suitable transformation of the given initial $u_{0}(x)$. In order to construct a solution of \eqref{Schrodinger Pseudo-periodic Problem}, we consider the bi-orthogonal basis $\{\phi_{j},\psi_{\ell}\}_{j,\ell\in\mathbb{Z}}$ formed by the eigenfunctions of the spatial operator and their adjoint pairs. The spectral problem is given by \begin{equation} \label{Non-Self-adjoint eigenvalue problem} -\phi''(x) = \lambda \phi(x), \quad \beta_{0}\phi(0)=\phi(2\pi), \ \beta_{1}\phi'(0)=\phi'(2\pi). \end{equation} As shown in \cite{olver2018revivals}, the eigenvalues $\{\lambda_j\}_{j\in \mathbb{Z}}$ are given by \begin{equation} \label{Non-Self-adjoint Eigenvalues} \lambda_{j} = k_{j}^{2}, \quad k_{j} = (j+k_{0}), \quad k_{0} = \frac{1}{2\pi}\arccos\left(\frac{1+\beta_{0}\beta_{1}}{\beta_{0}+\beta_{1}}\right), \end{equation} and the corresponding eigenfunctions are \begin{equation} \label{Non-Self-adjoin eigefunctions} \phi_{j}(x) = \frac{1}{\sqrt{2\pi \tau}}(e^{ik_{j}x} + \Lambda_{0}e^{-ik_{j}x}), \end{equation} where \begin{equation} \label{Non-Self-adjoint Various CST} \tau = \frac{(\gamma^2 + 1)(\beta_{0}\beta_{1}+1)-2\gamma(\beta_{0}+\beta_{1})}{(\beta_{0} \gamma - 1)(\beta_{1}\gamma - 1)}, \ \Lambda_{0} = \frac{\gamma - \beta_{0}}{\beta_{0} - \gamma^{-1}}= \frac{\gamma - \beta_{1}}{\gamma^{-1} - \beta_{1}}, \end{equation} and \begin{equation} \label{Non-Self-Adjoint Gamma} \gamma = e^{ik_{j}2\pi} = e^{i2\pi k_{0}} = \frac{1+\beta_{0}\beta_{1}}{\beta_{0}+\beta_{1}} + i \sqrt{1 - \left(\frac{1+\beta_{0}\beta_{1}}{\beta_{0}+\beta_{1}}\right)^{2}}. \end{equation} We require also the eigenfunctions of the adjoint spectral problem \begin{equation} \label{Dual eigenvalue problem} -\psi''(x) = \lambda \psi(x), \quad \psi(0)=\bar{\beta_{1}}\psi(2\pi), \ \psi'(0)=\bar{\beta_{0}}\psi'(2\pi). \end{equation} These are given by \begin{equation} \label{Dual eigefunctions} \psi_{j}(x) = \frac{1}{\sqrt{2\pi\tau}}(e^{ik_{j}x} + I_{0}e^{-ik_{j}x}) \end{equation} where $\tau$ is as in \eqref{Non-Self-adjoint Various CST} and \begin{equation} \label{Dual Various CST} I_{0} = \frac{\gamma - 1/\bar{\beta_{1}}}{1/\bar{\beta_{1}} - \gamma^{-1}}.
\end{equation} The family $\{\phi_{j}\}_{j\in\mathbb{Z}}$ is a complete system in $L^2$. Then, for any fixed time $t\geq 0$ and initial condition $u_{0}\in L^2$, the solution to \eqref{Schrodinger Pseudo-periodic Problem} is given by the spectral expansion \begin{equation} \label{Non-Self-Adjoint Solution} \begin{aligned} &u(x,t) = \sum_{j\in\mathbb{Z}} \langle u_{0} , \psi_{j}\rangle e^{-ik_{j}^{2}t} \phi_{j} (x) \\ & = \frac{1}{2\pi\tau}\sum_{j\in\mathbb{Z}} \Big(\int_{0}^{2\pi}u_{0}(y)e^{-ik_{j}y} dy + \bar{I}_{0}\int_{0}^{2\pi} u_{0}(y)e^{ik_{j}y}dy\Big) e^{-ik_{j}^{2}t} \big(e^{ik_{j}x} + \Lambda_{0} e^{-ik_{j}x}\big). \end{aligned} \end{equation} Our alternative proof that this problem exhibits the periodic revival phenomenon will rely on the existence of revivals for suitable periodic problems. Given $u_{0}\in L^2$, we define $v_{0}$, $w_{0}\in \,L^2$ as \begin{equation} \label{Non-Self-adjoint initial condition} v_{0} (x) = u_{0}(x) e^{-i k_{0}x}, \quad w_{0}(x) = u_{0}(x) e^{i k_{0}x}, \end{equation} where $k_0\in\mathbb{R}$ is defined in \eqref{Non-Self-adjoint Eigenvalues}. For any $f\in L^2$, we will denote by the symbol $f^{\natural}(x)$ the reflection of $f(x)$ with respect to $x=\pi$, namely \begin{equation} \label{Reflected initial condition} f^{\natural}(x) = f(2\pi-x). \end{equation} \begin{proposition} \label{Non-Self-Adjoint Correspondence Theorem} The solution $u(x,t)$ of \eqref{Schrodinger Pseudo-periodic Problem} admits the following representation, \begin{equation} \label{Non-Self-adjoint Correspondence} \begin{aligned} u(x,t) = \frac{e^{-i k_{0}^{2}t}}{\tau} \Big\{& e^{ik_{0}x} \mathcal{T}_{2k_0t} v(x,t) + \Lambda_{0} e^{-ik_{0}x} \mathcal{T}_{-2k_0t} v^{\natural}(x,t) \\ & + \bar{I}_{0} e^{ik_{0}x} \mathcal{T}_{2k_0t} w^{\natural}(x,t) + \Lambda_{0}\bar{I}_{0} e^{-ik_{0}x} \mathcal{T}_{-2k_0t} w(x,t) \Big\}, \end{aligned} \end{equation} where $\mathcal{T}_{s}$ is the translation operator defined by \eqref{Periodic Translation Operator}, the constants $\tau$, $\Lambda_{0}$ are given in \eqref{Non-Self-adjoint Various CST} and $I_{0}$ by \eqref{Dual Various CST}. Here $v,w,v^{\natural},w^{\natural}$ are the solutions of the periodic problem, \emph{i.e.} $\beta_0=\beta_1=1$, with initial conditions as follows, \begin{itemize} \item $v(x,t)$ denotes the solution corresponding to initial condition $v_0(x)$ \item $w(x,t)$ denotes the solution corresponding to initial condition $w_0(x)$ \item $v^{\natural}(x,t)$ denotes the solution corresponding to initial condition $v^{\natural}_0(x)$ \item $w^{\natural}(x,t)$ denotes the solution corresponding to initial condition $w^{\natural}_0(x)$. \end{itemize} \end{proposition} Before giving a proof, we highlight an important consequence of this proposition. Substituting the expression \eqref{R2ls} for the solution of the periodic problem in \eqref{Non-Self-adjoint Correspondence}, one obtains revival for the pseudo-periodic linear Schr\"odinger equation.
\begin{corollary}[Pseudo-periodic revival property] \label{Non-Self-adjoint Pseudo-periodic Revival} The solution of the pseudo-periodic problem \eqref{Schrodinger Pseudo-periodic Problem} at rational times is given by \begin{equation} \label{Non-Self-adjoint Revival} \begin{aligned} u\left(x, 2\pi\frac{p}{q}\right) = &\frac{e^{-i\frac{2\pi k_{0}^{2}p}{q}}}{\tau} \Big \{ e^{i k_{0}x} \left[\mathcal{T}_{\frac{4\pi k_{0}p}{q}}\mathcal{R}_{2}(p,q)\right] e^{-i k_{0}x}u_{0}(x) \\ &+e^{-ik_{0}x}\left[ \Lambda_{0} e^{-i k_{0}2\pi} \mathcal{T}_{-\frac{4\pi k_{0}p}{q}} \mathcal{R}_{2}(p,q)\right] e^{i k_{0}x}u_{0}^{\natural}(x) \\ & + e^{i k_{0}x}\left[\bar{I}_{0}e^{i k_{0}2\pi} \mathcal{T}_{\frac{4\pi k_{0}p}{q}} \mathcal{R}_{2}(p,q)\right]e^{-i k_{0}x}u_{0}^{\natural}(x) \\ &+ e^{-i k_{0}x} \left[ \Lambda_{0}\bar{I}_{0}\mathcal{T}_{-\frac{4\pi k_{0}p}{q}} \mathcal{R}_{2}(p,q)\right]e^{i k_{0}x}u_{0}(x) \Big\}. \end{aligned} \end{equation} \end{corollary} \begin{remark}\label{v0expr} In expression (\ref{Non-Self-adjoint Revival}), the solution is given explicitly in terms of a finite number of translated copies of $u_0(x)e^{\pm ik_0x}$. Note that the final result is then multiplied by $e^{\mp ik_0x}$, and hence the solution is indeed given in terms of a finite linear combination of translated copies of $u_0(x)$. This can be verified by substituting the expression for $\mathcal{R}_{2}(p,q)$ in the first part of formula (\ref{Non-Self-adjoint Revival}), to obtain $$ e^{i k_{0}x} \mathcal{T}_{\frac{4\pi k_{0}p}{q}}\mathcal{R}_{2}(p,q) e^{-ik_0x}u_{0}(x)=e^{ik_0^2\frac{4\pi p}{q}}\mathcal{T}_{\frac{4\pi k_{0}p}{q}}\tilde{\mathcal{R}}_{2}(p,q)u_0(x), $$ where $\tilde{\mathcal{R}}_{2}(p,q)$ differs from ${\mathcal{R}}_{2}(p,q)$ only in that each term $e_{m}(\frac{2\pi k}{q})$ is replaced by $e_{m}(\frac{2\pi k}{q} (1+\frac{k_0}{m}))$. The other three terms in the expression (\ref{Non-Self-adjoint Revival}) for the solution can be handled similarly. \end{remark} \begin{proof}[Proof of Proposition \ref{Non-Self-Adjoint Correspondence Theorem}] Consider each of the terms in the series \eqref{Non-Self-Adjoint Solution}. Using the definition (\ref{Non-Self-adjoint initial condition}) of $v_0$ and $w_0$, we have \begin{equation} \label{NSAT2} \begin{aligned} \int_{0}^{2\pi}u_{0}(y)\frac{e^{-ik_{j}y}}{\sqrt{2\pi}}dy + \bar{I}_{0}\int_{0}^{2\pi} u_{0}(y)\frac{e^{ik_{j}y}}{\sqrt{2\pi}}dy= \widehat{v_{0}}(j) + \bar{I}_{0}\widehat{w_{0}}(-j). \end{aligned} \end{equation} Recall that $k_j=k_0+j$. Moreover, we have the elementary but key relation, \begin{equation} \label{NSAT3} e^{-ik_{j}^{2}t} = e^{-i k_{0}^{2}t} \ e^{-i2k_0jt} \ e^{-ij^{2}t} \end{equation} and for the eigenfunctions \begin{equation} \label{NSAT4} \frac{e^{ik_{j}x}}{\sqrt{2\pi}} + \Lambda_{0} \frac{e^{-ik_{j}x}}{\sqrt{2\pi}} = e^{ i k_{0}x} e_{j}(x) + \Lambda_{0} e^{-ik_{0}x}e_{-j}(x). \end{equation} Here the $e_j(x)$ are the periodic eigenfunctions. By substituting \eqref{NSAT2}, \eqref{NSAT3} and \eqref{NSAT4} in \eqref{Non-Self-Adjoint Solution} we obtain \begin{equation} \label{NSAT5} \begin{aligned} u(x,t) = \frac{e^{-ik_{0}^{2}t}}{\tau} &\sum_{j\in\mathbb{Z}} e^{-i2k_0jt} e^{-ij^{2}t} \Big( e^{ik_{0}x}\widehat{v_{0}}(j) e_{j}(x) + \Lambda_{0} e^{-ik_{0}x}\widehat{v_{0}}(j) e_{-j}(x) \\ & + \bar{I}_{0} e^{ik_{0}x}\widehat{w_{0}}(-j) e_{j}(x) + \Lambda_{0} \bar{I}_{0} e^{-ik_{0}x}\widehat{w_{0}}(-j) e_{-j}(x)\Big). \end{aligned} \end{equation} Each term in \eqref{NSAT5} is the solution of a periodic problem.
Indeed, from \eqref{Periodic Translation Operator FC} it follows that for $f \in\, L^2$, $\mathcal{T}_sf(x)=\sum_{j\in\mathbb Z}e^{-ijs} \widehat{f}(j)e_j(x)$, hence we have \begin{equation} \label{NSAT6} \sum_{j\in\mathbb{Z}} e^{-i2k_0jt} e^{-ij^{2}t} e^{ik_{0}x}\widehat{v_{0}}(j)e_{j}(x) = e^{ik_{0}x} \mathcal{T}_{2k_0t} \Big(\sum_{j\in\mathbb{Z}}\widehat{v_{0}}(j)e^{-ij^{2}t}e_{j}(x) \Big) = e^{ik_{0}x} \mathcal{T}_{2k_0t} v(x,t), \end{equation} where $v(x,t)$ solves the periodic equation with initial condition $v_{0}(x)$. A similar calculation for the remaining terms yields the representation \eqref{Non-Self-adjoint Correspondence}. \end{proof} Note that for the self-adjoint case, $\beta_0\bar{\beta_1}=1$, the following reduction of \eqref{Non-Self-adjoint Correspondence} is valid, \begin{equation} \label{Self-adjoint Correspondence} \begin{aligned} u(x,t) = \frac {e^{-i k_{0}^{2}t}}{1+|\Lambda_0|^2}\Big\{& e^{ik_{0}x} \mathcal{T}_{2k_0t} v(x,t) + \Lambda_{0} e^{-ik_{0}x} \mathcal{T}_{-2k_0t} v^{\natural}(x,t) \\ & + \bar{\Lambda}_{0} e^{ik_{0}x} \mathcal{T}_{2k_0t} w^{\natural}(x,t) +| \Lambda_{0}|^2e^{-ik_{0}x} \mathcal{T}_{-2k_0t} w(x,t) \Big\}, \end{aligned} \end{equation} with all notation as in Proposition \ref{Non-Self-Adjoint Correspondence Theorem}. \subsection{The quasi-periodic case} We now describe the specific form of the solution of the quasi-periodic boundary value problem for \eqref{linS}, corresponding to $\beta_0=\beta_1=\beta$ in \eqref{Schrodinger Pseudo-periodic Problem}. This specific case appears to be of importance for the study of the vortex filament equation with non-zero torsion \cite{de2020evolution}. The self-adjoint case corresponds to $|\beta|^2=1$ and has been studied in the context of quantum revivals, as well as experimentally, in \cite{xue2014observation}. Set $\beta = e^{2\pi i\theta}$ for $\theta\in (0,1)$ in \eqref{Schrodinger Pseudo-periodic Problem}. For $k_0$ and $\Lambda_0$ as in (\ref{Non-Self-adjoint Various CST}), we have $$ \cos (2\pi k_0)=\frac {1+\beta^2}{2\beta}=\frac{1+e^{4\pi i \theta}}{2e^{2\pi i\theta}}=\cos(2\pi \theta). $$ So we pick $$ k_0=\theta,\qquad \gamma=e^{2\pi i \theta}=\beta \quad \text{and} \quad \Lambda_0=\frac{\gamma-\beta}{\beta-\gamma^{-1}}=0. $$ Substituting these values into \eqref{Self-adjoint Correspondence} yields the significantly reduced expression, \begin{equation} \label{quasi-periodic Self-adjoint Correspondence} u(x,t) = e^{-i\theta^{2}t} e^{i\theta x} \mathcal{T}_{2\theta t} v(x,t), \end{equation} where $v(x,t)$ is the solution of the periodic problem with initial condition $v_0(x)$ as in \eqref{Non-Self-adjoint initial condition}. In particular, at rational times we obtain the representation formula \begin{equation} \label{quasiperrev} u\left(x,2\pi\frac pq\right)=e^{-i\theta^{2}2\pi\frac pq} e^{i\theta x} \mathcal{T}_{4\pi\theta \frac pq}\mathcal{R}_{2}(p,q) e^{-i\theta x}u_{0}(x). \end{equation} \begin{remark}The comment made in Remark~\ref{v0expr} applies also to the revival expression \eqref{quasiperrev}. The latter can also be obtained directly, by expanding the solution in terms of the eigenfunctions of the associated spatial operator and their adjoint pair.
\end{remark} \section{Quasi-periodic problems for the Airy equation}\label{Airy's Quasi-Periodic Problem} We now turn to the time evolution problem for the Airy equation with quasi-periodic boundary conditions, defined by \eqref{airy}--\eqref{qpbc} with $\beta= e^{i2\pi\theta}$ for $\theta\in[0,1)$, namely \begin{equation} \label{Airy QPP} \begin{aligned} &u_t(x,t) - u_{xxx}(x,t)=0, \qquad u(x,0) = u_{0}(x), \\ &e^{i2\pi\theta} \partial_{x}^{m} u(0,t) = \partial_{x}^{m}u(2\pi,t), \quad m = 0,1,2. \end{aligned} \end{equation} We give the proof of Theorem \ref{Airy Correspondence Theorem}, which describes the solution of \eqref{Airy QPP} in terms of the solution of a periodic problem for the linear Schr\"odinger equation. The spatial operator $i\partial_x^3$ with the given boundary conditions is self-adjoint. Moreover, unlike the general quasi-periodic boundary conditions, we can find the eigenpairs of this operator explicitly. Because of this, it is possible to argue in a similar fashion as in Section~\ref{pseudo ls}. This leads to the conclusion that, in contrast to the linear Schr\"odinger equation, it is not possible to establish a direct correspondence between the solution of \eqref{Airy QPP} and the solution of one or more periodic problems {\em evaluated at the same time}. The correspondence that we establish in Theorem \ref{Airy Correspondence Theorem} connects the solution of the Airy equation at a rational time $t_{\mathrm{r}}$ to the solution of an associated problem for the linear Schr\"odinger equation evaluated at a time $t_{\theta}$ that depends on $t_{\mathrm{r}}$ and on $\theta$. As a consequence, we show below that revivals for problem \eqref{Airy QPP} arise if and only if $\theta\in\mathbb{Q}$. The eigenvalue problem is now given by \begin{equation} \label{Non-Self-adjoint eigenvalue problem airy} -\phi'''(x) = i\lambda \phi(x), \quad e^{i2\pi\theta}\phi(0)=\phi(2\pi), \,e^{i2\pi\theta}\phi'(0)=\phi'(2\pi), \, e^{i2\pi\theta}\phi''(0)=\phi''(2\pi). \end{equation} Hence, it is straightforward to compute that the eigenvalues are given by \begin{equation} \label{Airy Eigenvalue} \lambda_{m} = k_{m}^{3}, \qquad k_{m} =m + \theta, \quad m\in \mathbb{Z}, \end{equation} and the corresponding normalized eigenfunctions by \begin{equation} \label{Airy Eigenfunctions} \phi_{m} (x)= \frac{e^{ik_{m}x}}{\sqrt{2\pi}}=e^{i\theta x}e_m(x), \quad m\in \mathbb{Z}. \end{equation} Thus, for any fixed time $t\geq 0$ and initial condition $u_{0} \in L^2$, the solution to \eqref{Airy QPP} is \begin{equation} \label{Airy QP Solution} u(x,t) = \sum_{m\in\mathbb{Z}} \langle u_{0} , \phi_{m}\rangle e^{-ik_{m}^{3}t} \phi_{m} (x). \end{equation} We are now ready to prove Theorem~\ref{Airy Correspondence Theorem}. \begin{proof}[Proof of Theorem \ref{Airy Correspondence Theorem}] According to \eqref{Airy Eigenfunctions}, \begin{equation} \label{ACT3} \langle u_{0}, \phi_{j}\rangle = \int_0^{2\pi}u_0(x)e^{-i\theta x}\overline{e_j(x)}dx=\widehat{w_{0}}(j), \quad w_0(x)=u_0(x)e^{-i\theta x}. \end{equation} The exponential term $e^{-ik_{j}^{3}t_{\mathrm{r}}}$ can be written as \begin{equation} \label{ACT4} e^{-ik_{j}^{3}t_{\mathrm{r}}} = e^{-i(j+\theta)^{3}t_{\mathrm{r}}}= e^{-i\theta^{3}t_{\mathrm{r}}} e^{-i j^3t_{\mathrm{r}}} e^{-i j3\theta^2 t_{\mathrm{r}}} e^{-ij^{2}3\theta t_{\mathrm{r}}}.
\end{equation} Substituting all this into \eqref{Airy QP Solution} for the solution of \eqref{Airy QPP}, we find \begin{equation} \label{ACT5} \begin{aligned} u(x,t_{\mathrm{r}}) &= \sum_{j\in\mathbb{Z}} \langle u_{0} , \phi_{j}\rangle e^{-ik_{j}^{3}t_{\mathrm{r}}} \phi_{j} (x)\\ &=\sum_{j\in\mathbb{Z}}\widehat{w_{0}}(j) e^{-i\theta^{3}t_{\mathrm{r}}} e^{-i j^3t_{\mathrm{r}}} e^{-i j3\theta^2 t_{\mathrm{r}}} e^{-ij^{2}3\theta t_{\mathrm{r}}} e^{i\theta x}e_j(x)\\ &=e^{-i\theta^{3}t_{\mathrm{r}}}e^{i\theta x}\sum_{j\in\mathbb{Z}}\widehat{w_{0}}(j) e^{-i j^3t_{\mathrm{r}}} e^{-i j3\theta^2 t_{\mathrm{r}}} e^{-ij^{2}3\theta t_{\mathrm{r}}} e_j(x) \\ & = e^{-i\theta^{3}t_{\mathrm{r}}}e^{i\theta x} \mathcal{T}_{3\theta^2t_{\mathrm{r}}} \Big(\sum_{j\in\mathbb{Z}}\widehat{w_{0}}(j) e^{-i j^3t_{\mathrm{r}}} e^{-ij^{2}3\theta t_{\mathrm{r}}} e_j(x) \Big). \end{aligned} \end{equation} For the last equality we have used the Fourier representation \eqref{Periodic Translation Operator FC} of the translation operator $\mathcal{T}_{s}$. Now, by virtue of Lemma~\ref{Revival Operator Lemma}, $$ \widehat{w_0}(j) e^{-ij^{3}t_{\mathrm{r}}}=\langle \mathcal{R}_{3}(p,q)w_0,e_{j}\rangle =\langle v^{(p,q)}_{0}, e_j\rangle=\widehat{v^{(p,q)}_{0}}(j), $$ where the function $v^{(p,q)}_{0}(x)$ is given by \eqref{Airy correspondence IC}. Substituting this final identity into \eqref{ACT5} gives \begin{equation} \label{ACT6} u(x,t_{\mathrm{r}}) =e^{-i\theta^{3}t_{\mathrm{r}}}e^{i\theta x} \mathcal{T}_{3\theta^2t_{\mathrm{r}}} \Big(\sum_{j\in\mathbb{Z}}\widehat{v^{(p,q)}_{0}}(j) e^{-ij^{2}3\theta t_{\mathrm{r}}} e_j(x) \Big) = e^{-i\theta^{3}t_{\mathrm{r}}}e^{i\theta x} \mathcal{T}_{3\theta^2t_{\mathrm{r}}} v^{(p,q)}(x, 3\theta t_{\mathrm{r}}), \end{equation} as claimed. \end{proof} The fundamental difference with the case of the linear Schr\"odinger equation analysed in the previous section lies in the fact that the solution of the quasi-periodic problem for the Airy equation corresponds to the solution of a suitable periodic problem but {\em evaluated at a different time}. Indeed, Theorem \ref{Airy Correspondence Theorem} states that the solution of \eqref{Airy QPP} at time $t=t_{\mathrm{r}}$ is obtained via the solution of a periodic problem for the Schr\"odinger equation evaluated at time $t=3\theta t_{\mathrm{r}}$. If $\theta\notin\mathbb Q$, this is an irrational time, for which the fractalisation result of Theorem \ref{Fractalisation LS} applies. From this it follows that the quasi-periodic Airy problem exhibits revivals at rational times if and only if $\theta\in\mathbb Q$. To be more precise, we have the following two possibilities. \begin{enumerate} \item \underline{Case $\theta\in\mathbb Q$}. The time $t=3\theta t_{\mathrm{r}}$ is a rational time for Schr\"{o}dinger's periodic problem. Hence Airy's quasi-periodic problem will exhibit revivals at any rational time $t_{\mathrm{r}}$. \item \underline{Case $\theta\notin\mathbb Q$}. The time $t=3\theta t_{\mathrm{r}}$ is irrational for Schr\"{o}dinger's periodic problem. It follows that, for initial conditions satisfying the hypotheses of Theorem~\ref{Fractalisation LS}, the solution of Airy's quasi-periodic problem at rational times $t_{\mathrm{r}}$ is a continuous but nowhere differentiable function, and there is no revival at rational times in this case. \end{enumerate} We now establish a representation formula for the solution at rational times, which implies the validity of the revival phenomenon observed in \eqref{Airy QPP} in the case $\theta\in\mathbb Q$.
The proof of the next statement is a direct consequence of combining Theorem~\ref{Airy Correspondence Theorem} with Lemma~\ref{Sch Airy Periodic Revivals Lemma}. \begin{corollary}[Quasi-Periodic Revival] \label{Airy QP Revival} Let $(p,q),\,(c,d)$ be pairs of co-prime positive integers, with $c<d$. Set $\theta_{\mathrm{r}}= c/d <1$. Let $u_{0}\in L^2$. For $\theta=\theta_{\mathrm{r}}$, the solution $u(x,t)$ of the linear Airy equation with initial condition given by $u_0$ and quasi-periodic boundary conditions \eqref{Airy QPP} at rational time $t_{\mathrm{r}} = 2\pi\frac{p}{q}$ is given by \begin{equation} \label{Airy QP Revival Formula} u(x,t_{\mathrm{r}}) = e^{i\theta_{\mathrm{r}} x} \left[e^{-i \theta_{\mathrm{r}}^{3}t_{\mathrm{r}}} \mathcal{T}_{3\theta_{\mathrm{r}}^2t_{\mathrm{r}}} \mathcal{R}_{2}(3cp,dq) \mathcal{R}_{3}(p,q)\right] e^{-i\theta_{\mathrm{r}} x}u_{0}(x). \end{equation} \end{corollary} The comment made in Remark~\ref{v0expr} applies also to the revival expression \eqref{Airy QP Revival Formula}. Indeed, the latter has an alternative representation in terms of the eigenfunctions $\phi_m(x)$ of the spatial quasi-periodic operator given by \eqref{Airy Eigenfunctions}. This alternative representation is the direct analogue of the representation in Theorem \ref{ET Theorem} for the periodic case, with a modified revival operator $\widetilde{\mathcal{R}_3}$ defined in terms of the eigenfunctions of the quasi-periodic problem directly. We state this representation without proof. It can be obtained from algebraic manipulations of the expression \eqref{Airy QP Revival Formula}, or directly following the lines of the proof of Lemma~\ref{Revival Operator Lemma}. \begin{proposition} Let $p,\,q,\,c,\,d$, $u_0(x)$ and $\theta_{\mathrm{r}}$ be as in the previous statement. Let $u(x,t)$ denote the solution of Airy's quasi-periodic problem \eqref{Airy QPP}, with $\theta=\theta_{\mathrm{r}}$. The solution $u(x,t_{\mathrm{r}})$ at rational time $t_{\mathrm{r}} = 2\pi\frac{p}{q}$ admits the representation \begin{equation} \label{Airyaltrep} u(x,t_{\mathrm{r}})=\frac{\sqrt{2\pi}}{d^2 q}\sum_{k=0}^{d^2q -1} \sum_{m=0}^{d^2 q-1}e^{-i(m+\frac{c}{d})^3t_{\mathrm{r}}}\phi_m \left(\frac{\pi k}{dq}\right) \tilde{u}_0\left(x-\frac{\pi k}{2dq}\right). \end{equation} Here $\phi_m(x)$ are the eigenfunctions of the spatial operator given by \eqref{Airy Eigenfunctions} and $\tilde{u}_0(x)$ is the quasi-periodic extension of $u_0$, $$ \tilde{u}_0(x)=e^{2\pi i \frac{c}{d} m} u_0(x-2\pi m),\qquad 2\pi m\leq x<2\pi (m+1). $$ \end{proposition} In Appendix~\ref{Numerical Examples}, we illustrate with several numerical examples the revival behaviour described by the results in this section. \begin{remark} By an induction argument, the above results can be generalised to higher-order equations with monic dispersion relation $P(k)=k^p$, $p\geq 4$. \end{remark} \section{The Linear Schr{\"o}dinger equation with Robin boundary conditions} In this final section we consider the linear Schr\"{o}dinger equation \eqref{linS} posed on $(0,\pi)$, but now we impose the Robin boundary conditions \eqref{rbc}. Namely, the problem we consider is \begin{equation} \label{Robin Problem} \begin{aligned} & i u_{t} + u_{xx}=0, \quad u(x,0) = u_{0}(x)\in L^2(0,\pi),\\ &b u(x_{0},t) = (1-b) \partial_{x}u(x_{0},t),\ x_{0} = 0, \ \pi,\quad b \in [0,1], \end{aligned} \end{equation} and we give the proof of Theorem \ref{Robin Revival Corollary}, which describes the behaviour of the solution of \eqref{Robin Problem} at rational times.
A routine calculation shows that the eigenvalues and the normalised eigenfunctions of the spatial operator are as follows. When $0<b<1$, there is one negative eigenvalue, which depends on the parameter $b$, given by \[ \lambda_{b} = - m_{b}^{2}<0 ,\quad m_{b} = \frac{b}{1-b}, \] with associated normalised eigenfunction \[ \phi_{b} (x) = A_{b} e^{m_{b}x}, \qquad A_{b} = \sqrt{\frac{2m_{b}}{e^{2\pi m_{b}}-1}}. \] The rest of the spectrum is the sequence of eigenvalues, independent of $b$, given by $ \lambda_{j} = j^{2}>0$, $j\in\,\mathbb{N} $ with associated normalised eigenfunctions, \[ \phi_{j} (x) = \frac{1}{\sqrt{2\pi}} \left[e^{ijx} - \Lambda_{j} e^{-ijx}\right], \qquad \Lambda_{j} = \frac{b-(1-b)i j}{b+(1-b)ij}. \] Note that the cases $b\rightarrow1$ and $b\rightarrow0$ correspond to Dirichlet and Neumann boundary conditions respectively. It is straightforward to verify that, by taking the even or odd extension, these can be treated as periodic problems posed on the double-length interval $(0,2\pi)$. In order to simplify the presentation we set the following notation. For $f\in L^2(0,\pi)$, the \emph{even} and \emph{odd} extensions of $f$ to the segment $[0,2\pi]$ are denoted by \begin{equation} \label{EvenOdd extension initial condition} f^{\pm }(x) = \begin{cases} f(x), & 0\leq x<\pi, \\ \pm f(2\pi-x), & \pi \leq x<2\pi, \end{cases} \end{equation} and we write the $2\pi$-periodic convolution of $f,\,g\in L^2(0,2\pi)$ as \begin{equation} \label{Periodic Convolution} f\ast g (x) = \frac{1}{\sqrt{2\pi}} \int_{0}^{2\pi} {f}^{*}(x-y) g^*(y) dy, \quad x\in(0,2\pi), \end{equation} where the superscript $\ast$ denotes the $2\pi$-periodic extension as in \eqref{Periodic Extension}. Finally, as in the statement of Theorem \ref{Robin Revival Corollary}, we define \begin{equation} \label{f1f2} f_{1}(x) = \sqrt{\frac\pi 2}\frac{ m_{b}}{e^{2\pi m_{b}}-1} e^{m_{b}x},\qquad x\in(0,2\pi). \end{equation} We first state a representation of the solution of \eqref{Robin Problem} in terms of the solutions of five periodic problems for \eqref{linS}, each with an initial condition specified by an explicit transformation of $u_{0}$. Four of these initial conditions are obtained as the $2\pi$-periodic convolution of an explicit exponential $2\pi$-periodic function with corresponding odd or even $2\pi$-periodic extensions of the initial data. \begin{proposition}\label{RCP} \label{Robin Connection Proposition} Let $u_{0}\in L^{2}(0,\pi)$, and consider the following solutions to the $2\pi$-periodic problem for equation \eqref{linS}: \begin{itemize} \item $n(x,t)$ denotes the solution corresponding to initial condition $n_{0}(x) = u_{0}^{+}(x)$ \item $h(x,t)$ denotes the solution corresponding to initial condition $h_{0}(x) = (f_{1}+f_{1}^\natural)\ast u_{0}^{+}(x)$ \item $v(x,t)$ denotes the solution corresponding to initial condition $v_{0}(x) = (f_{1}^\natural - f_{1})\ast u_{0}^{+}(x)$ \item $z(x,t)$ denotes the solution corresponding to initial condition $z_{0}(x) =(f_{1}-f_{1}^\natural)\ast u_{0}^{-}(x)$ \item $w(x,t)$ denotes the solution corresponding to initial condition $w_{0}(x) = (f_{1}+f_{1}^\natural) \ast u_{0}^{-}(x)$, \end{itemize} where $f_1(x)$ is defined by \eqref{f1f2} and $\natural$ denotes reflection as given in \eqref{Reflected initial condition}.
Then, at each $t\geq 0$ the solution $u(x,t)$ to the Robin problem \eqref{Robin Problem} is given by \begin{equation} \label{Robin Connection} u(x,t) =\langle u_{0},\phi_{b}\rangle_{L^{2}(0,\pi)} e^{im_{b}^{2}t} \phi_{b}(x) + n(x,t) - h(x,t) +v(x,t) + z(x,t) + w(x,t). \end{equation} \end{proposition} We omit the proof of this proposition, which is entirely analogous to the proof of Proposition~\ref{Non-Self-Adjoint Correspondence Theorem}. Various numerical examples which illustrate revival and non-revival for \eqref{Robin Problem} are given in Appendix~\ref{NumforA}. The proof of Theorem \ref{Robin Revival Corollary} is an immediate consequence of Proposition \ref{RCP}, which expresses the solution of this problem, in \eqref{Robin Revival}, as the sum of three terms: \[ \begin{aligned} u(x, t_{\mathrm{r}}) = &2\sqrt{\frac{2}{\pi}} \langle u_{0},e^{\frac{b}{1-b}(\cdot)}\rangle_{L^{2}(0,\pi)} e^{i\frac{b^{2}}{(1-b)^{2}}t_{\mathrm{r}}} f_{1}(x) + \mathcal{R}_2(p,q)\left[u_0^+(x)\right] \\ &+ \mathcal{R}_2(p,q)\left[2f_1\ast(u_0^{-}-u_0^{+})(x)\right], \quad x\in(0,\pi). \end{aligned} \] The three components on the right hand side of the equation correspond to the following: \begin{itemize} \item[*] The first term is a rank-one perturbation and represents the contribution of the negative eigenvalue $\lambda_b$. \item[*] The second term is the periodic revival of the (even extension of the) given initial condition. \item[*] The last term is the periodic revival of a continuous function. \end{itemize} As a consequence of this representation, we conclude that \eqref{Robin Problem} exhibits a {\em weaker form of revivals}. While the solution is not simply obtained as a linear combination of translated copies of the initial condition, the second term in \eqref{Robin Revival} ensures that the {\em functional class of the initial condition is preserved at rational times}. In particular, whenever $u_{0}$ has a finite number of jump discontinuities, then the same will be true for the solution at rational times, and the dichotomy between the solution behaviour at rational and irrational times is present. We may say that the quantum particle that solves the linear Schr\"odinger equation with Robin boundary conditions still {\em knows the time}. \section*{Conclusions}\label{Conclusions} The main goal of this work was to examine a variety of boundary conditions for the linear Schr\"odinger and Airy equations, and identify how the revival phenomenon depends on these boundary conditions. The starting point was the periodic case, for which it is known that the solution at rational times can be obtained as a finite linear combination of translated copies of the initial condition, and the dichotomy between revival at rational times and fractalisation at irrational times is well established. We analysed pseudo-periodic conditions, which couple the two ends of the interval of definition, and Robin-type boundary conditions imposed separately at the two ends. We derived two main new results: one establishes the constraints on the validity of the revival property for the third-order Airy equation; the other describes a new, weaker form of revival for the case of Robin conditions. More specifically, we confirmed that in the second-order case of the linear Schr\"odinger equation, every pseudo-periodic problem admits revival, by expressing its solution in terms of a purely periodic problem.
We then showed, by virtue of this new expression, that the revival property is more delicate for the third-order case of the Airy equation. In fact, it does not even hold in general for quasi-periodic boundary conditions. The rational/irrational time dichotomy, typical of the revival phenomenon, holds in this case only for rational values of the quasi-periodicity parameter. The particular case of Robin boundary conditions that we have chosen revealed a new, weaker form of revival phenomenon, which is worth further investigation. In this case, while the rational/irrational time dichotomy still holds, it is no longer true that the solution at rational times is simply obtained by a finite linear combination of copies of the initial profile. It is worth highlighting that the validity of a form of revival in this case is due to the presence of one term in the solution representation that comes from a purely periodic problem. This new manifestation of revival complements the one recently reported in \cite{boulton2020new} for the case of periodic linear integro-differential equations. The latter displays a rational/irrational time dichotomy similar to the present one, but the representation of the solution is more involved. Our analysis strongly supports the conjecture that periodicity, and the number-theoretic properties of the purely exponential series that represent periodic solutions, are essential to any revival phenomenon. Future work will aim to confirm this conjecture, by extending consideration to general linear, constant-coefficient boundary conditions for both the Schr\"odinger and Airy equations. In the latter case, there exist boundary conditions for which the associated spatial operator does not admit a complete basis of eigenfunctions; an example of such conditions is given by the pseudo-Dirichlet conditions $u(0,t)=u(2\pi,t)=u_x(2\pi,t)=0$, see \cite{fokas2005transform, pelloni2005spectral}. While preliminary numerical evidence suggests that at rational and irrational times the solution of this boundary value problem behaves fundamentally differently, the analysis for these types of boundary conditions requires a different approach. The equations we have considered are the linear parts of important nonlinear equations of mathematical physics, the nonlinear Schr\"odinger and KdV equations respectively. In the work of Erdo\u{g}an, Tzirakis, Chousionis and Shakan, see \cite{erdogan2013talbot, chousionis2014fractal,erdougan2013global,erdougan2019fractal}, the dichotomy between the behaviour at rational and irrational times has been established rigorously for the periodic problem for these nonlinear equations. We expect that our result for the pseudo-periodic case would extend to the nonlinear case in an analogous manner. This would also provide a theoretical foundation for recent results on the vortex filament equation with non-zero torsion \cite{de2020evolution}, a problem that can be represented in terms of the solution of a quasi-periodic problem for the Schr\"odinger equation. \section*{Acknowledgements} We thank David Smith for his useful comments and suggestions on the contents of this paper. BP and LB are also grateful for the invitation to Yale-NUS College in January 2020 for a workshop funded by grant IG18-CW003, in which discussions leading to part of this work began.
GF is supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by EPSRC (grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University and the University of Edinburgh. \addcontentsline{toc}{section}{References} \printbibliography
\section{Introduction and background} In the context of human-robot interaction, a great effort is directed towards the development of the robot's ability to understand implicit signals and subtle cues that naturally characterize human movements. This becomes critically important in situations where robots are used in unconstrained environments, for instance in manufacturing, helping human operators to lift loads or assisting the elderly. In typical human-human interaction a considerable amount of information is exchanged through non-verbal signals, such as the attitude of an action, its tempo, the direction of the gaze and the body posture. It has been proved that people are able to correctly estimate the weight of an object simply by observing another person lifting it \cite{sciutti:weight}. Recent research confirmed that the same information could be transmitted by a humanoid robot controlling the vertical velocity of its lifting movements \cite{PalinkoSciutti}. Humans easily manipulate objects they have never used before: at first, they infer properties such as the weight, the stiffness and the dimensions from the observation of others manipulating the objects; later, they use tactile and force feedback to improve the estimation. Replicating this behaviour in robotic systems is challenging. However, preliminary results have been achieved in estimating objects' physical properties, relying on inference-based vision approaches \cite{billard:weight}.\\ The interaction with humanoid robots is particularly critical: driven by their appearance, humans tend to attribute human-like abilities to those robots and, if these expectations are not met, the interaction may fail \cite{sandini}. Humans strongly rely on implicit signals to cooperate; therefore, in this context, to obtain seamless human-robot collaboration, humanoid robots need to correctly interpret those implicit signals \cite{legibility}. Furthermore, in a scenario where the robot acts as a helper or partner in an unconstrained environment, it becomes very important to endow it with the ability to correctly estimate the characteristics of the handled objects; as a consequence, the robot can plan a safe and efficient motion. In this study, we give particular attention to how a robot could assess an object's features just by seeing it transported by a human partner. Inferring those properties from the human kinematics during the manipulation of the objects, rather than from their external appearance, grants the ability to generalize over previously unseen items. \subsection{Rationale} \label{Rationale} Suppose you have to transport a glass filled to the brim with water: the effort required to safely handle it without spilling a drop resembles the challenging scenario of carrying an electronic device that could be damaged. If we want a robot to perform the same action, the first step would be to give the robot the capability of recognizing the intrinsic difficulty of the task; if we consider a hand-over task between a human and a robot, the latter should be aware that it is about to receive an object that requires a certain degree of carefulness in the handling. Moreover, an assessment of the weight of the object would allow an efficient lift. These features could be estimated from the human motion and ideally should be available before the end of the observed action, both to track the unfolding human action and to allow the robot to prepare for the possible handover.
Differently from the weight, the concept of carefulness is not trivial. Previous studies have dealt with delicate objects, but focused more on robotic manipulation: the difficulty in the addressed tasks arose from the stiffness or the deformability of the item; tactile sensors were used for estimating the force necessary to apply a proper grasp \cite{sanchezGrip,grip}. In our study we consider the carefulness necessary to move an item from a wider perspective. Indeed, not only the object stiffness but also its fragility, a content that could be spilled, or its sentimental value may lead a person to perform a particularly careful manipulation. In those real-life cases we would like the robot to successfully estimate the carefulness required just by observing the human kinematics. As a proof of concept, we recorded transportation movements involving glasses that differed in weight and required carefulness; kinematic features derived from the human motion were used to train classification algorithms. These features were obtained, for comparison, both from a motion capture system and from a robot camera. We hypothesize that, from the knowledge of kinematic features descriptive of the human movement: \textit{\textbf{(H1)}} it is possible to infer if carefulness is required to manipulate an object and \textit{\textbf{(H2)}} it is possible to discriminate a lighter object from a heavier one. To validate our hypotheses, we collected a dataset of human motions during a simple transportation task; we then trained state-of-the-art classifiers to determine whether it is possible to distinguish the carefulness associated with an object and its weight, exclusively from the observation of the human motion. \section{Experimental setup} \label{experimental_setup} The experimental setup used to collect the data consisted of a table, a chair, two shelves (placed on different sides of the table) facing the volunteer, a scale, a keyboard with only one functioning key, and four plastic glasses (see Fig. \ref{fig:setup}). \\ \begin{figure}[h] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.99\linewidth]{images/shelf_reach.jpg} \caption{Lateral view} \label{fig:shelfreached} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.99\linewidth]{images/scale_reach.jpg} \caption{Top view} \label{fig:scalereached} \end{subfigure} \caption{Two views of the experimental setup with the volunteers in two grasp positions: on the shelf (\ref{fig:shelfreached}) and on the scale (\ref{fig:scalereached})} \label{fig:setup} \end{figure} \begin{table}[h] \begin{center} \caption{Glasses features and abbreviations} \vspace{0.1cm} \label{tab:table1} \begin{tabular}{c|c|c} \textbf{Abbreviation } & \textbf{ Weight (gr) } & \textbf{ Carefulness level}\\ \hline \hline W1C1 & 167 & low (no water)\\ W2C1 & 667 & low (no water)\\ W1C2 & 167 & high (full of water)\\ W2C2 & 667 & high (full of water)\\ \end{tabular} \end{center} \end{table} \\The four glasses were characterized by two levels of associated carefulness and two weights, as shown in Table \ref{tab:table1}. The high level of carefulness was achieved by filling the glass to the brim with water, while for the low level no water was placed in the glass. The different weights, instead, were obtained by inserting in the glasses a variable number of coins and screws; for the objects with a high level of carefulness the weight of the water was taken into account.
Each object was weighed to guarantee a difference of 500 gr between light and heavy glasses. The glasses were identical in shape and appearance, and they were transparent so that participants could clearly see the content of the glass and appropriately plan their movements. As displayed in Fig. \ref{fig:setup}, four positions were defined in each shelf, two on the top and two on the bottom level. These predefined positions were identified by a letter on a label. Participants sat at the table and had to perform a structured series of reaching, lifting and transportation movements of the four glasses. The experiment started with all four glasses on the shelves, the volunteer with their arms resting on the table and their right hand in the resting pose, marked with a blue cross (see Fig. \ref{fig:scalereached}). During the experiment, the volunteers used their right hand to interact with the objects and their left to press the key of the keyboard. The experiment was structured as follows: \begin{itemize} \item The volunteer pressed the key of the keyboard and a synthetic voice indicated the position on the shelf of the object to be transported. The position was referred to using the corresponding letter. \item The volunteer performed a reaching action toward the specified position and grasped the glass (see Fig. \ref{fig:shelfreached}). \item The volunteer performed a transportation action moving the glass from the shelf to the scale. \item The volunteer released the glass and returned to the resting pose. \item The volunteer pressed the key a second time and the synthetic voice indicated a position on the shelf where the glass should be transported. Of course, this time the selected position on the shelf was empty. \item The volunteer performed a reaching action towards the scale and grasped the glass (see Fig. \ref{fig:scalereached}). \item The volunteer performed a transportation action moving the glass from the scale to the final position on the shelf. \item The volunteer released the glass and returned to the resting pose. \end{itemize} The participants repeated this sequence 8 times to familiarize with the task, while the main experiment consisted of 32 repetitions. A table containing the initial and final shelf positions for each repetition was designed in advance to guarantee a good coverage of all the possible combinations of shelf positions and glasses. Each volunteer performed exactly the same experiment.\\ The experiment was conducted with 15 healthy right-handed subjects who voluntarily agreed to participate in the data collection (7 females, age: $28.6\pm 3.9$). All volunteers are members of our organization but none is directly involved in our research. \subsection{Sensors} The data used in this study were collected during the experiments described above using an Optotrak motion capture system, as ground truth, and one of the cameras of iCub. During the experiments other sensors were used to collect data, but their analysis is beyond the scope of this paper. The humanoid robot iCub was placed opposite the table and recorded the scene through its left camera, with a frame rate of 22 Hz and a resolution of the image of 340 x 240 pixels. The robot was just a passive observer and no interaction with the participants took place during the experiment. The Optotrak Certus\textsuperscript{\textregistered}, NDI, motion capture (MoCap) system recorded the kinematics of the human motion through active infrared markers at a frequency of 100 Hz.
The markers were placed on the right hand, wrist and arm. For the following analysis only a subset of the hand and wrist markers was considered (see Fig. \ref{fig:fig2}). The data coming from the different sensors were synchronized through the middleware YARP \cite{yarp}, which assigned a timestamp to each sample. When the key was pressed at the end of every trial, the MoCap data were automatically segmented into different log files and the corresponding timestamp was saved in a separate file. Subsequently, the timestamps associated with the key presses were used to segment the data recorded by the robot camera. \begin{figure}[ht] \centering \includegraphics[scale = 0.04, angle =-90]{images/markers_hand_high.jpg} \caption{Detail of the markers position on the right hand: those circled in red were interchangeably used to compute the features in each trial} \label{fig:fig2} \end{figure} \paragraph{Motion capture system data} The data acquired by the motion capture system consisted of the three-dimensional coordinates of each marker with respect to the reference coordinate frame of the Optotrak. Occlusions limited the MoCap visibility for specific parts of the human movement. In our experiment the main source of occlusion was the presence of the shelves, in particular for the lower right positions. To partially overcome this problem, after a preliminary analysis, we chose to consider for each trial the most visible marker among a subset of four as representative of the movement. Indeed, during the transportation movements the hand can be approximated as a rigid body. The four considered markers were placed respectively on the metacarpophalangeal joints of the index and of the little finger, on the diaphysis of the third metacarpal and on the smartwatch at the level of the radial styloid (see the markers circled in red in Fig. \ref{fig:fig2} for reference). Two different interpolation methods, inpaintn \cite{inpaintn} and interp1 of MATLAB R2019b, were used to reconstruct the data missing because of occlusions. The data were filtered with a second order low-pass Butterworth filter with a cutoff frequency of 10 Hz. Some trials were excluded from the dataset because of inconsistencies in the segmentation among the acquired sensors or because the subjects failed to press the key at the right moment, i.e. when their right hand was lying on the table in the resting position. Overall, only 1.25\% of the total acquired trials were removed. Since our hypothesis is that it is possible to distinguish the features of the object that is being transported, it was necessary to isolate the transportation movement in every trial. To do so we took advantage of the experiment design. Indeed each trial presented three clearly identifiable phases: a reaching action, from the resting pose to the position occupied by the glass (either on the shelf or on the scale), a transportation movement and finally a departing movement (see Fig. \ref{fig:segmentation}). Our segmentation assumed that the start and end of the transportation phase are associated with a peak in the norm of the hand velocity. Therefore, the segmentation was performed by placing a threshold of 5\% on the second peak of the norm of the velocity, after filtering it with a fourth order filter with a cutoff frequency of 5 Hz. The resulting data were then down-sampled to obtain the same frame rate as the camera of the robot.
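As an illustration, the following is a minimal sketch of this velocity-based segmentation, written in Python with NumPy/SciPy rather than in MATLAB; the function and variable names are ours, and the peak selection is a simplified version of the actual procedure (it assumes that the three phases produce three clear velocity peaks and that the profile drops below threshold on both sides of the transport peak).
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def segment_transport(marker_xyz, fs=100.0, cutoff=5.0, thr=0.05):
    # 3D velocity of the chosen hand marker and its norm
    vel = np.gradient(marker_xyz, 1.0/fs, axis=0)
    vnorm = np.linalg.norm(vel, axis=1)
    # fourth order low-pass filter, 5 Hz cutoff, applied with zero phase
    b, a = butter(4, cutoff/(fs/2))
    vsmooth = filtfilt(b, a, vnorm)
    # reach, transport, depart: take the second prominent peak as transport
    peaks, _ = find_peaks(vsmooth, prominence=0.1*vsmooth.max())
    transport_peak = peaks[1]
    level = thr*vsmooth[transport_peak]       # 5% threshold on the peak
    above = vsmooth > level
    start = transport_peak - np.argmin(above[:transport_peak][::-1])
    end = transport_peak + np.argmax(~above[transport_peak:])
    return start, end                         # sample indices at 100 Hz
\end{verbatim}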
\begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{images/segmentation.png} \caption{Example of the velocity patterns from motion capture (in blue) and optical flow data (in red). The peaks characterizing the three phases of the trial (reaching, transportation and departing) are visible} \label{fig:segmentation} \end{figure} \paragraph{Camera data and optical flow extraction} As a motion descriptor, we chose to compute the Optical Flow (OF) from the saved raw images of the robot camera (see Fig. \ref{fig:optiFlow} for an example), following an approach already tested in \cite{vignolo:OF,vignolo:OF2}. In this method, the optical flow is computed for every time instant using a dense approach \cite{farn:OF}, which estimates the apparent motion vector for each pixel of the image. The magnitude of the optical flow is thresholded to consider only those parts of the image where the change is significant. A temporal description of the motion happening in the derived region of interest is then computed by averaging the optical flow components. A second order low-pass Butterworth filter with a cutoff frequency of 4 Hz was then applied to the extracted velocity to remove noise (see Fig. \ref{fig:segmentation}). \begin{figure} \centering \begin{subfigure}[b]{.325\textwidth} \includegraphics[width=1\linewidth]{images/frontal_censored.jpg} \caption{View from the iCub perspective} \label{fig:icubCam} \end{subfigure} \begin{subfigure}[b]{.325\textwidth} \includegraphics[width=1\linewidth]{images/of_right.png} \caption{OF moving towards the right of the image} \label{fig:ofRight} \end{subfigure} \begin{subfigure}[b]{.325\textwidth} \includegraphics[width=1\linewidth]{images/of_left.png} \caption{OF moving towards the left of the image} \label{fig:ofLeft} \end{subfigure} \caption{Example of iCub view of the scene and the extracted OF. The colors encode the direction of the movement: red is for motion towards the right part of the image (\ref{fig:ofRight}), blue for motion towards the left (\ref{fig:ofLeft})} \label{fig:optiFlow} \end{figure} \section{Data pre-processing} \label{feature_extraction} The same set of motion representations was extracted during a pre-processing phase from both the motion capture data and the optical flow: the velocity $\mathbf{V}_{i}(t)$, the curvature $C_{i}(t)$, the radius of curvature $R_{i}(t)$ and the angular velocity $A_{i}(t)$ \cite{ActionObservation}. Their analytical expressions are stated in Table \ref{tab:table2}. Such features can be computed for every time instant and, by collecting them, it is possible to progressively gather an increasing amount of information about the observed movement. This would then grant the robot the ability to discriminate online the characteristics of the object handled by the human partner. As shown in \cite{vignolo:OF,vignolo:OF2}, these data representations have been successfully used to discriminate online between biological and non-biological motion and to facilitate coordination in human-robot interaction \cite{Rea}. In addition, kinematic properties, such as velocity, have been shown to be relevant in human perception of object weight \cite{velWeight}. Extracting those features during the pre-processing, instead of directly feeding the classification algorithms with raw data, allows a better comparison of the performance achieved with the two sources of data. Indeed, it grants precise control over the information used during the learning process. \begin{table}[h!]
\caption{Motion features computed from motion capture and optical flow data} \centering \label{tab:table2} \renewcommand\arraystretch{2} \begin{tabular}{l|l} \textbf{Motion feature} & \textbf{ Analytical expression } \\ \hline \hline Tangential velocity & $\mathbf{V}_{i}(t) = (u_{i}(t),v_{i}(t),\Delta _{t})$ \\ \hline Tangential velocity magnitude\hspace{0.2cm} & $V_{i}(t)=\sqrt{u_{i}(t)^{2}+v_{i}(t)^{2}+\Delta _{t}^{2}}$\\ \hline Acceleration & $\mathbf{A}_{i}(t)=(u_{i}(t)-u_{i}(t-1),v_{i}(t)-v_{i}(t-1),0)$ \\ \hline Curvature & $C_{i}(t)=\frac{\left \|\mathbf{V}_{i}(t)\times \mathbf{A}_{i}(t) \right \|}{\left \| \mathbf{V}_i(t) \right \|^{3}} $ \\ \hline Radius of curvature & $R_{i}(t)=\frac{1}{C_{i}(t)} $ \\ \hline Angular velocity & $A_{i}(t)=\frac{V_{i}(t)}{R_{i}(t)} $ \\ \hline \end{tabular} \end{table} \subsection{Dataset} \label{dataset} As detailed above, some sequences had to be removed because of inconsistencies in the segmentation. This led to a slightly unbalanced dataset, containing more examples for specific classes. Indeed, class W1C1 had 235 sequences, class W2C1 239, class W1C2 238 and class W2C2 236. Although the difference in cardinality is minimal, to preserve the balance of the dataset we fixed the number of sequences for each class to 235 and randomly selected the sequences for W2C1, W1C2 and W2C2. Notice that the four classes were characterized only by the weight and the carefulness level. Therefore other variables, such as the initial and final position of the glass and the direction of the movement, are not considered in the classification.\\ Due to the characteristics of the glasses, the duration of the transport movement varied considerably among the trials (e.g. the movement is consistently longer when the transported glass is full of water, belonging to the high carefulness class). To obtain sequences with the same number of samples for each trial, the segmented sequences were re-sampled using the interp1 function of MATLAB. The number of samples was selected considering the class associated with the shortest duration of the transport phase, W1C1, and computing the median value among all its trials. The resulting value was 32. Therefore, our dataset was composed of two data structures: one derived from the MoCap data and the other from the OF. Both structures had dimensions $940\, (trials) \times 32\, (frames) \times 4\, (features)$.\\ The re-sampling can be performed only when the start and end of the transportation phase are known. Since in an online scenario this information is not available, a further attempt was made, exploiting the ability of certain models to handle temporal sequences of different lengths. In this case, instead of re-sampling, a common zero-padding and masking technique was adopted: the shorter temporal sequences were padded with zeros, and those values were then ignored during training, while the length of the longest transport movements was preserved. The shape of the data structures after the zero padding was: $940\, (trials) \times 132\, (frames) \times 4\, (features)$. \section{Classifiers} \label{classifier} As introduced in Sect. \ref{Rationale}, the goal of the classification is to discriminate between the two possible features of the transported glasses: \textit{\textbf{(H1)}} the carefulness level associated with the object and \textit{\textbf{(H2)}} the weight.
To address these two hypotheses, we approached the problem using two binary classifiers, one for each feature, implemented in Python using the Keras libraries \cite{keras}. As mentioned in Sect. \ref{dataset}, two models were tested: the first one relied on re-sampled features, while the second one used the original data with variable lengths. \subsection{Convolutional, Long Short-Term Memory and Deep Neural Network}\label{CNN} Previous literature suggests that the combined use of Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) and Deep Neural Networks (DNN) is a good solution for classifying time-dependent data, such as speech or motion kinematics \cite{mymodel,lstm}. Therefore, our first model was inspired by \cite{mymodel} and consisted of two time-distributed 1-D convolutional layers (taking as input 4 subsequences of 8 frames each), max-pooling and flatten layers, a 100-neuron LSTM, a 100-neuron dense layer and a 2-neuron output layer with a sigmoidal activation function. A Leave-One-Out approach was adopted to test the ability of the model to generalize over different participants: for each of the 15 folds, the data from 14 subjects were used as training set and the data of the remaining participant as test set. For each training set, 20\% of the data was kept for validation, and early stopping was implemented according to the validation loss (with a patience parameter of 5 epochs): this allowed us to obtain good accuracy without overfitting. The batch size was fixed to 16. The model was fit with the ADAM optimization algorithm and categorical cross-entropy as loss function. With respect to the model described in \cite{mymodel}, some regularizers were added to avoid overfitting and to make the network less sensitive to the weights of specific neurons: an L1-L2 kernel regularization was added to the two 1-D convolutional layers $(l1=0.001, l2=0.002)$, an L2 kernel regularizer $(l2=0.001)$ was added to the fully connected DNN layer and, moreover, dropout layers with rate 0.5 were introduced. \subsection{Long Short-Term Memory and Deep Neural Network}\label{LSTM} The second model was implemented to test the possibility of generalizing over temporal sequences of variable length. To this end the data were zero-padded, as mentioned in the previous section. Since the required masking layer is not supported by the Keras implementation of the CNN layer, we opted for a simpler model: a 64-neuron LSTM, followed by a 32-neuron dense layer and a 2-neuron output layer with a sigmoidal activation function; also in this case, L1-L2 regularization and dropout with rate 0.5 were used to avoid overfitting. The optimization algorithm, the loss function and the validation approach with early stopping were the same as before. This model grants the possibility of learning independently of the length of the temporal sequence, which represents a further step towards the implementation of the same algorithm on the robot: no prior knowledge of the duration of the movement would be required to perform the classification, since the model is trained on sequences of variable length. A minimal sketch of both architectures is reported below.
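The sketch is our reconstruction from the description above; the convolutional filter counts, the kernel sizes and the hidden activations are not stated in the text and should be read as assumptions.
\begin{verbatim}
from tensorflow.keras import layers, models, regularizers, callbacks

def cnn_lstm_dnn(n_sub=4, sub_len=8, n_feat=4):
    # First model: time-distributed CNN + LSTM + DNN on re-sampled data.
    reg = regularizers.l1_l2(l1=0.001, l2=0.002)
    return models.Sequential([
        layers.Input(shape=(n_sub, sub_len, n_feat)),
        layers.TimeDistributed(layers.Conv1D(32, 3, activation='relu',
                                             kernel_regularizer=reg)),
        layers.TimeDistributed(layers.Conv1D(32, 3, activation='relu',
                                             kernel_regularizer=reg)),
        layers.TimeDistributed(layers.MaxPooling1D(2)),
        layers.TimeDistributed(layers.Flatten()),
        layers.LSTM(100),
        layers.Dropout(0.5),
        layers.Dense(100, activation='relu',
                     kernel_regularizer=regularizers.l2(0.001)),
        layers.Dropout(0.5),
        layers.Dense(2, activation='sigmoid'),
    ])

def lstm_dnn(max_len=132, n_feat=4):
    # Second model: masking + LSTM + DNN on zero-padded sequences.
    return models.Sequential([
        layers.Input(shape=(max_len, n_feat)),
        layers.Masking(mask_value=0.0),  # padded zeros ignored in training
        layers.LSTM(64),
        layers.Dropout(0.5),
        layers.Dense(32, activation='relu'),
        layers.Dense(2, activation='sigmoid'),
    ])

# Both models: ADAM, categorical cross-entropy, early stopping on val. loss.
model = cnn_lstm_dnn()
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
stop = callbacks.EarlyStopping(monitor='val_loss', patience=5)
# model.fit(X_train, y_train, validation_split=0.2, batch_size=16,
#           epochs=100, callbacks=[stop])
\end{verbatim}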
\section{Results}\label{results} Results of the classifiers are presented for both the weight and the carefulness features, and for both sources of data: the motion capture and the optical flow from the robot camera. \subsection{Carefulness level} \label{carefSection} The performance in the classification of the carefulness level with the model presented in Sect. \ref{CNN} is reported in Table \ref{tab:carefRes}. \begin{table}[h] \centering \caption{Model accuracy (\%, mean and standard deviation) on carefulness level classification with the CNN-LSTM-DNN model. In brackets, the results when volunteer 8 was included in the dataset\\} \label{tab:carefRes} \begin{tabular}{l|c|c} & \textbf{Motion capture} & \textbf{Optical flow}\\ \hline \hline \textit{Training} & $92.15 (92.00)\pm2.14 (3.42)$ & $94.03 (92.18)\pm1.05 (1.00)$ \\ \textit{Test} & $91.68 (90.97)\pm5.00 (11.12)$ & $90.54 (89.43)\pm6.56 (7.59)$ \\ \end{tabular} \end{table} \begin{figure}[H] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.99\linewidth]{images/mocap_accuracy.png} \caption{MoCap accuracy} \label{fig:moc_acc} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.99\linewidth]{images/of_accuracy.png} \caption{OF accuracy} \label{fig:of_acc} \end{subfigure} \caption{Accuracy in the carefulness classification with the CNN-LSTM-DNN model for the validation set of each fold. Accuracy from motion capture (\ref{fig:moc_acc}) and from optical flow features (\ref{fig:of_acc})} \label{fig:optoCaref} \end{figure} \noindent When performing the Leave-One-Out cross validation we noticed that the classification accuracy associated with volunteer 8 was significantly lower than the average (\textbf{MoCap test} \textit{all:} $90.97\pm11.12$ \textit{vol8:} 51.62; \textbf{OF test} \textit{all:} $89.43\pm7.59$ \textit{vol8:} 77.42). Examining the experiment videos, we noticed that volunteer 8 was particularly careful even when handling the glasses containing no water. Our impression was confirmed after computing the median duration of the not-careful movements of each subject. The duration for volunteer 8 ($2.04\pm0.18$ seconds, median and median absolute deviation) differed significantly from that of the other participants, for whom the median duration was $1.47\pm0.15$ seconds (Kruskal-Wallis test: $\chi^{2}(14, N=480)=136.8$, $p<.01$). In Table \ref{tab:carefRes} we have reported in brackets the results when including this subject in the dataset. As can be observed, when the participant was included the accuracy variance on the test increased significantly for both sensing modalities.\\ Figure \ref{fig:optoCaref} shows the trend of the accuracy over the epochs for the validation set of each fold. Comparing the graphs for the two sources of data ((a) motion capture, (b) optical flow), it can be noticed that the first reaches an accuracy above $80\%$ in fewer than 10 epochs, while with the features from the optical flow more training is necessary to reach the same level of accuracy (over 20 epochs). Furthermore, the accuracy trend of the motion capture features is more stable.\par Similarly, the carefulness classification performance of the model presented in Sect. \ref{LSTM}, fed with the original temporal sequences of variable length, is shown in Table \ref{tab:carefRes_LSTM}. As before, the variability in the test accuracy was reduced when volunteer 8 was excluded from the dataset, and the overall accuracy improved for both sensing modalities. With this model, compared to the values in Table \ref{tab:carefRes}, the accuracy achieved with the MoCap data is higher, while that of the OF is slightly lower.
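The duration-based screening of volunteer 8 described above amounts to a few lines with SciPy. This is a sketch: \texttt{durations\_by\_subject} is a hypothetical list of per-subject arrays of not-careful trial durations.
\begin{verbatim}
import numpy as np
from scipy.stats import kruskal

# durations_by_subject: hypothetical list of 15 arrays of durations (s).
medians = [np.median(d) for d in durations_by_subject]
mads = [np.median(np.abs(d - np.median(d))) for d in durations_by_subject]
stat, p = kruskal(*durations_by_subject)  # H-test across the 15 subjects
print(f"chi2 = {stat:.1f}, p = {p:.3g}")  # text: chi2(14, N=480) = 136.8
\end{verbatim}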
\begin{table}[h] \centering \caption{Model accuracy (\%, mean and standard deviation) on carefulness level classification for the simpler LSTM-DNN model. In brackets, the results including volunteer 8} \label{tab:carefRes_LSTM} \begin{tabular}{l|c|c} & \textbf{Motion capture} & \textbf{Optical flow}\\ \hline \hline \textit{Training} & $96.57 (94.32)\pm1.19 (1.77)$ & $92.10 (90.39)\pm4.58 (2.56)$ \\ \textit{Test} & $95.17 (92.66)\pm5.56 (8.49)$ & $88.38 (86.50)\pm8.68 (10.75)$ \\ \end{tabular} \end{table} \subsection{Weight} \label{Results_weight} Table \ref{tab:weiRes} shows the results for the classification of the weight achieved with re-sampled data on the first model. In this case volunteer 8 did not present any peculiarity, and was therefore included in the dataset. As can be observed in Table \ref{tab:weiRes}, the accuracy with the motion capture data is above 60\% and is higher than that obtained from the optical flow. \begin{table}[ht] \centering \caption{Model accuracy (\%, mean and standard deviation) on weight classification with the CNN-LSTM-DNN model, fed with re-sampled data\\} \label{tab:weiRes} \begin{tabular}{l|c|c} & \textbf{Motion capture} & \textbf{Optical flow}\\ \hline \hline \textit{Training} & $64.10\pm2.34$ & $55.24\pm2.37$ \\ \textit{Test} & $61.83\pm7.16$ & $54.47\pm4.29$ \\ \end{tabular} \end{table} Finally, Table \ref{tab:weightRes_LSTM} reports the accuracy for the weight classification with the LSTM-DNN model, fed with the original temporal sequences of different lengths. In this case the performance was comparable between the data from the two sensing modalities. \begin{table}[h] \centering \caption{Model accuracy (\%, mean and standard deviation) on weight classification for the second model, LSTM-DNN} \label{tab:weightRes_LSTM} \begin{tabular}{l|c|c} & \textbf{Motion capture} & \textbf{Optical flow}\\ \hline \hline \textit{Training} & $54.95\pm2.66$ & $55.30\pm1.95$ \\ \textit{Test} & $54.75\pm5.27$ & $53.29\pm3.59$ \\ \end{tabular} \end{table} \noindent We have noticed that, despite adopting the same approach, the accuracy on the weight classification is not as satisfactory as that achieved for the carefulness. A possible explanation of these results could be related to the different effect that weight may have on different transport movements; for instance, its influence may vary depending on whether the transportation is from top to bottom or vice versa. Furthermore, the presence of water in some of the glasses may have led the subjects to focus mainly on the carefulness feature, unconsciously overlooking the weight difference. Therefore, we formulate two refinements of the second hypothesis: \textit{\textbf{(H2.1)}} the influence of the weight during transportation depends on the trajectory of the motion; \textit{\textbf{(H2.2)}} when an object is associated with a high level of carefulness, the weight has a limited influence on the transportation movement. Both hypotheses were tested with the first model, which gave the better results for weight classification.
Concerning the first hypothesis, we reduced the variability in the movements and tried to discriminate the weight in the subset of transport movements from the scale towards the shelves (\textbf{MoCap:} \textit{Tr}: $68.90\pm2.68$ \textit{Test}: $63.42\pm8.96$; \textbf{OF:} \textit{Tr}: $59.10\pm4.27$ \textit{Test}: $55.17\pm6.24$); there is a slight improvement for both data sources compared to the values in Table \ref{tab:weiRes}. Notice that the trajectories still present a fair amount of variability, since the position to reach on the shelf could be left or right, high or low. The second hypothesis was investigated by testing the weight discrimination within the subset of objects which required the same carefulness level: low (\textbf{MoCap:} \textit{Tr}: $64.49\pm5.24$ \textit{Test}: $61.93\pm6.86$; \textbf{OF:} \textit{Tr}: $62.52\pm3.53$ \textit{Test}: $56.84\pm6.77$) or high (\textbf{MoCap:} \textit{Tr}: $62.72\pm3.65$ \textit{Test}: $59.03\pm8.73$; \textbf{OF:} \textit{Tr}: $57.92\pm1.31$ \textit{Test}: $53.48\pm7.63$). For both tests the results are inconclusive, since the classification accuracies did not change much with respect to those reported in Table \ref{tab:weiRes}. It should be noted, though, that the size of the dataset used to validate hypotheses \textit{\textbf{(H2.1)}} and \textit{\textbf{(H2.2)}} was halved, which has an impact on the statistical relevance of the results. \section{Discussion} Regarding the carefulness feature, as reported in Table \ref{tab:carefRes}, the first classifier is able to correctly discriminate whether the transportation of the object requires carefulness or not, independently of the sensing modality used. Considering the performance on the data coming from the two sources, no significant difference is detected between them. Therefore, features discriminating between careful and not-careful motions can be extracted not only with an accurate system such as the motion capture, which integrates sensory inputs from different locations to estimate the position of the target in space, but also from the camera of the robot (a single point of view). Figure \ref{fig:optoCaref} gives an insight into how the learning process advanced for the two data sources. Even though the final performances are comparable, it can be appreciated that the model trained with the features from the motion capture converges more quickly to an accuracy above 80\%.\\ The approach adopted with the second classifier is more general, in the sense that the data are not re-sampled to a common length; instead, the variability in their duration is taken into account. Even though this model is simpler, with just one LSTM and one dense layer, the performance on the carefulness classification with the MoCap data increased (see Table \ref{tab:carefRes_LSTM}). Although the accuracy using the optical flow is slightly lower, we consider this a promising step towards the implementation of the same algorithm on the robot. \par Concerning the weight, the accuracy achieved for both sensing modalities and both models is lower than that obtained for the carefulness (see Tables \ref{tab:weiRes} and \ref{tab:weightRes_LSTM}). To explain this outcome, in Sect. \ref{Results_weight} we formalized two additional hypotheses.
\textit{\textbf{(H2.1)}} was inspired by \cite{PalinkoSciutti}, where it was proposed that the vertical component of the velocity during the manipulation of an object is perceived by humans as informative about its weight. Since the trials in our dataset explored a variety of directions and elevations, this introduced a great variability in the vertical component of the velocity. Concerning \textit{\textbf{(H2.2)}}, instead, we supposed that the greatest challenge for the volunteers during the experiment was to safely handle the glasses full of water: the difference in weight between the objects was not remarkable in comparison with the stark contrast given by the presence (or absence) of water. As mentioned in Sect. \ref{Results_weight}, the first classifier was tested against these hypotheses, but no significant improvement in accuracy was achieved. Given the results of our experiment we cannot validate hypothesis \textit{\textbf{(H2)}}; however, since we have explored only a subset of the possible kinematic features, we cannot argue against this hypothesis either. A possibility for future work is to focus on the vertical component of the velocity. Furthermore, \textit{\textbf{(H2.1)}} and \textit{\textbf{(H2.2)}} should be explored on suitably extended datasets to obtain more reliable results. \section{Conclusions} As human-robot interactions become increasingly frequent, it is crucial that robots gain certain abilities, such as the correct interpretation of the implicit signals associated with human movement. In this study we focused on two fundamental implicit signals commonly conveyed by human movements: the impact of the weight and of the carefulness required in object manipulation (e.g. the transport of fragile, heavy or unstable objects). Our hypotheses aimed to demonstrate that it is possible to discriminate between lighter and heavier items \textit{\textbf{(H2)}} and to infer the carefulness required by the human operator in manipulating an object \textit{\textbf{(H1)}}. We proved that it is feasible to reliably detect when the human operator recruits the motor commands of careful manipulation during the transportation of an object. Indeed, carefulness can be reliably estimated from two different types of sensory data: a motion tracking system and the single viewpoint of the robot's camera observing the movement. On the other hand, the proposed algorithms show lower accuracy when applied to weight classification, and these results do not allow us to validate our second hypothesis. The estimation of the weight from human motion should be the subject of further studies, exploring other classification strategies or subsets of kinematic features (e.g. the extraction of the vertical component of the velocity during manipulation). This study firmly supports research in human-robot interaction, especially in the direction of addressing complex situations in realistic settings (e.g. industrial environments, construction sites, home care assistance). In these scenarios the robot can autonomously leverage insights inferred from implicit signals, such as the carefulness required to move an object, in order to facilitate the cooperation with the human partner. \section*{Acknowledgement} This work is supported by the CHIST-ERA (2014-2020) project InDex (Robot In-hand Dexterous manipulation).
\section{Introduction} Let $\pi: {\widetilde C} \to C$ be a double \'etale cover between smooth curves of genus $g=g(C)$ and $\widetilde g=g({\widetilde C})=2g-1$, and denote by $(P,\Xi)$ its (principally polarized) Prym variety. In his fundamental work \cite{mu}, Mumford classified the singularities of the theta divisor $\Xi$. More precisely, he considered a translation $P^+$ of the Prym variety to $\Pic^{2g-2}({\widetilde C})=\Pic^{\widetilde g-1}({\widetilde C})$, together with a canonical theta divisor $\Xi^+\subset P^+$: \begin{gather*} P^+=\{M\in\Pic^{2g-2}({\widetilde C})\mid\mathrm{Nm}_\pi(M)=\omega_C,\;h^0({\widetilde C},M)\text{ even}\}, \\ \Xi^+=\{M\in P^+\mid h^0({\widetilde C},M)\geq 2\} \end{gather*} Then every singular point of $\Xi^+$ is \textit{stable} ($M\in \Xi^{+}$ with $h^0({\widetilde C}, M)\geq4$) or \textit{exceptional} ($M=\pi^*L\otimes A \in \Xi ^{+}$ such that $h^0(C,L)\geq 2$ and $h^0({\widetilde C},A)>0$). Assume $C$ has a semicanonical pencil, that is, an even theta-characteristic $L$ with $h^0(C,L)\geq2$ (in the literature, this is also frequently referred to as a \emph{vanishing theta-null}). If $h^0({\widetilde C},\pi^*L)$ is furthermore even, then $M=\pi^*L\in\Xi^{+}$ is an example of exceptional singularity. In that case, $L$ is called an \emph{even semicanonical pencil} for the cover $\pi$, and the Prym variety $(P,\Xi)$ belongs to the divisor $\theta_{null}\subset\ensuremath{\mathcal{A}}_{g-1}$ of principally polarized abelian varieties whose theta divisor contains a singular $2$-torsion point. In the paper \cite{be_invent}, Beauville showed that the Andreotti-Mayer locus \[ \mathcal{N}_0=\set{(A,\Xi) \in \mathcal A_4 \mid \text{Sing}\,(\Xi) \text{ is non-empty} } \] in $\ensuremath{\mathcal{A}}_4$ is the union of two irreducible divisors: the (closure of the) Jacobian locus $\mathcal{J}_4$ and $\theta_{null}$. An essential tool for the proof is the extension of the Prym map $\ensuremath{\mathcal{P}}_g:\ensuremath{\mathcal{R}}_g\to\ensuremath{\mathcal{A}}_{g-1}$ to a proper map $\widetilde\ensuremath{\mathcal{P}}_g:\widetilde\ensuremath{\mathcal{R}}_g\to\ensuremath{\mathcal{A}}_{g-1}$, by considering admissible covers instead of only smooth covers. In the case $g=5$, this guarantees that every 4-dimensional principally polarized abelian variety is a Prym variety (i.e.~the dominant map $\ensuremath{\mathcal{P}}_5$ is replaced by the surjective map $\widetilde {\ensuremath{\mathcal{P}}}_5$). Then, one of the key points in Beauville's work is an identification of the coverings whose (generalized) Prym variety is contained in $\theta_{null}$. Indeed, the results in \cite[Section~7]{be_invent} together with \cite[Theorem~4.10]{be_invent} show that \[ \ensuremath{\mathcal{T}}^e=\text{(closure in $\widetilde\ensuremath{\mathcal{R}}_5$ of)}\set{ [\pi:{\widetilde C} \longrightarrow C ]\in \ensuremath{\mathcal{R}}_5 \mid \text{the cover $\pi$ has an even semicanonical pencil}} \] is irreducible and equals $\widetilde \ensuremath{\mathcal{P}}_5^{-1}(\theta_{null})$. Therefore, the irreducibility of $\theta_{null}\subset\ensuremath{\mathcal{A}}_4$ is obtained from the irreducibility of $\ensuremath{\mathcal{T}}^e$; the proof of the latter starts by noticing that \[ \ensuremath{\mathcal{T}} = \set{[C]\in \ensuremath{\mathcal{M}}_5 \mid \text{$C$ has a semicanonical pencil}} \] is an irreducible divisor of $ \ensuremath{\mathcal{M}}_5$. Now let us consider double étale covers of curves with a semicanonical pencil, in arbitrary genus. 
For a fixed $g\geq3$, let $\ensuremath{\mathcal{T}}_g\subset\ensuremath{\mathcal{M}}_g$ denote the locus of (isomorphism classes of) curves with a semicanonical pencil (i.e.~with an even, effective theta-characteristic). Note that $\ensuremath{\mathcal{T}}_g$ is the divisorial part of the locus of curves admitting a theta-characteristic of positive (projective) dimension (\cite{te2}). The general element of $\ensuremath{\mathcal{T}}_g$ has a unique such theta-characteristic (which is a semicanonical pencil $L$ with $h^0(C,L)=2$), and the pullback of $\ensuremath{\mathcal{T}}_g$ to $ \ensuremath{\mathcal{R}}_g$ decomposes as a union $\mathcal T^e_g \cup \mathcal T^o_g$ according to the parity of $h^0({\widetilde C},\pi^*L)$. In other words, the general element of $\ensuremath{\mathcal{T}}^e_g$ (resp.~$\ensuremath{\mathcal{T}}^o_g$) is a cover with an even semicanonical pencil (resp.~an odd semicanonical pencil). In view of Beauville's work, it is natural to ask whether $\ensuremath{\mathcal{T}}^e_g$ and $\ensuremath{\mathcal{T}}^o_g$ are irreducible divisors, and to ask about the behaviour of the restricted Prym maps ${\widetilde\ensuremath{\mathcal{P}}_g}|_{\ensuremath{\mathcal{T}}_g^e}$ and ${\widetilde\ensuremath{\mathcal{P}}_g}|_{\ensuremath{\mathcal{T}}_g^o}$. This paper exclusively deals with the first question, and studies the divisors $\ensuremath{\mathcal{T}}^e_g$ and $\ensuremath{\mathcal{T}}^o_g$ of even and odd semicanonical pencils. Aside from its independent interest, it provides tools for attacking the second question; a study of the restricted Prym maps ${\widetilde\ensuremath{\mathcal{P}}_g}|_{\ensuremath{\mathcal{T}}_g^e}$ and ${\widetilde\ensuremath{\mathcal{P}}_g}|_{\ensuremath{\mathcal{T}}_g^o}$ is carried out in the subsequent paper \cite{lnr}. Coming back to the first question, the divisor $\ensuremath{\mathcal{T}}_g\subset\ensuremath{\mathcal{M}}_g$ was studied by Teixidor in \cite{te}. Using the theory of limit linear series on curves of compact type developed by Eisenbud and Harris in \cite{eh}, Teixidor proved the irreducibility of $\ensuremath{\mathcal{T}}_g$ and computed the class of its closure in the Deligne-Mumford compactification $\ensuremath{\overline{\mathcal{M}}}_g$. In our case, we will work in the Deligne-Mumford compactification $\ensuremath{\overline{\mathcal{R}}}_g$ of $\ensuremath{\mathcal{R}}_g$ (first considered in \cite[Section 6]{be_invent}, during the construction of the proper Prym map). Following closely Teixidor's approach, we obtain natural analogues of her results for the two divisors of Prym semicanonical pencils: \vspace{2mm} \begin{thmIntr}\label{thmA} Let $[\ensuremath{\mathcal{T}}^e_g],[\ensuremath{\mathcal{T}}^o_g]\in\Pic(\ensuremath{\overline{\mathcal{R}}}_g)_\ensuremath{\mathbb{Q}}$ denote the classes of (the closures of) $\ensuremath{\mathcal{T}}^e_g$, $\ensuremath{\mathcal{T}}^o_g$ in the Deligne-Mumford compactification $\ensuremath{\overline{\mathcal{R}}}_g$. 
Then, the following equalities hold: \begin{align*} [\ensuremath{\mathcal{T}}^e_g]&=a \lambda -b_0'\delta_0'-b_0'' \delta_0''-b_0^{ram}\delta_0^{ram} -\sum_{i=1}^{ \lfloor g/2\rfloor} (b_i\delta_i+b_{g-i}\delta_{g-i} +b_{i:g-i}\delta_{i:g-i}), \\ [\ensuremath{\mathcal{T}}^o_g]&=c \lambda -d_0'\delta_0'-d_0'' \delta_0''-d_0^{ram}\delta_0^{ram} -\sum_{i=1}^{ \lfloor g/2\rfloor} (d_i\delta_i+d_{g-i}\delta_{g-i} +d_{i:g-i}\delta_{i:g-i}), \end{align*} where \begin{align*} &a=2^{g-3}(2^{g-1}+1), &&c=2^{2g-4},\\ &b_0'=2^{2g-7}, &&d_0'=2^{2g-7},\\ & b_0''=0, &&d_0''= 2^{2g-6}, \\ &b_0^{ram}=2^{g-5}(2^{g-1}+1), && d_0^{ram}=2^{g-5}(2^{g-1}-1),\\ & b_i=2^{g-3}(2^{g-i}-1)(2^{i-1}-1), && d_i=2^{g+i-4}(2^{g-i}-1),\\ & b_{g-i}=2^{g-3}(2^{g-i-1}-1)(2^{i}-1), &&d_{g-i}=2^{2g-i-4}(2^{i}-1), \\ &b_{i:g-i}=2^{g-3}(2^{g-1}-2^{i-1}-2^{g-i-1}+1),&& d_{i:g-i}=2^{g-3}(2^{g-1}-2^{g-i-1}-2^{i-1}). \end{align*} \end{thmIntr} \vspace{1.5mm} \begin{thmIntr}\label{thmB} For every $g\neq4$ the divisors $\ensuremath{\mathcal{T}}^e_g$ and $\ensuremath{\mathcal{T}}^o_g$ are irreducible. \end{thmIntr} A crucial role in the proofs is played by the intersection of $\ensuremath{\mathcal{T}}^e_g$ and $\ensuremath{\mathcal{T}}^o_g$ with the boundary divisors in $\ensuremath{\overline{\mathcal{R}}}_g$ of covers of reducible curves. This is the content of \autoref{boundary}. Then in \autoref{proofA} we prove \autoref{thmA} by intersecting $\ensuremath{\mathcal{T}}^e_g$ and $\ensuremath{\mathcal{T}}^o_g$ with appropriate test curves in $\ensuremath{\overline{\mathcal{R}}}_g$. The proof of \autoref{thmB} for $g\geq5$ is given in \autoref{irred}, and combines monodromy arguments with the intersection of $\ensuremath{\mathcal{T}}^e_g$ and $\ensuremath{\mathcal{T}}^o_g$ with the boundary divisor $\Delta_1\subset\ensuremath{\overline{\mathcal{R}}}_g$. We point out that the irreducibility for $g=3$ can be immediately checked in terms of hyperelliptic curves (\autoref{g=3}), whereas the irreducibility of $\ensuremath{\mathcal{T}}_4^e$ and $\ensuremath{\mathcal{T}}_4^o$ is obtained in the paper \cite{lnr} as a consequence of the study of the restricted Prym maps ${\ensuremath{\mathcal{P}}_4}|_{\ensuremath{\mathcal{T}}^e_4}$ and ${\ensuremath{\mathcal{P}}_4}|_{\ensuremath{\mathcal{T}}^o_4}$. \vspace{2mm} \textbf{Acknowledgements.} The authors developed parts of this work independently as part of their doctoral research, and they would like to thank their respective advisors, Gavril Farkas, Martí Lahoz and Joan Carles Naranjo for their help and guidance. Thanks are also due to Alessandro Verra for suggesting the computation of divisor classes as a tool for the study of the Prym map on these divisors, as well as to the anonymous referees for their detailed comments. \vspace{1mm} \section{Preliminaries} \subsection{The moduli space \texorpdfstring{$\ensuremath{\overline{\mathcal{R}}}_g$}{Rg}} This part is a brief review of the Deligne-Mumford compactification $\ensuremath{\overline{\mathcal{R}}}_g$ and its boundary divisors. We follow the presentation of \cite[Section~1]{fa-lu}; the reader is referred to it for further details. Let $\ensuremath{\mathcal{M}}_g$ be the moduli space of smooth curves of genus $g$, and let $\ensuremath{\overline{\mathcal{M}}}_g$ be its Deligne-Mumford compactification by stable curves. Following the standard notations, we denote by $\Delta_i$ ($i=0,\ldots,\lfloor g/2\rfloor$) the irreducible divisors forming the boundary $\ensuremath{\overline{\mathcal{M}}}_g\setminus\ensuremath{\mathcal{M}}_g$. 
The general point of $\Delta_0$ is an irreducible curve with a single node, whereas the general point of $\Delta_i$ (for $i\geq1$) is the union of two smooth curves of genus $i$ and $g-i$, intersecting transversely at a point. The classes $\delta_i$ of the divisors $\Delta_i$, together with the Hodge class $\lambda$, are well known to form a basis of the rational Picard group $\Pic(\ensuremath{\overline{\mathcal{M}}}_g)_\ensuremath{\mathbb{Q}}$. We denote by $\ensuremath{\mathcal{R}}_g$ the moduli space of connected double \'etale covers of smooth curves of genus $g$. In other words, $\ensuremath{\mathcal{R}}_g$ parametrizes isomorphism classes of pairs $(C,\eta)$, where $C$ is smooth of genus $g$ and $\eta\in JC_2\setminus\set{\ensuremath{\mathcal{O}}_C}$. It comes with a natural forgetful map $\pi:\ensuremath{\mathcal{R}}_g\to\ensuremath{\mathcal{M}}_g$ which is \'etale of degree $2^{2g}-1$. Then, the Deligne-Mumford compactification $\ensuremath{\overline{\mathcal{R}}}_g$ is obtained as the normalization of $\ensuremath{\overline{\mathcal{M}}}_g$ in the function field of $\ensuremath{\mathcal{R}}_g$. This gives a commutative diagram \[ \xymatrix{ \ensuremath{\mathcal{R}}_g\ar[rr]\ar[d]_{\pi}&& \ensuremath{\overline{\mathcal{R}}}_g\ar[d]\\ \ensuremath{\mathcal{M}}_g\ar[rr]&& \ensuremath{\overline{\mathcal{M}}}_g } \] where $\ensuremath{\overline{\mathcal{R}}}_g$ is normal and the morphism $\ensuremath{\overline{\mathcal{R}}}_g\to\ensuremath{\overline{\mathcal{M}}}_g$ (that we will denote by $\pi$ as well) is finite. Beauville's partial compactification $\widetilde{\ensuremath{\mathcal{R}}}_g$ by admissible covers admits a natural inclusion into $\ensuremath{\overline{\mathcal{R}}}_g$. As proved in \cite{balcasfon}, the variety $\ensuremath{\overline{\mathcal{R}}}_g$ parametrizes isomorphism classes of \textit{Prym curves} of genus $g$, that is, isomorphism classes of triples $(X,\eta,\beta)$ where: \begin{itemize}[\textbullet] \item $X$ is a quasi-stable curve of genus $g$, i.e.~$X$ is semistable and any two of its exceptional components are disjoint\footnote{Recall that a smooth rational component $E\subset X$ is called \textit{exceptional} if $\sharp E\cap\overline{X\setminus E}=2$, namely if it intersects the rest of the curve in exactly two points.}. \item $\eta\in\Pic^0(X)$ is a line bundle of total degree 0, such that $\restr{\eta}{E}=\ensuremath{\mathcal{O}}_E(1)$ for every exceptional component $E\subset X$. \item $\beta:\eta^{\otimes 2}\to\ensuremath{\mathcal{O}}_X$ is generically nonzero over each non-exceptional component of $X$. \end{itemize} In case that $\beta$ is clear from the context, by abuse of notation the Prym curve $(X,\eta,\beta)$ will be often denoted simply by $(X,\eta)$. Then the morphism $\pi:\ensuremath{\overline{\mathcal{R}}}_g\to\ensuremath{\overline{\mathcal{M}}}_g$ sends (the class of) $(X,\eta,\beta)$ to (the class of) the \textit{stable model} $\st(X)$, obtained by contraction of the exceptional components of $X$. Using pullbacks of the boundary divisors of $\ensuremath{\overline{\mathcal{M}}}_g$, the boundary $\ensuremath{\overline{\mathcal{R}}}_g\setminus\ensuremath{\mathcal{R}}_g$ admits the following description (see \cite[Examples 1.3 and 1.4]{fa-lu}): \begin{enumerate}[(1)] \item Let $(X,\eta,\beta)$ be a Prym curve, such that $\st(X)$ is the union of two smooth curves $C_i$ and $C_{g-i}$ (of respective genus $i$ and $g-i$) intersecting transversely at a point $P$. 
In such a case $X=\st(X)$, and giving a 2-torsion line bundle $\eta\in\Pic^0(X)_2$ is the same as giving a nontrivial pair $(\eta_i,\eta_{g-i})\in\left(JC_i\right)_2\times\left(JC_{g-i}\right)_2$. \noindent Then the preimage $\pi^{-1}(\Delta_i)$ decomposes as the union of three irreducible divisors (denoted by $\Delta_i$, $\Delta_{g-i}$ and $\Delta_{i:g-i}$), which are distinguished by the behaviour of the 2-torsion bundle. More concretely, their general point is a Prym curve $(X,\eta)$, where $X=C_i\cup_P C_{g-i}$ is a reducible curve as above and the pair $\eta=(\eta_i,\eta_{g-i})$ satisfies: \begin{itemize}[\textbullet] \item $\eta_{g-i}=\ensuremath{\mathcal{O}}_{C_{g-i}}$, in the case of $\Delta_i$. \item $\eta_i=\ensuremath{\mathcal{O}}_{C_i}$, in the case of $\Delta_{g-i}$. \item $\eta_i\neq\ensuremath{\mathcal{O}}_{C_i}$ and $\eta_{g-i}\neq\ensuremath{\mathcal{O}}_{C_{g-i}}$, in the case of $\Delta_{i:g-i}$. \end{itemize} \vspace{2mm} \item Let $(X,\eta,\beta)$ be a Prym curve, such that $\st(X)$ is the irreducible nodal curve obtained by identification of two points $p,q$ on a smooth curve $C$ of genus $g-1$. \noindent If $X=\st(X)$ and $\nu:C\to X$ denotes the normalization, then $\eta\in\Pic^0(X)_2$ is determined by the choice of $\eta_C=\nu^*(\eta)\in JC_2$ and an identification of the fibers $\eta_C(p)$ and $\eta_C(q)$. \begin{itemize}[\textbullet] \item If $\eta_C=\ensuremath{\mathcal{O}}_C$, there is only one possible identification of $\ensuremath{\mathcal{O}}_C(p)$ and $\ensuremath{\mathcal{O}}_C(q)$ (namely identification by $-1$) giving a nontrivial $\eta\in\Pic^0(X)_2$. The corresponding element $(X,\eta)$ may be regarded as a Wirtinger cover of $X$. \item If $\eta_C\neq\ensuremath{\mathcal{O}}_C$, for each of the $2^{2g-2}-1$ choices of $\eta_C$ there are two possible identifications of $\ensuremath{\mathcal{O}}_C(p)$ and $\ensuremath{\mathcal{O}}_C(q)$. The $2(2^{2g-2}-1)$ corresponding Prym curves $(X,\eta)$ are non-admissible covers of $X$. \end{itemize} \noindent If $X\neq\st(X)$, then $X$ is the union of $C$ with an exceptional component $E$ through the points $p$ and $q$. The line bundle $\eta\in\Pic^0(X)$ must satisfy $\restr{\eta}E=\ensuremath{\mathcal{O}}_E(1)$ and $\restr{\eta}C^{\otimes 2}=\ensuremath{\mathcal{O}}_C(-p-q)$, which gives $2^{2g-2}$ possibilities. The corresponding Prym curves $(X,\eta)$ give Beauville admissible covers of $\st(X)$. \noindent It follows that $\pi^{-1}(\Delta_0)=\Delta_0'\cup\Delta_0''\cup\Delta_0^{ram}$, where $\Delta_0'$ (resp. $\Delta_0''$, resp. $\Delta_0^{ram}$) is an irreducible divisor whose general point is a non-admissible (resp. Wirtinger, resp. Beauville admissible) cover. Moreover, $\Delta_0^{ram}$ is the ramification locus of $\pi$ (see \cite[Page 763]{fa-lu} or \cite[Section 3]{balcasfon}). \end{enumerate} In terms of divisor classes, we have equalities \[ \pi^*(\delta_i)=\delta_i+\delta_{g-i}+\delta_{i:g-i},\;\;\;\;\pi^*(\delta_0)=\delta_0'+\delta_0''+2\delta_0^{ram} \] where of course $\delta_i,\delta_{g-i},\delta_{i:g-i}$ ($1\leq i\leq\lfloor g/2\rfloor$) and $\delta_0',\delta_0'',\delta_0^{ram}$ are the classes of the boundary divisors of $\ensuremath{\overline{\mathcal{R}}}_g$. These boundary classes, together with the pullback (also denoted by $\lambda$) of the Hodge class of $\ensuremath{\overline{\mathcal{M}}}_g$, form a basis of the rational Picard group $\Pic(\ensuremath{\overline{\mathcal{R}}}_g)_\ensuremath{\mathbb{Q}}$. 
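Note, as a quick sanity check, that these counts are consistent with the degree of $\pi$: a general point of $\Delta_0$ has $2(2^{2g-2}-1)$ non-admissible preimages and one Wirtinger preimage, each counted once, and $2^{2g-2}$ Beauville admissible preimages, each counted twice since $\Delta_0^{ram}$ is the ramification locus. In total,
\[
2(2^{2g-2}-1)+1+2\cdot2^{2g-2}=2^{2g}-1=\deg\pi.
\]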
\subsection{Divisors of Prym semicanonical pencils} If $C$ is a smooth curve of genus $g\geq3$, by \emph{semicanonical pencil} on $C$ we mean an even, effective theta-characteristic. By \emph{dimension} of a theta-characteristic $L$ we mean the (projective) dimension $h^0(C,L)-1$ of the linear system $|L|$. The locus of smooth curves admitting a semicanonical pencil is a divisor in $\ensuremath{\mathcal{M}}_g$, whose irreducibility was proved in \cite[Theorem~2.4]{te}. In the same paper, the class of its closure $\ensuremath{\mathcal{T}}_g$ in $\ensuremath{\overline{\mathcal{M}}}_g$ was computed. Since the parity of theta-characteristics remains constant in families (\cite{mu2}), the pullback of $\ensuremath{\mathcal{T}}_g$ to $\ensuremath{\overline{\mathcal{R}}}_g$ decomposes as $\pi^{-1}(\ensuremath{\mathcal{T}}_g)=\ensuremath{\mathcal{T}}_{g}^e\cup\ensuremath{\mathcal{T}}_{g}^o$, where $\ensuremath{\mathcal{T}}^e_g$ (resp. $\ensuremath{\mathcal{T}}^o_g$) is the closure in $\ensuremath{\overline{\mathcal{R}}}_g$ of the set \[ \begin{aligned} &\set{(C,\eta)\in\ensuremath{\mathcal{R}}_g\mid C\text{ has a semicanonical pencil $L$ with $h^0(C,L\otimes\eta)$ even}}\\ (\text{resp. }&\set{(C,\eta)\in\ensuremath{\mathcal{R}}_g\mid C\text{ has a semicanonical pencil $L$ with $h^0(C,L\otimes\eta)$ odd}}) \end{aligned} \] Note that both $\ensuremath{\mathcal{T}}^e_g$ and $\ensuremath{\mathcal{T}}^o_g$ have pure codimension 1 in $\ensuremath{\overline{\mathcal{R}}}_g$, since their union is the pullback by a finite map of an irreducible divisor. Furthermore, the restriction \[\restr{\pi}{\ensuremath{\mathcal{T}}_{g}^e}:\ensuremath{\mathcal{T}}_{g}^e\longrightarrow\ensuremath{\mathcal{T}}_{g} \;\;\;\text{ (resp. } \restr{\pi}{\ensuremath{\mathcal{T}}_{g}^o}:\ensuremath{\mathcal{T}}_{g}^o\longrightarrow\ensuremath{\mathcal{T}}_{g}) \]is surjective and generically finite of degree $2^{g-1}(2^g+1)-1$ (resp. of degree $2^{g-1}(2^g-1)$). This follows from the fact that a general element of $\ensuremath{\mathcal{T}}_g$ has a unique semicanonical pencil (\cite[Theorem 2.16]{te2}), as well as from the number of even and odd theta-characteristics on a smooth curve. \vspace{0.5mm} \begin{ex}\label{g=3} When $g=3$ a semicanonical pencil is the same as a $g^1_2$, and thus the divisor $\ensuremath{\mathcal{T}}_3\subset\ensuremath{\overline{\mathcal{M}}}_3$ equals the hyperelliptic locus $\ensuremath{\mathcal{H}}_3$. Of course, the semicanonical pencil on every smooth curve $C\in\ensuremath{\mathcal{T}}_3$ is unique. The 63 non-trivial elements of $JC_2$ can be represented by linear combinations of the Weierstrass points $R_1,\ldots,R_8$ as follows: \begin{itemize}[\textbullet] \item Those represented as a difference of two Weierstrass points, $\eta=\ensuremath{\mathcal{O}}_C(R_i-R_j)$, form a set of $\binom{8}{2}=28$ elements. Observe that in this case the theta-characteristic $g^1_2\otimes\eta=\ensuremath{\mathcal{O}}_C(2R_j+R_i-R_j)=\ensuremath{\mathcal{O}}_C(R_i+R_j)$ is odd. \item Those expressed as a linear combination of four distinct Weierstrass points, $\eta=\ensuremath{\mathcal{O}}_C(R_i+R_j-R_k-R_l)$, form a set of $\frac{\binom{8}{4}}{2}=35$ elements\footnote{Division by 2 comes from the fact that any two complementary sets of four Weierstrass points induce the same two-torsion line bundle.}. According to the number of odd and even theta-characteristics on a genus 3 curve, in this case $g^1_2\otimes\eta$ is even. 
\end{itemize} Hence we obtain \begin{gather*} \ensuremath{\mathcal{T}}_3^o=\text{(closure of)}\set{(C,\eta)\in\ensuremath{\mathcal{R}}_3\mid C\text{ hyperelliptic, }\eta=\ensuremath{\mathcal{O}}_C(R_i-R_j)} \subset\ensuremath{\overline{\mathcal{R}}}_3\\ \ensuremath{\mathcal{T}}_3^e=\text{(closure of)}\set{(C,\eta)\in\ensuremath{\mathcal{R}}_3\mid C\text{ hyperelliptic, }\eta=\ensuremath{\mathcal{O}}_C(R_i+R_j-R_k-R_l)}\subset\ensuremath{\overline{\mathcal{R}}}_3 \end{gather*} and, since monodromy on hyperelliptic curves acts transitively on tuples of Weierstrass points, it turns out that both divisors $\ensuremath{\mathcal{T}}^o_3$ and $\ensuremath{\mathcal{T}}^e_3$ are irreducible. \end{ex} \vspace{1mm} \section{Proof of \texorpdfstring{\autoref{thmA}}{Theorem A}} \label{proofA} We denote by $[\ensuremath{\mathcal{T}}_{g}^e],[\ensuremath{\mathcal{T}}_{g}^o]\in\Pic(\ensuremath{\overline{\mathcal{R}}}_g)_\mathbb{Q}$ the classes in $\ensuremath{\overline{\mathcal{R}}}_g$ of the divisors $\ensuremath{\mathcal{T}}_{g}^e$ and $\ensuremath{\mathcal{T}}_{g}^o$. This section is entirely devoted to proving \autoref{thmA}. First of all, observe that the pullback of the class $[\ensuremath{\mathcal{T}}_{g}]\in\Pic(\ensuremath{\overline{\mathcal{M}}}_g)_\mathbb{Q}$ (computed in \cite[Proposition~3.1]{te}) expresses $[\ensuremath{\mathcal{T}}_{g}^e]+[\ensuremath{\mathcal{T}}_{g}^o]$ as \begin{equation*} \pi^*[\ensuremath{\mathcal{T}}_g] = 2^{g-3} \left( (2^g+1) \lambda -2^{g-3}( \delta_0' +\delta_0''+2\delta_0^{ram})-\sum_{i=1}^{\lfloor g/2\rfloor} (2^{g-i}-1)(2^i-1)(\delta_i+\delta_{g-i}+\delta_{i:g-i}) \right). \end{equation*} This relation, together with the linear independence of the basic classes considered in $\ensuremath{\overline{\mathcal{R}}}_g$, simplifies the computations: if we know a coefficient for one of the divisors, then we also know the coefficient corresponding to the same basic class for the other divisor. Keeping this in mind, the coefficients of \autoref{thmA} can be determined by essentially following three steps: \begin{enumerate}[(1)] \item\label{step1} The pushforward $\pi_*[\ensuremath{\mathcal{T}}_{g}^e]$ easily gives the coefficient $a$ (hence $c$), as well as a relation between $b_0',b_0''$ and $b_0^{ram}$ (hence between $d_0',d_0''$ and $d_0^{ram}$). \item\label{step2} We adapt an argument of Teixidor \cite{te} to compute the coefficients $b_i,b_{g-i}$ and $b_{i:g-i}$ for every $i\geq1$: first we describe the intersection of $\ensuremath{\mathcal{T}}_{g}^e$ with the boundary divisors $\Delta_i,\Delta_{g-i}$ and $\Delta_{i:g-i}$, and then we intersect $\ensuremath{\mathcal{T}}_{g}^e$ with certain test curves. \item\label{step3} Finally, $d_0'$ and $d_0''$ are obtained by intersecting $\ensuremath{\mathcal{T}}_{g}^o$ with test curves contained inside $\Delta_0'$ and $\Delta_0''$ respectively. The relation obtained in \eqref{step1} determines $d_0^{ram}$ as well. \end{enumerate} \vspace{3.5mm} For step~\eqref{step1}, note that on the one hand \[ \pi_*[\ensuremath{\mathcal{T}}^e_g]=\deg (\ensuremath{\mathcal{T}}^e_g \to \ensuremath{\mathcal{T}}_g)\cdot[\ensuremath{\mathcal{T}}_{g}]=(2^{g-1}(2^g+1)-1)2^{g-3}\left((2^g+1) \lambda -2^{g-3}\delta_0-\ldots \right) \] where $\ldots$ is an expression involving only the classes $\delta_1,\ldots,\delta_{\lfloor g/2\rfloor}$.
On the other hand \[ \pi_*[\ensuremath{\mathcal{T}}^e_g]=a\pi_*\lambda-b_0'\pi_*\delta_0'-b_0'' \pi_*\delta_0''-b_0^{ram}\pi_*\delta_0^{ram} -\sum_{i=1}^{\lfloor g/2\rfloor} (b_i\pi_*\delta_i+b_{g-i}\pi_*\delta_{g-i} +b_{i:g-i}\pi_*\delta_{i:g-i}) \] and, since $\pi_*\lambda=\pi_*(\pi^*\lambda)=\deg\pi\cdot\lambda$ and the divisors $\Delta_0',\Delta_0''$ and $\Delta_0^{ram}$ of $\ensuremath{\overline{\mathcal{R}}}_g$ have respective degrees $2(2^{2g-2}-1),1$ and $2^{2g-2}$ over $\Delta_0\subset\ensuremath{\overline{\mathcal{M}}}_g$, we obtain \[ \pi_*[\ensuremath{\mathcal{T}}_{g}^e]=a(2^{2g}-1) \lambda -(2(2^{2g-2}-1)b_0'+b_0''+2^{2g-2}b_0^{ram})\delta_0+\ldots \] where $\ldots$ again denotes a linear combination of $\delta_1,\ldots,\delta_{\lfloor g/2\rfloor}$. Using that $\lambda,\delta_0,\ldots,\delta_{\lfloor g/2\rfloor}\in\Pic(\ensuremath{\overline{\mathcal{M}}}_g)_\mathbb{Q}$ are linearly independent, we can compare the coefficients of $\lambda$ and $\delta_0$. Comparison for $\lambda$ yields \[ a=\frac{(2^{g-1}(2^g+1)-1)2^{g-3}(2^g+1)}{2^{2g}-1}=2^{g-3}(2^{g-1}+1), \] therefore $c=2^{2g-4}$ due to the relation $a+c=2^{g-3}(2^g+1)$. Comparison for $\delta_0$ gives \[ (2^{2g-1}-2)b_0'+b_0''+2^{2g-2}b_0^{ram} =2^{2g-6}(2^{g-1}(2^g+1)-1), \] or equivalently \[ (2^{2g-1}-2)d_0'+d_0''+2^{2g-2}d_0^{ram} =2^{3g-7}(2^g-1). \] \vspace{4mm} In step~\eqref{step2}, the key point is the following description of the intersection of $\ensuremath{\mathcal{T}}_{g}^e$ and $\ensuremath{\mathcal{T}}_{g}^o$ with the preimages $\pi^{-1}(\Delta_i)$. It is nothing but an adaptation of \cite[Proposition~1.2]{te}: \begin{prop} \label{boundary} For $i\geq1$, the general point of the intersection $\ensuremath{\mathcal{T}}_{g}^e\cap\pi^{-1}(\Delta_i)$ (resp. $\ensuremath{\mathcal{T}}_{g}^o\cap\pi^{-1}(\Delta_i)$) is a pair $(C,\eta)$ where: \begin{enumerate}[{\rm(i)}] \item\label{bound:item1} The curve $C$ is the union at a point $P$ of two smooth curves $C_i$ and $C_{g-i}$ of respective genera $i$ and $g-i$, and satisfies one of these four conditions ($j=i,g-i$): \begin{enumerate} \item[$\alpha_j)$] $C_j$ has a 1-dimensional (even) theta-characteristic $L_j$. In this case, the 1-dimensional limit theta-characteristics on $C$ are determined by the aspects $|L_j|+(g-j)P$ on $C_j$ and $|L_{g-j}+2P|+(j-2)P$ on $C_{g-j}$, where $L_{g-j}$ is any even theta-characteristic on $C_{g-j}$. \item[$\beta_j)$] $P$ is in the support of an effective (0-dimensional) theta-characteristic $L_j$ on $C_j$. The aspects of the 1-dimensional limit theta-characteristics on $C$ are $|L_j+P|+(g-j-1)P$ on $C_j$ and $|L_{g-j}+2P|+(j-2)P$ on $C_{g-j}$, where $L_{g-j}$ is any odd theta-characteristic on $C_{g-j}$. \end{enumerate} \item\label{bound:item2} $\eta=(\eta_i,\eta_{g-i})$ is a non-trivial 2-torsion line bundle on $C$, such that the numbers $h^0(C_i, L_i\otimes\eta_i)$ and $h^0(C_{g-i}, L_{g-i}\otimes\eta_{g-i})$ have the same (resp. opposite) parity. \end{enumerate} \end{prop} \begin{proof} First of all, note that item~\eqref{bound:item1} describes the general element of the intersection $\ensuremath{\mathcal{T}}_{g}\cap\Delta_i$ in $\ensuremath{\overline{\mathcal{M}}}_g$: this is exactly \cite[Proposition~1.2]{te}. Moreover, if $(C,\eta)\in\ensuremath{\mathcal{T}}_{g}^e\cap\pi^{-1}(\Delta_i)$ (resp.
$(C,\eta)\in\ensuremath{\mathcal{T}}_{g}^o\cap\pi^{-1}(\Delta_i)$), then there exists (a germ of) a 1-dimensional family $(\ensuremath{\mathcal{C}}\to S,H,\mathcal{L})$ of Prym curves $(\ensuremath{\mathcal{C}}_s,H_s)$ endowed with a 1-dimensional theta-characteristic $\mathcal{L}_s$, such that: \begin{enumerate} \item For every $s\neq0$, $(\ensuremath{\mathcal{C}}_s,H_s)$ is a smooth Prym curve such that $\mathcal{L}_s\otimes H_s$ is an even (resp. odd) theta-characteristic on $\ensuremath{\mathcal{C}}_s$. \item The family $(\ensuremath{\mathcal{C}}\to S,H)$ specializes to $(C,\eta)=(\ensuremath{\mathcal{C}}_0,H_0)$. \end{enumerate} The possible aspects of the 1-dimensional limit series of $\mathcal{L}$ on $C=\mathcal{C}_0$ are described by item~\eqref{bound:item1}. Now the result follows from the fact that, on the one hand, the aspects of the limit series of $\mathcal{L}\otimes H$ on $C=\mathcal{C}_0$ are the same aspects as the limit of $\mathcal{L}$, but twisted by $\eta=H_0$; and on the other hand, the parity of a theta-characteristic on the reducible curve $C$ is the product of the parities of the theta-characteristics induced on $C_i$ and $C_{g-i}$. \end{proof} \vspace{1.5mm} \begin{rem} For a fixed general element $C$ of the intersection $\ensuremath{\mathcal{T}}_{g}\cap\Delta_i$ (i.e.~a curve $C$ satisfying the condition \eqref{bound:item1} above), the number of $\eta=(\eta_i,\eta_{g-i})$ such that $(C,\eta)\in\ensuremath{\mathcal{T}}_{g}^e$ can be easily computed. Indeed, the number of $\eta$ giving parities (even,even) is the product of the number of even theta-characteristics on $C_i$ and the number of even theta-characteristics on $C_{g-i}$: \[ 2^{i-1}(2^i+1)2^{g-i-1}(2^{g-i}+1)=2^{g-2}(2^i+1)(2^{g-i}+1). \] Similarly, the number of $\eta$ giving parities (odd,odd) is \[ 2^{i-1}(2^i-1)2^{g-i-1}(2^{g-i}-1)=2^{g-2}(2^i-1)(2^{g-i}-1). \] From all these, we have to discard the trivial bundle $(\ensuremath{\mathcal{O}}_{C_i},\ensuremath{\mathcal{O}}_{C_{g-i}})$. Hence the total number of $\eta$ (both even and odd, counted together) is \[ 2^{g-2}(2^i+1)(2^{g-i}+1)+2^{g-2}(2^i-1)(2^{g-i}-1)-1=2^{g-1}(2^g+1)-1, \] which indeed coincides with the degree of $\ensuremath{\mathcal{T}}_{g}^e$ over $\ensuremath{\mathcal{T}}_{g}$. Of course the configuration of the fiber $\restr{\pi}{\ensuremath{\mathcal{T}}_{g}^e}^{-1}(C)$ along the divisors $\Delta_i$, $\Delta_{g-i}$ and $\Delta_{i:g-i}$ will depend on whether $C$ satisfies $\alpha_j)$ or $\beta_j)$. \end{rem} \vspace{1.5mm} \begin{lem} \label{oddtc} If $C$ is a smooth curve of genus $g$ and $\eta\in JC_2$ is a non-trivial 2-torsion line bundle, then there are exactly $2^{g-1}(2^{g-1}-1)$ odd theta-characteristics $L$ on $C$ such that $L\otimes\eta$ is also odd. \end{lem} \begin{proof} This can be checked, for example, by considering how the group $JC_2$ of $2$-torsion line bundles acts on the set $S_g(C)$ of theta characteristics. The associated difference map \[ S_g(C)\times S_g(C)\longrightarrow JC_2,\quad(M,N)\longmapsto M\otimes N^{-1} \] can be restricted to the set of pairs of non-isomorphic odd theta-characteristics, that is, \[ S^-_g(C)\times S^-_g(C)-\Delta\longrightarrow JC_2-\{\mathcal{O}_C\}. \] Since $\#S^-_g(C)=2^{g-1}(2^g-1)$ and $\#JC_2=2^{2g}$, the fibers of this restriction have order \[ \#S^-_g(C)\cdot(\#S^-_g(C)-1)\cdot(\#JC_2-1)^{-1} =2^{g-1}(2^{g-1}-1), \] which reflects the number of odd theta-characteristics $L$ such that $L\otimes\eta$ is also odd. 
\end{proof} \vspace{1.5mm} Now, given an integer $i \geq1$, we proceed to compute the coefficients $b_i$, $b_{g-i}$ and $b_{i:g-i}$ of the class $[\ensuremath{\mathcal{T}}_{g}^e]$. We follow the argument in \cite[Proposition~3.1]{te}. Fix two smooth curves $C_i$ and $C_{g-i}$ of respective genera $i$ and $g-i$ having no theta-characteristic of positive dimension, as well as a point $p\in C_i$ lying in the support of no effective theta-characteristic. We denote by $F$ the curve (isomorphic to $C_{g-i}$ itself) in $\Delta_i\subset\ensuremath{\overline{\mathcal{M}}}_g$, obtained by identifying $p$ with a variable point $q\in C_{g-i}$. This curve has the following intersection numbers with the basic divisor classes of $\ensuremath{\overline{\mathcal{M}}}_g$: \[ F\cdot\lambda=0,\;F\cdot\delta_j=0\text{ for }j\neq i,\;F\cdot\delta_i=-2(g-i-1) \] (for a justification of these intersection numbers, see \cite[page~81]{ha-mu}). Since the curve $F\subset\ensuremath{\overline{\mathcal{M}}}_g$ does not intersect the branch locus of the morphism $\pi$, it follows that the preimage $\pi^{-1}(F)$ has $2^{2g}-1$ connected components; each of them is isomorphic to $F$, and corresponds to the choice of a pair $\eta=(\eta_i,\eta_{g-i})$ of 2-torsion line bundles on $C_i$ and $C_{g-i}$ being not simultaneously trivial. Let $\widetilde{F_i}$ be one of the components of $\pi^{-1}(F)$ contained in the divisor $\Delta_i$ of $\ensuremath{\overline{\mathcal{R}}}_g$; it is attached to an element $\eta=(\eta_i,\ensuremath{\mathcal{O}}_{C_{g-i}})$, for a fixed non-trivial $\eta_i\in (JC_i)_2$. On the one hand, clearly $\delta_i$ is the only basic divisor class of $\ensuremath{\overline{\mathcal{R}}}_g$ that intersects $\widetilde{F_i}$. The projection formula then says that the number $\widetilde{F_i}\cdot\delta_i$ in $\ensuremath{\overline{\mathcal{R}}}_g$ equals the intersection $F\cdot\delta_i=-2(g-i-1)$ in $\ensuremath{\overline{\mathcal{M}}}_g$. Therefore, \[ \widetilde{F_i}\cdot[\ensuremath{\mathcal{T}}_{g}^e]=\widetilde{F_i}\cdot(a\lambda-b_0'\delta_0'-\ldots)=2(g-i-1)b_i. \] On the other hand, according to \autoref{boundary} an element $(C,\eta)\in\widetilde{F_i}$ belongs to $\ensuremath{\mathcal{T}}_{g}^e$ if and only if the two following conditions are satisfied: \begin{enumerate}[a)] \item\label{conditiona} The point $q\in C_{g-i}$ that is identified with $p$ lies in the support of an effective theta-characteristic $L_{g-i}$. That is, $C$ satisfies $\beta_{g-i})$. \item\label{conditionb} The odd theta-characteristic $L_i$ of $C_i$, when twisted by $\eta_i$, remains odd. \end{enumerate} This gives the intersection number \[ \widetilde{F_i}\cdot[\ensuremath{\mathcal{T}}_{g}^e]=(g-i-1)2^{g-i-1}(2^{g-i}-1)2^{i-1}(2^{i-1}-1), \] where we use \autoref{oddtc} to count the possible theta-characteristics $L_i$. Comparing both expressions for $\widetilde{F_i}\cdot[\ensuremath{\mathcal{T}}_{g}^e]$, it follows that $b_i=2^{g-3}(2^{g-i}-1)(2^{i-1}-1)$. With a similar argument (considering a connected component of $\pi^{-1}(F)$ contained in $\Delta_{g-i}$ or $\Delta_{i:g-i}$), one can find the numbers \[ b_{g-i}=2^{g-3}(2^{g-i-1}-1)(2^i-1),\;\;b_{i:g-i}=2^{g-3}(2^{g-1}-2^{i-1}-2^{g-i-1}+1). \] \vspace{1mm} \begin{rem}\label{transvers} The transversality of these intersections can be shown by looking at the scheme $X^e$ parametrizing pairs $((C,\eta),L)$, where $(C,\eta)$ is a Prym curve and $L$ is a semicanonical pencil on $C$ such that $L\otimes\eta$ is even.
If we restrict the forgetful map $X^e\to\ensuremath{\mathcal{T}}_{g}^e$ to the component $\widetilde{F_i}$, we obtain a scheme $\mathcal{X}\to{\ensuremath{\mathcal{T}}_{g}^e}|_{\widetilde{F}_i}$ which is, by the above discussion, isomorphic to the scheme $\mathfrak{J}^1_{g-1}(\widetilde{F_i})$ of limit linear series of type $\mathfrak{g}^1_{g-1}$ on Prym curves $(C,\eta)\in\widetilde{F_i}$ satisfying conditions \eqref{conditiona} and \eqref{conditionb}. Following the description of this moduli space given in \cite[Theorem~3.3]{eh}, we see that the scheme $\mathfrak{J}^1_{g-1}(\widetilde{F_i})$ splits as the product of two reduced $0$-dimensional schemes, namely \[ \{(L_{g-i},\,q)\textrm{ as in }\eqref{conditiona}\}\times\{L_i\textrm{ as in }\eqref{conditionb}\}. \] Therefore $\mathfrak{J}^1_{g-1}(\widetilde{F_i})\cong\mathcal{X}\to{\ensuremath{\mathcal{T}}_{g}^e}|_{\widetilde{F}_i}$ is everywhere reduced and the intersection between $\widetilde{F_i}$ and $\ensuremath{\mathcal{T}}_{g}^e$ is transverse. A breakdown of this argument may be found in \cite[Theorem~2.2]{fa}. \end{rem} \vspace{1mm} Now we proceed with step~\eqref{step3}. We will determine the constants $d_0',d_0'',d_0^{ram}$ of the class $[\ensuremath{\mathcal{T}}^o_g]$ by using the test curve of \cite[Example~3.137]{ha-mo}. Fix a general smooth curve $D$ of genus $g-1$, with a fixed general point $p\in D$. Identifying $p$ with a moving point $q\in D$, we get a curve $G$ (isomorphic to $D$) which lies in $\Delta_0\subset\overline{\mathcal M}_g$. As explained in \cite{ha-mo}, the following equalities hold: \[ G\cdot\lambda=0, G\cdot\delta_0=2-2g, G\cdot\delta_1=1, G\cdot\delta_i=0\text{ for }i\geq2, \] where the intersection of $G$ and $\Delta_1$ occurs when $q$ approaches $p$; in that case the curve becomes reducible, having $D$ and a rational nodal curve as components. Combining this information with the known divisor class $[\ensuremath{\mathcal{T}}_{g}]$ in $\ensuremath{\overline{\mathcal{M}}}_g$, we have \[ G\cdot[\ensuremath{\mathcal{T}}_{g}]=2^{g-3}((g-3)\cdot2^{g-2}+1). \] In order to compute $d_0''$, let $\widetilde{G}''$ be the connected component of $\pi^{-1}(G)$ obtained by attaching to every curve $C=D_{pq}$ the 2-torsion line bundle $e=(\ensuremath{\mathcal{O}}_D)_{-1}$ (i.e.~$\ensuremath{\mathcal{O}}_D$ glued by -1 at the points $p,q$). Indeed $e$ is well defined along the family $G$, so $\widetilde{G}''$ makes sense and is isomorphic to $G$. Then: \begin{itemize}[\textbullet] \item By the projection formula, $\widetilde{G}''\cdot\lambda=0$. \item Again by projection, $\widetilde{G}''\cdot(\pi^*\delta_0)=2-2g$. Actually, since $\widetilde{G}''\subset \Delta_0''$ and $\widetilde{G}''$ intersects neither $\Delta_0'$ nor $\Delta_0^{ram}$, the following equalities hold: \[ \widetilde{G}''\cdot\delta_0''=2-2g, \;\; \widetilde{G}''\cdot\delta_0'=0=\widetilde{G}''\cdot\delta_0^{ram}. \] \item We have $\widetilde{G}''\cdot(\pi^*\delta_1)=1$, with $\widetilde{G}''\cdot\delta_1=1$ and $\widetilde{G}''\cdot\delta_{g-1}=0=\widetilde{G}''\cdot\delta_{1:g-1}$. \noindent Indeed, the intersection $G\cap\Delta_1$ occurs when $p=q$; for that curve, the 2-torsion that we consider is trivial on $D$ but not on the rational component. Hence the lift to $\widetilde{G}''$ of the intersection point $G\cap\Delta_1$ gives a point in $\widetilde{G}''\cap\Delta_1$. \item It is clear that $\widetilde{G}''\cdot\delta_i=\widetilde{G}''\cdot\delta_{g-i}=\widetilde{G}''\cdot\delta_{i:g-i}=0$ for $i\geq2$. 
\item Since twisting by $e$ changes the parity of any theta-characteristic in any curve of the family $G$ by \cite[Theorems~2.12 and 2.14]{ha}, it follows that all the intersection points of $G$ and $\ensuremath{\mathcal{T}}_{g}$ lift to points of $\widetilde{G}''\cap\ensuremath{\mathcal{T}}_{g}^o$. \end{itemize} All in all, we have \[ 2^{g-3}((g-3)\cdot2^{g-2}+1)=\widetilde{G}''\cdot [\ensuremath{\mathcal{T}}^o_g]=(2g-2)d_0''-2^{g-3}(2^{g-1}-1) \] and solving the equation we obtain $d_0''=2^{2g-6}$. For the computation of $d_0'$, we consider $\widetilde{G}'=\pi^{-1}(G)\cap\Delta_0'$ in $\ensuremath{\overline{\mathcal{R}}}_g$. Note that for an element $(C=D_{pq},\eta)\in\widetilde{G}'$, $\eta$ is obtained by gluing a nontrivial 2-torsion line bundle on $D$ at the points $p,q$. Then: \begin{itemize}[\textbullet] \item $\widetilde{G}'\cdot\lambda=0$ by the projection formula. \item Again by projection, $\widetilde{G}'\cdot(\pi^*\delta_0)=\deg(\widetilde{G}' \to G)(G\cdot\delta_0)=2(2-2g)(2^{2g-2}-1)$. Moreover, since $\widetilde{G}'\subset \Delta_0'$ intersects neither $\Delta_0''$ nor $\Delta_0^{ram}$ it follows that \[ \widetilde{G}'\cdot\delta_0'=2(2-2g)(2^{2g-2}-1), \;\;\widetilde{G}'\cdot\delta_0''=0=\widetilde{G}'\cdot\delta_0^{ram}. \] \item $\widetilde{G}'\cdot(\pi^*\delta_1)=\deg(\widetilde{G}' \to G)(G\cdot\delta_1)=2(2^{2g-2}-1)$. We claim that $\widetilde{G}'\cdot\delta_1=0$ and $\widetilde{G}'\cdot\delta_{g-1}=2^{2g-2}-1=\widetilde{G}'\cdot\delta_{1:g-1}$. \noindent Indeed, $G\cap\Delta_1$ occurs when $p=q$; when such a point is lifted to $\widetilde{G}'$, the 2-torsion is nontrivial on $D$ (by construction). This gives $\widetilde{G}'\cdot\delta_1=0$. \noindent Moreover, triviality on the rational nodal component will depend on which of the two possible gluings of the 2-torsion on $D$ we are taking; in any case, since $\widetilde{G}'=\pi^{-1}(G)\cap\Delta_0'$ considers simultaneously all possible gluings of all possible non-trivial 2-torsion line bundles on $D$, we have $\widetilde{G}'\cdot\delta_{g-1}=\widetilde{G}'\cdot\delta_{1:g-1}$. This proves the claim. \item Of course, $\widetilde{G}'\cdot(\pi^*\delta_i)=\widetilde{G}'\cdot\delta_{g-i}=\widetilde{G}'\cdot\delta_{i:g-i}=0$ whenever $i\geq2$. \item Finally, we use again that the parity of a theta-characteristic on a nodal curve of the family $G$ is changed when twisted by $e=(\ensuremath{\mathcal{O}}_D)_{-1}$. Since the two possible gluings of a non-trivial 2-torsion bundle on $D$ precisely differ by $e$, the intersection numbers $\widetilde{G}'\cdot[\ensuremath{\mathcal{T}}_{g}^e]$ and $\widetilde{G}'\cdot[\ensuremath{\mathcal{T}}_{g}^o]$ have to coincide, and at the same time add up to the total \[ \widetilde{G}'\cdot(\pi^*[\ensuremath{\mathcal{T}}_{g}])=\deg(\widetilde{G}' \to G)(G\cdot[\ensuremath{\mathcal{T}}_{g}])=2(2^{2g-2}-1)\cdot 2^{g-3}((g-3)\cdot2^{g-2}+1) \] by the projection formula. That is, \[ \widetilde{G}'\cdot[\ensuremath{\mathcal{T}}_{g}^e]=\widetilde{G}'\cdot[\ensuremath{\mathcal{T}}_{g}^o]=(2^{2g-2}-1)\cdot2^{g-3}((g-3)\cdot2^{g-2}+1). \] \end{itemize} Putting this together with the coefficients $d_{g-1}=2^{2g-5}$ and $d_{1:g-1}=2^{g-3}(2^{g-2}-1)$ obtained in step~\eqref{step2}, we get \begin{align*} (2^{2g-2}-1)\cdot2^{g-3}((g-3)\cdot2^{g-2}+1)&=\widetilde{G}'\cdot[\ensuremath{\mathcal{T}}^o_g]=\\ =2(2g-2)(2^{2g-2}-1)d_0'&-2^{2g-5}(2^{2g-2}-1)-2^{g-3}(2^{g-2}-1)(2^{2g-2}-1) \end{align*} and therefore $d_0'=2^{2g-7}$. 
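As a consistency check, the coefficients found in step~\eqref{step2} are compatible with the pullback relation $[\ensuremath{\mathcal{T}}^e_g]+[\ensuremath{\mathcal{T}}^o_g]=\pi^*[\ensuremath{\mathcal{T}}_g]$ recalled at the beginning of this section; for instance, for every $i\geq1$,
\[
b_i+d_i=2^{g-3}(2^{g-i}-1)\bigl((2^{i-1}-1)+2^{i-1}\bigr)=2^{g-3}(2^{g-i}-1)(2^i-1),
\]
which is precisely the coefficient of $\delta_i$ in $\pi^*[\ensuremath{\mathcal{T}}_g]$.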
Finally, to compute $d_0^{ram}$ we simply combine the relation \[ (2^{2g-1}-2)d_0'+d_0''+2^{2g-2}d_0^{ram} =2^{g-1}(2^g-1)2^{2g-6} \] obtained in step~\eqref{step1} with the coefficients $d_0',d_0''$ just found, to obtain $d_0^{ram}=2^{g-5}(2^{g-1}-1)$. This concludes step~\eqref{step3} and hence the proof of \autoref{thmA}. \begin{rem} The divisor $\ensuremath{\mathcal{T}}_g$ has a more natural interpretation in the compactification of the moduli space $\ensuremath{\mathcal{S}}_g^{+}$ of even spin curves (i.e.~curves equipped with an even theta-characteristic). In the same way, it would be preferable to discuss the divisors $\ensuremath{\mathcal{T}}_{g}^e$ and $\ensuremath{\mathcal{T}}_{g}^o$ in a space of curves endowed with both a Prym and a spin structure. In particular, if a good compactification of $\ensuremath{\mathcal{R}}_g\times_{\ensuremath{\mathcal{M}}_g}\ensuremath{\mathcal{S}}_g^{+}$ were constructed and studied, then the divisor classes of $\ensuremath{\mathcal{T}}_{g}^e$ and $\ensuremath{\mathcal{T}}_{g}^o$ could also be derived from the diagram \[ \xymatrix{ \ensuremath{\mathcal{R}}_g&\ensuremath{\mathcal{R}}_g\times_{\ensuremath{\mathcal{M}}_g}\ensuremath{\mathcal{S}}_g^{+}\ar[l]\ar[r]&\ensuremath{\mathcal{S}}_g^{+} } \] and the fact that the class of (the closure in $\ensuremath{\overline{\mathcal{S}}}_g^{+}$ of) the divisor \[ \text{}\set{(C,L)\in \ensuremath{\mathcal{S}}_g^{+} \mid \text{$L$ is a semicanonical pencil on $C$}} \] was computed by Farkas in \cite[Theorem~0.2]{fa}. Following the ideas of \cite{ser}, a candidate space for such a compactification is proposed in \cite[Section~2.4]{mphd}, although it remains to check that this space is indeed a smooth and proper Deligne-Mumford stack. Under the assumption that it is, a study of its boundary reveals the same expressions obtained in \autoref{thmA}. Further details can be found in \cite{mphd}. \end{rem} \vspace{2mm} \section{Proof of \texorpdfstring{\autoref{thmB}}{Theorem B}}\label{irred} In this section we study the irreducibility of the divisors $\ensuremath{\mathcal{T}}_{g}^o$ and $\ensuremath{\mathcal{T}}_{g}^e$. Recall that for $g=3$, we already saw in \autoref{g=3} that the divisors $\ensuremath{\mathcal{T}}_{3}^o$ and $\ensuremath{\mathcal{T}}_{3}^e$ are irreducible. In the general case ($g\geq5$), our arguments are essentially an adaptation of those of Teixidor in \cite[Section~2]{te}, used to prove the irreducibility of $\ensuremath{\mathcal{T}}_{g}$ in $\ensuremath{\overline{\mathcal{M}}}_g$. The idea for proving the irreducibility of $\ensuremath{\mathcal{T}}_{g}^o$ is the following (the proof for $\ensuremath{\mathcal{T}}_{g}^e$ will be similar, but some simplifications will arise). By using \autoref{boundary}, first we will fix a Prym curve $(C,\eta)$ (degeneration of smooth hyperelliptic ones) lying in all the irreducible components of the intersection $\ensuremath{\mathcal{T}}^o_g\cap\Delta_1$. This reduces the problem to the local irreducibility of $\ensuremath{\mathcal{T}}^o_g$ in a neighborhood of $(C,\eta)$ (after checking that every irreducible component of $\ensuremath{\mathcal{T}}^o_g$ intersects $\Delta_1$). For the proof of the local irreducibility of $\ensuremath{\mathcal{T}}^o_g$, we can take advantage of the scheme of pairs $((C,\eta),L)$ introduced in \autoref{transvers} and use the following observation: \vspace{1mm} \begin{rem}\label{irredneigh} In a neighborhood of a given point, the local irreducibility of $\ensuremath{\mathcal{T}}_{g}^o$ (resp.
$\ensuremath{\mathcal{T}}_{g}^e$) is implied by the local irreducibility of the scheme $X^o$ (resp. $X^e$) parametrizing pairs $((C,\eta),L)$, where $(C,\eta)$ is a Prym curve and $L$ is a semicanonical pencil on $C$ such that $L\otimes\eta$ is odd (resp. even). This follows from the surjectivity of the forgetful map $X^o\to\ensuremath{\mathcal{T}}_{g}^o$ (resp. $X^e\to\ensuremath{\mathcal{T}}_{g}^e$). \end{rem} Then the local irreducibility of $X^o$ (near our fixed $(C,\eta)$) will be argued by showing that monodromy interchanges the (limit) semicanonical pencils on $C$ that become odd when twisted by the $2$-torsion bundle $\eta$. Let us recall, for later use in this monodromy argument, some features of theta-characteristics on hyperelliptic curves: \vspace{1mm} \begin{rem}\label{hyp} Let $C$ be a smooth hyperelliptic curve of genus $g$, with Weierstrass points $R_1,\ldots,R_{2g+2}$. Then, it is well known (see e.g.~\cite[Proposition~6.1]{mutata}) that the theta-characteristics on $C$ have the form $r\cdot g^1_2+S$, $r$ being its dimension (with $-1\leq r\leq[\frac{g-1}{2}]$) and $S$ being the fixed part of the linear system (which consists of $g-1-2r$ distinct Weierstrass points). Moreover, given a 2-torsion line bundle of the form $\eta=\ensuremath{\mathcal{O}}_C(R_i-R_j)$, theta-characteristics changing their parity when twisted by $\eta$ are exactly those for which $R_i,R_j\in S$ (the dimension increases by 1) or $R_i,R_j\notin S$ (the dimension decreases by 1). \end{rem} For the proof of \autoref{thmB} we also need the following result, which will guarantee that every irreducible component of $\ensuremath{\mathcal{T}}^o_g$ and $\ensuremath{\mathcal{T}}^e_g$ intersects the boundary divisor $\Delta_1\subset\ensuremath{\overline{\mathcal{R}}}_g$: \begin{lem}\label{divboundary} Let $\ensuremath{\mathcal{D}}\subset\ensuremath{\mathcal{R}}_g$ be any divisor, where $g\geq5$. Then the closure $\ensuremath{\overline{\mathcal{D}}}\subset\ensuremath{\overline{\mathcal{R}}}_g$ intersects $\Delta_1$ and $\Delta_{g-1}$. \end{lem} \begin{proof} We borrow the construction from \cite[Section~4]{mnp}, where (a stronger version of) the corresponding result for divisors in $\ensuremath{\mathcal{M}}_g$ is proved. Fix a complete integral curve $B\subset\ensuremath{\mathcal{M}}_{g-2}$ (whose existence is guaranteed by the assumption $g\geq5$), two elliptic curves $E_1,E_2$ and a certain 2-torsion element $\eta\in JE_1\setminus\{0\}$. If $\Gamma_b$ denotes the smooth curve of genus $g-2$ corresponding to $b\in B$, one defines a family of Prym curves parametrized by $\Gamma_b^2$ as follows. If $(p_1,p_2)\in\Gamma_b^2$ is a pair of distinct points, glue to $\Gamma_b$ the curves $E_1$ and $E_2$ at the respective points $p_1$ and $p_2$ (this is independent of the chosen point on the elliptic curves). To this curve attach a 2-torsion bundle being trivial on $\Gamma_b$ and $E_2$, and restricting to $\eta$ on $E_1$. To an element $(p,p)\in\Delta_{\Gamma_b^2}\subset\Gamma_b^2$, we attach the curve obtained by gluing a $\ensuremath{\mathbb{P}}^1$ to $\Gamma_b$ at the point $p$, and then $E_1,E_2$ are glued to two other points in $\ensuremath{\mathbb{P}}^1$. Of course, the 2-torsion bundle restricts to $\eta$ on $E_1$, and is trivial on the remaining components. Moving $b$ in $B$, this construction gives a complete threefold $T=\underset{b\in B}{\bigcup}\Gamma_b^2$ contained in $\Delta_1\cap\Delta_{g-1}$. 
Let also $S=\underset{b\in B}{\bigcup}\Delta_{\Gamma_b^2}$ be the surface in $T$ given by the union of all the diagonals; it is the intersection of $T$ with $\Delta_2$. Then, the following statements hold: \begin{enumerate} \item $\restr{\delta_1}{S}=0$ and $\restr{\delta_{g-1}}{S}=0$ (the proof of \cite[Lemma~4.2]{mnp} is easily translated to our setting). \item $\restr{\lambda}{\Delta_{\Gamma_b^2}}=0$ for every $b\in B$, since all the curves in $\Delta_{\Gamma_b^2}$ have the same Hodge structure. \item If $a\in\ensuremath{\mathbb{Q}}$ is the coefficient of $\lambda$ for the class $[\ensuremath{\overline{\mathcal{D}}}]\in\Pic(\ensuremath{\overline{\mathcal{R}}}_g)_\ensuremath{\mathbb{Q}}$, then $a\neq0$. Indeed, $2^{2g-1}a\in\ensuremath{\mathbb{Q}}$ is the coefficient of $\lambda$ for the class $[\overline{\pi(\ensuremath{\mathcal{D}})}]\in\Pic(\ensuremath{\overline{\mathcal{M}}}_g)_\ensuremath{\mathbb{Q}}$; then \cite[Remark~4.1]{mnp} proves the claim. \end{enumerate} These are the key ingredients in the original proof of \cite[Proposition~4.5]{mnp}. The same arguments there work verbatim in our case and yield the analogous result: $\restr{[\ensuremath{\overline{\mathcal{D}}}]}{T}\neq m\cdot S$ for every $m\in\ensuremath{\mathbb{Q}}$. In particular, the intersection $\ensuremath{\overline{\mathcal{D}}}\cap T$ is non-empty (and not entirely contained in $S$). \end{proof} \vspace{0.5mm} \begin{prop}\label{irredTog} For $g\geq5$, the divisor $\ensuremath{\mathcal{T}}_{g}^o$ is irreducible. \end{prop} \begin{proof} According to \autoref{boundary}, the intersection $\ensuremath{\mathcal{T}}_{g}^o\cap\Delta_1$ consists of two loci $\alpha$ and $\beta$. The general point of each of these loci is the union at a point $P$ of a Prym elliptic curve $(E,\eta)$ and a smooth curve $C_{g-1}$ (with trivial line bundle) of genus $g-1$, such that: \begin{itemize}[\textbullet] \item In the case of $\alpha$, the curve $C_{g-1}$ has a 1-dimensional theta-characteristic, i.e., $C_{g-1}\in\ensuremath{\mathcal{T}}_{g-1}$ in $\ensuremath{\overline{\mathcal{M}}}_{g-1}$. Moreover, there is exactly one limit semicanonical pencil on $C_{g-1}\cup_PE$ changing parity when twisted by the 2-torsion bundle; it induces the theta-characteristic $\eta$ on $E$. \noindent It follows that $\alpha$ is irreducible (by irreducibility of $\ensuremath{\mathcal{T}}_{g-1}$) and the intersection of $\ensuremath{\mathcal{T}}^o_g$ and $\Delta_1$ along $\alpha$ is reduced. In particular, there is a unique irreducible component of $\ensuremath{\mathcal{T}}^o_g$ (that we will denote by $\ensuremath{\mathcal{T}}^o_{g,\alpha}$) intersecting $\Delta_1$ along the whole locus $\alpha$. \item In the case of $\beta$, $P$ is in the support of a 0-dimensional theta-characteristic on $C_{g-1}$. Again, there is a unique limit semicanonical pencil changing parity, with induced theta-characteristic $\ensuremath{\mathcal{O}}_E$ on $E$. \end{itemize} Now we consider a reducible Prym curve $(C,\eta)\in\Delta_1$ constructed as follows: $C$ is the join of an elliptic curve $E$ and a general smooth hyperelliptic curve $C'$ of genus $g-1$ at a Weierstrass point $P\in C'$, whereas the line bundle $\eta$ is trivial on $C'$. Note that $(C,\eta)$ is the general point of the intersection $\widetilde{\ensuremath{\mathcal{H}}}_g\cap\Delta_1$, where $\widetilde{\ensuremath{\mathcal{H}}}_g\subset\ensuremath{\mathcal{T}}_{g}^o$ denotes the locus of hyperelliptic Prym curves whose 2-torsion bundle is a difference of two Weierstrass points.
Of course $(C,\eta)$ belongs to $\alpha$ and $\beta$; we claim that it actually belongs to any component of $\beta$. To prove this claim, consider any irreducible component of $\beta$, and fix a general element of it. This general element admits the description given above: let us denote by $X$ (written $C_{g-1}$ above) the irreducible component of genus $g-1$, and by $Q_X\in X$ the point connecting $X$ with the elliptic component. Recall that $Q_X$ lies in the support of a 0-dimensional theta-characteristic $L_X$ on $X$. We deform the pair $(X,L_X)$ to a pair $(C',L)$ formed by our hyperelliptic curve $C'$ and a $0$-dimensional theta-characteristic $L$ on it. According to the description of \autoref{hyp}, under this deformation the point $Q_X\in X$ specializes to a Weierstrass point $Q\in C'$. Therefore, our irreducible component of $\beta$ contains a Prym curve which is the union of $C'$ (with trivial 2-torsion) and a Prym elliptic curve $(E',\eta')$ at the Weierstrass point $Q\in C'$. Since the monodromy on hyperelliptic curves acts transitively on the set of Weierstrass points, we may replace $Q$ by our original Weierstrass point $P$ without changing the component of $\beta$. Using that $\ensuremath{\overline{\mathcal{R}}}_1$ is connected we can also replace $(E',\eta')$ by $(E,\eta)$. This proves the claim. Now, to prove the irreducibility of $\ensuremath{\mathcal{T}}_{g}^o$ we argue as follows: since $\ensuremath{\mathcal{T}}_{g}^o$ has pure codimension 1, we know by \autoref{divboundary} that each of its irreducible components intersects $\Delta_1$. As our point $(C,\eta)$ belongs to all the irreducible components of $\ensuremath{\mathcal{T}}_{g}^o\cap\Delta_1$, it suffices to check the local irreducibility of $\ensuremath{\mathcal{T}}_{g}^o$ in a neighborhood of $(C,\eta)$. To achieve this, in view of \autoref{irredneigh} we will check the local irreducibility of the scheme $X^o$. In other words, we need to study the \emph{limit semicanonical pencils on $C$ changing parity when twisted by $\eta$}. We do this in the rest of the proof, by checking that monodromy on $\widetilde{\ensuremath{\mathcal{H}}}_g\subset\ensuremath{\mathcal{T}}_{g}^o$ connects any \emph{limit semicanonical pencil changing parity on $(C,\eta)$} of type $\beta$ with one of type $\alpha$, and checking that limits of type $\alpha$ are also permuted by monodromy on $\ensuremath{\mathcal{T}}^o_{g,\alpha}$. Let $R_1,R_2,R_3$ be the points on $E$ differing from $P$ by 2-torsion, and let $R_4,\ldots,R_{2g+2}$ be the Weierstrass points on $C'$ that are different from $P$: reordering if necessary, we assume $\restr{\eta}{E}=\ensuremath{\mathcal{O}}_E(R_1-R_2)$. Note that $R_1,\ldots,R_{2g+2}$ are the limits on $C$ of Weierstrass points on nearby smooth hyperelliptic curves, since they are the ramification points of the limit $g^1_2$ on $C$. With this notation, arguing as in the proof of \autoref{boundary}, the possible aspects on $E$ of a \emph{limit semicanonical pencil changing parity on $(C,\eta)$} are: \begin{itemize}[\textbullet] \item Those of type $\alpha$ have aspect on $E$ differing from the even theta-characteristic $\eta$ by $(g-1)P$, hence $\ensuremath{\mathcal{O}}_E(R_3+(g-2)P)=\ensuremath{\mathcal{O}}_E(R_1+R_2+(g-3)P)$. \item Those of type $\beta$ have aspect differing from the odd theta-characteristic $\ensuremath{\mathcal{O}}_E$ by $(g-1)P$, hence $\ensuremath{\mathcal{O}}_E((g-1)P)=\ensuremath{\mathcal{O}}_E(R_1+R_2+R_3+(g-4)P)$. 
\end{itemize} Given a family of \textit{semicanonical pencils changing parity} on nearby smooth curves of $\widetilde{\ensuremath{\mathcal{H}}}_g$, we can distinguish the type of its limit on $C$ by knowing how many of the $g-1-2r$ fixed Weierstrass points in the moving theta-characteristic (recall \autoref{hyp}) specialize to $E$. If this number is 0 or 3 (resp. 1 or 2), then our limit is of type $\beta$ (resp. of type $\alpha$). Hence, after using monodromy on smooth hyperelliptic curves to interchange the (limit) Weierstrass point $R_3$ with an appropriate (limit) Weierstrass point on $C'$, we obtain that monodromy on $\widetilde{\ensuremath{\mathcal{H}}}_g\subset\ensuremath{\mathcal{T}}_{g}^o$ interchanges any \textit{limit semicanonical pencil changing parity} of type $\beta$ with one of type $\alpha$ (of the same dimension). The only possible exception is a limit of $\frac{g-1}{2}\cdot g^1_2$ when $g\equiv3\pmod{4}$, since in that case there are no fixed points to interchange with $R_3$. In addition, monodromy on $\ensuremath{\mathcal{T}}^o_{g,\alpha}$ (the unique irreducible component of $\ensuremath{\mathcal{T}}^o_g$ containing $\alpha$) acts transitively on the set of \textit{limit semicanonical pencils changing parity} of type $\alpha$. Indeed, if $X^o_\alpha$ denotes the preimage of $\ensuremath{\mathcal{T}}^o_{g,\alpha}$ in $X^o$, then the forgetful map $X^o_\alpha\to \ensuremath{\mathcal{T}}^o_{g,\alpha}$ is birational (by \cite[Theorem 2.16]{te2}) and has finite fibers; consequently $X^o_\alpha$ is irreducible, which proves the assertion. Therefore, to conclude the proof of the local irreducibility of $X^o$ near $(C,\eta)$, it only remains to show that, if $g\equiv3\pmod{4}$, the monodromy on $\ensuremath{\mathcal{T}}_{g}^o$ interchanges the limit of $\frac{g-1}{2}\cdot g^1_2$ with a limit of theta-characteristics of lower dimension. This can be achieved with exactly the same family of limit theta-characteristics as in \cite[Proposition~2.4]{te} for certain reducible curves; let us include a few words about the geometry of this family. First, one degenerates $C'$ to a reducible hyperelliptic curve obtained by identifying a point $P'\in E'$ ($E'$ elliptic curve) with a Weierstrass point $Q\in C''$ ($C''\in\ensuremath{\mathcal{M}}_{g-2}$ hyperelliptic), such that the Weierstrass point $P\in C'$ specializes to a point of $E'$. This naturally induces a degeneration $C_{P'}$ of our Prym curve $(C,\eta)$, in which the 2-torsion bundle is non-trivial only along the component $E$. We will denote by $R_4,R_5$ the points of $E'$ differing by $2$-torsion from $P$ and $P'$ (limits of the corresponding Weierstrass points of $C'$). Consider the family of Prym curves $C_X$ obtained by gluing $E$ (the only component with non-trivial 2-torsion) and $E'$ at $P$, and by identifying $Q\in C''$ with a variable point $X\in E'$. Note that for $X=P'$, we indeed recover our deformation $C_{P'}$ of $(C,\eta)$. Every such Prym curve $C_X$ can be equipped with a \textit{limit semicanonical pencil changing parity} of aspects $\ensuremath{\mathcal{O}}_E((g-1)P)$ on $E$, $\ensuremath{\mathcal{O}}_{E'}(Q+(g-2)X)$ on $E'$ and $\ensuremath{\mathcal{O}}_{C''}((g-1)Q)$ on $C''$. On $C_{P'}$, this corresponds to the limit of $\frac{g-1}{2}\cdot g^1_2$ on nearby smooth Prym curves of $\widetilde{\ensuremath{\mathcal{H}}}_g$; on the other hand, $C_{R_5}$ is also hyperelliptic and we have a limit of theta-characteristics of the form $\frac{g-5}{2}\cdot g^1_2+R_1+R_2+R_3+R_4$.
Therefore, monodromy on $\ensuremath{\mathcal{T}}^o_g$ moves the limit of $\frac{g-1}{2}\cdot g^1_2$ to a limit theta-characteristic of type $\beta$ of lower dimension, which concludes the proof. \end{proof} \vspace{1mm} \begin{prop}\label{irredTeg} For $g\geq5$, the divisor $\ensuremath{\mathcal{T}}_{g}^e$ is irreducible. \end{prop} \begin{proof} The proof is similar to that of $\ensuremath{\mathcal{T}}^o_g$, but with some simplifications (due to the fact that the intersection $\ensuremath{\mathcal{T}}_{g}^e\cap\Delta_1$ consists only of a locus $\alpha$). Let us give an outline of the argument. By virtue of \autoref{boundary}, the general point of $\alpha$ is the union at a point $P$ of a Prym elliptic curve $(E,\eta)$ and a curve $C_{g-1}\in\ensuremath{\mathcal{M}}_{g-1}$ (with trivial 2-torsion bundle) having a 1-dimensional theta-characteristic. Let us denote by $R_1,R_2,R_3$ the points of $E$ differing from $P$ by $2$-torsion, so that $\eta=\ensuremath{\mathcal{O}}_E(R_1-R_2)$. Then there are exactly two \textit{limit semicanonical pencils on $E\cup_PC_{g-1}$ remaining even when twisted by $(\eta,\ensuremath{\mathcal{O}}_{C_{g-1}})$}. For these limit semicanonical pencils, $\ensuremath{\mathcal{O}}_E(R_1-R_3)$ and $\ensuremath{\mathcal{O}}_E(R_2-R_3)$ are the induced theta-characteristics on $E$ (and hence $|R_2+P|+(g-3)P$ and $|R_1+P|+(g-3)P$ are the corresponding aspects on $E$). It follows that the intersection of $\ensuremath{\mathcal{T}}^e_g$ and $\Delta_1$ is irreducible (by irreducibility of $\ensuremath{\mathcal{T}}_{g-1}$) but not reduced. We also deduce that $\ensuremath{\mathcal{T}}^e_g$ will have at most two irreducible components, but we cannot directly derive the irreducibility of $\ensuremath{\mathcal{T}}^e_g$. To circumvent this problem, we consider (as in the proof of \autoref{irredTog}) a Prym curve $(C,\eta)\in\ensuremath{\mathcal{T}}^e_g\cap\Delta_1$ obtained by taking $C_{g-1}=C'$ ($C'$ general smooth hyperelliptic curve) and $P\in C'$ a Weierstrass point. Recall that $(C,\eta)$ is the general point of the intersection $\widetilde{\ensuremath{\mathcal{H}}}_g\cap\Delta_1$ ($\widetilde{\ensuremath{\mathcal{H}}}_g\subset\ensuremath{\mathcal{T}}_{g}^e$ being the locus of hyperelliptic Prym curves whose 2-torsion bundle is a difference of two Weierstrass points). By using monodromy on smooth hyperelliptic curves to interchange the (limit) Weierstrass points $R_1$ and $R_2$, we obtain that monodromy on $\widetilde{\ensuremath{\mathcal{H}}}_g\subset\ensuremath{\mathcal{T}}_{g}^e$ connects (locally around $(C,\eta)$) the two possible irreducible components of $\ensuremath{\mathcal{T}}^e_g$. This finishes the proof. \end{proof} \vspace{1mm} All in all, we have shown the irreducibility of $\ensuremath{\mathcal{T}}_g^o$ and $\ensuremath{\mathcal{T}}_g^e$ for every $g\neq4$. As explained in the introduction, the irreducibility of $\ensuremath{\mathcal{T}}_4^o$ and $\ensuremath{\mathcal{T}}_4^e$ can be deduced from a study of the Prym map $\ensuremath{\mathcal{P}}_4$ restricted to these divisors, which is contained in \cite{lnr}. \vspace{1mm}
\section{Introduction} Lie groups are ubiquitous in geometry, physics and many application domains such as robotics \cite{barrau_invariant_2017}, medical imaging \cite{lorenzi_efficient_2014} or computer vision \cite{hauberg_unscented_2013}, giving rise to a prolific research avenue. Structure preserving numerical methods have demonstrated significant qualitative and quantitative improvements over extrinsic methods \cite{iserles_lie-group_2005}. Moreover, machine learning \cite{barbaresco_lie_2020} and optimisation methods \cite{journee_gradient-optimization_2007} are being developed to deal with Lie group data. In this context, parallel transport is a natural tool to define statistical models and optimisation procedures, such as the geodesic or spline regression \cite{kim_smoothing_2020,nava-yazdani_geodesic_2020}, or to normalise data represented by tangent vectors \cite{yair_parallel_2019,brooks_riemannian_2019}. Different geometric structures are compatible with the group structure, such as its canonical Cartan connection, whose geodesics are one-parameter subgroups, or left-invariant Riemannian metrics. In this work we focus on the latter case, which is fundamental in geometric mechanics \cite{kolev_lie_2004} and has been studied in depth since the foundational papers of Arnold \cite{arnold_sur_1966} and Milnor \cite{milnor_curvatures_1976}. The fundamental idea of Euler-Poincaré reduction is that the geodesic equation can be expressed entirely in the Lie algebra thanks to the symmetry of left-invariance~\cite{marsden_mechanical_2009}, alleviating the burden of coordinate charts. However, to the best of our knowledge, there is no literature on a similar treatment of the parallel transport equation. We present here a derivation of the parallel transport equation expressed in the Lie algebra of a Lie group endowed with a left-invariant metric. We exemplify the use of this equation on the group of rigid body motions $SE(3)$, using common numerical integration schemes, and compare it to the pole ladder approximation algorithm. This results in a stable and efficient implementation of parallel transport. The implementation leverages the python package \url{geomstats} and is available online at \url{http://geomstats.ai}. In section~\ref{sec:notation}, we give the general notations and recall some basic facts from Lie group theory. Then we derive algebraic expressions of the Levi-Civita connection associated to the left-invariant metric in section~\ref{sec:metric}. The equation of parallel transport is deduced from this expression and its integration is exemplified in section~\ref{sec:equation}. \section{Notations} \label{sec:notation} Let $G$ be a Lie group of (finite) dimension $n$. Let $e$ be its identity element, $\mathfrak{g} = T_eG$ be its tangent space at $e$, and for any $g \in G$, let $L_g: h\in G \mapsto gh$ denote the left-translation map, and $dL_g$ its differential map. Let $\mathfrak{g}^L$ be the Lie algebra of left-invariant vector fields of $G$: $X \in \mathfrak{g}^L \iff \forall g \in G, X|_g = dL_g X_e$. $\mathfrak{g}$ and $\mathfrak{g}^L$ are in one-to-one correspondence, and we will write $\tilde x$ for the left-invariant field generated by $x \in \mathfrak{g}$: $\forall g \in G$, $\tilde x_g = dL_g x$. The bracket defined on $\mathfrak{g}$ by $[x, y] = [\Tilde{x}, \Tilde{y}]_e$ turns $\mathfrak{g}$ into a Lie algebra that is isomorphic to $\mathfrak{g}^L$.
One can also check that this bracket coincides with the adjoint map defined by $\textrm{ad}_x(y) = d_e(g\mapsto \textrm{Ad}_g y)$, where $\textrm{Ad}_g=d_e(h \mapsto ghg^{-1})$. For a matrix group, it is the commutator. Let $(e_1, \ldots, e_n)$ be an orthonormal basis of $\mathfrak{g}$, and the associated left-invariant vector fields $X_i^L = \Tilde{e_i} = g \mapsto dL_g e_i$. As $dL_g$ is an isomorphism, $(X_1^L|_g, \ldots, X_n^L|_g)$ form a basis of $T_gG$ for any $g \in G$, so one can write $X|_g = f^i(g)X_i^L|_g$ where for $i=1,\ldots,n$, $g\mapsto f^i(g)$ is a smooth real-valued function on $G$. Any vector field on $G$ can thus be expressed as a linear combination of the $X_i^L$ with function coefficients. Finally, let $\theta$ be the Maurer-Cartan form defined on $G$ by: \begin{equation} \forall g \in G, \forall v \in T_gG, \; \theta|_g(v) = (dL_g)^{-1} v \in \mathfrak{g} \end{equation} It is a $\mathfrak{g}$-valued 1-form and for a vector field $X$ on $G$ we write $\theta(X)|_g = \theta|_g(X|_g)$ to simplify the notations. \section{Left-invariant metric and connection} \label{sec:metric} A Riemannian metric $\langle\cdot,\cdot\rangle$ on $G$ is called left-invariant if the differential map of the left translation is an isometry between tangent spaces, that is \begin{equation*} \forall g,h \in G, \forall u,v \in T_gG, \;\; \langle u,v\rangle_g = \langle dL_h u, dL_h v\rangle_{hg}. \end{equation*} It is thus uniquely determined by an inner product on the tangent space at the identity $T_eG =\mathfrak{g}$ of $G$. Furthermore, the metric dual to the adjoint map is defined such that \begin{equation} \forall a,b,c \in \mathfrak{g}, \langle\textrm{ad}_a^*(b), c\rangle = \langle b, \textrm{ad}_a(c)\rangle = \langle [a, c], b\rangle. \end{equation} As the bracket can be computed explicitly in the Lie algebra, so can $\textrm{ad}^*$ thanks to the orthonormal basis of $\mathfrak{g}$. Now let $\nabla$ be the Levi-Civita connection associated to the metric. It is also left-invariant and can be characterised by a bi-linear form on $\mathfrak{g}$ that verifies \cite{pennec_exponential_2012,gallier_differential_2020}: \begin{equation} \forall x,y \in \mathfrak{g}, \;\; \alpha(x, y) := (\nabla_{\Tilde{x}}\Tilde{y})_e = \frac{1}{2}\big([x, y] - \textrm{ad}_x^*(y) - \textrm{ad}_y^*(x)\big) \label{eq:alpha} \end{equation} Indeed by the left-invariance, for two left-invariant vector fields $X=\tilde x, Y=\tilde y~\in~\mathfrak{g}^L$, the map $g\mapsto \langle X, Y\rangle_g$ is constant, so for any vector field $Z=\tilde z$ we have $Z(\langle X,Y\rangle)=0$. The Koszul formula thus becomes \begin{align} 2\langle\nabla_X Y, Z\rangle &= \langle [X, Y], Z\rangle - \langle [Y, Z], X\rangle - \langle [X, Z], Y\rangle\label{eq:kozsul}\\ 2\langle\nabla_X Y, Z\rangle_e &= \langle [x, y], z\rangle_e - \langle\textrm{ad}_y(z), x\rangle_e - \langle\textrm{ad}_x(z), y\rangle_e\nonumber\\ 2 \langle\alpha(x, y), z\rangle_e&= \langle [x, y], z\rangle_e- \langle\textrm{ad}_y^*(x), z\rangle_e - \langle\textrm{ad}_x^*(y), z\rangle_e.\nonumber \end{align} Note however that this formula is only valid for left-invariant vector fields. We will now generalise it to vector fields defined along a smooth curve on $G$, using the left-invariant basis ($X_1^L, \ldots, X_n^L$). Let $\gamma:[0,1]\rightarrow G$ be a smooth curve, and $Y$ a vector field defined along $\gamma$. Write $Y = g^i X_i^L$, $\dot{\gamma} = f^i X_i^L$.
Let's also define the \textit{left-angular velocities} $\omega(t)= \theta|_{\gamma(t)} \dot{\gamma}(t) = (f^i\circ \gamma)(t) e_i \in \mathfrak{g}$ and $\zeta(t) = \theta(Y)|_{\gamma(t)} = (g^j \circ \gamma)(t) e_j \in \mathfrak{g}$. Then the covariant derivative of $Y$ along $\gamma$ is \begin{align*} \nabla_{\dot \gamma(t)}Y &= (f^i \circ \gamma)(t) \nabla_{X_{i}^{L}}\big( g^j X_j^L \big) \\ &= (f^i \circ \gamma)(t) X_i^L(g^j) X_j^L + (f^i \circ \gamma)(t) (g^j\circ \gamma)(t) (\nabla_{X_{i}^{L}}{X_j^L})_{\gamma(t)}\\ dL_{\gamma(t)}^{-1} \nabla_{\dot \gamma(t)}Y &= (f^i \circ \gamma)(t) X_i^L(g^j) e_j + (f^i \circ \gamma)(t) (g^j\circ \gamma)(t) dL_{\gamma(t)}^{-1}(\nabla_{X_{i}^{L}}{X_j^L})_{\gamma(t)} \\ &= (f^i \circ \gamma)(t) X_i^L(g^j) e_j + (f^i \circ \gamma)(t) (g^j \circ \gamma)(t)\nabla_{e_i}e_j \end{align*} where the Leibniz formula and the left-invariance of the connection are used in $(\nabla_{X_{i}^{L}}{X_j^L}) = dL_{\gamma(t)} \nabla_{e_i}e_j$. Therefore, for $k=1,\ldots,n$, \begin{align} \langle dL_{\gamma(t)}^{-1} \nabla_{\dot \gamma(t)}Y, e_k\rangle &= (f^i \circ \gamma)(t) X_i^L(g^j) \langle e_j, e_k\rangle \nonumber \\ &\quad + (f^i \circ \gamma)(t) (g^j\circ \gamma)(t) \langle\nabla_{e_i}e_j,e_k\rangle \end{align} but on one hand \begin{align} \zeta(t) &= \theta(Y)|_{\gamma(t)} = \theta|_{\gamma(t)}\big((g^j\circ \gamma)(t) X_j^L|_{\gamma(t)}\big) \nonumber\\ &= (g^j\circ \gamma)(t) e_j \\ \dot{\zeta}(t) & = (g^j\circ \gamma)'(t) e_j = d_{\gamma(t)}g^j \dot{\gamma}(t) e_j \nonumber\\ &= d_{\gamma(t)} g^j \Big( (f^i \circ \gamma)(t) X_i^L|_{\gamma(t)}\Big) e_j \nonumber\\ &= (f^i \circ \gamma)(t) d_{\gamma(t)}g^j X_i^L|_{\gamma(t)} e_j \nonumber\\ &= (f^i \circ \gamma)(t) X_i^L(g^j) e_j \end{align} and on the other hand, using \eqref{eq:kozsul}: \begin{align} (f^i \circ \gamma)(g^j\circ \gamma) \langle\nabla_{e_i}e_j,e_k\rangle &= \frac{1}{2} (f^i \circ \gamma)(g^j\circ \gamma)(\langle [e_i, e_j], e_k\rangle \nonumber\\ &\qquad - \langle [e_j, e_k], e_i\rangle - \langle [e_i, e_k], e_j\rangle) \nonumber\\ &= \frac{1}{2} ( \langle [(f^i \circ \gamma) e_i, (g^j \circ \gamma) e_j], e_k\rangle \nonumber\\ &\qquad - \langle [(g^j \circ \gamma) e_j, e_k], (f^i \circ \gamma) e_i\rangle \nonumber\\ &\qquad - \langle [(f^i \circ \gamma) e_i, e_k], (g^j \circ \gamma) e_j\rangle) \nonumber\\ &= \frac{1}{2}([\omega, \zeta] - \textrm{ad}_{\omega}^*\zeta - \textrm{ad}_{\zeta}^*\omega) = \alpha(\omega, \zeta) \end{align} Thus, we obtain an algebraic expression for the covariant derivative of any vector field $Y$ along a smooth curve $\gamma$. It will be the main ingredient of this paper. \begin{equation} \label{eq:connection} dL_{\gamma(t)}^{-1}\nabla_{\dot{\gamma}(t)}Y(t) = \dot{\zeta}(t) + \alpha(\omega(t), \zeta(t)) \end{equation} A similar expression can be found in \cite{arnold_sur_1966,gay-balmaz_invariant_2012}. As all the variables of the right-hand side are defined in $\mathfrak{g}$, they can be computed with matrix operations and an orthonormal basis. \section{Parallel Transport} \label{sec:equation} We now focus on two particular cases of $\eqref{eq:connection}$ to derive the equations of geodesics and of parallel transport along a curve. \subsection{Geodesic equation} The first particular case is for $Y(t)=\dot \gamma(t)$. It is then straightforward to deduce from \eqref{eq:connection} the Euler-Poincaré equation for a geodesic curve \cite{kolev_lie_2004,cendra_lagrangian_1998}.
Indeed in this case, recall that $\omega=\theta|_{\gamma(t)} \dot{\gamma}(t)$ is the left-angular velocity, $\zeta = \omega$ and $\alpha(\omega, \omega)~=~ -\textrm{ad}^*_\omega(\omega)$. Hence $\gamma$ is a geodesic if and only if $dL_{\gamma(t)}^{-1} \nabla_{\dot \gamma (t)} \dot \gamma(t) = 0$ i.e. setting the left-hand side of \eqref{eq:connection} to $0$. We obtain \begin{equation} \label{eq:ep} \begin{cases} \dot{\gamma}(t) &= dL_{\gamma(t)} \omega(t) \\ \dot{\omega}(t) &= \textrm{ad}_{\omega(t)}^* \omega(t). \end{cases} \end{equation} \begin{remark} One can show that the metric is bi-invariant if and only if the adjoint map is skew-symmetric (see \cite{pennec_exponential_2012} or \cite[Prop. 20.7]{gallier_differential_2020}). In this case $\textrm{ad}_\omega^*(\omega) = 0$ and \eqref{eq:ep} coincides with the equation of one-parameter subgroups on $G$. \end{remark} \subsection{Reduced Parallel Transport Equation} The second case is for a vector $Y$ that is parallel along the curve $\gamma$, that is, $\forall t, \nabla_{\dot \gamma(t)} Y(t) = 0$. Similarly to the geodesic equation, we deduce from \eqref{eq:connection} the parallel transport equation expressed in the Lie algebra. \begin{theorem} \label{thm} Let $\gamma$ be a smooth curve on $G$. The vector $Y$ is parallel along $\gamma$ if and only if it is solution to \begin{equation} \label{eq:tp} \begin{cases} \omega(t) &= dL_{\gamma(t)}^{-1} \dot{\gamma}(t) \\ Y(t) &= dL_{\gamma(t)} \zeta(t) \\ \dot{\zeta}(t) &= -\alpha(\omega(t), \zeta(t)) \end{cases} \end{equation} \end{theorem} Note that in order to parallel transport along a geodesic curve, \eqref{eq:ep} and \eqref{eq:tp} are solved jointly. \subsection{Application} We now exemplify Theorem~\ref{thm} on the group of isometries of $\mathbb{R}^3$, $SE(3)$, endowed with a left-invariant metric $g$. $SE(3)$ is the semi-direct product of the group of three-dimensional rotations $SO(3)$ with $\mathbb{R}^3$, i.e. the group multiplication law for $R,R' \in SO(3), t,t' \in \mathbb{R}^3$ is given by \begin{equation*} (R,t)\cdot (R',t') = (RR', t + Rt'). \end{equation*} It can be seen as a subgroup of $GL(4)$ and represented by homogeneous coordinates: \begin{equation*} (R,t) = \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix}, \end{equation*} and all group operations then correspond to the matrix operations. Let the metric matrix at the identity be diagonal: $G=\mathrm{diag}(1,1,1,\beta,1,1)$ for some $\beta>0$, the anisotropy parameter. An orthonormal basis of the Lie algebra $\mathfrak{se}(3)$ is \begin{align*} e_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} &\qquad& e_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0\\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} &\qquad& e_3 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}\\ e_4 = \frac{1}{\sqrt{\beta}}\begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} &\qquad& e_5 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} &\qquad& e_6 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} . \end{align*} Define the corresponding structure constants $C_{ij}^k = \langle [e_i,e_j],e_k\rangle$, where the Lie bracket $[\cdot,\cdot]$ is the usual matrix commutator.
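These structure constants can also be recovered numerically from the basis above. The following short numpy sketch (an illustration of ours, independent of the geomstats implementation mentioned below) expands each bracket $[e_i,e_j]$ in the orthonormal basis and reads off $C_{ij}^k$; it reproduces the values displayed next.

\begin{verbatim}
import numpy as np

def se3_basis(beta):
    # Orthonormal basis of se(3) for G = diag(1,1,1,beta,1,1), as above.
    E = [np.zeros((4, 4)) for _ in range(6)]
    s = 1.0 / np.sqrt(2.0)
    E[0][1, 2], E[0][2, 1] = -s, s       # e_1
    E[1][0, 2], E[1][2, 0] = s, -s       # e_2
    E[2][0, 1], E[2][1, 0] = -s, s       # e_3
    E[3][0, 3] = 1.0 / np.sqrt(beta)     # e_4
    E[4][1, 3] = 1.0                     # e_5
    E[5][2, 3] = 1.0                     # e_6
    return E

def structure_constants(E):
    # C[i, j, k] = <[e_i, e_j], e_k>: expand each commutator in the basis.
    B = np.stack([e.ravel() for e in E], axis=1)   # 16 x 6, full column rank
    C = np.zeros((6, 6, 6))
    for i in range(6):
        for j in range(6):
            brk = E[i] @ E[j] - E[j] @ E[i]
            C[i, j], *_ = np.linalg.lstsq(B, brk.ravel(), rcond=None)
    return C

beta = 2.0
C = structure_constants(se3_basis(beta))
print(C[0, 1, 2], C[0, 4, 5], C[1, 3, 5])
# 1/sqrt(2), 1/sqrt(2), -1/sqrt(2*beta): cf. the values displayed next.
\end{verbatim}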
It is straightforward to compute \begin{align} \label{eq:structure_constants} C_{ij}^k &= \frac{1}{\sqrt{2}} \;\; \textrm{if} \;\;ijk \;\;\textrm{is a direct cycle of}\;\; \{1,2,3\};\\ C_{15}^6 &= - C_{16}^5 = - \sqrt{\beta} C_{24}^6 = \frac{1}{\sqrt{\beta}} C_{26}^4 = \sqrt{\beta} C_{34}^5 = -\frac{1}{\sqrt{\beta}} C_{35}^4 = \frac{1}{\sqrt{2}} . \end{align} and all others that cannot be deduced by skew-symmetry of the bracket are equal to $0$. The connection can then easily be computed using \begin{equation*} \alpha(e_i, e_j) = \nabla_{e_i}e_j = \frac{1}{2} \sum_k (C_{ij}^k - C_{jk}^i + C_{ki}^j)e_k. \end{equation*} For $\beta=1$, $(SE(3), G)$ is a symmetric space and the metric corresponds to the direct product metric of $SO(3) \times \mathbb{R}^3$. However, for $\beta \neq 1$, the geodesics cannot be computed in closed form and we resort to a numerical scheme to integrate \eqref{eq:ep}. According to \cite{guigui_numerical_2020}, the pole ladder can be used with only one step of a fourth-order scheme to compute the exponential and logarithm maps at each rung of the ladder. We use a Runge-Kutta (RK) scheme of order $4$. The Riemannian logarithm is computed with a gradient descent on the initial velocity, where the gradient of the exponential is computed by automatic differentiation. All of these are available in the \url{InvariantMetric} class of the package \href{http://geomstats.ai}{geomstats} \cite{miolane_geomstats_2020}. We now compare the integration of \eqref{eq:tp} to the pole ladder \cite{guigui_numerical_2020} for $\beta=1.5,2$ to parallel transport a tangent vector along a geodesic. The results are displayed in Figure~\ref{fig:my_label} in a log-log plot. \begin{figure}[ht] \centering \includegraphics[width=12cm]{se3_pl_integration_beta_1000.pdf} \caption{Comparison of the integration of the reduced equation with the pole ladder} \label{fig:my_label} \end{figure} As expected, we reach convergence speeds of order two for the pole ladder and the RK2 scheme, while the RK4 scheme is of order four. Both integration schemes are very stable, while the pole ladder becomes less stable for $n \gtrsim 200$. \section{Acknowledgments} \label{sec:acknowledgments} This work was partially funded by the ERC grant Nr. 786854 G-Statistics from the European Research Council under the European Union’s Horizon 2020 research and innovation program. It was also supported by the French government through the 3IA Côte d’Azur Investments ANR-19-P3IA-0002 managed by the National Research Agency. \bibliographystyle{splncs04}
\section{Introduction} Natural language is one of the easiest and most efficient means for humans to communicate, and has recently also been the focus for extensive research in human-robot interaction (HRI). A social robot with language capabilities must not only understand single utterances but also be able to conduct a dialogue with a human. Human dialogues follow conversational norms in order to be successful, and phenomena such as sudden changes of topic, need of clarification, ambiguity, turn taking, misunderstandings, and non-understandings influence the character and quality of a dialogue. Current approaches to computerised dialogue systems do not explicitly handle conversational norms. The overall goal of our research is to conduct work in this area by formalising dialogue and conversational norms, and by developing dialogue system components that take breaches of norms into account. Our work is divided into the following three parts: \begin{enumerate} \item Formalising dialogue structure and mental states of dialogue participants. \item Formalising conversational norms occurring in dialogue. \item Developing computational methods to detect and handle violations of conversational norms in dialogue management. \end{enumerate} We believe that a formalisation and understanding of how and why dialogue structure, conversational norms and changes of mental states co-evolve in the course of utterance exchanges is essential for the development of computational methods for dialogue management in HRI. \section{Background} Dialogues are conversations, intentionally focused to question thoughts and actions, address problems, increase common knowledge and hence bring greater understanding~\cite{romney2005art}. The dialogue structure or dialogue flow is currently not well understood and existing paradigms to model dialogue structure fail to generalise or provide insight. The two main paradigms in dialogue management are knowledge-based approaches and data-driven approaches~\cite{Lee2010}. The data-driven paradigm learns how a dialogue should be conducted from dialogue corpora, whereas the knowledge-driven paradigm relies on handcrafted dialogue flows and thus on expert knowledge. Data-driven approaches (for example, \cite{Kim16, Thomson2010}) fall short of providing insight into the problem of dialogue management and can lead to serious ethical consequences\footnote{In March 2016, Microsoft's chatbot \emph{Tay} parroted racist language after having learned from anonymised public data. It was taken offline by Microsoft around 16 hours after its launch.}. Knowledge-based approaches (for example, \cite{hori:2009:wfstbsdm, Rama15}) are insufficient in real-world settings, as these approaches do not scale to real applications. Recent hybrid approaches to dialogue management combine the benefits of both approaches while trying to avoid their disadvantages \cite{Lison2015}. Our approach is a hybrid one, combining finite-state and data-driven methods. Gricean maxims were introduced in \cite{grice1975logic} as a way to describe how dialogue participants ideally form their utterances (and thus also what dialogue participants may assume utterances to be). Grice views a conversation as a collaborative action where the participants agree upon a common intention or a predefined direction. The Gricean maxims are stated as follows: \begin{enumerate} \item Quantity: Make your contribution as informative as possible. \item Quality: Do not say what you believe to be false or which lacks evidence.
\item Relation: Be relevant. \item Manner: Avoid obscurity and ambiguity. Be brief and orderly. \end{enumerate} The author in \cite{monz2000modeling} analysed and proposed a model for ambiguous expressions in multi-agent systems, while in \cite{de2007explaining} the authors provided a formal model for Grice’s Quantity implicature for a given utterance. \section{Approach} In line with viewing dialogues as collaborative actions, we formalise dialogues (e.g. turn taking and general dialogue structure), the mental states of dialogue participants, and conversational norms with co-operating distributed grammar systems (CDGSs). CDGSs are abstract devices for describing multi-agent systems, such as a human and a robot, by means of formal grammars based on the blackboard architecture (see, for example, \cite{Csuhaj-Varju:1994:GSG:561869}). Using CDGS to model dialogue structure allows us to reflect conversational norms as a public string that all agents (e.g. dialogue participants) work on together, transforming and extending the string during the dialogue. How the string is transformed (i.e. how a robot recovers from violations of conversational norms) is defined by a so-called derivation mode that the agents are in. Within our formal framework we investigate how and why conversational norms are reflected in utterances and the entire dialogue structure. That is, by formalising conversational norms we are able to develop computational methods to identify breaches. For instance, the maxim of brevity (i.e. be brief) can be expressed using the number of words in a dialogue turn. To express the maxim of relevance, topic modelling can be used, based on Latent Dirichlet allocation (LDA) or automated semantic analysis (e.g. analysing thematic roles). The topic identification is formalised within our CDGS framework in order to investigate how and why topics occur during a dialogue (i.e. dialogue structure). We further develop computational methods to handle breaches of conversational norms. For example, if a human in a dialogue is not brief, the robot might be allowed to interrupt the human. After a topic change is identified, the robot can either follow up on the new topic or resume the previous topic, depending on the extent of the violation of the relevance maxim. If the maxim of informativeness is violated, the robot switches to a mode in which it either asks for more information (if the information given by the human was too sparse) or interrupts the human (if the information was too detailed). \bibliographystyle{IEEEtran}
\section{Introduction and main results} Throughout this paper, we assume that $T>0$, $0<\alpha<1$, and $\Omega\subset\mathbb R^d$ is a bounded domain with sufficiently smooth boundary $\partial\Omega$, and let $\nu=(\nu_1,\cdots,\nu_d)$ denote the unit outwards normal vector to the boundary $\partial\Omega$. Let the operator $A$ be defined by $$ -A\varphi:=\sum_{j,k=1}^d \partial_j(a_{jk}(x) \partial_k \varphi) + \sum_{j=1}^d b_j(x)\ppp_j\va + c(x) \varphi, \quad \varphi\in \mathcal{D}(A) := H_0^1(\Omega)\cap H^2(\Omega), $$ where the principle part is uniformly elliptic, namely $a_{ij}=a_{ji}, b_j \in C^1(\overline\Omega)$, $1\le i,j\le d$ and $c\in L^{\infty}(\OOO)$, satisfy that \begin{equation} \label{condi-elliptic} a_0 \sum_{j=1}^d \xi_j^2 \le \sum_{j,k=1}^d a_{jk}(x) \xi_j\xi_k, \quad x\in\overline\Omega,\ \xi\in\mathbb R^d, \end{equation} where $a_0>0$ is a constant independent of $x,\xi$. Here we set the normal derivative with respect to the operator $A$ as $$ \partial_{\nu_A} u = \sum_{i,j=1}^d a_{ij}\nu_i \partial_j u. $$ for any $ u \in H^{3/2}(\partial\Omega)$. We define the time-fractional derivative $\pppa$. First let $H^{\alpha}(0,T)$ be the fractional Sobolev space with the norm $$ \vert v\vert_{H^{\alpha}(0,T)} = \left( \vert v\vert_{L^2(0,T)}^2 + \int^T_0 \int^T_0 \frac{\vert v(t) - v(s)\vert^2} {\vert t-s\vert^{1+2\alpha}} dsdt \right)^{\hhalf} $$ (e.g., Adams \cite{Ad}). We set $$ \Halp = \begin{cases} \{ v \in H^{\alpha}(0,T); \, v(0) = 0\}, &\hhalf < \alpha < 1, \\ \left\{ v \in H^{\hhalf}(0,T);\, \int^T_0 \frac{\vert v(t)\vert^2}{t} dt < \infty\right\}, &\alpha = \hhalf, \\ H^{\alpha}(0,T), &0 < \alpha < \hhalf, \end{cases} $$ and $$ \vert v\vert_{\Halp} = \begin{cases} \vert v\vert_{H^{\alpha}(0,T)}, & 0 < \alpha < 1, \, \alpha \ne \hhalf, \\ \left( \vert v\vert^2_{H^{\hhalf}(0,T)} + \int^T_0 \frac{\vert v(t)\vert^2}{t} dt\right)^{\hhalf}, & \alpha = \hhalf. \end{cases} $$ We set $$ J^{\alpha}v(t) = \frac{1}{\Gamma(\alpha)}\int^t_0 (t-s)^{\alpha-1} v(s) ds. $$ Then it is known $$ J^{\alpha}L^2(0,T) = \Halp, \quad 0 < \alpha \le 1 $$ and there exists a constant $C>0$ such that $$ C^{-1}\vert J^{\alpha}v\vert_{\Halp} \le \vert v\vert_{L^2(0,T)} \le C\vert J^{\alpha}v\vert_{\Halp} $$ for $v \in L^2(0,T)$ (Gorenflo, Luchko and Yamamoto \cite{GLY}). We define the time-fractional derivative $\pppa$ in $\Halp$ by $$ \pppa v = (J^{\alpha})^{-1}v, \quad v \in \Halp. $$ \begin{rmk} \label{rem-caputo} We define the Caputo derivative $$ \ddda v(t) = \frac{1}{\Gamma(1-\alpha)}\int^t_0 (t-s)^{-\alpha} \frac{dv}{ds}(s) ds $$ and we consider $\ddda$ for $v \in {\CCC} := \{ v \in C^1[0,T];\, v(0) = 0\}$. Regarding $\ddda$ as an operator with the domain $\CCC$, we can see that the minimum closed extension of $\ddda$ coincides with $\pppa$ (Kubica, Ryszewska and Yamamoto \cite{KRY}). \end{rmk} We consider \begin{equation} \label{eq-gov} \left\{ \begin{alignedat}{2} & \partial_t^{\alpha} (u - u_0) + A u = 0 &\quad& \mbox{in $\Omega\times(0,T)$,}\\ &u(x,\cdot)-u_0(x)\in \Halp, &\quad&\mbox{for almost all $x\in\Omega$,}\\ &u(x,t)=0, &\quad& \mbox{$(x,t)\in\partial\Omega\times(0,T)$.} \end{alignedat} \right. \end{equation} We assume that $u_0 \in L^2(\Omega)$. 
Then it is known (e.g., Kubica, Ryszewska and Yamamoto \cite{KRY}, Kubica and Yamamoto \cite{KY}) that there exists a unique solution \begin{equation} \label{rslt-regu} u \in L^2(0,T; H^1_0(\OOO) \cap H^2(\Omega)) \end{equation} such that $u-u_0 \in H_{\alpha}(0,T;L^2(\OOO))$ and $u(\cdot,t) \in H^1_0(\OOO)$ for almost all $t \in (0,T)$. We refer also to Gorenflo, Luchko and Yamamoto \cite{GLY}, Zacher \cite{Za}. By \eqref{rslt-regu}, we see that $\ppp_{\nu_A}u \in L^2(\ppp\OOO \times (0,T))$. We are ready to state the first main result. \begin{thm} \label{thm-ucp} Let $\Gamma \subset \ppp\OOO$ be an arbitrarily chosen subboundary and let $T>0$. For $u_0 \in L^2(\Omega)$, let $u$ be the solution to \eqref{eq-gov}. If $\ppp_{\nu_A}u = 0$ on $\Gamma \times (0,T)$, then $u=0$ in $\OOO \times (0,T)$. \end{thm} This uniqueness result is known to be equivalent to the approximate controllability for the adjoint system to \eqref{eq-gov} (e.g., Fujishiro and Yamamoto \cite{FY}). For evolution equations with natural number order time-derivative, see e.g., Schmidt and Weck \cite{SW}, Triggiani \cite{Tr}. Here we do not discuss the approximate controllability. In the case where $-A$ is symmetric, that is, $b_j = 0$ for $j=1,..., d$, there are several results. For example, we refer to Sakamoto and Yamamoto \cite{SY}. See also Jiang, Li, Liu and Yamamoto \cite{JLLY} for not necessarily symmetric $A$. The argument in \cite{JLLY} is different from ours: \cite{JLLY} relies on the transformation of the problem to the determination of $u_0$ for the corresponding parabolic equation through the Laplace transform. In both \cite{JLLY} and \cite{SY}, the condition $u=0$ in $\omega \times (0,T)$, with a subdomain $\omega \subset \OOO$, is assumed in place of $\ppp_{\nu_A}u = 0$ on $\Gamma \times (0,T)$. We note that for $\ppp_{\nu_A}u$, we here assume more regularity for the initial value, that is, $u_0 \in H^1_0(\OOO)$. Our proof is based on the spectral property of the elliptic operator $A$, while the proof in \cite{JLLY} follows from the uniqueness result for a parabolic equation: $$ \left\{ \begin{alignedat}{2} & \ppp_tu + Au = 0, &\quad& \mbox{in $\OOO\times (0,T)$}, \\ & u\vert_{\ppp\OOO\times (0,T)} = 0, &\quad& \ppp_{\nu_A}u\vert_{\Gamma\times (0,T)} = 0. \end{alignedat}\right. $$ Indeed our proof of Theorem \ref{thm-ucp} works also for a higher-order elliptic operator $A$, for example, \begin{equation} \label{eq-gov^m} \pppa u = -\left(-\Delta + \sum_{j=1}^d b_j(x)\ppp_j + c(x)\right)^mu \end{equation} with $m \in \N$ under suitable conditions, but the method in \cite{JLLY} requires us to prove that if $u$ satisfies \eqref{eq-gov^m} and $$ \ppp_{\nu}^ku = 0 \quad \mbox{on $\ppp\OOO\times (0,T)$, $k=0,1,..., 2m-1$,} $$ then $u=0$ in $\OOO\times (0,T)$, which needs more arguments than our proof for the case $m>1$. In the case of $m=1$, this uniqueness follows from the well-known unique continuation for a parabolic equation (e.g., Isakov \cite{Is}, Yamamoto \cite{Ya}). As one application of Theorem \ref{thm-ucp}, we show the uniqueness for an inverse source problem. We consider \begin{equation} \label{eq-sp} \left\{ \begin{alignedat}{2} & \partial_t^{\alpha} y + A y = \mu(t)f(x), &\quad& x\in \OOO,\, 0<t<T, \\ & y(x,\cdot)\in \Halp, &\quad& \mbox{for almost all $x\in\Omega$,}\\ & y(x,t)=0, &\quad& (x,t)\in\partial\Omega\times(0,T). \end{alignedat}\right. \end{equation} We assume that $f \in L^2(\OOO)$ and $\mu \in L^2(0,T)$.
Then we know (e.g., \cite{KRY}) that there exists a unique solution $y \in L^2(0,T;H^2(\OOO)\cap H^1_0(\OOO)) \cap H_{\alpha}(0,T;L^2(\OOO))$ to \eqref{eq-sp}. Now for given $\mu$, we discuss an inverse source problem of determining $f$ in $\OOO$ by $\ppp_{\nu_A}y\vert_{\Gamma \times (0,T)}$. \begin{thm} \label{thm-isp} Let $\Gamma \subset \ppp\OOO$ be an arbitrarily chosen subboundary and let $f \in L^2(\Omega)$, $\mu \in C^1[0,T]$, $\not\equiv 0$ in $[0,T]$. If $\ppp_{\nu_A}y = 0$ on $\Gamma \times (0,T)$, then $f=0$ in $\OOO$. \end{thm} In this article, we discuss the determination of initial value and source term within the framework of \cite{KRY} which formulates initial boundary value problems \eqref{eq-gov} and \eqref{eq-sp} and establishes the well-posedness in fractional Sobolev spaces. In particular, for the first time we establish the uniqueness in the inverse source problem for \eqref{eq-sp} for general $f\in L^2(\Omega)$ and $\mu \in L^2(0,T)$, where the time-regularity of the solution $y$ is delicate. \\ This article is outlined as follows. In section \ref{sec-pre}, we first provide several preliminary results from spectral theory and prove some auxiliary lemmas for the formula of the Laplace transform for the fractional derivative $\partial_t^\alpha$ in $H_\alpha(0,T)$, which play crucial roles in the proof of the main theorem. In section \ref{sec-ucp}, by the Laplace transform argument, the unique continuation principle in Theorem \ref{thm-ucp} is proved. In section \ref{sec-isp}, based on the unique continuation principle and the Duhamel principle (see Lemma \ref{lem-Duhamel}), we finish the proof of Theorem \ref{thm-isp}. Finally, a concluding remark is given in section \ref{sec-rem}. \section{Preliminaries} \label{sec-pre} \subsection{Well-posedness for the forward problem} In this part, we are concerned with the well-posedness for the initial-boundary value problem \eqref{eq-gov}. More precisely, we will show that the solution to the problem \eqref{eq-gov} admits exponential growth, which is essential for carrying out the Laplace transform argument. \begin{lem}[Coercivity] \label{lem-coercivity} For any measurable function $\varphi(\cdot)-c_0\in H_\alpha(0,T)$ with $\alpha\in(0,1)$, one has the coercivity inequality $$ \frac2{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \varphi(s)\partial_t^\alpha (\varphi(s) - c_0) d s \ge \varphi^2(t) - c_0^2,\quad \mbox{almost all $t\in(0,T)$}. $$ \end{lem} \begin{proof} We divide the proof into two steps. First, we assume $\varphi-c_0\in {_0}C^1[0,T]$; then from Lemma 1 in Alikhanov \cite{Al10}, it follows that \begin{equation} \label{esti-Caputo} \varphi(t) d_t^\alpha \varphi(t) \ge \frac12 d_t^\alpha (\varphi^2)(t),\quad t>0. \end{equation} Now noting that $\varphi(0)=c_0$, along with the formula $$ J^\alpha d_t^\alpha \varphi = \varphi(t) - c_0,\quad t\in(0,T), $$ we have $$ \frac2{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \varphi(s) d_s^\alpha \varphi(s) ds \ge \varphi^2(t) - c_0^2,\quad t\in(0,T). $$ Since $d_t^\alpha c_0 = 0$, we further arrive at the inequality $$ \frac2{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \varphi(s) d_t^\alpha (\varphi(s) - c_0) d s \ge \varphi^2(t) - c_0^2,\quad \mbox{almost all $t\in(0,T)$}.
$$ Moreover, from Remark \ref{rem-caputo}, it follows that the Caputo derivative $d_t^\alpha$ coincides with the fractional derivative $\partial_t^\alpha$ on the domain ${_0}C^1[0,T]$, and then we see that $$ \frac2{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \varphi(s) \partial_t^\alpha (\varphi(s) - c_0) d s \ge \varphi^2(t) - c_0^2,\quad \mbox{almost all $t\in(0,T)$}. $$ Now for the case $\varphi-c_0\in H_\alpha(0,T)$, letting $\psi:=\varphi - c_0\in H_\alpha(0,T)$, it is equivalent to prove the inequality $$ \frac2{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} (\psi(s)+c_0)\partial_t^\alpha \psi(s) d s \ge (\psi(t)+c_0)^2 - c_0^2,\quad \mbox{almost all $t\in(0,T)$}. $$ From the argument used in \cite{GLY}, we see that $\overline{{_0}C^1[0,T]}^{H_\alpha(0,T)} = H_\alpha(0,T)$, hence we can choose $\psi_n\in {_0}C^1[0,T]$ such that $\psi_n$ tends to $\psi$ in the norm of $H^\alpha(0,T)$ as $n\to\infty$. Then from the conclusion in the first step, we see that \begin{equation} \label{ineq-coe-appro} \frac2{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} (\psi_n(s)+c_0)\partial_t^\alpha \psi_n(s) d s \ge (\psi_n(t)+c_0)^2 - c_0^2,\quad \mbox{almost all $t\in(0,T)$}. \end{equation} Since $\psi_n\to\psi$ in $H^\alpha(0,T)$ as $n\to\infty$, we have $\psi_n(t)\to\psi(t)$ for almost all $t\in(0,T)$, and we see that the right-hand side of the above inequality tends to $(\psi(t)+c_0)^2 - c_0^2$ for almost all $t\in (0,T)$. Now we evaluate \begin{align*} &\int_0^T \left|\int_0^t (t-s)^{\alpha-1}(\psi_n(s)+c_0)\partial_t^\alpha \psi_n(s) ds - \int_0^t (t-s)^{\alpha-1}(\psi(s)+c_0)\partial_t^\alpha \psi(s) ds\right|dt \\ \le&\int_0^T \left|\int_0^t (t-s)^{\alpha-1} \left((\psi_n(s) - \psi(s))\partial_t^\alpha \psi_n(s)\right) ds\right|dt \\ &+\int_0^T \left|\int_0^t (t-s)^{\alpha-1}(\psi(s)+c_0)\left( \partial_t^\alpha \psi_n(s) ds - \partial_t^\alpha \psi(s)\right) ds\right|dt =: I_{n1} + I_{n2}. \end{align*} By the Fubini lemma, $I_{n1}$ can be further estimated by \begin{align*} I_{n1} \le&\int_0^T \left(\int_s^T (t-s)^{\alpha-1} dt \right) \left|(\psi_n(s) - \psi(s))\partial_t^\alpha \psi_n(s)\right| ds \\ \le &\frac{T^\alpha}{\alpha}\int_0^T \left|(\psi_n(s) - \psi(s))\partial_t^\alpha \psi_n(s)\right| ds. \end{align*} From H\"{o}lder's inequality and a direct calculation, the last integral on the right-hand side of the above estimate can be bounded by \begin{align*} \int_0^T \left|(\psi_n(s) - \psi(s))\partial_t^\alpha \psi_n(s)\right| ds \le &\|\psi_n-\psi\|_{L^2(0,T)} \left(\int_0^T \Big|\partial_s^\alpha \psi_n(s)\Big|^2 ds\right)^{\frac12} \\ \le & \|\psi_n-\psi\|_{L^2(0,T)} (\|\psi\|_{H^\alpha(0,T)} + 1) \to 0, \quad\mbox{ as $n\to\infty$.} \end{align*} Similarly, we see that \begin{align*} I_{n2} \le& \frac{T^\alpha}{\alpha} \int_0^T \left|(\psi(s) + c_0)\partial_t^\alpha (\psi_n(s) - \psi(s))\right| ds \\ \le& \frac{T^\alpha}{\alpha} \|\psi + c_0\|_{L^2(0,T)} \left( \int_0^T \left|\partial_t^\alpha (\psi_n(s) - \psi(s))\right|^2 ds \right)^{\frac12} \\ \le& \frac{T^\alpha}{\alpha} \|\psi + c_0\|_{L^2(0,T)} \|\psi_n - \psi\|_{H^\alpha(0,T)} \to 0,\quad \mbox{as $n\to\infty$}. \end{align*} Consequently, we see that $$ \int_0^t (t-s)^{\alpha-1} (\psi_n(s)+c_0)\partial_t^\alpha \psi_n(s) ds \quad \mbox{tends to } \quad \int_0^t (t-s)^{\alpha-1} (\psi(s)+c_0)\partial_t^\alpha \psi(s) ds $$ for almost all $t\in(0,T)$.
Now letting $n\to\infty$ on both sides of the inequality \eqref{ineq-coe-appro}, we find $$ \frac2{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} (\psi(s)+c_0)\partial_t^\alpha \psi(s) d s \ge (\psi(t)+c_0)^2 - c_0^2,\quad \mbox{almost all $t\in(0,T)$}. $$ We finish the proof of the lemma by changing $\psi(t) + c_0$ back to $\varphi(t)$. \end{proof} \begin{lem} \label{lem-analy} Let $u_0\in L^2(\Omega)$. Then the unique solution $u:(0,T)\to H^2(\Omega)$ is $t$-analytic and can be analytically extended to $(0,\infty)$. Moreover, there exists a constant $C>0$ such that $$ \|u(\cdot,t)\|_{L^2(\Omega)} \le Ce^{Ct}\|u_0\|_{L^2(\Omega)},\quad t>0. $$ \end{lem} \begin{proof} For the proof of the $t$-analyticity of the solution, one can refer to Sakamoto and Yamamoto \cite{SY}, and Li, Huang and Yamamoto \cite{LHY}. It is sufficient to show that the solution $u$ admits exponential growth. For this, we multiply both sides of the equation in \eqref{eq-gov} by $u$ and integrate over $\Omega$ to derive that $$ \langle\partial_t^\alpha (u - u_0), u\rangle_{L^2(\Omega)} + \langle Au,u\rangle_{L^2(\Omega)} = 0. $$ Now applying $J^\alpha$ to both sides of the above equation, using Lemma \ref{lem-coercivity} and integration by parts, we see that \begin{align*} &\frac12 \|u\|_{L^2(\Omega)}^2 - \frac12\|u_0\|_{L^2(\Omega)}^2 + J^\alpha\left(\int_\Omega a_{ij}(x) \partial_iu(x,t) \partial_ju(x,t) dx \right) \\ \le& J^\alpha \left( \int_\Omega (B(x)\cdot\nabla u(x,t)+c(x)u(x,t))(u(x,t)) dx\right). \end{align*} Here $B:=(b_1,\ldots,b_d)$. From the ellipticity \eqref{condi-elliptic} and the Cauchy-Schwarz inequality, for a sufficiently small $\varepsilon>0$, we can further derive $$ \frac12 \|u\|_{L^2(\Omega)}^2 - \frac12\|u_0\|_{L^2(\Omega)}^2 + J^\alpha (\|u(\cdot,t)\|_{H^1(\Omega)}^2) \le J^\alpha \left( \varepsilon \|u(\cdot,t)\|_{H^1(\Omega)}^2 + \frac{C}{\varepsilon} \|u(\cdot,t)\|_{L^2(\Omega)}^2 \right). $$ By taking $\varepsilon>0$ small enough, we have $$ \|u\|_{L^2(\Omega)}^2 + J^\alpha (\|u(\cdot,t)\|_{H^1(\Omega)}^2) \le C\|u_0\|_{L^2(\Omega)}^2 + CJ^\alpha (\|u(\cdot,t)\|_{L^2(\Omega)}^2), $$ which implies $$ \|u(\cdot,t)\|_{L^2(\Omega)}^2 \le C \|u_0\|_{L^2(\Omega)}^2 + \frac{C}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \|u(\cdot,s)\|_{L^2(\Omega)}^2 ds. $$ Therefore we conclude from the general Gronwall inequality that $$ \|u(\cdot,t)\|_{L^2(\Omega)} \le Ce^{Ct}\|u_0\|_{L^2(\Omega)},\quad t\ge0. $$ Thus the proof of the lemma is complete. \end{proof} \subsection{Laplace transform of $\partial_t^\alpha$} We define the Laplace transform $(Lu)(p)$ by $$ (Lu)(p) := \int_0^\infty e^{-pt} u(t) dt $$ for $\Re p>p_0$, where $p_0$ is some constant. The formulae of the Laplace transforms for fractional derivatives are well-known. For example, \begin{equation} \label{eq-lap-caputo} L(d_t^\alpha u)(p) = p^\alpha (Lu)(p) - p^{\alpha-1}u(0) \end{equation} for $\Re p>p_0$, where $p_0$ is some constant. The formula \eqref{eq-lap-caputo} is convenient for solving fractional differential equations. However, formula \eqref{eq-lap-caputo} requires some regularity of $u$. For instance, $u(0)$ must be well defined, and \eqref{eq-lap-caputo} does not make sense for $u\in H^\alpha(0,T)$ with $0<\alpha<\frac12$. Moreover, such regularity requirements should be consistent with the regularity which we can prove for solutions to fractional differential equations. In particular, the regularity required for the formula concerning the Laplace transform should not be too strong. Thus, for a formula like \eqref{eq-lap-caputo}, we have to make adequate regularity assumptions on $u$.
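To make this concrete, here is a minimal numerical sanity check (a sketch of ours, assuming scipy is available) of \eqref{eq-lap-caputo} for the simple example $u(t)=t$, for which $u(0)=0$ and $d_t^\alpha u(t) = t^{1-\alpha}/\Gamma(2-\alpha)$:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, p = 0.6, 2.5                                # sample values (our choice)
u  = lambda t: t                                   # u(0) = 0
du = lambda t: t**(1 - alpha) / gamma(2 - alpha)   # d_t^alpha u for u(t) = t

# L(d_t^alpha u)(p) versus p^alpha * Lu(p):
lhs = quad(lambda t: np.exp(-p*t) * du(t), 0, np.inf)[0]
rhs = p**alpha * quad(lambda t: np.exp(-p*t) * u(t), 0, np.inf)[0]
print(lhs, rhs)   # both approximately p^(alpha-2) = 0.277...
\end{verbatim}

Since $u(0)=0$, both quadratures agree with $p^{\alpha-2}$, in accordance with \eqref{eq-lap-caputo}; this is also consistent with the $\Halp$-version of the formula established below.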
In this section, we state the formula of the Laplace transform for the fractional derivative $\partial_t^\alpha$ in $H_\alpha(0,T)$. We set \begin{equation} \label{defi-V_alpha} \begin{split} V_\alpha(0,\infty) := \{ u\in L_{\rm loc}^1(0,\infty); u|_{(0,T)}\in H_\alpha(0,T) \mbox{ for any $T>0$,} \\ \mbox{ there exists a constant $C=C_u>0$ such that $|u(t)| \le Ce^{Ct}$ for $t\ge0$.} \} \end{split} \end{equation} Here we define the set $L_{\rm loc}^1(0,\infty)$ of functions defined in $(0,\infty)$ by $$ L_{\rm loc}^1(0,\infty) = \{u;u|_{(0,T)}\in L^1(0,T)\mbox{ for any } T>0\}. $$ Then we can state \begin{lem} \label{lem-lap-caputo} The Laplace transform $L(\partial_t^\alpha u)(p)$ can be defined for $u\in V_\alpha(0,\infty)$ by $$ L(\partial_t^\alpha u)(p) = \lim_{T\to\infty} \int_0^T e^{-pt} \partial_t^\alpha u(t) dt,\quad p>C_u $$ and $$ L(\partial_t^\alpha u)(p) = p^\alpha Lu(p), \quad p>C_u. $$ \end{lem} \begin{proof} One can refer to \cite{KRY} for the proof; for completeness, we provide it here. First, for $u\in H_\alpha(0,T)$, by Theorem 2.3 in \cite{KRY}, we can see that \begin{equation} \label{eq-Ju} J^{1-\alpha} u\in H_1(0,T) \subset H^1(0,T), \end{equation} and so $$ D_t^\alpha u = \frac{d}{dt} J^{1-\alpha} u \in L^2(0,T). $$ Theorem 2.4 from \cite{KRY} yields $\partial_t^\alpha u = \frac{d}{dt} J^{1-\alpha} u$ for $u\in H_\alpha(0,T)$. Let $T>0$ be arbitrarily fixed. Then, in view of \eqref{eq-Ju}, we integrate by parts to obtain \begin{align*} \int_0^T e^{-pt} \partial_t^\alpha u(t) dt =& \int_0^T e^{-pt} \frac{d}{dt} (J^{1-\alpha} u)(t) dt\\ =& \left[ J^{1-\alpha} u(t) e^{-pt} \right]_{t=0}^{t=T} + p\int_0^T e^{-pt} J^{1-\alpha} u(t) dt. \end{align*} The Sobolev embedding (e.g., \cite{Ad}) yields $$ H^\alpha(0,T)\subset \begin{cases} L^{\frac{2}{1-2\alpha}}(0,T), & \mbox{if } 0<\alpha<\frac12,\\ L^{\frac1\delta}(0,T), & \mbox{with any $\delta>0$ if } \alpha=\frac12,\\ L^\infty(0,T), & \mbox{if }\frac12<\alpha<1. \end{cases} $$ First, for $0<\alpha<\frac12$, the H\"older inequality implies \begin{align*} |J^{1-\alpha} u(t)| &= \left| \frac1{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} u(s) ds \right| \\ &\le C\left( \int_0^t (t-s)^{-\frac{2\alpha}{1+2\alpha}} ds \right)^{\frac{1+2\alpha}{2}} \left( \int_0^t |u(s)|^{\frac{2}{1-2\alpha}} ds \right)^{\frac{1-2\alpha}{2}} \\ &\le C\left( t^{\frac1{1+2\alpha}} \right)^{\frac{1+2\alpha}2} \|u\|_{L^{\frac2{1-2\alpha}}(0,T)} \to0 \end{align*} as $t\to0$. Next let $\alpha=\frac12$. We choose $\delta\in(0,\frac12)$. Setting $p_1=\frac1{1-\delta}$ and $q_1=\frac1\delta$, we apply the H\"older inequality to have \begin{align*} |J^{1-\alpha} u(t)| \le& C\left( \int_0^t |t-s|^{-\frac12 p_1} ds \right)^{\frac1{p_1}} \left( \int_0^t |u(s)|^{q_1} ds \right)^{\frac1{q_1}} \\ \le& C\left( \int_0^t |t-s|^{-\frac12 \frac1{1-\delta}} ds \right)^{1-\delta} \left( \int_0^t |u(s)|^{\frac1\delta} ds \right)^{\delta} \to 0 \end{align*} as $t\to0$ by $0<\delta<\frac12$. Finally, for $\frac12<\alpha<1$, we have $$ |J^{1-\alpha} u(t)| \le C\|u\|_{L^\infty(0,T)}\int_0^t (t-s)^{-\alpha} ds \to 0 $$ as $t\to0$. Thus we see that $$ \lim_{t\to0} J^{1-\alpha} u(t) = 0. $$ Hence \begin{align*} \int_0^T e^{-pt} \partial_t^\alpha u(t) dt =& \frac{e^{-pT}}{\Gamma(1-\alpha)} \int_0^T (T-s)^{-\alpha} u(s) ds \\ &+ \frac{p}{\Gamma(1-\alpha)} \int_0^T e^{-pt} \left( \int_0^t (t-s)^{-\alpha} u(s) ds \right) dt =: I_1+I_2.
\end{align*} Since $|u(t)| \le C_0 e^{C_0t}$ for $t\ge0$ with some constant $C_0>0$, we estimate \begin{align*} |I_1| \le& Ce^{-pT} \int_0^T (T-s)^{-\alpha} e^{C_0s} ds = Ce^{-pT} \int_0^T s^{-\alpha} e^{C_0(T-s)} ds \\ =& Ce^{-(p-C_0)T} \int_0^T s^{-\alpha} e^{-C_0s} ds \le Ce^{-(p-C_0)T} \int_0^\infty s^{-\alpha} e^{-C_0s} ds = Ce^{-(p-C_0)T} \frac{\Gamma(1-\alpha)}{C_0^{1-\alpha}}. \end{align*} Hence if $p>C_0$, then $\lim_{T\to\infty} I_1 = 0$. As for $I_2$, by Fubini's theorem, we see that \begin{align*} I_2 =& \frac{p}{\Gamma(1-\alpha)} \int_0^T \left( \int_s^T e^{-pt} (t-s)^{-\alpha} dt \right) u(s) ds \\ =& \frac{p}{\Gamma(1-\alpha)} \int_0^T \left( \int_0^{T-s} e^{-p\eta} \eta^{-\alpha} d\eta \right) e^{-ps} u(s) ds. \end{align*} For $p>C_0$, since $|u(s)|\le Ce^{C_0s}$ for all $s\ge0$, we have $$ \left| \left(\int_0^{T-s} e^{-p\eta} \eta^{-\alpha} d\eta\right) e^{-ps} u(s) \right| \le C\left(\int_0^\infty e^{-p\eta} \eta^{-\alpha} d\eta \right) e^{-(p-C_0)s} $$ for all $s>0$ and $T>0$, and the Lebesgue dominated convergence theorem yields \begin{align*} \lim_{T\to\infty} I_2 =& \frac{p}{\Gamma(1-\alpha)} \int_0^\infty \left( \int_0^\infty e^{-p\eta} \eta^{-\alpha} d\eta \right) e^{-ps} u(s) ds \\ =& \frac{p}{\Gamma(1-\alpha)} \frac{\Gamma(1-\alpha)}{p^{1-\alpha}} \int_0^\infty e^{-ps} u(s) ds =p^\alpha Lu(p) \end{align*} for $p>C_0$. Thus the proof of the lemma is complete. \end{proof} \subsection{Some results from spectral theory} We define the operator $D_m^k$, $k\in\mathbb N$, related to the eigenvalue $\lambda_m$ of the operator $-A$ as follows: $$ D_m^k \varphi := \frac1{2\pi i} \int_{\gamma_m} (\eta - \lambda_m)^k (\eta - A)^{-1} \varphi d\eta,\quad \varphi\in L^2(\Omega), $$ where $\gamma_m$ is a sufficiently small circle surrounding the eigenvalue $\lambda_m$ of the operator $-A$ (e.g., Kato \cite{Ka}). From the result of Suzuki and Yamamoto \cite{SYam}, we see that the multiplicity of the eigenvalue $\lambda_m$ is finite, and we denote it by $m_\lambda$. Then we have $$ D_m^k = 0 \mbox{ in $L^2(\Omega)$},\quad k\ge m_\lambda. $$ We call $P_m:=D_m^0$ the eigenprojection related to the eigenvalue $\lambda_m$ of the operator $-A$, and we have the following. \begin{lem} \label{lem-Pm} Let $k_0$ be a positive integer. If $\varphi \in L^2(\Omega)$ satisfies $D_m^{k_0} P_m \varphi = 0$, then $$ D_m^{k_0-1} P_m \varphi \in {\rm Ker}(\lambda_m - A). $$ \end{lem} \begin{proof} Since $P_m \varphi \in \mathcal D(A)$, we see that $A (\eta - A)^{-1} P_m \varphi = (\eta - A)^{-1} A P_m \varphi$ for any $\varphi\in L^2(\Omega)$, and hence \begin{align*} (\lambda_m - A) D_m^{k_0 - 1} P_m \varphi = \frac1{2\pi i} \int_{\gamma_m} (\eta - \lambda_m)^{k_0 - 1}(\eta - A)^{-1} (\lambda_m - A) P_m\varphi d\eta. \end{align*} By writing $\lambda_m - A = (\lambda_m - \eta) + (\eta - A)$, we have \begin{align*} (\lambda_m - A) D_m^{k_0 - 1} P_m \varphi =-\frac1{2\pi i} \int_{\gamma_m} (\eta - \lambda_m)^{k_0}(\eta - A)^{-1} P_m\varphi d\eta +\frac1{2\pi i} \int_{\gamma_m} (\eta - \lambda_m)^{k_0 - 1} P_m\varphi d\eta. \end{align*} The first term on the right-hand side equals $-D_m^{k_0}P_m\varphi$, which vanishes by assumption, while the second term vanishes by the residue theorem, since the integrand $(\eta - \lambda_m)^{k_0 - 1} P_m\varphi$ is analytic inside $\gamma_m$ for $k_0\ge1$. That is, $$ D_m^{k_0 - 1} P_m \varphi \in {\rm Ker}(\lambda_m - A). $$ This completes the proof. \end{proof} \section{Proof of Theorem \ref{thm-ucp}} \label{sec-ucp} This section is devoted to the proof of the first main result, Theorem \ref{thm-ucp}.
Before giving the proof, we first employ a Laplace transform argument to show uniqueness in determining the Neumann derivative of the initial value from the additional data of the solution on the subboundary, which plays a crucial role in the proof of Theorem \ref{thm-ucp}. We have \begin{lem} \label{lem-Dm} Assume $u_0\in L^2(\Omega)$, and let $u\in L^2(0,T;H^2(\Omega)\cap H_0^1(\Omega))$ with $u-u_0\in H_{\alpha}(0,T;L^2(\Omega))$ solve the initial-boundary value problem \eqref{eq-gov}. If $\partial_{\nu_A} u = 0$ on $\Gamma\times(0,T)$, then for any $m,k\in\mathbb N$, $\partial_{\nu_A} D_m^k u_0 = 0$ on the subboundary $\Gamma$. \end{lem} \begin{proof} From the $t$-analyticity of the solution stated in Lemma \ref{lem-analy}, we can uniquely extend $u(x,t)$ from $t\in(0,T)$ to $(0,\infty)$. Therefore, taking the Laplace transform on both sides of \eqref{eq-gov} implies \begin{equation} \label{eq-lap} \left\{ \begin{alignedat}{2} & A \widehat u(s) + s^{\alpha } \widehat u(s) = s^{\alpha -1} u_0 &\quad& \mbox{in $\Omega$,} \\ &\widehat u(s)|_{\partial\Omega}=0, &\quad& \Re s\ge s_0, \end{alignedat} \right. \end{equation} where we used the Laplace transform formula from Lemma \ref{lem-lap-caputo}. Therefore, for $-s^{\alpha }$ in the resolvent set $\rho(A)$ of the operator $A$, we see that $$ \widehat u(s) = s^{\alpha -1} \left(s^{\alpha } + A\right)^{-1} u_0. $$ Moreover, from the assumption $\partial_{\nu_A}u=0$ on $\Gamma\times(0,T)$ combined with the $t$-analyticity of the solution, it follows that $$ \partial_{\nu_A} \widehat u(s) = 0 \quad \mbox{on $\Gamma$}. $$ Now letting $\eta:=-s^{\alpha }$, we conclude from the above equality that $$ \partial_{\nu_A} (\eta-A)^{-1} u_0 = 0 \quad \mbox{on $\Gamma$} $$ for any $\eta\in \rho(A)$ (first for $\eta$ in the range of $-s^\alpha$, and then for all of $\rho(A)$ by the analyticity of the resolvent), from which we further verify $$ \frac1{2\pi i} \int_{\gamma_m} (\eta - \lambda_m)^k \partial_{\nu_A}(\eta - A)^{-1} u_0 d\eta = 0, $$ that is, $\partial_{\nu_A} D_m^k u_0 = 0$ in view of the definition of the operator $D_m^k$. This completes the proof of the lemma. \end{proof} Now we are ready for the proof of our first main result. \begin{proof}[Proof of Theorem \ref{thm-ucp}] Since $D_m^{m_\lambda} = 0$, Lemma \ref{lem-Pm} yields $$ (\lambda_m - A)(D_m^{m_\lambda-1}P_mu_0) = 0\quad \mbox{in $\Omega$.} $$ Moreover, since $D_m^k P_m u_0\in H^2(\Omega)\cap H_0^1(\Omega)$ and $\partial_{\nu_A} D_m^k P_m u_0=0$ on $\Gamma$, $k=0,1,2,\cdots$, we conclude from the unique continuation principle for elliptic equations that $D_m^{m_\lambda-1}P_m u_0 = 0$ in $\Omega$. A similar argument then yields $D_m^{m_\lambda-2}P_m u_0=0$ in $\Omega$. Continuing this procedure, we obtain $P_m u_0=0$ in $\Omega$ for any $m\in\mathbb N$. Therefore we must have $u_0=0$ from the completeness of the generalised eigenfunctions (see the last chapter of \cite{Ag}). Finally, from the uniqueness for the initial-boundary value problem \eqref{eq-gov}, it follows that $u\equiv0$. This completes the proof of our first main theorem. \end{proof} \section{Proof of Theorem \ref{thm-isp}} \label{sec-isp} Now let us turn to the proof of the uniqueness for the inverse source problem. The argument is mainly based on the weak unique continuation and the following Duhamel's principle for time-fractional diffusion equations. \begin{lem}[Duhamel's principle] \label{lem-Duhamel} Let $f\in L^2(\Omega)$ and $\mu\in C^1[0,T]$.
Then the weak solution $y$ to the initial-boundary value problem \eqref{eq-sp} allows the representation \begin{equation} \label{eq-Duhamel} y(\cdot,t)=\int_0^t\theta(t-s)\,v(\,\cdot\,,s) ds,\quad 0<t<T, \end{equation} where $v$ solves the homogeneous problem \begin{equation} \label{equ-homo} \begin{cases} \partial_t^\alpha v + A v=0 & \mbox{in }\Omega\times(0,T),\\ v=f & \mbox{in }\Omega\times\{0\},\\ v=0 & \mbox{on }\partial\Omega\times(0,T) \end{cases} \end{equation} and $\theta\in L^1(0,T)$ is the unique solution to the fractional integral equation \begin{equation}\label{eq-FIE-te} J^{1-\alpha}\theta(t)=\mu(t),\quad 0<t<T. \end{equation} \end{lem} The above conclusion is almost identical to Liu, Rundell and Yamamoto \cite[Lemma 4.1]{LRY15} for the single-term case and Liu \cite[Lemma 4.2]{L15} for the multi-term case, except for the presence of the non-symmetric part. Since the same argument still works in our setting, we omit the proof here. \begin{proof}[Proof of Theorem \ref{thm-isp}] Let $y$ satisfy the initial-boundary value problem \eqref{eq-sp} with $f(x)\,\mu(t)$, where $f\in H_0^1(\Omega)$ and $\mu\in C^1[0,T]$. Then $y$ takes the form of \eqref{eq-Duhamel} according to Lemma \ref{lem-Duhamel}. Applying the Riemann-Liouville fractional integral $J^{1-\alpha}$ to \eqref{eq-Duhamel}, we deduce \begin{align*} J^{1-\alpha}y(\,\cdot\,,t) & =\frac1{\Gamma(1-\alpha)}\int_0^t\frac1{(t-\tau)^{\alpha}}\int_0^\tau\theta(\tau-\xi)\,v(\,\cdot\,,\xi)\, d \xi d \tau\\ & =\frac1{\Gamma(1-\alpha )}\int_0^tv(\,\cdot\,,\xi)\int_\xi^t\frac{\theta(\tau-\xi)}{(t-\tau)^{\alpha }}\, d \tau d \xi\\ & =\int_0^tv(\,\cdot\,,\xi)\frac1{\Gamma(1-\alpha )}\int_0^{t-\xi}\frac{\theta(\tau)}{(t-\xi-\tau)^{\alpha }}\, d \tau d \xi\\ & =\int_0^tv(\,\cdot\,,\xi)J^{1-\alpha }\theta(t-\xi)\, d \xi=\int_0^t\mu(t-\tau)\,v(\,\cdot\,,\tau)\, d \tau, \end{align*} where we applied Fubini's theorem and used the relation \eqref{eq-FIE-te}. Then the vanishing of $\partial_{\nu_A}y$ on $\Gamma\times(0,T)$ immediately yields \[ \int_0^t\mu(t-\tau)\partial_{\nu_A}v(\,\cdot\,,\tau)\, d \tau=0\quad\mbox{on }\Gamma,\ 0<t<T. \] Differentiating the above equality with respect to $t$, we obtain \[ \mu(0)\partial_{\nu_A}v(\,\cdot\,,t)+\int_0^t\mu'(t-\tau)\partial_{\nu_A}v(\,\cdot\,,\tau) d \tau=0\quad\mbox{on }\Gamma,\ 0<t<T. \] Owing to the assumption that $|\mu(0)|\ne0$, we estimate \begin{align*} \|\partial_{\nu_A}v(\,\cdot\,,t)\|_{L^2(\Gamma)} & \le\frac1{|\mu(0)|}\int_0^t|\mu'(t-\tau)|\|\partial_{\nu_A}v(\,\cdot\,,\tau)\|_{L^2(\Gamma)}\, d \tau \\ & \le\frac{\|\mu\|_{C^1[0,T]}}{|\mu(0)|} \int_0^t\|\partial_{\nu_A}v(\,\cdot\,,\tau)\|_{L^2(\Gamma)}\, d \tau,\quad 0<t<T. \end{align*} Taking advantage of Gronwall's inequality, we conclude $\partial_{\nu_A}v=0$ on $\Gamma\times(0,T)$. Finally, we apply Theorem \ref{thm-ucp} to the homogeneous problem \eqref{equ-homo} to derive $v=0$ in $\Omega\times(0,T)$, implying $f=v(\,\cdot\,,0)=0$. This completes the proof of Theorem \ref{thm-isp}. \end{proof} \section{Concluding remarks} \label{sec-rem} In this paper, we considered the multi-term time-fractional diffusion equation with advection. By a Laplace transform argument, we transformed the problem \eqref{eq-gov} into an elliptic equation in the frequency domain. We then proved the weak unique continuation property of the solution to \eqref{eq-gov} by using the spectral decomposition of the general operator and unique continuation for the elliptic equation.
The statement in Theorem \ref{thm-ucp} is called the weak unique continuation property because we impose the homogeneous Dirichlet boundary condition on the whole boundary; such a condition is absent in the usual parabolic prototype (see Cheng, Lin and Nakamura \cite{CLN}, Lin and Nakamura \cite{LN} and Xu, Cheng and Yamamoto \cite{XCY}). As a direct consequence of the weak unique continuation, we proved the uniqueness in determining the source term from the boundary measurement. Let us mention that the argument used for the proof of the weak unique continuation principle heavily relies on the choice of the coefficients of the fractional derivatives, namely constant coefficients. It would be interesting to investigate what happens if this assumption is not valid. On the other hand, we mention that in the one-dimensional case, the unique continuation (not of weak type) for the fractional diffusion equation is valid; one can refer to the recent work of Li and Yamamoto \cite{LY19}, in which the theta function method and the Phragm\'en-Lindel\"of principle play essential roles in the proof. Unfortunately, the technique used in \cite{LY19} does not work for showing the unique continuation for the fractional diffusion equation in the higher-dimensional case, due to the absence of the theta function. This is one of the reasons why the unique continuation in the general case was only established in the weak sense. To sum up, to overcome the above difficulty, a new approach beyond the Laplace transform and spectral decomposition may need to be developed. \section*{Acknowledgement} The first author is supported by National Natural Science Foundation of China (No. 11871240) and self-determined research funds of CCNU from the colleges' basic research and operation of MOE (No. CCNU20TS003). The second author thanks National Natural Science Foundation of China (No. 11801326). The third author thanks the ENS Rennes and the AMOPA Section d'Ille-et-Vilaine (35) for their financial support. The fourth author is supported by Grant-in-Aid for Scientific Research (S) 15H05740 of Japan Society for the Promotion of Science, NSFC (No. 11771270, 91730303) and the \lq\lq RUDN University Program 5-100\rq\rq. This work was also supported by A3 Foresight Program \lq\lq Modeling and Computation of Applied Inverse Problems\rq\rq\ of Japan Society for the Promotion of Science.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Probabilistic SHM} Under the pattern recognition paradigm associated with Structural Health Monitoring (SHM) \cite{SHM}, data-driven methods have been established as a primary focus of research. Various machine learning tools have been applied in the literature, for example \cite{vanik2000bayesian,sohn2003review,chatzi2009unscented}, and used to infer the health or performance state of the monitored system, either directly or indirectly. Generally, algorithms for regression, classification, density estimation, or clustering learn patterns in the measured signals (available for training), and the associated patterns can be used to infer the state of the system in operation, given future measurements \cite{worden2006application}. Unsurprisingly, there are numerous ways to apply machine learning to SHM. Notably (and categorised \textit{generally}), advances have focussed on various probabilistic (e.g.\ \cite{vanik2000bayesian,ou2017vibration,flynn2010bayesian}) and deterministic (e.g.\ \cite{bornn2009structural,zhao2019deep,janssens2017deep}) methods. % Each approach has its advantages; however, considering certain challenges associated with SHM data (outlined in the next section), the current work focusses on probabilistic (i.e.\ statistical) tools: these algorithms appear to offer a natural solution to some key issues, which can otherwise prevent practical implementation. % Additionally, probabilistic methods can lead to predictions \textit{under uncertainty} \cite{papoulis1965} -- a significant advantage in risk-based applications. % \subsection{SHM, Uncertainty, and Risk} It should be clear that measured/observed data in SHM will be inherently uncertain, to some degree. Uncertainties can enter via \textit{experimental} sources, including limitations to sensor accuracy, precision or human error; further uncertainties will be associated with the model -- machine learning or otherwise -- including parametric variability, model discrepancy, and interpolation uncertainty. % Considering the implications of \textit{risk}, financially and in terms of safety, uncertainty should be mitigated (during data acquisition), and quantified (within models) as far as possible to inform decision making \cite{zonta2014value,cappello2015mechanical}. % That is, when supporting a financial or safety-critical decision, predictions should be presented with \textit{confidence}: clearly, a certain prediction, which implies a system is safe to use, differs significantly from an \textit{uncertain} prediction, supporting the same decision. If there is no attempt to quantify the associated uncertainties, there is no distinction between these scenarios. Various methods can return predictions with confidence (or \textit{credibility}) \cite{murphy}. The current work focusses on probabilistic models, which -- under Kolmogorov's axioms \cite{papoulis1965} -- allow for predictions under well-defined uncertainty, provided the model assumptions are \textit{appropriate}. \subsection{A Probabilistic Approach} Discussions in this work will consider the general strategy illustrated in Figure~\ref{SHM_khart}. That is, SHM is viewed as a multi-class problem, which categorises measured data into groups, corresponding to the condition of the monitored system. % The $i^{th}$ input, denoted by $\vec{x}_i$, is defined by a $d$-dimensional vector of variables, which represents an \textit{observation} of the system, such that $\vec{x}_i \in \mathbb{R}^d$.
% The data \textit{labels}, $y_i$, are used to specify the condition of the system, directly or indirectly. % Machine learning is introduced via the pattern recognition model, denoted $f(\cdot)$, and is used to infer relationships between the input and output variables, to inform predictive maintenance. \begin{figure}[pt] \centering \resizebox{\textwidth}{!}{% \begin{tikzpicture}[auto] \linespread{1} \tikzstyle{block} = [rectangle, thick, draw, text width=6em, text centered, minimum height=7em] \tikzstyle{block3} = [rectangle, thick, draw=black!40, text width=6em, text centered, minimum height=6em] \tikzstyle{block2} = [draw=black!0, rectangle, text width=5em, text centered, minimum height=6em] \tikzstyle{line} = [draw, -latex, thick] \node [block] (SHM) { $f(\cdot)$ \\ pattern recognition}; \node [block3, left of=SHM, node distance=45mm] (SP) {pre-processing, feature extraction}; \node [block2, left of=SP, node distance=30mm] (MD) {measured data}; \node [block3, right of=SHM, node distance=45mm] (PP) {post-\\processing}; \node [block2, right of=PP, node distance=30mm] (label) {diagnostic labels}; \path [line] (SP) -- node [above] {\textsl{inputs}} node [below] {$\vec{x}_i$} (SHM); \path [line, draw=black!40] (MD) -- (SP); \path [line, draw=black!40] (PP) -- (label); \path [line] (SHM) -- node [above] {\textsl{outputs}} node [below] {$y_i$} (PP); \end{tikzpicture} }% \caption{A \textsl{simplified} framework for pattern recognition within SHM.} \label{SHM_khart} \end{figure} The inputs $\vec{x}_i$ are assumed to be represented by some random vector $X$ (in this case, a continuous random vector), which can take any value within a given feature-space $\mathscr{X}$. The random vector is therefore associated with an appropriate probability density function (p.d.f.), denoted $p(\cdot)$, % such that the probability $P$ of $X$ falling within the interval $a < X \leq b$ is, % $P\left(a < X \leq b \right)\; = \int_{a}^{b} p\left(\vec{x}_i\right)\,d\vec{x}_i \;\textrm{such that}\; p\left(\vec{x}_i\right)\geq 0,\; \int_{\mathscr{X}} p\left(\vec{x}_i\right)\,d\vec{x}_i = 1$. % For a discrete classification problem, the labels $y_i$ are represented by a discrete random variable $Y$, which can take any value from the finite set, $y_i \in \mathscr{Y} = \{1,...,{K}\}$. Note: {discrete classification is presented in this work, although, SHM is regularly informed by regression models -- i.e.\ $y_i$ is continuous; this is application specific, and most of the motivational arguments remain the same.} $K$ is the number of classes defining the (observed) operational, environmental, and health conditions, while $\mathscr{Y}$ denotes the label-space. An appropriate probability mass function (p.m.f.), also denoted $p(\cdot)$, is such that, % $P\left({Y} = y_i\right) = p(y_i) \;\textrm{where}\; 0\leq P\left({Y} = y_i\right)\leq 1,\; \sum_{y_i\in \mathscr{Y}} P\left({Y} = y_i\right) = 1$. % Note: {context should make the distinction between p.m.fs and p.d.fs clear.} % Further details regarding probability theory for pattern recognition can be found in a number of well-written textbooks -- for example \cite{murphy,barber2012bayesian,gelman2013bayesian}. % \subsection{Layout} Section~\ref{s:sparse_data} summarises the most significant challenges for data-driven SHM, while Section~\ref{s:intro} suggests probabilistic methods to mitigate these issues. % Section~\ref{s:DGM} introduces theory behind directed graphical models (DGMs), which will be used to formally introduce each method.
% Section~\ref{s:case_studies} collects four case studies to highlight the advantages of probabilistic inference. Active learning and Dirichlet process clustering are applied to the Z24 bridge data. % Semi-supervised learning is applied to data recorded during ground vibration tests of a Gnat aircraft. % Multi-task learning is applied to simulated and experimental data from shear-building structures. % Note: the applications presented here were introduced in previous work by the authors. The related SHM literature is referenced in the descriptions of each mode of inference. \section{Incomplete Data and Missing Information}\label{s:sparse_data} Arguably, the most significant challenge when implementing pattern recognition for SHM is missing information. % Primarily, it is difficult to collect data that might represent {damage states} or the system in extreme {environments} (such as earthquakes) \textit{a priori}; % data are usually only available for a limited subset of the possible conditions for training algorithms \cite{SHM}. % As a result, conventional methods are restricted to novelty detection, % as the information required to inform \textit{multi-class} predictive models (that can localise and classify damage, as well as detect it \cite{worden2006application}) is unavailable or not obtained. For the measurements $\vec{x}_i$ that are available -- as well as those that are recorded during operation (\textit{in situ}) -- \textit{labels} to describe what the signals represent, $y_i$, are rarely at hand. % This missing information is usually due to the cost associated with manually inspecting structures (or data), as well as the practicality of investigating each observation. % The absence of labels makes defining and updating (multi-class) machine learning models difficult, particularly in the online setting, % as it can become difficult to determine if/when novel valuable information has been recorded, and what it represents \cite{onlineAL}. % For example, consider streaming data, recorded from a sub-sea pipeline. Comparisons of measured data to the model might indicate novelty; however, without labels, it is difficult to include this new information in a supervised manner: the measurements might represent another operational condition, abnormal wave loads, actual damage, or some other condition. \section{New Modes of Probabilistic Inference}\label{s:intro} New modes of probabilistic inference are being proposed to address challenges with SHM data. % Specifically, the algorithms focus on probabilistic frameworks to deal with \textit{limited labelled data}, as well as \textit{incomplete measured data}, that only correspond to a subset of the expected conditions \textit{in situ}. \subsection{Partially-Supervised Learning} \textit{Partially-supervised learning} allows multi-class inference in cases where labelled data are limited. % Missing label information is especially relevant to practical applications of SHM: while \textit{fully} labelled data are often infeasible, it can be possible to include labels for a limited set (or \textit{budget}) of measurements. % Typically, the budget is limited by some expense incurred when investigating the signals; this might include direct costs associated with inspection, or loss of income due to down-time \cite{BULL2020106653}.
Generally speaking, partially-supervised methods can be used to perform multi-class classification, while utilising \textit{both} labelled $\mathcal{D}_l$ and unlabelled $\mathcal{D}_u$ signals within a \textit{unifying} training scheme \cite{Schwenker2014} -- as such, the training set $\mathcal{D}$ becomes, \begin{align} \mathcal{D} &=\mathcal{D}_l \cup \mathcal{D}_u \\ &= \left\{\vec{X},\vec{y}\right\} \cup \tilde{\vec{X}}\\[1em] \left\{\vec{X},\vec{y}\right\} &\triangleq \left\{\vec{x}_i,y_i\right\}_{i=1}^{n}\\ \tilde{\vec{X}} & \triangleq \left\{\tilde{\vec{x}}_i \right\}_{i=1}^{m} \end{align} \textit{Active} and \textit{semi-supervised} techniques are suggested -- as two variants of partially-supervised learning -- to combine/include information from labelled and unlabelled SHM data \cite{bull2018active,onlineAL,BULL2020106653}. \subsubsection{Semi-supervised learning} Semi-supervised learning utilises \textit{both} the labelled and unlabelled data to inform a classification \textit{mapping}, $f: \mathscr{X} \mapsto \mathscr{Y}$. % Often, a semi-supervised learner will use information in $\mathcal{D}_u$ to further update/constrain a classifier learnt from $\mathcal{D}_l$ \cite{mccallumzy1998employing}, or, alternatively, partial supervision can be implemented as constraints on an \textit{unsupervised} clustering algorithm \cite{SS}. This work focusses on classifier-based methods; however, constraints on clustering algorithms are discussed in later sections. Arguably, the most simple/intuitive method to introduce unlabelled data is \textit{self-labelling} \cite{zhu2005semi}. In this case, a classifier is trained using $\mathcal{D}_l$, which is used to predict labels for the unlabelled set $\mathcal{D}_u$. % This defines a new training-set -- some labels in $\mathcal{D}$ are the ground truth, from the supervised data, and the others are \textit{pseudo-labels}, predicted by the classifier. % Self-labelling is simple, and it can be applied to any supervised method; however, the effectiveness is highly dependent on the method of implementation, and the supervised algorithm within it \cite{SS}. Generative mixture models offer a formal \textit{probabilistic} framework to incorporate unlabelled data \cite{cozman2003semi,nigam1998learning}. % Generative mixtures apply the \textit{cluster assumption}: \textsl{`if points are in the same cluster, they are likely to be of the same class'}. % Note: {the cluster assumption does not necessarily imply that each class is represented by a single, compact cluster; instead, the implication is that observations from different classes are unlikely to appear in the same cluster \cite{SS}.} Through density estimation \cite{barber2012bayesian}, a mixture of base-distributions can be used to estimate the underlying distribution of the data, $p(\vec{x}_i, y_i)$, and unlabelled observations can be included in various ways \cite{mccallumzy1998employing,vlachos2009unsupervised}. For example, the Expectation Maximisation (EM) algorithm (used to learn mixture models in the unsupervised case \cite{murphy}) can be modified to incorporate labelled observations \cite{nigam1998learning,mccallumzy1998employing}. % Figure~\ref{fig:gmm_ss_eg} demonstrates how a Gaussian mixture, given acoustic emission data \cite{AE}, can be improved by considering the surrounding unlabelled examples (via EM).
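To make the mechanics concrete, a minimal (Python) sketch of such a modified EM scheme is included below. This is an illustrative maximum-likelihood implementation only -- not the code behind the cited studies -- and it assumes zero-indexed integer labels, at least two labelled observations per class for initialisation, and a small diagonal term \texttt{reg} to keep the covariance estimates well-conditioned. The responsibilities of labelled points are held fixed (one-hot), while those of unlabelled points are re-estimated at each E-step.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def semi_supervised_gmm(X_l, y_l, X_u, K, n_iter=50, reg=1e-6):
    """EM for a GMM trained on labelled (X_l, y_l) and unlabelled X_u data."""
    X = np.vstack([X_l, X_u])
    n, d = X.shape
    R_l = np.eye(K)[y_l]                    # fixed one-hot responsibilities

    # initialise the parameters from the labelled data alone
    lam = R_l.mean(axis=0)
    mu = np.array([X_l[y_l == k].mean(axis=0) for k in range(K)])
    Sigma = np.array([np.cov(X_l[y_l == k].T) + reg * np.eye(d)
                      for k in range(K)])

    for _ in range(n_iter):
        # E-step: update responsibilities for the unlabelled block only
        lik = np.column_stack(
            [lam[k] * multivariate_normal.pdf(X_u, mu[k], Sigma[k])
             for k in range(K)])
        R = np.vstack([R_l, lik / lik.sum(axis=1, keepdims=True)])

        # M-step: weighted maximum-likelihood updates over all of the data
        Nk = R.sum(axis=0)
        lam = Nk / n
        mu = (R.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (R[:, k, None] * diff).T @ diff / Nk[k] + reg * np.eye(d)
    return lam, mu, Sigma
\end{verbatim}
In this form, the unlabelled observations simply re-weight the maximum-likelihood estimates; the Bayesian (conjugate) treatment applied later in this work replaces the M-step with posterior updates.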
% \begin{figure}[pt] \centering \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_2a.pdf} \caption{\label{a}}\label{fig:gmm_sl} \end{subfigure} \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_2b.pdf} \caption{\label{b}}\label{fig:gmm_ssl} \end{subfigure} \caption{Semi-supervised GMM for three-class AE data: (\subref{a}) supervised learning, given the labelled data only, $\bullet$ markers. (\subref{b}) semi-supervised learning, given the labelled \textsl{and} unlabelled data, $\bullet/ \circ$ markers. Adapted from \protect\cite{bull_2019thesis}.}\label{fig:gmm_ss_eg} \end{figure} To summarise, semi-supervised methods allow algorithms to learn from information in the available unlabelled measurements as well as a limited set of labelled data. % In practice, semi-supervised inference implies that the cost associated with labelling data could be managed in SHM \cite{chen2013,chen2014}, as the information in a small set of labelled signals is combined with larger sets of unlabelled data \cite{bull2019damage}. \subsubsection{Active Learning} Active learning is an alternative partially-supervised method; the key hypothesis is that an algorithm can provide improved performance, using fewer training labels, if it is allowed to select the data from which it learns \cite{settles2012active}. % As with semi-supervised techniques, the learner utilises $\mathcal{D}_l$ and $\mathcal{D}_u$ -- however, active algorithms query/annotate the unlabelled data in $\mathcal{D}_u$ to extend the labelled set $\mathcal{D}_l$. Thus, an active learner attempts to define an accurate mapping, $f: \mathscr{X} \mapsto \mathscr{Y}$, while keeping queries to a minimum \cite{two_faces}; general (and simplified) steps are illustrated in Figure~\ref{AL_frmwrk}. % \begin{figure}[pt] \centering \resizebox{\textwidth}{!}{% \begin{tikzpicture}[auto] \tikzstyle{block} = [rectangle, thick, draw=black!80, text width=8em, text centered, minimum height=7em, fill=black!5] \tikzstyle{line} = [draw, -latex, thick] \node [block, node distance=42mm] (A) {provide\\ unlabelled input data}; \node [block, right of=A, node distance=42mm] (B) {establish which data are the most informative}; \node [block, right of=B, node distance=42mm] (C) {provide labels for these data}; \node [block, right of=C, node distance=42mm] (D) {train a classifier on this informed subset}; \path [line, draw=black!80] (A) -- (B); \path [line, draw=black!80] (B) -- (C); \path [line, draw=black!80] (C) -- (D); \path [line, draw=black!80, dashed] (D) -- node {} ++(0,-2cm) -| (A) node[pos=0.25] {} node[pos=0.75] {}; \end{tikzpicture} } \caption{A general/simplified active learning heuristic.} \label{AL_frmwrk} \end{figure} The critical step for active algorithms is how to select the most informative signals to investigate \cite{wang_density,Schwenker2014}. For example, \textit{Query by Committee (QBC)} methods build an ensemble/committee of classifiers using a small, initial (random) sample of labelled data, leading to multiple predictions for unlabelled instances. Observations with the most conflicted label predictions are viewed as informative, thus, they are queried \cite{wang_density}. % On the other hand, \textit{uncertainty-sampling} usually refers to a framework that is based around a single classifier \cite{kremer_asvm,settles2012active}, where signals with the \textit{least confident} predicted label, given the model, are queried. 
% (It is acknowledged that QBC methods can also be viewed as a type of uncertainty sampling.) % Uncertainty sampling is (perhaps) most interpretable when considering probabilistic algorithms, as the posterior probability over the class-labels $p(y_i\,|\,\vec{x}_i)$ can be used to quantify uncertainty/confidence \cite{bull2020investigating}. % For example, consider a binary (two-class) problem: intuitively, uncertain samples could be instances whose posterior probability is nearest to $0.5$ for both classes. This view can be extended to multiple ($> 2$) classes using the \textit{Shannon entropy} \cite{mackay2003information} as a measure of uncertainty; i.e.\ high entropy (uncertain) signals given the GMM of the acoustic emission data \cite{AE} are illustrated in Figure~\ref{fig:entQ}. \begin{figure}[pt] \centering \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_4a.pdf} \caption{\label{a}} \label{fig:entQ} \end{subfigure} \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_4b.pdf} \caption{\label{b}} \label{fig:likQ} \end{subfigure} \caption{Uncertainty sampling for the AE data: {$\blacktriangleright\;\blacktriangleleft \blacktriangledown$~markers} show the training set, and $\bullet$ markers show the unlabelled data -- circles indicate queries by the active learner (a) based on entropy, (b) based on likelihood -- adapted from \protect\cite{bull_2019thesis}.}\label{fig:al_ae} \end{figure} In summary, as label information is limited by cost implications in practical SHM \cite{bull2019stsd}, active algorithms can be utilised to automatically administer the label budget, by selecting the most \textit{informative} data to be investigated -- such that the performance of predictive models is maximised \cite{bull2019machiningAL}. \subsection{Dirichlet Process Mixture Models for Nonparametric Clustering} Dirichlet Process (DP) mixture models \cite{neal2000markov} offer another probabilistic framework to deal with limited labels as well as incomplete data \textit{a priori}. % The DP is suggested as an (unsupervised) Bayesian algorithm for nonparametric clustering, used to perform inference online such that the need for extensive training-data (before implementing the SHM strategy) is mitigated \cite{rogers2019}. % As such, unlike partially-supervised methods, labels are always an additional \textit{latent} variable (they are never observed); thus, the ground truth of $y_i$ is not known during inference. % Label information has the potential to be incorporated, however; either within the SHM strategy \cite{rogers2019}, or at the algorithm level to define a semi-supervised DP \cite{vlachos2009unsupervised}. % Conveniently, Bayesian properties of the DP allow the incorporation of prior knowledge and updates of belief, given the observed data. % The aim is to avoid the need for comprehensive training-data, while retaining flexibility to include any available data formally as prior knowledge. % Additionally, as there is a reduction in the number of user-tuned parameters, models can be implemented to perform powerful online learning with minimal \textit{a priori} input/knowledge, in terms of access to data or a physical model \cite{rogers2019}. \subsubsection{Dirichlet Process Clustering} A popular analogy to describe the DP (for clustering) considers a restaurant with an infinite number of tables \cite{aldous1985exchangeability} (i.e.\ clusters in $\mathscr{Y}$).
Customers -- resembling observations in $\mathscr{X}$ -- arrive and sit at one of the tables (according to some probability) which are either occupied or vacant. As a table becomes more popular, the probability that customers join it increases. % The seating arrangement can be viewed to represent a DP mixture. Importantly, the probability that a \textit{new} vacant table is chosen (over an existing table) is defined by a hyperparameter $\alpha$, associated with the DP. % In consequence, $\alpha$ is sometimes referred to as the \textit{dispersion value} -- high values lead to an increased probability that new tables (clusters) are formed, while low values lead to fewer tables, as new tables are less likely to be initiated. The analogy should highlight a useful property of DP mixtures: the number of clusters $K$ (i.e.\ tables) does not need to be defined in advance; instead, it is determined by the model and the data (as well as $\alpha$) \cite{vlachos2009unsupervised}. % As a result, the algorithm can be particularly useful when clustering SHM signals online, as the model can adapt and update, selecting the most appropriate value for $K$ as new information becomes available. % To demonstrate, consider a mixture of Gaussian base-distributions; a conventional \textit{finite mixture} (a GMM) requires the number of components $K$ to be defined \textit{a priori}, as in the supervised Gaussian Mixture Model (GMM) with $K=3$, shown in Figures~\ref{fig:gmm_ss_eg} and~\ref{fig:al_ae}. % As suggested by the analogy, a DP can be interpreted as an \textit{infinite} mixture, such that $K \rightarrow \infty $ \cite{rasmussen2000igmm}; this allows for the probabilistic inference of $K$ through the DP prior. An example DP-GMM for the same AE data \cite{AE} is shown in Figure~\ref{fig:DPcl}; the most likely number of components has been automatically found, $K = 3$, given the data and the model for $\alpha=0.1$. % The effect of the \textit{dispersion} hyperparameter $\alpha$ can be visualised in Figure~\ref{fig:DPK}, which shows the posterior-predictive-likelihood of $K$ given the data for various values of $\alpha$. % Considering that $K=3$, an appropriate hyperparameter range appears to be $0.01\leq\alpha\leq0.1$; although, as each class is clearly non-Gaussian, higher values of $K$ are arguably more appropriate to approximate the underlying density of the data. % Interestingly, for low values of $\alpha$, three components appear significantly more likely to describe the data than two (or one). % \begin{figure}[pt] \centering \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_5a.pdf} \caption{\label{a}}\label{fig:DPcl} \end{subfigure} \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_5b.pdf} \caption{\label{b}}\label{fig:DPK} \end{subfigure} \caption{Unsupervised Dirichlet process Gaussian mixture model for the three-class AE data: (\subref{a}) unsupervised DP clustering, $\bullet/ \circ$ markers are the ground-truth/predicted values for $y_i$. (\subref{b}) predictive likelihood for the number of clusters $K$ given $\alpha$, i.e.\ $p(K|\mathcal{D},\alpha)$.}\label{fig:DP} \end{figure} For SHM in practice, the implementation of the DP for online clustering means that an operator does not need to specify an expected number of normal, environmental or damage conditions (components $K$) in order to build the model, which can be difficult or impossible to define for a structure in operation \cite{rogers2019}.
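The table-assignment prior in this analogy can be written down directly: the $i^{th}$ customer (zero-indexed) joins an occupied table $k$ with probability $n_k/(i+\alpha)$, where $n_k$ is the current occupancy, or opens a new table with probability $\alpha/(i+\alpha)$. A minimal (Python) sketch of sampling from this prior is given below, purely for illustration -- it implements only the standard Chinese-restaurant seating probabilities, not the full DP-GMM inference of \cite{rogers2019}.
\begin{verbatim}
import numpy as np

def crp_assignments(n_customers, alpha, seed=None):
    """Sample a seating arrangement from the Chinese restaurant process."""
    rng = np.random.default_rng(seed)
    counts = []                       # counts[k] = customers at table k
    labels = []
    for i in range(n_customers):
        # existing tables in proportion to occupancy; a new table via alpha
        probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)          # a new table (cluster) is opened
        else:
            counts[k] += 1
        labels.append(k)
    return labels

# higher dispersion values tend to produce more clusters, on average
for a in (0.01, 0.1, 1.0):
    print(a, len(set(crp_assignments(500, a, seed=0))))
\end{verbatim}
Sampling with increasing $\alpha$ reproduces the behaviour in Figure~\ref{fig:DPK}: small dispersion values concentrate the prior on few clusters, while larger values spread it over many.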
\subsection{Transfer and Multi-task Learning} Finally, methods for \textit{transfer} \cite{Gao2018,GARDNER2020106550,Jang2019} and \textit{multi-task} \cite{Ping2019,Yong2019} learning are proposed for inference with incomplete or limited training-data. In general terms, the idea for SHM applications is that valuable information might be transferred or shared, in some sense, between similar systems (via measured and/or simulated data). % By considering \textit{shared} information, the performance of predictive models might improve, despite insufficient training observations \cite{Chakraborty2011,Ye2017,Dorafshan2018}. % For example, consider wind turbines in an offshore wind-farm; one system may have comprehensively labelled measurements, investigated by the engineer, corresponding to a range of environmental effects; other turbines within the farm are likely to experience similar effects; however, the measured signals might be incomplete, with partial labelling or no labels at all. % Various tools \cite{pan2009survey} offer frameworks to transfer \textit{different aspects} of shared information. % For the methods discussed here, it is useful to define two objects \cite{GARDNER2020106550}: \begin{itemize} \item A \textbf{Domain} $\mathscr{D} = \{\mathscr{X},p(\vec{x}_i)\}$ is an object that consists of a feature space $\mathscr{X}$ and a marginal probability distribution $p(\vec{x}_i)$ over a finite sample of feature data {$\left\{\vec{x}_i\right\}_{i=1}^{n} \in \mathscr{X}$}. \vspace{1em} \item A \textbf{Task} $\mathcal{T} = \{\mathscr{Y},f(\cdot)\}$ is a combination of a label space $\mathscr{Y}$ and a predictive model/ function $f(\cdot)$. \end{itemize} \textit{Domain adaptation} is one approach to transfer learning, following a framework which maps the distributions from feature/label spaces (i.e.\ $\mathscr{X}$/$\mathscr{Y}$) associated with \textit{different} structures into a shared (more \textit{consistent}) space. The observations are typically \textit{labelled} for one structure only; therefore, a predictive model $f(\cdot)$ can be learnt, such that label information is \textit{transferred} between domains. % The domain with labelled data is referred to as the \textit{source} domain $\mathscr{D}_s$ -- shown in Figure~\ref{fig:Ds} -- while the unlabelled data correspond to the \textit{target} domain $\mathscr{D}_t$ -- shown in Figure~\ref{fig:Dt}. % Importantly, a classifier $f(\cdot)$ applied in the projected latent space of Figure~\ref{fig:Ls} should generalise to the target structure, despite missing label information. % \begin{figure}[pt] \raggedright \begin{minipage}[b]{0.43\textwidth} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{figures/fig_6a.pdf} \caption{Source domain $\mathscr{D}_s$}\label{fig:Ds} \end{subfigure}\\[\baselineskip] \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{figures/fig_6b.pdf} \caption{Target domain $\mathscr{D}_t$}\label{fig:Dt} \end{subfigure} \end{minipage} \hfill \begin{subfigure}[b]{0.54\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_6c.pdf} \caption{Latent space}\label{fig:Ls} \end{subfigure} \caption{Visualisation of knowledge transfer via domain adaptation. Ellipses represent clusters of data -- coloured according to labels. (a) and (b) are the source and target domains respectively, in their original sample spaces.
(c) shows the source and target data mapped into a shared, more consistent latent space.}\label{fig:Da} \end{figure} \textit{Multi-task learning} considers shared information from an alternative perspective. As with domain adaptation, knowledge from \textit{multiple domains} is used to improve tasks \cite{pan2009survey}; however, in this case, each domain is weighted equally \cite{zhang2018overview}. The goal is, therefore, to generate an improved predictive function $f(\cdot)$ across multiple tasks by utilising \textit{labelled} feature data from several different \textit{source domains}. % This approach to inference is particularly useful when labelled training-data are insufficient across multiple tasks or systems. By considering the shared knowledge across various labelled domains, the amount of training data can, in effect, be increased. This work suggests \textit{kernelised Bayesian transfer learning} (KBTL) \cite{gonen2014kernelized} to model shared information. KBTL is a particular form of multi-task learning, which can be viewed as a method for \textit{heterogeneous} transfer; i.e.\ at least one feature space $\mathscr{X}_j$ for a domain $\mathscr{D}_j$ is not of the same dimension as another feature space $\mathscr{X}_k$ (in the set of domains), such that $d_j \neq d_k$ \cite{GARDNER2020106550}. % KBTL is a probabilistic method that performs two tasks: 1) finding a shared latent subspace for each domain and 2) inferring a discriminative classifier in the shared latent subspace in a Bayesian manner. It is assumed that there is a relationship between the feature space and the label space for each domain, and that all domains provide knowledge that will improve the predictive function $f(\cdot)$ for all domains \cite{GARDNER2020106550}. % In practice, methods such as KBTL should be particularly useful for SHM, as the (labelled) training-data are often insufficient or incomplete across structures. If, through multi-task/transfer learning, tasks from \textit{different} structures can be considered together, this should increase the amount of information available to train algorithms. In turn, this should increase the performance of predictive models, utilising the \textit{shared} information between systems. % \section{Directed Graphical Models}\label{s:DGM} It will be useful to introduce basic concepts behind \emph{directed graphical models} (DGMs), as these will be used to (visually) introduce each probabilistic algorithm. The terminology here follows that of \cite{murphy}. % Generally speaking, DGMs can be used to represent the joint distribution of the variables in a statistical model by making assumptions of \emph{conditional independence}. % For these ideas to make sense, the \emph{chain rule} is needed; that is, the joint distribution of a probabilistic model can be represented as follows, using any ordering of the variables $\{X_1,X_2\ldots,X_V\}$: % \begin{align} p(X_{1:V}) &= p(X_1)p(X_2\,|\,X_1)p(X_3\,|\,X_1,X_2)\ldots p(X_V\,|\,X_{1:V-1}) \label{eq:chain} \\ \nonumber X_{1:V} &\triangleq \{X_1,X_2\ldots,X_V\} \end{align} \noindent In practice, a problem with expression (\ref{eq:chain}) is that it becomes difficult to represent the conditional distribution $p(X_V\,|\,X_{1:V-1})$ as $V$ gets large. Therefore, to efficiently approximate large joint distributions, assumptions of conditional independence (\ref{eq:c_ind}) are critical.
% Specifically, conditional independence is denoted with $\bot$, and it implies that, \begin{align} A\,\bot\,B\,|\,C \;&\longleftrightarrow\; p(A,B\,|\,C) =p(A\,|\,C)\,p(B\,|\,C)\label{eq:c_ind} \end{align} Considering these ideas, nodes in a graphical model can be used to represent variables, while edges represent conditional dependencies. % For example, for the AE data (in Figures~\ref{fig:gmm_ss_eg}, \ref{fig:al_ae}, or \ref{fig:DPcl}), one can consider a random vector $\vec{x}_i$ to describe the (two-dimensional) measured features $\vec{x}_i = \left\{x^{(1)}_i, x^{(2)}_i\right\}$, and a random variable $y_i$ to represent the class label $\{1,2,3\}$. % As a result, the joint distribution of an appropriate model might be $p\left(\vec{x}_i, y_i\right)$. To simplify matters, the features can be considered to be independent (an invalid but often acceptable assumption), i.e.\ $x^{(1)}_i\,\bot\,x^{(2)}_i\,|\,y_i$. This leads to the following approximation of the distribution of the model (for a single observation): \begin{align} p\left(\vec{x}_i, y_i\right) = p\left(x^{(1)}_i\;\lvert\;y_i\right)p\left(x^{(2)}_i\;\lvert\;y_i\right)p\left(y_i\right) \label{eq:DE} \end{align} An appropriate distribution function $p(\cdot)$ can now be assigned to each of these densities (or masses). The DGM resulting from (\ref{eq:DE}) is plotted in Figure~\ref{fig:dgm1}. In many cases, the features in $\vec{x}_i$ are the \textit{observed} variables (measured), while the labels $y_i$ are the \textit{latent} (or hidden) variables that one wishes to infer. To visualise this, the observed and latent variables are shown by shaded/unshaded nodes respectively in Figure~\ref{fig:dgm1}. % For high-dimensional feature vectors (e.g.\ $d \gg 2$), plates can be used to represent conditionally-independent variables and avoid a cluttered graph, as shown in Figure~\ref{fig:dgm2}. % Another plate with ${i = \{1,\ldots,n\}}$ is included to represent \textit{independent and identically distributed} data, with $n$ observations. % The DGM now represents the whole dataset, which is a matrix of observed variables $\vec{X} = \left\{\vec{x}_1,\ldots,\vec{x}_n\right\}$, and the vector of labels, denoted $\vec{y} = \left\{y_1,\ldots,y_n\right\}$.
% This assumption implies that each sample was drawn independently from the same underlying distribution, such that the order in which data arrive makes no difference to the belief in the model, i.e.\ the likelihood of the dataset is, % \begin{align} p\left(\vec{X}, \vec{y}\right) = \prod_{i=1}^{n}{p\left(x^{(1)}_i\;\lvert\;y_i\right)p\left(x^{(2)}_i\;\lvert\;y_i\right)p\left(y_i\right)} \label{eq:DE_N} \end{align} \begin{figure} \centering \begin{subfigure}[]{.4\textwidth} \centering \begin{tikzpicture} \tikzstyle{RV}=[circle, fill=white!100, minimum size = 3.5em, thick, draw = black!90, node distance = 4em] \tikzstyle{constant}=[circle, inner sep=0pt, fill=black!100, minimum size = 1.2mm, draw = black!80, node distance = 4em] \tikzstyle{plate}=[rectangle, rounded corners, draw=black!80, label={[yshift=17pt]south:#1}]{}; \tikzstyle{connect}=[-latex, thick] \node[RV](Y)[]{$y_i$}; \node[RV, fill=black!10](X1)[below left=of Y]{${x}^{(1)}_{i}$}; \node[RV, fill=black!10](X2)[below right=of Y]{${x}^{(2)}_{i}$}; \path (Y) edge [connect] (X1) (Y) edge [connect] (X2); \end{tikzpicture} \caption{\label{a}}\label{fig:dgm1} \end{subfigure} \begin{subfigure}[]{.4\textwidth} \centering \begin{tikzpicture} \tikzstyle{RV}=[circle, fill=white!100, minimum size = 3.5em, thick, draw = black!90, node distance = 4em] \tikzstyle{constant}=[circle, inner sep=0pt, fill=black!100, minimum size = 1.2mm, draw = black!80, node distance = 4em] \tikzstyle{plate}=[rectangle, thick, rounded corners, draw=black!50, label={[yshift=17pt, xshift=-4.5em]south east:#1}]{}; \tikzstyle{connect}=[-latex, thick] \node[RV](Y)[]{$y_i$}; \node[RV, fill=black!10](X1)[below=of Y]{${x}^{(j)}_{i}$}; \node[plate=\small{$j = 1:d$}, inner sep=2em, fit= (X1)]{}; \node[plate=\small{$i = 1:n$}, inner sep=4em, fit= (X1) (Y)]{}; \path (Y) edge [connect] (X1); \end{tikzpicture} \caption{\label{b}}\label{fig:dgm2} \end{subfigure} \caption{Examples of directed graphical models (DGMs) based on the AE data. Shaded and unshaded nodes represent observed/latent variables respectively; arrows represent conditional dependencies; boxes represent plates.}\label{fig:DGM} \end{figure} \noindent The corresponding DGM can be used to describe a (maximum likelihood) Na\"ive Bayes classifier -- a simplified version of the generative classifiers applied later in this work. \section{Case Studies}\label{s:case_studies} Semi-supervised, active, and multi-task learning, as well as DP clustering, are now demonstrated in case studies. A brief overview of the theory for each algorithm is provided, with the corresponding DGMs; for details behind each algorithm, the reader is referred to the SHM application papers \cite{onlineAL,BULL2020106653,rogers2019,GARDNER2020106550,imac_kbtl}. \subsection{Active learning with Gaussian Mixture Models} A generative classifier is used to demonstrate probabilistic active learning. In this example -- originally shown in \cite{BULL2020106653} -- a Gaussian mixture model (GMM) is used to monitor streaming data from a motorway bridge, as if the signals were recorded online. The model defines a multi-class classifier, to aid both damage detection and identification, while limiting the number of (costly) system inspections.
\subsubsection{The directed graphical model} As the data are being approximated by a Gaussian mixture model, when a new class $k$ is discovered from the streaming data (following inspection), it is assigned a Gaussian distribution -- Gaussian clusters like this can be visualised for the AE data in Figure~\ref{fig:gmm_ss_eg}. % Note: the first DGM is explained in detail, to introduce the theory that is used throughout. % The conditional distribution of the observations $\vec{x}_i$ given label $y_i = k$ is, therefore, \begin{equation}\label{eq:c_likeli} p\left(\vec{x}_i \mid y_i = k\right) = \mathcal{N}\left(\vec{x}_i \,;\, \vec{\mu}_k, \vec{\Sigma}_k \right) \end{equation} (Semicolon notation $;$ is used to indicate that a function is parameterised by the variables that follow -- this is distinct from bar notation $\lvert$ which implies a conditional probability.) % $k$ is used to index the class group, given the number of observed clusters at that time $k \in \left\{1,...,K\right\}$. As such, $\vec{\mu}_k$ is the mean (centre) and $\vec{\Sigma}_k$ is the covariance (scatter) of the cluster of data $\vec{x}_i$ with label $k$, for $K$ Gaussian base-distributions. A discrete random variable is used to represent the labels $y_i$, which is categorically distributed, parameterised by a vector of \textit{mixing proportions} $\vec{\lambda}$, \begin{equation}\label{eq:c_prior} p\left(y_i \right) = \textrm{Cat}(y_i\,;\,\vec{\lambda}) \end{equation} the mixing proportions can be viewed as a histogram over the label values, such that $\vec{\lambda} = \left\{\lambda_1,...,\lambda_K\right\}$ and $p(y_i=k) = P\left(y_i=k\right) = \lambda_k$. The collected parameters of the model (from each component) are denoted by $\vec{\theta}$, such that ${\vec{\theta} = \left\{\vec{\Sigma},\vec{\mu},\vec{\lambda}\right\}} = \left\{\vec{\Sigma}_k,\vec{\mu}_k,\lambda_k\right\}_{k=1}^K$; therefore, the joint distribution of the model could be written as, \begin{align} p\left(\vec{x}_i,y_i\,;\,\vec{\theta}\right) = p\left(\vec{x}_i\,\lvert\,y_i\,;\,\vec{\theta}\right)p(y_i\,;\,\vec{\theta}) \end{align} However, to consider a more \textit{complete} model, a Bayesian approach is adopted. That is, the parameters $\vec{\theta}$ themselves are considered to be random variables, and, therefore, they are included in the joint distribution (rather than simply parameterising it), \begin{align} p\left(\vec{x}_i,y_i,\vec{\theta}\right) &= p\left(\vec{x}_i\,\lvert\,y_i,\vec{\theta}\right)p(y_i\,\lvert\,\vec{\theta})p\left(\vec{\theta}\right) \\ &= p\left(\vec{x}_i\,\lvert\,y_i,\vec{\Sigma},\vec{\mu}\right)p\left(\vec{\Sigma},\vec{\mu}\right) p(y_i\,\lvert\,\vec{\lambda})p\left(\vec{\lambda}\right) \end{align} This perspective has various advantages; importantly, it allows for the incorporation of prior knowledge regarding the parameters via the \textit{prior distribution} $p\left(\vec{\theta}\right)$. Additionally, when implemented correctly, Bayesian methods lead to robust, self-regularising models \cite{rasmussen2001occam}. To provide analytical solutions, it is convenient to assign conjugate (prior) distributions over the parameters, $p\left(\vec{\theta}\right) = p\left(\vec{\Sigma},\vec{\mu}\right)p\left(\vec{\lambda}\right)$. % Here it is assumed that $\{\vec{\Sigma},\vec{\mu}\}$ are independent from $\vec{\lambda}$, to define two conjugate pairs; one associated with the observations $\vec{x}_i$ and another with the labels $y_i$.
For the mean $\vec{\mu}_k$ and covariance $\vec{\Sigma}_k$, a conjugate (hierarchical) prior is the Normal Inverse Wishart (NIW) distribution, \begin{equation} p(\vec{\mu}_k,\vec{\Sigma}_k) = \textmd{NIW}(\vec{\mu}_k,\vec{\Sigma}_k \,;\,\vec{m}_0, \kappa_0, \nu_0, \vec{S}_0) \label{eq:NIW} \end{equation} This introduces the \textit{hyperparameters} $\left\{\vec{m}_0, \kappa_0, \nu_0, \vec{S}_0\right\}$ associated with the prior, which can be interpreted as follows: $\vec{m}_0$ is the prior mean for the location of each class $\vec{\mu}_k$, and $\kappa_0$ determines the strength of the prior; $\vec{S}_0$ is (proportional to) the prior mean of the covariance, $\vec{\Sigma}_k$, and $\nu_0$ determines the strength of that prior \cite{murphy}. % Considering that the streaming data will be normalised (online), it is reasonable that hyperparameters are defined such that the prior belief states that each class is represented by a zero-mean and unit-variance Gaussian distribution. % For the mixing proportions, the conjugate prior is a Dirichlet (Dir) distribution, parameterised by $\vec{\alpha}$, which encodes the prior belief of the mixing proportion (or weight) of each class. % In this case, each class is assumed equally weighted \emph{a priori} for generality -- although, care should be taken when setting this prior, as it is application specific, particularly for streaming data \cite{onlineAL}. \begin{align} p(\vec{\lambda}) &= \textmd{Dir}(\vec{\lambda}\,;\,\vec{\alpha}) \propto \prod^{K}_{k = 1}{\lambda_k}^{\alpha_k-1} \label{eq:dir}\\ \vec{\alpha} &\triangleq \left\{\alpha_1,\ldots,\alpha_K\right\} \end{align} With this information, the joint distribution of the model $p(\vec{x}_i, y_i, \vec{\theta})$ can be approximated, such that $p(\vec{X}, \vec{y}, \vec{\theta}) = \prod_{i=1}^{n}{p(\vec{x}_i, y_i, \vec{\theta})}$. The associated DGM can be drawn, including conditional dependencies and hyperparameters, for $n$ (supervised) training data in Figure~\ref{fig:DG_GMM}.
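Since the pairs are conjugate, the posterior updates that follow are available in closed form; a minimal (Python) sketch of the standard NIW update for a single class is given below, assuming the usual update equations (e.g.\ \cite{murphy}) -- the function name and interface are illustrative only.
\begin{verbatim}
import numpy as np

def niw_posterior(X_k, m0, kappa0, nu0, S0):
    """Closed-form NIW posterior update, given the (n, d) data X_k in class k."""
    n, d = X_k.shape
    xbar = X_k.mean(axis=0)
    diff = X_k - xbar
    S_x = diff.T @ diff                        # scatter about the class mean
    kappa_n = kappa0 + n                       # strengthened mean pseudo-count
    nu_n = nu0 + n                             # strengthened covariance pseudo-count
    m_n = (kappa0 * m0 + n * xbar) / kappa_n   # shrinks towards the prior mean
    dm = (xbar - m0)[:, None]
    S_n = S0 + S_x + (kappa0 * n / kappa_n) * (dm @ dm.T)
    return m_n, kappa_n, nu_n, S_n
\end{verbatim}
The hyperparameters act as pseudo-counts, so the prior (here, zero-mean, unit-variance classes for normalised data) is progressively overwhelmed as more labelled observations of class $k$ arrive.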
\begin{figure} \centering \begin{tikzpicture} \tikzstyle{RV}=[circle, fill=white!100, minimum size = 3.5em, thick, draw = black!90, node distance = 5em] \tikzstyle{constant}=[circle, inner sep=0pt, fill=black!100, minimum size = 1.2mm, draw = black!80, node distance = 4em] \tikzstyle{plate}=[rectangle, thick, rounded corners, draw=black!50, label={[yshift=17pt, xshift=-4.5em]south east:#1}]{}; \tikzstyle{connect}=[-latex, thick] \node[RV, fill=black!10](X){$\vec{x}_{i}$}; \node[RV](sigma)[left=of X]{$\vec{\Sigma}_{k}$}; \node[RV](mu)[below=of sigma]{$\vec{\mu}_{k}$}; \node[RV, fill=black!10](Y)[below=of X]{$y_i$}; \node[RV](Pi)[right=of Y]{$\lambda_k$}; \node[constant](alpha)[right=of Pi, label=below:$\vec{\alpha}$]{}; \node[constant](sigma_0)[left=of sigma, label=left:$\vec{S}_0$]{}; \node[constant](nu)[below = 1.6em of sigma_0, label=left:$\nu_0$]{}; \node[constant](kappa_0)[left=of mu, label=left:$\kappa_0$]{}; \node[constant](mu_0)[above = 1.6em of kappa_0, label=left:$\vec{m}_0$]{}; \node[plate=\small{$i = 1:n$}, inner sep=2em, fit= (X) (Y)]{}; \node[plate=\small{$k = 1:K$}, inner sep=2em, fit= (sigma) (mu)]{}; \node[plate=\small{$k = 1:K$}, inner sep=2em, fit= (Pi)]{}; \path (nu) edge [connect] (sigma) (sigma_0) edge [connect] (sigma) (kappa_0) edge [connect] (mu) (mu_0) edge [connect] (mu) (mu) edge [connect] (X) (sigma) edge [connect] (X) (Y) edge [connect] (X) (Pi) edge [connect] (Y) (alpha) edge [connect] (Pi) (sigma) edge [connect] (mu); \end{tikzpicture} \caption{Directed graphical model for the GMM $p(\vec{x}_i, y_i, \vec{\theta})$ over the \textsl{labelled} data $\mathcal{D}_l$. As training data are supervised, both $\vec{x}_i$ and $y_i$ are observed variables. Shaded and white nodes are the observed and latent variables respectively; arrows represent conditional dependencies; dots represent constants (i.e.\ hyperparameters). Adapted from \protect\cite{bull_2019thesis}.}\label{fig:DG_GMM} \end{figure} Having observed the labelled training data $\mathcal{D}_l = \left\{\vec{X},\vec{y}\right\}$, the posterior distributions can be defined by applying Bayes' theorem to each conjugate pair -- where $\vec{X}_k$ denotes the observations $\vec{x}_i \in \vec{X}$ with the labels $y_i = k$, \begin{align} p\left(\vec{\mu}_k,\vec{\Sigma}_k \mid \vec{X}_k \right) &= \frac{p\left(\vec{X}_k \mid \vec{\mu}_k,\vec{\Sigma}_k\right) p\left(\vec{\mu}_k,\vec{\Sigma}_k\right)}{p(\vec{X}_k)} \label{eq:pos1}\\[1em] p\left(\vec{\lambda} \mid \vec{y} \right) &= \frac{p(\vec{y} \mid \vec{\lambda})p\left(\vec{\lambda}\right)}{p(\vec{y})} \label{eq:pos2} \end{align} In general terms, while the prior $p(\vec{\theta})$ was the distribution over the parameters \textit{before} any data were observed, the posterior distribution $p(\vec{\theta}\mid\mathcal{D}_l)$ describes the parameters given the training data (i.e.\ conditioned on the training data). Conveniently, each of these has an analytical solution \cite{barber2012bayesian,murphy}. \subsubsection{Active sampling} To use the DGM to query informative data recorded from the motorway bridge, an initial model is learnt given a small sample of data recorded at the beginning of the monitoring regime. % In this case, it should be safe to assume the labels $y_i=1$, which corresponds to the normal condition of the structure. % As new (unlabelled) measurements arrive online, denoted $\tilde{\vec{x}}_i$, the model can be used to predict the labels \textit{under uncertainty}.
% The predictive equations are found by marginalising (integrating) out the parameters from the joint distribution (for each conjugate pair), \begin{align} &p(\vec{\tilde{x}}_i \,|\, \tilde{y}_i = k, \mathcal{D}_l) = \int \int p(\vec{\tilde{x}}_i \,|\, \vec{\mu}_k,\vec{\Sigma}_k)\underbrace{p(\vec{\mu}_k,\vec{\Sigma}_k \,|\, \mathcal{D}_l)}_{\textrm{Eq.(\ref{eq:pos1})}}~d\vec{\mu}_k d{\vec{\Sigma}_k} \label{eq:pp1}\\[1ex] &p(\tilde{y}_i \,|\, \mathcal{D}_l) = \int p(\tilde{y}_i \,|\, \vec{\lambda}) \underbrace{p(\vec{\lambda} \,|\, \mathcal{D}_l)}_{\textrm{Eq.(\ref{eq:pos2})}}~d\vec{\lambda} \label{eq:pp2} \end{align} Again, due to conjugacy, these have analytical solutions \cite{murphy}. The posterior predictive equations (\ref{eq:pp1}) and (\ref{eq:pp2}) can be combined to define the posterior over the label estimates given unlabelled observations of the bridge, \begin{equation}\label{bayes} p(\tilde{y}_i \,|\, \vec{\tilde{x}}_i,\mathcal{D}_l) = \frac{p(\vec{\tilde{x}}_i \,|\, \tilde{y}_i,\mathcal{D}_l)~p(\tilde{y}_i \,|\, \mathcal{D}_l)}{p(\vec{\tilde{x}}_i \,|\, \mathcal{D}_l)} \end{equation} Considering the predictive distribution (\ref{bayes}), the observations whose labels appear most uncertain can be investigated by the engineer. Once inspected, each queried observation is labelled $\{\vec{x}_i, y_i\}$, thus extending the (supervised) training set $\mathcal{D}_l$. Two measures of uncertainty are considered: a) the marginal likelihood of the new observation given the model (the denominator of Equation (\ref{bayes})) and b) the entropy of the predicted label, given by, \begin{equation}\label{entropy} H(\tilde{y}_i) = - \sum_{k=1}^{K}{p(\tilde{y}_i= k \,|\, \vec{\tilde{x}}_i,\mathcal{D}_l) \log{p(\tilde{y}_i= k \,|\, \vec{\tilde{x}}_i,\mathcal{D}_l)}} \end{equation} High-entropy queries select data at the boundary between two existing classes, while low-likelihood queries select data that appear unlikely given the current model estimate. % Visual examples of data that would be selected given these measures are shown in Figure \ref{fig:entQ} for high entropy, and Figure \ref{fig:likQ} for low likelihood. Figure \ref{process} demonstrates how streaming SHM signals might be queried using these uncertainty measures. % The (unlabelled) data arrive online, in batches of size $B$; the data that appear most uncertain (given the current model) are investigated. % The number of investigations per batch $q_b$ is determined by the label budget, which, in turn, is limited by cost implications. % Once labelled by the engineer, these data can be added to $\mathcal{D}_l$ and used to update the classification model.
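As an illustration, the selection step can be sketched as follows; here, \texttt{predict\_proba} and \texttt{marginal\_likelihood} are hypothetical stand-ins for the posterior predictive (\ref{bayes}) and its denominator, respectively.
\begin{verbatim}
import numpy as np

def entropy(post):
    """Shannon entropy of the label posterior, as in the entropy measure."""
    p = np.clip(post, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def select_queries(X_batch, predict_proba, marginal_likelihood, q_b,
                   measure="entropy"):
    """Return indices of the q_b most uncertain observations in a batch;
    predict_proba and marginal_likelihood are hypothetical stand-ins for
    the analytical predictive equations of the model."""
    if measure == "entropy":                   # (b) boundary data
        score = entropy(predict_proba(X_batch))
    else:                                      # (a) unlikely data
        score = -marginal_likelihood(X_batch)
    return np.argsort(-score)[:q_b]
\end{verbatim}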
% \begin{figure}[pt] \centering \resizebox{\textwidth}{!}{% \begin{tikzpicture}[auto] \begin{footnotesize} \tikzstyle{decision} = [diamond, draw, text width=4em, text badly centered, inner sep=2pt] \tikzstyle{block} = [rectangle, draw, text width=10em, text centered, rounded corners, minimum height=4em] \tikzstyle{block2} = [rectangle, draw, text width=10em, text centered, rounded corners, minimum height=4em, fill=black!5] \tikzstyle{line} = [draw, -latex'] \tikzstyle{cloud} = [draw=black!50, circle, node distance=3cm, minimum height=2em, text width=4em,text centered] \tikzstyle{point}=[draw, circle] \node [block2, node distance=4em] (start) {start:\\ initial training-set, $\mathcal{D}_l$}; \node [point, below of=start, node distance=12mm] (point) {}; \node [block, below of=point, node distance=12mm] (train) {train model\\ $p(\vec{\theta}\,|\, \mathcal{D}_l)$}; \node [decision, below of=train, node distance=25mm] (new data) {new data?}; \node [block2, left of=new data, node distance=40mm] (stop) {stop}; \node [block, below of=new data, node distance=25mm] (update u) {update unlabelled set, $\mathcal{D}_u$}; \node [cloud, left of=update u, node distance=40mm] (measured data) {measured data, batch size $B$}; \node [block, right of=update u, node distance=42mm] (predict) {predict \\ $p(\tilde{y}_i \,|\, \vec{\tilde{x}}_i,\mathcal{D}_l), \forall \vec{\tilde{x}}_i \in \mathcal{D}_u$}; \node [block, above of=predict, node distance=25mm] (query) {query $q_b$ informative data from $\mathcal{D}_u$}; \node [cloud, right of=query, node distance=40mm] (annotate) {labels provided by the engineer}; \node [block, above of=query, node distance=25mm] (update l) {update $\mathcal{D}_l$ to include new queried labels}; \path [line] (start) -- (point); \path [line] (point) -- (train); \path [line] (train) -- (new data); \path [line] (new data) -- node {no}(stop); \path [line] (new data) -- node {yes}(update u); \path [line, dashed, draw=black!50] (measured data) -- (update u); \path [line] (update u) -- (predict); \path [line] (predict) -- (query); \path [line, dashed, draw=black!50] (annotate) -- (query); \path [line] (query) -- (update l); \path [line] (update l) |- (point); \end{footnotesize} \end{tikzpicture} } \caption{Flow chart to illustrate the online active learning process -- adapted from \protect\cite{onlineAL}.} \label{process} \end{figure} \subsubsection{Z24 bridge dataset} The Z24 bridge was a concrete highway bridge in Switzerland, connecting the villages of Koppigen and Utzenstorf. Before its demolition in 1998, the bridge was used for experimental SHM purposes \cite{SIMCES}. Over a twelve-month period, a series of sensors were used to capture dynamic response measurements, from which the first four natural frequencies of the structure were extracted. Air/deck temperature, humidity and wind speed were also recorded \cite{OGz24}. There are a total of 3932 observations in the dataset. Before demolition, different types of damage were artificially introduced, starting from observation 3476 \cite{robust_og}. The natural frequencies and deck temperature are shown in Figure~\ref{z24data}. Visible fluctuations in the natural frequencies can be observed for $1200 \leq n \leq 1500$, while there is little variation following the introduction of damage at observation 3476. % It is believed that the asphalt layer in the deck experienced very low temperatures during this time, leading to increased structural stiffness. \begin{figure}[pt!]
\centering \includegraphics[width=.8\linewidth]{figures/fig_10.pdf} \caption{Z24 bridge data, time history of natural frequencies, colours represent three classes of data: normal data (blue), outlying data due to environmental effects (green), and damage (red).}\label{z24data} \end{figure} In the analysis, the four natural frequencies are the observation data, such that $\vec{x}_i \in \mathbb{R}^4$. The damage data are assumed to represent their own class, from observation 3476. Outlying observations within the remaining dataset are determined using the robust Minimum Covariance Determinant (MCD) algorithm \cite{fastmcd,robust_og}. % In consequence, a three-class classification problem is defined, according to the colours in Figure~\ref{z24data}: normal data (blue), outlying data due to environmental effects (green), and damage (red), corresponding to $y_i \in \{1,2,3\}$ respectively. Clearly, it is undesirable for an engineer to investigate the bridge following each data acquisition. Therefore, if active learning can provide an improved classification performance, compared to passive learning (random sampling) with the same sample budget, this demonstrates the relevance of active methods to SHM. \subsubsection{Results: Active learning} The model is applied \textit{online} to the frequency data from the Z24 bridge. % To provide an online performance metric, the dataset is divided into two equal subsets: one is used for training and querying by the active learner $\{\mathcal{D}_l, \mathcal{D}_u\}$, the other is used as a distinct/independent test set. The $f_1$ score is used as the performance metric (throughout this work); this is the harmonic mean of precision and recall \cite{murphy}, with values between 0 and 1; a perfect score corresponds to $f_1 = 1$. Precision (P) and recall (R) can be defined in terms of numbers of true positives ($TP$), false positives ($FP$) and false negatives ($FN$) for each class, $k \in Y$ \cite{murphy}, \begin{subequations} \begin{equation} P_k = \frac{TP_k}{TP_k + FP_k} \end{equation} \begin{equation} R_k = \frac{TP_k}{TP_k + FN_k} \end{equation} \end{subequations} The (macro) $f_1$ score is then defined by \cite{murphy}, \begin{subequations}\label{eq:f1} \begin{equation} f_{1,k} = \frac{2P_kR_k}{P_k + R_k} \end{equation} \begin{equation} f_{1} = \frac{1}{K} \sum_{k \in Y}{f_{1,k}} \end{equation} \end{subequations} Figure \ref{z24AL} illustrates improvements in classification performance when active learning is used to label 25\% and 12.5\% of the measured data. % Active learning is compared to the \textit{passive} learning benchmark, where the same number of data are labelled according to a random sample, rather than uncertainty measures. % Throughout the monitoring regime, if the GMM is used to select the training data, the predictive performance increases. % Most notably, drops in the $f_1$ score (corresponding to new classes being discovered) are less significant when active learning is used to select data; particularly when class two (environmental effects) is introduced. % This is because new classes are \emph{unlikely} given the current model, i.e.\ uncertainty measure (a). % Intuitively, novel classes are discovered sooner via uncertainty sampling. % For a range of query budgets and additional SHM applications refer to \cite{onlineAL}. % Code and animations of uncertainty sampling for the Z24 data are available at \url{https://github.com/labull/probabilistic_active_learning_GMM}.
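For reference, the macro $f_1$ score of equation (\ref{eq:f1}) can be computed as in the following sketch.
\begin{verbatim}
import numpy as np

def macro_f1(y_true, y_pred, K):
    """Macro f1: the mean of the per-class f1 scores over K classes."""
    f1s = []
    for k in range(1, K + 1):
        tp = np.sum((y_pred == k) & (y_true == k))
        fp = np.sum((y_pred == k) & (y_true != k))
        fn = np.sum((y_pred != k) & (y_true == k))
        P = tp / (tp + fp) if (tp + fp) > 0 else 0.0   # precision
        R = tp / (tp + fn) if (tp + fn) > 0 else 0.0   # recall
        f1s.append(2 * P * R / (P + R) if (P + R) > 0 else 0.0)
    return np.mean(f1s)
\end{verbatim}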
\begin{figure}[pt] \centering \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_11a.pdf} \caption{\label{a}} \label{z24ra} \end{subfigure} \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_11b.pdf} \caption{\label{b}} \label{z24rb} \end{subfigure} \caption{Online classification performance ($f_1$ score) for the Z24 data, for query budgets of (\subref{a}) 25\%; (\subref{b}) 12.5\% of the total dataset -- adapted from \protect\cite{onlineAL}.}\label{z24AL} \end{figure} \subsection{Semi-supervised updates to Gaussian Mixture Models} While active learning considered the unlabelled data $\mathcal{D}_u$ for querying, the observations only contribute to the model once labelled; in other words, once included in the labelled set $\mathcal{D}_l$. % A semi-supervised model, however, can consider both the labelled \textit{and} unlabelled data when approximating the parameters. % Therefore, ${\vec{\theta}}$ is estimated given \emph{both} labelled and unlabelled observations, such that the posterior becomes $p(\vec{\theta}\mid\mathcal{D}_l,\mathcal{D}_u)$. % This is advantageous for SHM, as \textit{unlabelled} observations can also contribute to the model estimate, reducing the dependence on costly supervised data. % Continuing the probabilistic approach, the original DGM in Figure \ref{fig:DG_GMM} can be updated (relatively simply) to become semi-supervised -- shown in Figure~\ref{fig:DGM_SS}. The inclusion of $\mathcal{D}_u$ introduces another latent variable $\tilde{y}_i$, and, as a result, obtaining the posterior distribution over the parameters becomes less simple. % One solution adopts an expectation maximisation (EM) approach \cite{dempster1977maximum}. % The implementation here involves finding the maximum \textit{a posteriori} (MAP) estimate of the parameters $\vec{\hat{\theta}}$ (the mode of the full posterior distribution), while maximising the likelihood of the model. % Specifically, from the joint distribution, and using Bayes' theorem, the MAP estimate of the parameters $\vec{\theta}$ given the labelled and unlabelled subsets is, \begin{align} \nonumber \vec{\hat{\theta}}\;|\;\mathcal{D}\; &= \;\mathrm{argmax}_{\vec{\theta}}\left\{\frac{p(\mathcal{D} \,|\, \vec{\theta})p(\vec{\theta})}{p(\mathcal{D})} \right\}\\ &= \;\mathrm{argmax}_{\vec{\theta}}\left\{\frac{p(\mathcal{D}_u \,|\, \vec{\theta})p(\mathcal{D}_l \,|\, \vec{\theta})p(\vec{\theta})}{p(\mathcal{D}_u,\mathcal{D}_l)} \right\}\label{eq:map}\\ \mathcal{D} &\triangleq \mathcal{D}_u \cup \mathcal{D}_l \nonumber \end{align} Again, it is assumed that the data are i.i.d., so that $\mathcal{D}_l$ and $\mathcal{D}_u$ can be factorised. % Thus, the marginal likelihood of the model (the denominator of equation~(\ref{eq:map})) considers both the labelled and unlabelled data -- this is referred to as the \textit{joint likelihood}, and it is the value that is maximised while inferring the parameters of the model through EM.
\begin{figure} \centering \begin{tikzpicture} \tikzstyle{RV}=[circle, fill=white!100, minimum size = 3.5em, thick, draw = black!90, node distance = 4em] \tikzstyle{constant}=[circle, inner sep=0pt, fill=black!100, minimum size = 1.2mm, draw = black!80, node distance = 2.5em] \tikzstyle{plate}=[rectangle, thick, rounded corners, draw=black!50, label={[yshift=17pt, xshift=-4.5em]south east:#1}]{}; \tikzstyle{connect}=[-latex, thick] \node[RV, fill=black!10](X){$\vec{x}_{i}$}; \node[RV](sigma)[left=of X]{$\vec{\Sigma}_{k}$}; \node[RV](mu)[below=of sigma]{$\vec{\mu}_{k}$}; \node[RV, fill=black!10](Y)[right=of X]{$y_i$}; \node[RV, fill=black!10](Xu)[right=of mu]{$\vec{\tilde{x}}_{i}$}; \node[RV](Yu)[right=of Xu]{$\tilde{y}_i$}; \node[RV](Pi)[right =of Yu]{$\lambda_k$}; \node[constant](alpha)[right=of Pi, label=below:$\vec{\alpha}$]{}; \node[constant](sigma_0)[left=of sigma, label=left:$\vec{S}_0$]{}; \node[constant](nu)[below = 1.6em of sigma_0, label=left:$\nu_0$]{}; \node[constant](kappa_0)[left=of mu, label=left:$\kappa_0$]{}; \node[constant](mu_0)[above = 1.6em of kappa_0, label=left:$\vec{m}_0$]{}; \node[plate=\small{$i = 1:n$}, inner sep=1.6em, fit= (X) (Y)]{}; \node[plate=\small{$i = 1:m$}, inner sep=1.6em, fit= (Xu) (Yu)]{}; \node[plate=\small{$k = 1:K$}, inner sep=1.6em, fit= (sigma) (mu)]{}; \node[plate=\small{$k = 1:K$}, inner sep=1.6em, fit= (Pi)]{}; \path (nu) edge [connect] (sigma) (sigma_0) edge [connect] (sigma) (kappa_0) edge [connect] (mu) (mu_0) edge [connect] (mu) (mu) edge [connect] (X) (sigma) edge [connect] (X) (mu) edge [connect] (Xu) (sigma) edge [connect] (Xu) (Y) edge [connect] (X) (Yu) edge [connect] (Xu) (Pi) edge [connect] (Y) (Pi) edge [connect] (Yu) (alpha) edge [connect] (Pi) (sigma) edge [connect] (mu); \end{tikzpicture} \caption{DGM of the semi-supervised GMM, given the labelled $\mathcal{D}_l$ and unlabelled data $\mathcal{D}_u$. For the unsupervised set, $\vec{\tilde{x}}_i$ is the only observed variable, while $\tilde{y}_i$ is a latent variable. Adapted from \protect\cite{bull_2019thesis}. }\label{fig:DGM_SS} \end{figure} The EM algorithm iterates E and M steps until convergence in the joint (log) likelihood. During each E-step, the parameters are fixed, and the unlabelled observations are classified using the current model estimate $p\left(\tilde{\vec{y}}\mid\tilde{\vec{X}}, \mathcal{D}\right)$. The M-step then finds the MAP estimate $\vec{\hat{\theta}}$, given the predicted labels from the E-step \textit{and} the absolute labels for the supervised data. % This involves some minor modifications to the conventional MAP estimates, such that the contribution of the unlabelled data is shared between classes, weighted according to the posterior distribution $p\left(\tilde{\vec{y}}\mid\tilde{\vec{X}}, \mathcal{D}\right)$ \cite{barber2012bayesian,BULL2020106653}. % Pseudo-code is provided in Algorithm~\ref{EM}; Matlab code for the semi-supervised GMM is also available at \url{https://github.com/labull/semi_supervised_GMM}.
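For illustration, a minimal sketch of the two steps is given below; \texttt{prior\_updater} is a hypothetical routine applying the conjugate MAP update formulas \cite{barber2012bayesian,murphy}, and is not part of the reference implementation.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X_u, lam, mus, Sigmas):
    """E-step: responsibilities p(y~_i = k | x~_i) for unlabelled data,
    with the parameters fixed at their current MAP estimates."""
    R = np.stack([lam[k] * multivariate_normal.pdf(X_u, mus[k], Sigmas[k])
                  for k in range(len(lam))], axis=1)
    R = np.clip(R, 1e-300, None)           # guard against underflow
    return R / R.sum(axis=1, keepdims=True)

def m_step(X_l, y_l, X_u, R, prior_updater):
    """M-step sketch: hard (labelled) and soft (unlabelled) class weights
    are combined before the MAP update; prior_updater is a hypothetical
    routine applying the conjugate (NIW/Dirichlet) MAP formulas."""
    K = R.shape[1]
    R_l = np.eye(K)[np.asarray(y_l) - 1]   # one-hot weights for D_l
    W = np.vstack([R_l, R])                # weights for all observations
    X = np.vstack([X_l, X_u])
    return prior_updater(X, W)
\end{verbatim}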
\begin{algorithm}[pt] \caption{\textsl{Semi-supervised EM for a Gaussian Mixture Model}} \label{EM} \SetAlgoLined \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{~~Labelled data $\mathcal{D}_l$, unlabelled data $\mathcal{D}_u$} \Output{~~Semi-supervised MAP estimates of $\vec{\hat{\theta}} = \left\{\hat{\vec{\mu}},\hat{\vec{\Sigma}}\right\}$} \BlankLine \BlankLine \textit{Initialise} $\vec{\hat{\theta}}$ using the labelled data, $\vec{\hat{\theta}} = \textmd{argmax}_{\vec{\theta}}\left\{p(\vec{\theta}\,|\,\mathcal{D}_l)\right\}$\; \While{ the joint log-likelihood $\log\left\{p\left(\mathcal{D}_l,\mathcal{D}_u\right)\right\}$ improves}{ \textit{E-step:} use the current model $\vec{\hat{\theta}}\mid \mathcal{D}$ to estimate class-membership for the unlabelled data $\mathcal{D}_u$, i.e.\ $p\left(\tilde{\vec{y}}\mid\tilde{\vec{X}}, \mathcal{D}\right)$\; \textit{M-step:} update the MAP estimate of $\vec{\hat{\theta}}$ given the component membership for \textit{all} observations $\vec{\hat{\theta}} := \textmd{argmax}_{\vec{\theta}}\left\{p(\vec{\theta}\,|\,\mathcal{D}_l, \mathcal{D}_u)\right\}$\;} \end{algorithm} \subsubsection{Semi-supervised learning with the Gnat aircraft data} A visual example of improvements to a GMM via semi-supervision was shown in Figure~\ref{fig:gmm_ss_eg}. To quantify potential advantages for SHM, the method is also applied to data from aircraft experiments, originally presented in \cite{BULL2020106653}. % For details behind the Gnat aircraft data, refer to \cite{partIII}. Briefly, during the tests, the aircraft was excited with an electrodynamic shaker and band-limited white noise. Transmissibility data were recorded using a network of sensors distributed over the wing. Artificial damage was introduced by sequentially removing one of nine inspection panels in the wing. 198 measurements were recorded for the removal of each panel, such that the total number of (frequency domain) observations is 1782. % Over the network of sensors, nine transmissibilities were recorded \cite{partIII}. Each transmissibility was converted to a one-dimensional novelty detector, with reference to a distinct set of normal data, where all the panels were intact \cite{genetic}. % Therefore, the data represent a nine-class classification problem, one class for the removal of each panel, such that $y_i \in \{1,\ldots,9\}$. The measurements are nine-dimensional, $\vec{x}_i \in \mathbb{R}^9$; each feature is a novelty index, representing one of nine transmissibilities. % When applying semi-supervised learning, $1/3$ of the total data were set aside as an independent test-set. % The remaining $2/3$ were used for training, i.e.\ $\mathcal{D} = \mathcal{D}_l \cup \mathcal{D}_u$. Of the training data $\mathcal{D}$, the number of labelled observations $n$ was increased (in 5\% increments) until all the observations were labelled. % The results are compared to standard supervised learning for the same budget $n$. % The changes in the classification performance through semi-supervised updates are shown in Figure~\ref{fig:res2a}; inclusion of the unlabelled data consistently improves the $f_1$ score. % For very low proportions of labelled data $<1.26\%$ ($m \gg n$), semi-supervised updates can decrease the predictive performance; this is likely due to the unlabelled data outweighing the labelled instances in the likelihood cost function. % Notably, the maximum increase in the $f_1$ score is $0.0405$, corresponding to a 3.83\% reduction in the classification error for 2.94\% labelled data.
% Such improvements to the classification performance for low proportions of labelled data should highlight significant advantages for SHM, reducing the dependence on large sets of costly supervised data. \begin{figure}[pt] \centering \includegraphics[width=\linewidth]{figures/fig_13.pdf} \caption{Classification performance ($f_1$ score) for the supervised GMM vs. the semi-supervised GMM. Left: $f_1$ for an increasing proportion of labelled data. Right: the gain in $f_1$ score through semi-supervised updates, the red line highlights zero-gain. Adapted from \protect\cite{BULL2020106653}.}\label{fig:res2a} \end{figure} \subsection{Dirichlet Process Clustering of Streaming Data} Returning to the streaming data recorded from the Z24 bridge, an alternative perspective considers that labels are not needed to \textit{infer} the model. In this case, an \textit{unsupervised} algorithm could be used to cluster data online, and labels could be assigned to the resulting clusters \textit{outside} of the inference, within the wider SHM scheme -- as suggested by \cite{rogers2019}. % However, if $y_i$ is unobserved for the purposes of inference, the number of class components $K$ becomes an additional latent variable, unlike the GMM from previous case studies. % As aforementioned, the Dirichlet Process Gaussian Mixture Model (DPGMM) is one solution to this problem. % The DPGMM allows for the probabilistic selection of $K$ through a Dirichlet process prior. % Initially, this involves defining a GMM in a Bayesian manner, using the same priors as before; however, by following \cite{rasmussen2000igmm}, it is possible to take the limit $K \rightarrow \infty$ to form an infinite Gaussian mixture model. % Surprisingly, this concept can be shown through another simple modification to the first DGM in Figure~\ref{fig:DG_GMM}, leading to Figure~\ref{fig:DGM_IGMM}. % The generative equations remain the same as (\ref{eq:c_likeli}), (\ref{eq:c_prior}), (\ref{eq:NIW}), and (\ref{eq:dir}).
\begin{figure}[pt] \centering \begin{tikzpicture} \tikzstyle{RV}=[circle, fill=white!100, minimum size = 3.5em, thick, draw = black!90, node distance = 5em] \tikzstyle{constant}=[circle, inner sep=0pt, fill=black!100, minimum size = 1.2mm, draw = black!80, node distance = 4em] \tikzstyle{plate}=[rectangle, thick, rounded corners, draw=black!50, label={[yshift=17pt, xshift=-4.5em]south east:#1}]{}; \tikzstyle{connect}=[-latex, thick] \node[RV, fill=black!10](X){$\tilde{\vec{x}}_{i}$}; \node[RV](sigma)[left=of X]{$\vec{\Sigma}_{k}$}; \node[RV](mu)[below=of sigma]{$\vec{\mu}_{k}$}; \node[RV](Y)[below=of X]{$\tilde{y}_i$}; \node[RV](Pi)[right=of Y]{$\lambda_k$}; \node[constant](alpha)[right=of Pi, label=below:${\alpha}$]{}; \node[constant](sigma_0)[left=of sigma, label=left:$\vec{S}_0$]{}; \node[constant](nu)[below = 1.6em of sigma_0, label=left:$\nu_0$]{}; \node[constant](kappa_0)[left=of mu, label=left:$\kappa_0$]{}; \node[constant](mu_0)[above = 1.6em of kappa_0, label=left:$\vec{m}_0$]{}; \node[plate=\small{$i = 1:n$}, inner sep=2em, fit= (X) (Y)]{}; \node[plate=\small{$k = 1:\infty$}, inner sep=2em, fit= (sigma) (mu)]{}; \node[plate=\small{$k = 1:\infty$}, inner sep=2em, fit= (Pi)]{}; \path (nu) edge [connect] (sigma) (sigma_0) edge [connect] (sigma) (kappa_0) edge [connect] (mu) (mu_0) edge [connect] (mu) (mu) edge [connect] (X) (sigma) edge [connect] (X) (Y) edge [connect] (X) (Pi) edge [connect] (Y) (alpha) edge [connect] (Pi) (sigma) edge [connect] (mu); \end{tikzpicture} \caption{DGM for the infinite Gaussian mixture model.}\label{fig:DGM_IGMM} \end{figure} A collapsed Gibbs sampler can be used to perform efficient online inference over this model \cite{neal2000markov}. % Although potentially faster algorithms for variational inference exist \cite{blei2006variational}, it can be more practical to implement the Gibbs sampler when performing inference online. % The nature of the Gibbs sampling solution is that each data point is assessed conditionally in the sampler; this allows the addition of new points online, rather than batch updates \cite{rogers2019}. Within the Gibbs sampler, only components $k=\{1,\ldots,K+1\}$ need to be considered to cover the full set of possible clusters \cite{rasmussen2000igmm}. % As with the GMM, there are two conjugate pairs in the model; therefore, the predictive equations remain analytical (leading to a \textit{collapsed} Gibbs sampler). In brief/general terms: while fixing the parameters, the Gibbs scheme determines the likelihood of an observation $\tilde{\vec{x}}_i$ being sampled from an existing cluster $k = \{1,\ldots,K\}$, or an (as of yet) unobserved cluster $k = K+1$ (i.e.\ the prior). % Given the posterior over the $K+1$ classes, the cluster assignment $\tilde{y}_i$ is sampled, and the model parameters are updated accordingly. % This process is iterated until convergence. % \subsubsection{Applications to the Z24 bridge data} In terms of monitoring the streaming Z24 data, any new observations that relate to existing clusters will update the associated parameters. If a new cluster is formed, indicating novelty, this triggers an alarm. In this case, the cluster must contain at least 50 observations to indicate novelty; for details refer to \cite{rogers2019}. Upon investigating the structure, an appropriate description can be assigned to the unsupervised cluster index (outside of the inference). % As before, the Z24 data are normalised in an online manner; thus, the hyperparameters of the prior $p(\vec{\mu},\vec{\Sigma})$ encode this knowledge.
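To sketch how the Gibbs conditionals over the $K+1$ candidate clusters are formed, consider the following; \texttt{pred\_lik} and \texttt{prior\_lik} are stand-ins for the analytical posterior and prior predictive densities of $\tilde{\vec{x}}_i$.
\begin{verbatim}
import numpy as np

def assignment_probs(n_k, alpha, pred_lik, prior_lik):
    """Collapsed Gibbs conditional for one observation: probability of
    joining each existing cluster k = 1..K, or a new cluster K+1.
    n_k       -- cluster counts with the current point removed
    pred_lik  -- predictive density of x~ under each existing cluster
    prior_lik -- prior predictive density of x~ (a new cluster)"""
    n = n_k.sum()
    p = np.append(n_k * pred_lik, alpha * prior_lik) / (n + alpha)
    return p / p.sum()
\end{verbatim}
Note how the dispersion value $\alpha$ directly scales the probability of opening a new cluster.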
% The choice of the dispersion value $\alpha$, defining $p(\vec{\lambda})$, is more application dependent -- % as discussed in the restaurant analogy, this determines the likelihood that new clusters will be generated. % In \cite{rogers2019}, sensible values for online SHM applications were found to lie in the range $0<\alpha<20$; for the Z24 data, this is set to $\alpha = 10$. % As with the active GMM, a small set of data from the start of the monitoring regime makes up an initial training set. % Figure~\ref{fig:time_cluster_z24} shows the algorithm progress for the streaming data. A normal condition cluster (red) is quickly established. As the temperature cools, three more clusters are created (orange, cyan and green), corresponding to the progression of freezing of the deck. Two additional clusters are also created: dark blue around point 800 and light blue close to point 1700. % From inspection of the feature space \cite{rogers2019}, it is hypothesised that the light blue cluster corresponds to a shift and rotation in the normal condition; therefore, this leads to another \emph{normal} cluster. As the corresponding normal data are now non-Gaussian, they are better approximated by two mixture components. % Finally, the magenta cluster is created following two observations of damage, showing the ability of the DPGMM implementation to detect a change in behaviour corresponding to damage, as well as environmental effects. % \begin{figure}[pt] \centering \includegraphics[width=.8\textwidth]{figures/fig_15.pdf} \caption{Figure showing online DP clustering applied to the Z24 bridge data using the first four natural frequencies as the features. Vertical lines indicate that a new cluster has been formed. Adapted from \protect\cite{rogers2019}.} \label{fig:time_cluster_z24} \end{figure} The DPGMM has automatically inferred seven clusters given the data and the model. While three classes were originally defined (as in the active and semi-supervised case), this representation is equally interpretable following system inspections to describe each component. % Additionally, the DPGMM is likely to better approximate the underlying density, as each class of data can be described by a number of Gaussian components, rather than one. % That is, in this case: three clusters describe the normal condition (blues and red), three clusters cover various environmental effects (orange, cyan and green), and one represents the damage condition (magenta). % The results shown on the Z24 data demonstrate the ability of the online DP algorithm to deal with recurring environmental conditions while remaining sensitive to damage. % The DPGMM is incorporated into an SHM system for online damage detection, and it is shown to categorise multiple damaged and undamaged states, while automatically inferring an appropriate number of mixture components $K$ in the mixture model. % The method requires little user input, and it updates online with simple feedback to the user as to when inspection is likely required. If desired, the unsupervised clusters can be assigned meaningful descriptions, to be interpreted by the end user. \subsection{Multi-task learning} In the final case study, supervised data from different structures (each represented by its own domain) are considered simultaneously to improve the performance of an SHM task. % In the following example, each domain $\mathscr{D}_t$ corresponds to supervised training data recorded from a different system; the task $\mathcal{T}$ corresponds to a predictive SHM model.
% By considering the data from a group (or population) of \textit{similar} structures in a latent space, the amount of training data can (in effect) be increased. % Multi-task learning should be particularly useful in SHM, where training data are often incomplete for individual systems. % If a predictive model can be improved by considering the data collected from various \textit{similar} structures, this should highlight the potential benefit of multi-task learning. % \subsubsection{Kernelised Bayesian transfer learning} Referring back to task $\mathcal{T}$ and domain $\mathscr{D}$ objects, it is assumed that there are $T$ (binary) classification tasks over the heterogeneous domains $\{\mathscr{D}_t\}_{t=1}^T$. % In other words, the label space $\mathscr{Y}$ is consistent across all tasks (in this case, normal or damaged), while the feature space $\mathscr{X}_t$ can change dimensionality, potentially leading to $d_t \neq d_{t^{\prime}}$. % For each task, there is an i.i.d.\ training set of observations $\vec{X}_t$ and labels $\vec{y}_t$, where $\vec{X}_t = \left\{ \vec{x}_i^{(t)} \in \mathbb{R}^{d_t} \right\}_{i=1}^{n_t}$ and $\vec{y}_t = \left\{ y^{(t)}_i \in \left\{ -1, +1 \right\} \right\}_{i=1}^{n_t}$. % Each domain has a task-specific kernel function $k_t$ to determine the similarities between observations and the associated kernel matrix $\vec{K}_t[i,j] = k_t\left(\vec{x}_i^{(t)}, \vec{x}_j^{(t)}\right)$, such that $\vec{K}_t \in \mathbb{R}^{n_t \times n_t}$. % Note: when subscripts/superscripts are cluttered, square bracket notation is used to index matrices and vectors. % Figure~\ref{fig:kbtl_flow} visualises KBTL. The model can be split into two main parts: (i) the first projects data from different tasks into a shared subspace using kernel-based dimensionality reduction, (ii) the second performs \textit{coupled} binary classification in the shared subspace, using common classification parameters. In terms of notation, the kernel embedding for each domain $\vec{K}_t$ is projected into a shared latent subspace by an optimal projection matrix $\vec{A}_t \in \mathbb{R}^{n_t \times R}$, where $R$ is the dimensionality of the subspace. % Following projection, there is a representation of each domain in the shared latent subspace, $\left\{ \vec{H}_t = \vec{A}_t^{\top}\vec{K}_t\right \}_{t=1}^{T}$. % In this shared space, a \emph{coupled} discriminative classifier is inferred for the projected data from each domain $\left\{ \vec{f}_t = \vec{H}_t^{\top}\vec{w} + \vec{1}b\right \}_{t=1}^{T}$. This implies the same set of parameters $\left\{ \vec{w}, b\right\}$ are used across all tasks.
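A minimal sketch of this forward mapping (projection followed by the coupled classifier) is shown below.
\begin{verbatim}
import numpy as np

def kbtl_forward(K_list, A_list, w, b):
    """Project each task's kernel matrix into the shared subspace and
    apply the coupled classifier: H_t = A_t^T K_t, f_t = H_t^T w + 1b."""
    f_list = []
    for K_t, A_t in zip(K_list, A_list):
        H_t = A_t.T @ K_t                # (R x n_t) latent representation
        f_list.append(H_t.T @ w + b)     # (n_t,) discriminant scores
    return f_list
\end{verbatim}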
% \begin{figure}[pt] \centering \begin{tikzpicture} \node (n0) [] at (0cm,0cm) {$\vdots$}; \node (n1) [X1,above=.3cm of n0] {$\vec{X}_1^{\top}$}; \node (n3) [XT,below=1.5em of n0] {$\vec{X}_T^{\top}$}; \node (N1) [left=0.05cm of n1.west] {$n_1$}; \node (NT) [left=0.05cm of n3.west] {$n_T$}; \node (d1) [above=0.05cm of n1.north] {$d_1$}; \node (dT) [below=0.05cm of n3.south] {$d_T$}; \node (k0) [right=1.8cm of n0] {$\vdots$}; \node (k1) [K1,above=.3cm of k0] {$\vec{K}_1$}; \node (k2) [] {$\vdots$}; \node (k3) [KT,below=1.5em of k0] {$\vec{K}_T$}; \node (NN1) [above=0.05cm of k1.north] {$n_1$}; \node (NNT) [below=0.05cm of k3.south] {$n_T$}; \node (a1) [A1,above=2.3cm of k0] {$\vec{A}_1^{\top}$}; \node (a3) [AT,below=2.3cm of k0] {$\vec{A}_T^{\top}$}; \node (r1) [left=0.05cm of a1.west] {$R$}; \node (rT) [left=0.05cm of a3.west] {$R$}; \node (h0) [right=1.2cm of k0] {$\vdots$}; \node (h1) [H1,above=.3cm of h0] {$\vec{H}_1$}; \node (h2) [] {$\vdots$}; \node (h3) [HT,below=1.5em of h0] {$\vec{H}_T$}; \node (rr1) [above=0.05cm of h1.north] {$R$}; \node (rrT) [below=0.05cm of h3.south] {$R$}; \node (bw0) [right=2.4cm of k0] {}; \node (bw1) [B,above=.1em of bw0] {$b$}; \node (bw2) [W,below=.1em of bw0.south] {$\vec{w}$}; \node (bd) [above=0.01cm of bw1.north] {$1$}; \node (bn) [left=0.01cm of bw1.west] {$1$}; \node (wn) [left=0.01cm of bw2.west] {$R$}; \node (f0) [right=1.8cm of h0] {$\vdots$}; \node (f1) [f1,above=.3cm of f0] {$\vec{f}_1$}; \node (f2) [] {$\vdots$}; \node (f3) [fT,below=1.5em of f0] {$\vec{f}_T$}; \node (dd1) [above=0.05cm of f1.north] {$1$}; \node (ddT) [below=0.05cm of f3.south] {$1$}; \node (y0) [right=1cm of f0] {$\vdots$}; \node (y1) [y1,above=.3cm of y0] {$\vec{y}_1$}; \node (y2) [] {$\vdots$}; \node (y3) [yT,below=1.5em of y0] {$\vec{y}_T$}; \node (dy1) [above=0.05cm of y1.north] {$1$}; \node (dyT) [below=0.05cm of y3.south] {$1$}; \draw[->,>=latex] (n1.east) -- (k1.west); \draw[->,>=latex] (n3.east) -- (k3.west); \draw[->,>=latex] (k1.east) -- (h1.west); \draw[->,>=latex] (k3.east) -- (h3.west); \draw[->,>=latex] (a1.east) -- (h1.west); \draw[->,>=latex] (a3.east) -- (h3.west); \draw[->,>=latex] (h1.east) -- (f1.west); \draw[->,>=latex] (h3.east) -- (f3.west); \draw[->,>=latex] (f1.east) -- (y1.west); \draw[->,>=latex] (f3.east) -- (y3.west); \draw[->,>=latex] (bw1.east) -- (f1.west); \draw[->,>=latex] (bw1.east) -- (f3.west); \draw[->,>=latex] (bw2.east) -- (f1.west); \draw[->,>=latex] (bw2.east) -- (f3.west); \end{tikzpicture} \caption{Visualisation of KBTL -- adapted from \protect\cite{gonen2014kernelized}.}\label{fig:kbtl_flow} \end{figure} In a Bayesian manner, prior distributions are associated with the parameters of the model. For the $n_t \times R$ task-specific projection matrices, $\vec{A}_t$, there is an $n_t \times R$ matrix of priors, denoted $\vec{\Lambda}_t$. For the weights of the coupled classifier, the prior is $\vec{\eta}$, and for the bias $b$ the prior is $\gamma$. These are standard priors given the parameter types in the model -- for details refer to \cite{gonen2014kernelized}. Collectively, the priors are $\vec{\Xi} = \left\{ \left\{\vec{\Lambda}_t\right\}_{t=1}^{T}, \vec{\eta}, \gamma \right\}$ and the latent variables are $\vec{\Theta} =\left\{\left\{\vec{H}_t,\vec{A}_t, \vec{f}_t \right\}_{t=1}^{T}, \vec{w}, b \right\}$; the observed variables (training data) are given by $\left\{\vec{K}_t, \vec{y}_t \right\}_{t=1}^{T}$.
% The DGM associated with the model is shown in Figure~\ref{fig:DGM_kbtl}; this highlights the variable dependencies and the associated prior distributions. % The distributional assumptions are \emph{briefly} summarised; for details, refer to~\cite{gonen2014kernelized}. % The prior for each element $\vec{A}_t[i,s]$ of the projection matrix is (zero mean) normally distributed, with variance $\vec{\Lambda}_t[i,s]^{-1}$; in turn, the prior over $\vec{\Lambda}_t[i,s]$ is Gamma distributed. % As a result, the observations in the latent space, $\vec{H}_t[s,i]$, are normally distributed. % For the coupled classifier, the prior for the bias $b$ is assumed to be (zero mean) normally distributed, with variance $\gamma^{-1}$, such that $\gamma$ is Gamma distributed. % Similarly, the weights $\vec{w}[s]$ are (zero mean) normally distributed, with variance $\vec{\eta}[s]^{-1}$, such that $\vec{\eta}[s]$ is Gamma distributed. % This leads to normal distributions over the functional classifier $\vec{f}_t[i]$. % The label predictive equations are given by $p(y^{(t)}_* \mid f^{(t)}_*)$, passing $f^{(t)}_*$ through a truncated Gaussian, parameterised by $\nu$ \cite{Gardner2020b}. % The hyperparameters associated with these assumptions are shown in the DGM, Figure~\ref{fig:DGM_kbtl}. % To infer the parameters of the model, approximate inference is required. Following \cite{gonen2014kernelized}, a variational inference scheme is used; this utilises a lower bound on the marginal likelihood, to infer an \emph{approximation}, denoted $q$, of the full posterior distribution over the parameters, $p(\vec{\Theta}, \vec{\Xi} \mid \left\{\vec{K}_t, \vec{y}_t \right\}_{t=1}^{T})$. To achieve this, the posterior distribution is factorised as follows, \begin{align} p\left(\vec{\Theta}, \vec{\Xi} \mid \left\{\vec{K}_t, \vec{y}_t \right\}_{t=1}^{T}\right) &\approx q(\vec{\Theta}, \vec{\Xi}) \nonumber\\ &= \prod_{t=1}^T\left(q(\vec{\Lambda}_t)q(\vec{A}_t)q(\vec{H}_t)\right)q(\gamma)q(\vec{\eta})q(b,\vec{w})\prod_{t=1}^T q(\vec{f}_t) \end{align} Each approximated factor is defined as in the full conditional distribution \cite{gonen2014kernelized}. % The lower bound can be optimised with respect to each factor separately, while fixing the remaining factors (iterating until convergence).
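For intuition, the label-predictive step can be sketched as follows, assuming a probit-style link with margin $\nu$ and a Gaussian posterior over $f^{(t)}_*$; this is a simplified stand-in for, not a reproduction of, the exact expressions in \cite{gonen2014kernelized}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def label_predictive(mu_f, var_f, nu):
    """p(y* = +1 | data) under a probit-style link with margin nu,
    marginalising the Gaussian posterior f* ~ N(mu_f, var_f).
    A simplified sketch, not the exact reference expression."""
    return norm.cdf((mu_f - nu) / np.sqrt(1.0 + var_f))
\end{verbatim}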
% \begin{figure}[pt] \centering \begin{tikzpicture} \tikzstyle{RV}=[circle, fill=white!100, minimum size = 3em, thick, draw = black!90, node distance = 3em] \tikzstyle{constant}=[circle, inner sep=0pt, fill=black!100, minimum size = 1.2mm, draw = black!80, node distance = 2.5em] \tikzstyle{plate}=[rectangle, thick, rounded corners, draw=black!50, label={[yshift=17pt, xshift=-4.5em]south east:#1}]{}; \tikzstyle{connect}=[-latex, thick] \node[RV](lambda){$\vec{\Lambda}_t$}; \node[RV](A)[right=of lambda]{$\vec{A}_t$}; \node[RV](H)[above=of A]{$\vec{H}_t$}; \node[RV, fill=black!10](K)[above=of lambda]{$\vec{K}_t$}; \node[RV, fill=black!10](y)[right=of A]{$\vec{y}_t$}; \node[RV](f)[right=of H]{$\vec{f}_t$}; \node[RV](w)[above=of f]{$\vec{w}$}; \node[RV](eta)[right=of w]{$\vec{\eta}$}; \node[RV](b)[right=of f]{$b$}; \node[RV](gamma)[right=of b]{$\gamma$}; \node[constant](alpha_l)[left=of lambda, label=left:$\alpha_\lambda$]{}; \node[constant](beta_l)[below=of alpha_l, label=left:$\beta_\lambda$]{}; \node[constant](nu)[right=of y, label=right:$\nu$]{}; \node[constant](beta_g)[below=of gamma, label=below:$\beta_\gamma$]{}; \node[constant](alpha_g)[left=of beta_g, label=below:$\alpha_\gamma$]{}; \node[constant](beta_et)[right=of eta, label=right:$\beta_\eta$]{}; \node[constant](alpha_et)[above=of beta_et, label=right:$\alpha_\eta$]{}; \node[plate=\small{$t = 1:T$}, inner sep=1.3em, fit= (K) (H) (f) (lambda) (A) (y)]{}; \path (K) edge [connect] (H) (H) edge [connect] (f) (lambda) edge [connect] (A) (A) edge [connect] (H) (f) edge [connect] (y) (w) edge [connect] (f) (b) edge [connect] (f) (gamma) edge [connect] (b) (eta) edge [connect] (w) (alpha_l) edge [connect] (lambda) (beta_l) edge [connect] (lambda) (alpha_g) edge [connect] (gamma) (beta_g) edge [connect] (gamma) (alpha_et) edge [connect] (eta) (beta_et) edge [connect] (eta) (nu) edge [connect] (y); \end{tikzpicture} \caption{Directed graphical model for binary classification KBTL.}\label{fig:DGM_kbtl} \end{figure} \subsubsection{Numerical + experimental example: Shear-building structures} A numerical case study, supplemented with experimental data, is used for demonstration -- an extension of the work in \cite{imac_kbtl}. % A population of six different shear-building structures is considered: five are simulated, and one is experimental. % A domain and task are associated with each structure (such that $T=6$) -- the experimental rig and (simulated) lumped-mass models are shown in Figure \ref{fig:dofs}. % For each structure (domain) there is a two-class classification problem (task), which is viewed as binary damage detection (normal or damaged). % \begin{figure}[pt] \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{figures/fig_18a.pdf} \caption{\label{a}} \label{fig:scehm_a} \end{subfigure} \begin{minipage}{.59\textwidth} \centering \resizebox{.85\linewidth}{!}{% \includegraphics[width=\linewidth]{figures/fig_18bc.pdf} } \end{minipage} \caption{Shear structures: (a) test rig; (b) a nominal representation of the five simulated systems; (c) depicts the cantilever beam component where $\{k_i\}^d_{i=1} = 4k_b$. % }\label{fig:dofs} \end{figure} Each \textit{simulated} structure is represented by $d$ mass, stiffness and damping coefficients, i.e.\ $\{m_i, k_i, c_i\}^d_{i=1}$. % The masses have length $l_m$, width $w_m$, thickness $t_m$, and density $\rho$.
The stiffness elements are calculated from four cantilever beams in bending, $4k_b = 4(3EI/l_b^3)$, where $E$ is the elastic modulus, $I$ the second moment of area, and $l_b$ the length of the beam. % The damping coefficients are specified rather than derived from a physical model. % Damage is simulated via an open crack, using a reduction in $EI$ \cite{Christides1984}. % For each structure, each observation is a random draw from a base distribution for $E$, $\rho$ and $c$. % The properties of the five simulated structures are shown in Table \ref{tab:props}. \begin{table}[h] \centering \caption{Properties of the five simulated structures. Degrees-of-freedom (DOF) are denoted $d$.}\label{tab:props} \resizebox{\linewidth}{!}{% \begin{tabular}{ccccccc} \hline \rotatebox{-90}{\textbf{Domain~}} & \rotatebox{-90}{DOF} & \rotatebox{-90}{\makecell{Beam \\ dim.}} & \rotatebox{-90}{\makecell{Mass \\ dim.}} & \rotatebox{-90}{\makecell{Elastic \\ mod.}} & \rotatebox{-90}{Density} & \rotatebox{-90}{\makecell{Damping \\ coeff.}} \\ ($t$) & ($d_t$) & $\{l_b,\,w_b,\,t_b\}$ & $\{l_m,\,w_m,\,t_m\}$ & $E$ & $\rho$ & $c$ \\ & & $\mathrm{mm}$ & $\mathrm{mm}$ & $\mathrm{GPa}$ & $\mathrm{kg/m^3}$ & $\mathrm{Ns/m}$ \\ \hline 1 & 4 & $\{185, 25, 6.35\}$ & $\{350, 254, 25\}$ & $\gaussianDist{71}{1.0\times10^{-9}}$ & $\gaussianDist{2700}{10}$ & $\gammaDist{50}{0.1}$ \\ 2 & 8 & $\{200, 35, 6.25\}$ & $\{450, 322, 35\}$ & $\gaussianDist{70}{1.2\times10^{-9}}$ & $\gaussianDist{2800}{22}$ & $\gammaDist{8}{0.8}$ \\ 3 & 10 & $\{177, 45, 6.15\}$ & $\{340, 274, 45\}$ & $\gaussianDist{72}{1.3\times10^{-9}}$ & $\gaussianDist{2550}{25}$ & $\gammaDist{25}{0.2}$ \\ 4 & 3 & $\{193, 32, 5.55\}$ & $\{260, 265, 32\}$ & $\gaussianDist{75}{1.5\times10^{-9}}$ & $\gaussianDist{2600}{15}$ & $\gammaDist{20}{0.1}$ \\ 5 & 5 & $\{165, 46, 7.45\}$ & $\{420, 333, 46\}$ & $\gaussianDist{73}{1.4\times10^{-9}}$ & $\gaussianDist{2650}{20}$ & $\gammaDist{50}{0.1}$ \\ \hline \end{tabular} } \end{table} The experimental structure is constructed from aluminium 6082, with dimensions nominally similar to those in Table \ref{tab:props}. Observational data (the first three natural frequencies) were collected via modal testing, where an electrodynamic shaker applied broadband white-noise excitation up to 6553.6 Hz, containing 16384 spectral lines (0.2 Hz resolution). Forcing was applied to the first storey, and three uni-axial accelerometers measured the response at all storeys. % Damage was artificially introduced as a 50\% saw-cut to the mid-point of the front-right beam in Figure~\ref{fig:dofs}a. In each domain, the damped natural frequencies act as features, such that ${\vec{X}_t[i,:] = \{\omega_{j}\}^{d_t}_{j=1}}$. Therefore, as each domain has different DOFs/dimensions, heterogeneous transfer is required. % The label set is consistent across all domains, corresponding to normal or damaged, i.e.\ $y_i \in \{-1,+1\}$ respectively. % The training and test data for each domain are summarised in Table \ref{tab:datapoints}. % The training data have various degrees of class imbalance, to reflect scenarios where certain structures in SHM provide more information about a particular state.
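For reference, the storey stiffness above follows from the cantilever-bending formula; a minimal sketch, assuming a rectangular beam cross-section so that $I = w_b t_b^3/12$.
\begin{verbatim}
def storey_stiffness(E, w_b, t_b, l_b):
    """Stiffness of one storey: four cantilever beams in bending,
    4 * k_b = 4 * (3 * E * I / l_b**3), with I = w_b * t_b**3 / 12
    (rectangular cross-section, an assumption for illustration)."""
    I = w_b * t_b**3 / 12.0
    return 4.0 * 3.0 * E * I / l_b**3

# e.g. domain 1, using the mean property values of Table 2 (SI units):
k = storey_stiffness(E=71e9, w_b=25e-3, t_b=6.35e-3, l_b=185e-3)
\end{verbatim}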
\begin{table}[h] \centering \caption{Number of data for all domains (numerical and experimental*).}\label{tab:datapoints} \begin{small} \begin{tabular}{lcccc} \hline \textbf{Domain} & \multicolumn{2}{c}{\textbf{Training}} & \multicolumn{2}{c}{\textbf{Testing}} \\ (t) & $y = -1$ & $y = +1$ & $y = -1$ & $y = +1$ \\ \hline 1 & 250 & 100 & 500 & 500 \\ 2 & 100 & 25 & 500 & 500 \\ 3 & 120 & 20 & 500 & 500 \\ 4 & 200 & 150 & 500 & 500 \\ 5 & 500 & 10 & 500 & 500 \\ 6* & 3 & 3 & 2 & 2 \\ \hline \end{tabular} \end{small} \end{table} Figure~\ref{fig:KBTL_hspace} shows the coupled binary classifier in the (expected) shared latent subspace for all the data $\left\{\vec{H}_t\right\}_{t=1}^T$. % The observations associated with each of the six domains are distinguished via different markers. % The left plot shows the test data and their predicted labels given $\vec{f}_t$, while the right plot shows the ground truth labels. % KBTL has successfully embedded and projected data from different domains into a shared latent space ($R=2$), where the data can be categorised by a coupled discriminative classifier. % It can also be seen that, due to class imbalance (weighted towards the undamaged class $-1$ for each structure), there is greater uncertainty in the damaged class ($+1$), leading to more significant scatter in the latent space. % \begin{figure}[pt] \centering \includegraphics[width=.8\textwidth]{figures/fig_19.pdf} \caption{The KBTL probabilistic decision boundary for the coupled classification model in the shared subspace. Markers $\{\times, \square, \star, *, \diamond, \triangle, \bullet\}$ correspond to tasks and domains $\{1,2,3,4,5,6\}$ respectively.}\label{fig:KBTL_hspace} \end{figure} The classification results for each domain are presented in Figure~\ref{fig:kbtl_f1}. % An observation is considered to belong to class $+1$ if $p(\vec{y}_{t}[*] = +1 \mid \vec{f}_t[*]) \geq 0.5$. % KBTL is compared to a relevance vector machine (RVM) \cite{tipping2000relevance} as a benchmark -- learnt for each domain independently. % It is acknowledged that the RVM differs in implementation; however, similarities make it useful for comparison as a standard (non multi-task) alternative to KBTL. % Multi-task learning has accurately inferred a general model. % For domains $\{1,2,3,5,6\}$, the SHM task is improved by considering the data from all structures in a shared latent space. % In particular, extending the (effective) training data has improved the classification for domain 5. This is because there are few training data associated with the damage class for domain 5 (see Table~\ref{tab:datapoints}); therefore, considering damage data from similar structures (in the latent space) has proved beneficial. % Interestingly, for domain four ($t=4$) there is a marginal \emph{decrease} in the classification performance. % Like domain one, domain four has \textit{less} severe class imbalance; thus, it appears that the remaining domains (with severe class imbalance) have negatively impacted the score for this specific domain/task. % These results highlight that the data from a group (or population) of \textit{similar} structures can be considered together, to increase the (effective) amount of training data \cite{PBSHMMSSP1,PBSHMMSSP2,PBSHMMSSP3}. % This can lead to significant improvements in the predictive performance of SHM tools -- particularly those learnt from small sets of supervised data.
% \begin{figure}[pt] \centering \includegraphics[width=.7\textwidth]{figures/fig_20-eps-converted-to.pdf} \caption{KBTL classification performance, given an independent test set: $f_1$-scores across each domain compared to an RVM benchmark.}\label{fig:kbtl_f1} \end{figure} \section{Conclusions} Three new techniques for statistical inference with SHM signals have been collected and summarised (originally introduced in previous work), including % partially-supervised learning (semi-supervised/active learning), Dirichlet process clustering, and multi-task learning. % Primarily, each approach looks to address, from a different perspective, the issues of incomplete datasets and missing information, which lead to incomplete training data. % The algorithms consider that: a) label information (to describe what measurements represent) is likely to be incomplete; b) the available data \textit{a priori} will usually correspond to a \textit{subset} of the expected \textit{in situ} conditions only. % Considering the importance of uncertainty quantification in SHM, probabilistic methods are suggested, which can be (intuitively) updated to account for missing information. % The case study applications for each mode of inference highlight the potential advantages for SHM. % Partially-supervised methods for active and semi-supervised learning were utilised to manage the cost of system inspections (to label data), while considering the unlabelled instances, both offline and online. Dirichlet process clustering has been applied to streaming data, as an unsupervised method for automatic damage detection and classification. % Finally, multi-task learning was applied to model shared information between systems; to extend the data available for training, this approach considers multiple (potentially incomplete) datasets associated with different tasks (structures). \section{Data Availability} Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request. \section{Acknowledgements} The authors gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC) through grant references EP/R003645/1, EP/R004900/1, EP/S001565/1 and EP/R006768/1.
\section{Introduction} Light detection and ranging (LiDAR) and camera sensors are commonly combined in developing autonomous driving vehicles. LiDAR sensors, owing to their direct 3D measurement capability, have been extensively applied to obstacle detection~\cite{xiao2018hybrid}, tracking~\cite{di2019behavioral}, and mapping~\cite{loamlivox} applications. An integrated onboard camera can also provide rich color information, facilitating various LiDAR applications. With the rapidly growing resolution of recent LiDAR sensors, accurate extrinsic parameters become essential, especially for applications such as dense point cloud mapping, colorization, \textcolor{black}{and accurate and automated 3D surveying}. In this letter, our work deals with the accurate extrinsic calibration of high-resolution LiDAR and camera sensors. Current extrinsic calibration methods rely heavily on external targets, such as checkerboards~\cite{ACSC,koo2020analytic,zhou2018automatic} or specific image patterns~\cite{chen2020novel}. By detecting, extracting, and matching feature points from both the image and the point cloud, the calibration problem is formulated as, and solved with, least-squares equations. Due to its repetitive scanning pattern and the inevitable vibration of a mechanical spinning LiDAR, e.g., Velodyne\footnote{https://velodynelidar.com/}, the reflected point cloud tends to be sparse and of large noise. This characteristic may mislead the cost function toward unstable results. Solid-state LiDARs, e.g., Livox~\cite{livox}, could compensate for this drawback with their dense point clouds. However, since these calibration targets are typically placed close to the sensor suite, the extrinsic error might be enlarged in long-range scenarios, e.g., large-scale point cloud colorization. In addition, it is not always practical to prepare calibration targets at the start of each mission. \begin{figure} \centering \includegraphics[width=1\linewidth]{introduction.pdf} \caption{Point cloud of three aligned LiDAR scans colored using the proposed method. The accompanying experiment video has been uploaded to \textcolor{black}{https://youtu.be/e6Vkkasc4JI}.} \label{fig:color_pc} \vspace{-0.6cm} \end{figure} To address the above challenges, in this letter, we propose an automatic pixel-level extrinsic calibration method for targetless environments. This system operates by extracting natural edge features from the image and the point cloud and minimizing the reprojection error. The proposed method does not rely on external targets, e.g., a checkerboard, and is capable of functioning in both indoor and outdoor scenes. \textcolor{black}{Such a simple and convenient calibration allows one to calibrate the extrinsic parameters before or in the middle of each data collection, or to detect any misalignment of the sensors during their online operation, which is usually not feasible with target-based methods}. Specifically, our contributions are as follows: \begin{itemize} \item We carefully study the underlying LiDAR measuring principle, which reveals that the commonly used depth-discontinuous edge features are neither accurate nor reliable for calibration. \textcolor{black}{We propose a novel and reliable depth-continuous edge extraction algorithm that leads to more accurate calibration parameters.} \item We evaluate the robustness, consistency, and accuracy of our methods and implementation in various indoor and outdoor environments, and compare our methods with other state-of-the-art methods.
Results show that our method is robust to initial conditions, consistent across calibration scenes, and achieves pixel-level calibration accuracy in natural environments. Our method has an accuracy that is on par with (and sometimes even better than) target-based methods \textcolor{black}{and is applicable to both emerging solid-state and conventional spinning LiDARs.} \item Based on the analysis, we develop a practical calibration software and open source it on GitHub to benefit the community. \end{itemize} \section{Related Works} Extrinsic calibration is a well-studied problem in robotics and is mainly divided into two categories: target-based and targetless. The primary distinction between them is how they define and extract features from both sensors. Geometric solids~\cite{kummerle2020,gong2013,park2014} and checkerboards~\cite{ACSC,koo2020analytic,zhou2018automatic} have been widely applied in target-based methods, due to their explicit constraints on plane normals and simplicity in problem formulation. As they require extra preparation, they are not practical, especially when the system needs to operate in a dynamically changing environment. Targetless methods do not detect explicit geometric shapes from known targets. Instead, they use the more general plane and edge features that exist in nature. In~\cite{zhu2020camvox}, the LiDAR points are first projected onto the image plane and colored by depth and reflectivity values. Then 2D edges are extracted from this colormap and matched with those obtained from the image. Similarly, the authors in~\cite{pandey2012automatic} optimize the extrinsic \textcolor{black}{calibration} by maximizing the mutual information between the colormap and the image. In~\cite{scaramuzza2007extrinsic,levinson2013automatic}, both authors detect and extract 3D edges from the point cloud by laser beam depth discontinuity. Then the 3D edges are back-projected onto the 2D image plane to calculate the residuals. The accuracy of edge estimation limits this method, as the laser points do not strictly fall on the depth-discontinuity margin. Motion-based methods have also been introduced in~\cite{nagy2019,lidarcalib}, where the extrinsic is estimated from the sensors' motion and refined by appearance information. \textcolor{black}{This motion-based calibration typically requires the sensor to move along a sufficiently excited trajectory~\cite{lidarcalib}}. Our proposed method is a targetless method. Compared to \cite{pandey2012automatic, zhu2020camvox}, we directly extract 3D edge features in the point cloud, which suffer from no occlusion problem. Compared to \cite{scaramuzza2007extrinsic,levinson2013automatic}, we use depth-continuous edges, which proved to be more accurate and reliable. Our method works on a single pair of LiDAR scan and camera image, and achieves calibration accuracy comparable to target-based methods \cite{ACSC, zhou2018automatic}. \section{Methodology}\label{sec:methodology} \subsection{Overview}\label{sec:overview} Fig. \ref{fig:line_constraints} defines the coordinate frames involved in this paper: the LiDAR frame $L$, the camera frame $C$, and the 2D coordinate frame in the image plane. Denote ${}^C_L \mathbf T = ({}^C_L \mathbf R, {}^C_L \mathbf t) \in SE(3)$ the extrinsic between LiDAR and camera to be calibrated. Due to the wide availability of edge features in natural indoor and outdoor scenes, our method aligns these edge features observed by both LiDAR and camera sensors. Fig.
\ref{fig:line_constraints} further illustrates the number of constraints imposed by a single edge on the extrinsic. As can be seen, the following degrees of freedom (DoFs) of the LiDAR pose relative to the camera cannot be distinguished: (1) translation along the edge (the red arrow D, Fig. \ref{fig:line_constraints}), (2) translation perpendicular to the edge (the green arrow C, Fig. \ref{fig:line_constraints}), (3) rotation about the normal vector of the plane formed by the edge and the camera focal point (the blue arrow B, Fig. \ref{fig:line_constraints}), and (4) rotation about the edge itself (the purple arrow A, Fig. \ref{fig:line_constraints}). As a result, a single edge feature imposes two effective constraints on the extrinsic ${}^C_L \mathbf T$. To obtain sufficient constraints on the extrinsic, we extract edge features of different orientations and locations, as detailed in the following section. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{line_constraints.pdf} \caption{\textcolor{black}{Constraints imposed by an edge feature. The blue line represents the 3D edge; its projection onto the image plane (the gray plane) produces the camera measurement (the red line). The edge after a translation along the axis C or axis D, or a rotation about the axis A (i.e., the edge itself) or axis B (except when the edge passes through the camera origin), remains on the same yellow plane and hence has the same projection on the image plane. That is, these four pose transformations on the edge (or equivalently on ${}^C_L \mathbf T$) are not distinguishable. }} \label{fig:line_constraints} \vspace{-0.6cm} \end{figure} \subsection{Edge Extraction and Matching}\label{sec:edge} \subsubsection{Edge Extraction} Some existing works project the point cloud onto the image plane and extract features from the projected points, such as edges \cite{zhu2020camvox} or mutual-information correlation \cite{pandey2012automatic}. A major problem of extracting features after point projection is the multi-valued and zero-valued mapping caused by occlusion. As illustrated in Fig. \ref{fig:view_error} (a), if the camera is above the LiDAR, region A is observed by the camera but not by the LiDAR due to occlusion, resulting in no points after projection in this region (zero-valued mapping, the gap in region A, Fig. \ref{fig:view_error} (b)). On the other hand, region B is observed by the LiDAR but not by the camera; after projection, points in this region (the red dots in Fig. \ref{fig:view_error} (a)) will interfere with the projection of points in its foreground (the black dots in Fig. \ref{fig:view_error} (a)). As a result, points of the foreground and background will correspond to the same image regions (multi-valued mapping, region B, Fig. \ref{fig:view_error} (b)). These phenomena may not be significant for low-resolution LiDARs \cite{pandey2012automatic}, but become evident as the LiDAR resolution increases (see Fig. \ref{fig:view_error} (b)). Extracting features from the projected point cloud and matching them to image features, as in \cite{pandey2012automatic}, therefore suffers from these fundamental problems and causes significant errors in edge extraction and calibration.
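To make the multi-valued mapping concrete, the following minimal sketch (our illustration only; the pinhole intrinsics and the two points are made-up values, and lens distortion is ignored) shows two LiDAR points at different depths on the same camera ray colliding in one pixel after projection:
\begin{verbatim}
import numpy as np

# Assumed pinhole intrinsics, purely for illustration.
fx = fy = 400.0
cx = cy = 320.0

def project(P):
    # Pinhole projection of a camera-frame point to pixel coordinates.
    X, Y, Z = P
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

P_fg = np.array([0.5, 0.2, 2.0])  # foreground point, 2 m away
P_bg = 3.0 * P_fg                 # background point on the same ray, 6 m away
# Both project to the same pixel (multi-valued mapping); only the
# foreground point is actually visible to the camera.
print(np.allclose(project(P_fg), project(P_bg)))  # True
\end{verbatim}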
\begin{figure}[t] \vspace{-0.3cm} \centering \includegraphics[width=0.9\linewidth]{view_error.pdf} \caption{Multi-valued mapping and zero-valued mapping.} \label{fig:view_error} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{edge_type_new.png} \caption{Depth-discontinuous edge and depth-continuous edge.} \label{fig:edge_type} \vspace{-0.3cm} \end{figure} \begin{figure}[t] \vspace{-0.3cm} \centering \includegraphics[width=1\linewidth]{spot_error.pdf} \caption{Foreground object inflation and bleeding points caused by the laser beam divergence angle.} \label{fig:bleeding_points} \end{figure} To avoid the zero-valued and multi-valued mapping problems caused by projection, we extract edge features directly from the LiDAR point cloud. There are two types of edges: depth-discontinuous and depth-continuous. As shown in Fig. \ref{fig:edge_type} (a), depth-discontinuous edges refer to edges between foreground and background objects where the depth jumps. In contrast, depth-continuous edges refer to the intersection lines of joining planes, along which the depth varies continuously. Many existing methods \cite{scaramuzza2007extrinsic, levinson2013automatic} use depth-discontinuous edges, as they can be easily extracted by examining the point depth. However, by carefully investigating the LiDAR measuring principle, we found that depth-discontinuous edges are neither reliable nor accurate enough for high-accuracy calibration. As shown in Fig. \ref{fig:bleeding_points}, a practical laser pulse is not an ideal point but a beam with a certain divergence angle (i.e., the beam divergence angle). When scanning from a foreground object to a background one, part of the laser pulse is reflected by the foreground object while the remainder is reflected by the background, producing two reflected pulses at the laser receiver. If the foreground object has high reflectivity, the signal caused by the first pulse dominates even when the beam centerline is already off the foreground object; this causes fake points of the foreground object beyond the actual edge (the leftmost yellow point in Fig. \ref{fig:bleeding_points} (a)). If the foreground object is close to the background, the signals caused by the two pulses will join, and the lumped signal will lead to a set of points connecting the foreground and the background (called the {\it bleeding points}, the yellow points in Fig. \ref{fig:bleeding_points} (a)). The two phenomena mistakenly inflate the foreground object and cause significant errors in the edge extraction (Fig. \ref{fig:bleeding_points} (b)) and calibration. To avoid the foreground inflation and bleeding points caused by depth-discontinuous edges, we propose to extract depth-continuous edges. The overall procedure is summarized in Fig.~\ref{fig:extract_edge}: we first divide the point cloud into small voxels of given sizes (e.g., $1\,$m for outdoor scenes and $0.5\,$m for indoor scenes). For each voxel, we repeatedly use RANSAC to fit and extract the planes contained in the voxel. Then, we retain plane pairs that are connected and form an angle within a certain range (e.g., $[\ang{30},\ang{150}]$) and solve for the plane intersection lines (i.e., the depth-continuous edges). As shown in Fig. \ref{fig:extract_edge}, our method is able to extract multiple intersection lines that are perpendicular or parallel to each other within a voxel. Moreover, by properly choosing the voxel size, we can even extract curved edges. \textcolor{black}{Fig.
\ref{fig:edge_contrast} shows a comparison between the extracted depth-continuous and depth-discontinuous edges when overlaid on the image using the correct extrinsic value. The depth-discontinuous edges are extracted based on the local curvature as in \cite{liu2021balm}. It can be seen that the depth-continuous edges are more accurate and contain less noise. } \begin{figure} \centering \includegraphics[width=1\linewidth]{edge_extraction.pdf} \caption{Depth-continuous edge extraction. In the voxel grid, different colors represent different voxels; within a voxel, different colors represent different planes, and the white lines represent the intersection lines between planes.} \label{fig:extract_edge} \vspace{-0.3cm} \end{figure} \begin{figure} \centering \vspace{-0.2cm} \includegraphics[width=1\linewidth]{edge_contrast.jpg} \caption{\textcolor{black}{Comparison between extracted depth-continuous edges and depth-discontinuous edges.}} \label{fig:edge_contrast} \vspace{-0.3cm} \end{figure} For image edge extraction, we use the Canny algorithm~\cite{canny}. The extracted edge pixels are saved in a $k$-D tree ($k=2$) for the correspondence matching. \subsubsection{Matching}\label{sec:matching} The extracted LiDAR edges need to be matched to their corresponding edges in the image. For each extracted LiDAR edge, we sample multiple points on the edge. Each sampled point ${}^L \mathbf P_i \in \mathbb{R}^3$ is transformed into the camera frame (using the current extrinsic estimate ${}^C_L \bar{\mathbf T} = ({}^C_L \bar{\mathbf R} , {}^C_L \bar{\mathbf t} ) \in SE(3)$) \vspace{-0.1cm} \begin{equation} {}^C \mathbf P_i = {}^C_L \bar{\mathbf T} ( {}^L \mathbf P_i ) \in \mathbb{R}^3, \vspace{-0.2cm} \label{eq:transform} \end{equation} where ${}_L^C \bar{\mathbf T}({}^L \mathbf P_i ) = {}_L^C \bar{\mathbf R} \cdot {}^L\mathbf P_i + {}_L^C \bar{\mathbf t}$ denotes applying the rigid transformation ${}_L^C \bar{\mathbf T} $ to the point ${}^L \mathbf P_i$. The transformed point ${}^C \mathbf P_i$ is then projected onto the camera image plane to produce an expected location ${}^C \mathbf p_i \in \mathbb{R}^2$ \vspace{-0.2cm} \begin{equation} {}^C \mathbf p_i = \boldsymbol{\pi}({}^C \mathbf P_i) \vspace{-0.1cm} \end{equation} where $\boldsymbol{\pi}(\mathbf P)$ is the pin-hole projection model. Since the actual camera is subject to distortions, the actual location of the projected point $\mathbf{p}_i = ( u_i, v_i)$ on the image plane is \vspace{-0.1cm} \begin{equation} \mathbf p_i = \mathbf f ({}^C \mathbf p_i) \label{eq:distort} \vspace{-0.1cm} \end{equation} where $\mathbf f(\mathbf p)$ is the camera distortion model. We search for the $\kappa$ nearest neighbors of $\mathbf p_{i}$ in the $k$-D tree built from the image edge pixels. Denote by $\mathbf Q_i=\{\mathbf q_i^j; j = 1, \cdots, \kappa\}$ the set of $\kappa$ nearest neighbours, and let \begin{equation} \begin{aligned} \mathbf q_i = \frac{1}{\kappa} \sum_{j=1}^\kappa \mathbf q_{i}^j; \quad \mathbf{S}_i = \sum_{j=1}^{\kappa} (\mathbf {q}_i^j - \mathbf q_i ) (\mathbf {q}_i^j - \mathbf q_i )^T. \end{aligned} \end{equation} Then, the line formed by $\mathbf Q_i$ is parameterized by the point $\mathbf q_i$ lying on the line and the normal vector $\mathbf n_i$, which is the eigenvector associated with the minimal eigenvalue of $\mathbf S_i$. Besides projecting the point ${}^L \mathbf P_i$ sampled on the extracted LiDAR edge, we also project the edge direction to the image plane and validate its orthogonality with respect to $\mathbf n_i$.
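For concreteness, the local line fitting in this matching step can be sketched in a few lines (our illustration, not the released implementation; the projected point, the edge-pixel array, and $\kappa=5$ are placeholder inputs):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def fit_image_edge(p_i, edge_pixels, kappa=5):
    # kappa nearest Canny edge pixels around the projected LiDAR point p_i.
    tree = cKDTree(edge_pixels)       # k-D tree of image edge pixels (k = 2)
    _, idx = tree.query(p_i, k=kappa)
    Q = edge_pixels[idx]
    q_i = Q.mean(axis=0)              # point on the fitted line
    S_i = (Q - q_i).T @ (Q - q_i)     # 2x2 scatter matrix
    w, V = np.linalg.eigh(S_i)        # eigenvalues in ascending order
    n_i = V[:, 0]                     # normal: min-eigenvalue eigenvector
    return q_i, n_i

# Placeholder data: noisy edge pixels roughly along the line v = 100.
pixels = np.column_stack([np.arange(90.0, 110.0),
                          100.0 + 0.1 * np.random.randn(20)])
q_i, n_i = fit_image_edge(np.array([100.0, 100.5]), pixels)
\end{verbatim}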
This orthogonality validation effectively removes \textcolor{black}{false matches} when two non-parallel lines are close to each other in the image plane. Fig. \ref{fig:lidar_camera_edge} shows an example of the extracted LiDAR edges (red lines), image edge pixels (blue lines) and their correspondences (green lines). \begin{figure}[t] \vspace{-0.3cm} \centering \includegraphics[width=1\linewidth]{match_edge.pdf} \caption{LiDAR edges (red lines), image edge pixels (blue lines) and their correspondences (green lines).} \label{fig:lidar_camera_edge} \vspace{-0.5cm} \end{figure} \vspace{-0.2cm}\subsection{Extrinsic Calibration} \subsubsection{Measurement noises} The extracted LiDAR edge points ${}^L \mathbf P_i$ and the corresponding edge features $(\mathbf n_i, \mathbf q_i)$ in the image are subject to measurement noises. Let ${}^I\mathbf w_{i} \sim \mathcal{N}(\mathbf 0, {}^I \boldsymbol{\Sigma}_i) $ be the noise associated with $\mathbf q_i$ during the image edge extraction; its covariance is ${}^I \boldsymbol{\Sigma}_i = \sigma_I^2 \mathbf I_{2 \times 2}$, where $\sigma_I = 1.5$ accounts for the noise caused by pixel discretization. For the LiDAR point ${}^L \mathbf P_i$, let ${}^L\mathbf w_{i}$ be its measurement noise. In practice, a LiDAR measures the bearing direction by the encoders of the scanning motor and the depth by the laser time of flight. Let $\boldsymbol{\omega}_i \in \mathbb{S}^2$ be the measured bearing direction and $\boldsymbol{\delta}_{\boldsymbol{\omega}_i} \sim \mathcal{N}(\mathbf 0_{2\times 1}, \boldsymbol{\Sigma}_{\boldsymbol{\omega}_i})$ be the measurement noise in the tangent plane of $\boldsymbol{\omega}_i$ (see Fig. \ref{fig:perturbation}). Then, using the $\boxplus$-operation encapsulated in $\mathbb{S}^2$ \cite{he2021embedding}, we obtain the relation between the true bearing direction $\boldsymbol{\omega}_i^{\text{gt}}$ and its measurement $\boldsymbol{\omega}_i$ as below: \begin{equation} \label{eq:bearing_model} \boldsymbol{\omega}_i^{\text{gt}} = \boldsymbol{\omega}_i \boxplus_{\mathbb{S}^2} \boldsymbol{\delta}_{\boldsymbol{\omega}_i} \triangleq e^{ \lfloor \mathbf N(\boldsymbol{\omega}_i) \boldsymbol{\delta}_{\boldsymbol{\omega}_i} \times \rfloor} \boldsymbol{\omega}_i \end{equation} where $\mathbf N(\boldsymbol{\omega}_i) = \begin{bmatrix} \mathbf N_1 & \mathbf N_2 \end{bmatrix} \in \mathbb{R}^{3 \times 2} $ is an orthonormal basis of the tangent plane at $\boldsymbol{\omega}_i$ (see Fig. \ref{fig:perturbation} (a)), and $\lfloor \ \times \rfloor$ denotes the skew-symmetric matrix representing the cross product. The $\boxplus_{\mathbb{S}^2}$-operation essentially rotates the unit vector $\boldsymbol{\omega}_i$ about the axis $\mathbf N(\boldsymbol{\omega}_i) \boldsymbol{\delta}_{\boldsymbol{\omega}_i}$ lying in the tangent plane at $\boldsymbol{\omega}_i$; the result is still a unit vector (i.e., it remains on $\mathbb{S}^2$).
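The $\boxplus_{\mathbb{S}^2}$-operation of (\ref{eq:bearing_model}) is straightforward to reproduce numerically. The sketch below is ours; in particular, the concrete orthonormal basis $\mathbf N(\boldsymbol{\omega})$ chosen here is only one of many valid choices:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def skew(v):
    # Skew-symmetric matrix of v, so that skew(a) @ b = np.cross(a, b).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def tangent_basis(w):
    # One valid orthonormal basis N(w) = [N1, N2] of the tangent plane at w.
    a = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    N1 = np.cross(w, a)
    N1 /= np.linalg.norm(N1)
    return np.column_stack([N1, np.cross(w, N1)])

def boxplus_S2(w, delta):
    # Rotate the unit vector w about the tangent-plane axis N(w) @ delta.
    return expm(skew(tangent_basis(w) @ delta)) @ w

w = np.array([0.0, 0.0, 1.0])
print(np.linalg.norm(boxplus_S2(w, np.array([0.01, -0.02]))))  # ~1.0
\end{verbatim}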
\begin{figure}[t] \vspace{-0.3cm} \centering \includegraphics[width=1\linewidth]{perturbation_and_residual.pdf} \caption{(a) Perturbation to a bearing vector on $\mathbb{S}^2$; (b) Projection of a LiDAR edge point, expressed in the camera frame ${}^C \mathbf P_i$, to the image plane $\mathbf p_i$, and the computation of the residual $\mathbf r_i$.} \label{fig:perturbation} \vspace{-0.6cm} \end{figure} Similarly, let $d_i$ be the depth measurement and $\delta_{d_i} \sim \mathcal{N}(0, {\Sigma}_{{d}_i}) $ be the ranging error; then the ground-\textcolor{black}{truth} depth $d_i^{\text{gt}}$ is \begin{equation} \label{eq:ranging_model} d_i^{\text{gt}} = d_i + \delta_{d_i}. \end{equation} Combining (\ref{eq:bearing_model}) and (\ref{eq:ranging_model}), we obtain the relation between the ground-\textcolor{black}{truth} point location ${}^L \mathbf P_i^{\text{gt}}$ and its measurement ${}^L \mathbf P_i$: \begin{equation} \begin{split} {}^L \mathbf P_i^{\text{gt}} &= d_i^{\text{gt}} \boldsymbol{\omega}_i^{\text{gt}} = \left( d_i + \delta_{d_i} \right) \left( \boldsymbol{\omega}_i \boxplus_{\mathbb{S}^2} \boldsymbol{\delta}_{\boldsymbol{\omega}_i} \right) \\ &\approx \underbrace{d_i \boldsymbol{\omega}_i}_{{}^L \mathbf P_i} + \underbrace{ \boldsymbol{\omega}_i \delta_{d_i} - d_i \lfloor \boldsymbol{\omega}_i \times \rfloor \mathbf N(\boldsymbol{\omega}_i) \boldsymbol{\delta}_{\boldsymbol{\omega}_i}}_{{}^L \mathbf w_i} \end{split} \end{equation} Therefore, \begin{equation} \begin{split} {}^L \mathbf w_i &= \underbrace{\begin{bmatrix} \boldsymbol{\omega}_i & - d_i \lfloor \boldsymbol{\omega}_i \times \rfloor \mathbf N(\boldsymbol{\omega}_i) \end{bmatrix}}_{\mathbf A_i } \begin{bmatrix} \delta_{d_i} \\ \boldsymbol{\delta}_{\boldsymbol{\omega}_i} \end{bmatrix} \sim \mathcal{N}(\mathbf 0, {}^L \boldsymbol{\Sigma}_i), \\ {}^L \boldsymbol{\Sigma}_i &= \mathbf A_i \begin{bmatrix} \Sigma_{d_i} & \mathbf 0_{1 \times 2} \\ \mathbf 0_{2 \times 1} & \boldsymbol{\Sigma}_{\boldsymbol{\omega}_i} \end{bmatrix}\mathbf A_i^T. \end{split} \label{eq:meas_noise_cov} \end{equation} This noise model will be used to produce a consistent extrinsic calibration, as detailed below. \subsubsection{Calibration Formulation and Optimization}\label{sec:formulation} Let ${}^L \mathbf P_{i}$ be an edge point extracted from the LiDAR point cloud, and let its corresponding edge in the image be represented by the normal vector $\mathbf n_i \in \mathbb{S}^1 $ and a point $\mathbf q_i \in \mathbb{R}^2$ lying on the edge (Section \ref{sec:matching}). After compensating for the noise in ${}^L \mathbf P_{i}$, its projection onto the image plane under the \textcolor{black}{ground-truth} extrinsic should \textcolor{black}{lie exactly on the edge} $(\mathbf n_i, \mathbf q_i)$ extracted from the image (see (\ref{eq:transform} -- \ref{eq:distort}) and Fig. \ref{fig:perturbation} (b)): \begin{equation} 0 = \mathbf n_i^T \left( \mathbf f \! \left(\boldsymbol{\pi} \! \left({}^C_L \mathbf T \left( {}^L\mathbf P_{i} \! + \! {}^L\mathbf w_{i} \right) \right) \right) \! - \! \left( \mathbf q_i \! + \! {}^I \mathbf w_i \right) \right) \label{eq:formulation} \end{equation} where ${}^L\mathbf w_{i} \sim \mathcal{N}(\mathbf 0, {}^L \boldsymbol{\Sigma}_i) $ and ${}^I\mathbf w_{i} \sim \mathcal{N}(\mathbf 0, {}^I \boldsymbol{\Sigma}_i) $ are detailed in the previous section.
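Before analyzing the constraints that (\ref{eq:formulation}) imposes, we note that the LiDAR covariance ${}^L \boldsymbol{\Sigma}_i$ of (\ref{eq:meas_noise_cov}) can be assembled in a few lines. The sketch below is ours and self-contained; the noise magnitudes are illustrative placeholders, not sensor datasheet values:
\begin{verbatim}
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def tangent_basis(w):
    # Same basis construction as in the previous sketch.
    a = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    N1 = np.cross(w, a)
    N1 /= np.linalg.norm(N1)
    return np.column_stack([N1, np.cross(w, N1)])

def lidar_point_cov(w, d, sigma_d, Sigma_w):
    # A_i = [ w | -d * skew(w) @ N(w) ], then Sigma_i = A_i C A_i^T.
    A = np.hstack([w.reshape(3, 1), -d * skew(w) @ tangent_basis(w)])
    C = np.zeros((3, 3))
    C[0, 0] = sigma_d ** 2
    C[1:, 1:] = Sigma_w
    return A @ C @ A.T  # 3x3 covariance of the measured LiDAR point

Sigma_L = lidar_point_cov(np.array([0.0, 0.0, 1.0]), d=10.0,
                          sigma_d=0.02, Sigma_w=(0.001 ** 2) * np.eye(2))
\end{verbatim}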
Equation (\ref{eq:formulation}) implies that one LiDAR edge point imposes one constraint on the extrinsic, which agrees with Section \ref{sec:overview}: an edge feature imposes two constraints on the extrinsic, since an edge is determined by two independent points. Moreover, (\ref{eq:formulation}) is a nonlinear equation in the extrinsic ${}^C_L \mathbf T$ in terms of the measurements ${}^L \mathbf P_{i}, \mathbf n_i, \mathbf q_i$ and the unknown noises ${}^L\mathbf w_{i}, {}^I\mathbf w_{i}$. This nonlinear equation can be solved iteratively: let ${}^C_L \bar{\mathbf T}$ be the current extrinsic estimate and parameterize ${}^C_L \mathbf T$ in the tangent space of ${}^C_L \bar{\mathbf T}$ using the $\boxplus$-operation encapsulated in $SE(3)$ \cite{hertzberg2013integrating, he2021embedding}: \begin{equation} {}^C_L \mathbf T = {}^C_L \bar{\mathbf T} \boxplus_{SE(3)} \delta \mathbf T \triangleq \text{Exp}(\delta \mathbf T) \cdot {}^C_L \bar{\mathbf T} \label{e:local_param} \end{equation} where $$ \delta \mathbf T = \begin{bmatrix} \delta \boldsymbol{\theta} \\ \delta \mathbf t \end{bmatrix} \in \mathbb{R}^6; \ \text{Exp}(\delta \mathbf T) = \begin{bmatrix} e^{\lfloor \delta \boldsymbol{\theta} \times \rfloor} & \delta \mathbf t \\ 0 & 1 \end{bmatrix} \in SE(3). $$ Substituting (\ref{e:local_param}) into (\ref{eq:formulation}) and keeping terms up to first order leads to \begin{equation} \begin{split} 0 &= \mathbf n_i^T \left(\mathbf f \! \left(\boldsymbol{\pi} \! \left({}^C_L \mathbf T \left( {}^L\mathbf P_{i} \! + \! {}^L\mathbf w_{i} \right) \right) \right) \! - \! \left( \mathbf q_i \! + \! {}^I \mathbf w_i \right) \right) \\ &\approx \mathbf r_i + \mathbf J_{\mathbf T_i} \delta \mathbf T + \mathbf J_{\mathbf w_i} \mathbf w_i \end{split} \label{eq:approx} \end{equation} where \begin{equation} \begin{split} \mathbf r_i & = \mathbf n_i^T \left( \mathbf f \! \left(\boldsymbol{\pi} \! \left({}^C_L \bar{\mathbf T} ({}^L\mathbf P_{i}) \right) \right) \! - \! \mathbf q_i \right) \in \mathbb{R} \\ \mathbf J_{\mathbf T_i} &= \mathbf n_i^T \frac{\partial \mathbf f (\mathbf p)}{\partial \mathbf p} \frac{\partial \boldsymbol{\pi} (\mathbf P)}{\partial \mathbf P} \begin{bmatrix} -\lfloor ({}^C_L \bar{\mathbf T} ({}^L \mathbf P_i)) \times \rfloor & \mathbf I \end{bmatrix}\in \mathbb{R}^{1 \times 6} \\ \mathbf J_{\mathbf w_i} &= \begin{bmatrix} \mathbf n_i^T \frac{\partial \mathbf f (\mathbf p)}{\partial \mathbf p} \frac{\partial \boldsymbol{\pi} (\mathbf P)}{\partial \mathbf P} {}^C_L \bar{\mathbf R} & -\mathbf n_i^T \end{bmatrix} \in \mathbb{R}^{1 \times 5} \\ \mathbf w_i &= \begin{bmatrix} {}^L \mathbf w_i \\ {}^I \mathbf w_i \end{bmatrix} \sim \mathcal{N}(\mathbf 0, \boldsymbol{\Sigma}_i), \boldsymbol{\Sigma}_i = \begin{bmatrix} {}^L \boldsymbol{\Sigma}_i & \mathbf 0 \\ \mathbf 0 & {}^I \boldsymbol{\Sigma}_i \end{bmatrix} \in \mathbb{R}^{5 \times 5} \end{split} \label{eq:grad} \end{equation} The calculation of $\mathbf r_i$ is illustrated in Fig. \ref{fig:perturbation} (b).
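For an undistorted pinhole camera (i.e., taking $\mathbf f$ as the identity, as the analysis in Section \ref{sec:analysis} also does), the residual $\mathbf r_i$ and the Jacobian $\mathbf J_{\mathbf T_i}$ of (\ref{eq:grad}) reduce to a few lines; a sketch of ours with assumed intrinsics $f_x, f_y, c_x, c_y$:
\begin{verbatim}
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def residual_jacobian(R, t, P_L, n_i, q_i, fx, fy, cx, cy):
    # Residual r_i and 1x6 Jacobian J_Ti for one correspondence (f = identity).
    P = R @ P_L + t                        # LiDAR edge point in camera frame
    X, Y, Z = P
    p = np.array([fx * X / Z + cx, fy * Y / Z + cy])  # pinhole projection
    r_i = n_i @ (p - q_i)                  # signed point-to-line distance
    dpi_dP = np.array([[fx / Z, 0.0, -fx * X / Z ** 2],
                       [0.0, fy / Z, -fy * Y / Z ** 2]])
    dP_dT = np.hstack([-skew(P), np.eye(3)])  # effect of (dtheta, dt) on P
    return r_i, n_i @ dpi_dP @ dP_dT
\end{verbatim}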
Equation (\ref{eq:approx}) defines the constraint from one edge correspondence; stacking all $N$ such edge correspondences leads to \begin{equation} \begin{split} \underbrace{\begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}}_{\mathbf 0} &\approx \underbrace{\begin{bmatrix} \mathbf r_1 \\ \vdots \\ \mathbf r_N \end{bmatrix}}_{\mathbf r} + \underbrace{ \begin{bmatrix} \mathbf J_{\mathbf T_1} \\ \vdots \\ \mathbf J_{\mathbf T_N} \end{bmatrix}}_{\mathbf J_{\mathbf T}} \delta \mathbf T + \underbrace{ \begin{bmatrix} \mathbf J_{\mathbf w_1} & \cdots & \mathbf 0 \\ \vdots & \ddots & \vdots \\ \mathbf 0 & \cdots & \mathbf J_{\mathbf w_N} \end{bmatrix}}_{\mathbf J_{\mathbf w}} \underbrace{\begin{bmatrix} \mathbf w_1 \\ \vdots \\ \mathbf w_N \end{bmatrix}}_{\mathbf w} \end{split} \label{eq:all_eqn} \end{equation} where $$\mathbf w \sim \mathcal{N}(\mathbf 0, \boldsymbol{\Sigma}), \ \boldsymbol{\Sigma} = \text{diag}(\boldsymbol{\Sigma}_1, \cdots, \boldsymbol{\Sigma}_N).$$ Equation (\ref{eq:all_eqn}) implies \begin{equation} \begin{split} \mathbf v \triangleq - \mathbf J_{\mathbf w} \mathbf w = \mathbf r + \mathbf J_{\mathbf T} \delta \mathbf T \sim \mathcal{N}(\mathbf 0, \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T). \label{eq:distr} \end{split} \end{equation} Based on (\ref{eq:distr}), we propose the maximum-likelihood (and at the same time minimum-variance) extrinsic estimation: \begin{equation} \begin{split} & \max_{\delta \mathbf T} \log p(\mathbf v; \delta \mathbf T) = \max_{\delta \mathbf T} \log \frac{e^{-\frac{1}{2} \mathbf v^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf v }}{\sqrt{(2 \pi)^N \det \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)}} \\ &= \min_{\delta \mathbf T} (\mathbf r + \mathbf J_{\mathbf T} \delta \mathbf T)^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} (\mathbf r + \mathbf J_{\mathbf T} \delta \mathbf T) \end{split} \label{eq:cost_func} \end{equation} The optimal solution is \begin{equation} \delta \mathbf T^* = - \left( \mathbf J_{\mathbf T}^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf J_{\mathbf T} \right)^{-1} \mathbf J_{\mathbf T}^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf r \label{eq:Tstar} \end{equation} This solution is then used to update ${}^C_L \bar{\mathbf T}$: \begin{equation} {}^C_L \bar{\mathbf T} \leftarrow {}^C_L \bar{\mathbf T} \boxplus_{SE(3)} \delta \mathbf T^*. \label{eq:update} \end{equation} The above process ((\ref{eq:Tstar}) and (\ref{eq:update})) iterates until convergence (i.e., $\| \delta \mathbf T^* \| < \varepsilon$), and the converged ${}^C_L \bar{\mathbf T}$ is the calibrated extrinsic.
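One iteration of (\ref{eq:Tstar}) and (\ref{eq:update}) thus amounts to a weighted least-squares solve followed by the $SE(3)$ update; a minimal sketch (ours; the inputs are the stacked quantities of (\ref{eq:all_eqn})):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def solve_step(r, J_T, J_w, Sigma):
    # delta T* = -(J_T^T W J_T)^{-1} J_T^T W r with W = (J_w Sigma J_w^T)^{-1}.
    W = np.linalg.inv(J_w @ Sigma @ J_w.T)  # (N, N) weight matrix
    H = J_T.T @ W @ J_T                     # 6x6 Hessian of the cost
    return -np.linalg.solve(H, J_T.T @ W @ r), H

def update(R, t, delta):
    # T <- Exp(delta) * T on SE(3), with delta = (dtheta, dt).
    dR = expm(skew(delta[:3]))
    return dR @ R, dR @ t + delta[3:]
\end{verbatim}
The iteration stops once the step norm falls below a chosen threshold, at which point $(\mathbf R, \mathbf t)$ is the calibrated extrinsic.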
\iffalse \subsubsection{Estimation Accuracy} We use the average weighted projection error AWPE (unit: pixel) to evaluate the accuracy of the converged extrinsic value ${}^C_L \bar{\mathbf T}$: \begin{equation} \begin{split} \text{AWPE} &= \frac{1}{N} \mathbf 1^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-\frac{1}{2}} \mathbf r \\ & = \frac{1}{N} \sum_{i=1}^{N} \left( \mathbf J_{\mathbf w_i} \boldsymbol{\Sigma}_i \mathbf J_{\mathbf w_i}^T \right)^{-\frac{1}{2}} \mathbf r_i \end{split} \label{eq:AWPR} \end{equation} If the converged extrinsic estimate ${}^C_L \bar{\mathbf T}$ is exactly the true value, the first order approximation in (\ref{eq:approx}) at convergence needs only to be conducted in terms of noise $\mathbf w_i $, and hence $\mathbf r_i = - \mathbf J_{\mathbf w_i} \mathbf w_i \sim \mathcal{N}(0, \mathbf J_{\mathbf w_i} \boldsymbol{\Sigma}_i \mathbf J_{\mathbf w_i}^T)$. This further implies the expectation $E(\mathbf r_i^T \left( \mathbf J_{\mathbf w_i} \boldsymbol{\Sigma}_i \mathbf J_{\mathbf w_i}^T \right)^{-1} \mathbf r_i) = 1$, meaning that the expectation of AWPE is zero. Otherwise, biased extrinsic estimation will cause non-zero AWPE. \fi \subsubsection{Calibration Uncertainty} Besides the extrinsic calibration itself, it is also useful to estimate the calibration uncertainty, which can be characterized by the covariance of the error between the ground-\textcolor{black}{truth} extrinsic and the calibrated one. To do so, we multiply both sides of (\ref{eq:all_eqn}) by $ \mathbf J_{\mathbf T}^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} $ and solve for $\delta \mathbf T$: \begin{equation} \begin{split} \delta \mathbf T &\approx \underbrace{-\left( \mathbf J_{\mathbf T}^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf J_{\mathbf T} \right)^{-1} \mathbf J_{\mathbf T}^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf r}_{\delta \mathbf T^*} \\ &- \left( \mathbf J_{\mathbf T}^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf J_{\mathbf T} \right)^{-1} \mathbf J_{\mathbf T}^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf J_{\mathbf w} \mathbf w \\ &\sim \mathcal{N} \left(\delta \mathbf T^*, \left( \mathbf J_{\mathbf T}^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf J_{\mathbf T} \right)^{-1} \right) \end{split} \label{eq:Cov} \end{equation} which means that the true $\delta \mathbf T$, i.e., the error between the ground-truth extrinsic ${}^C_L \mathbf T$ and the estimate ${}^C_L \bar{\mathbf T}$, parameterized in the tangent space of ${}^C_L \bar{\mathbf T}$, follows a Gaussian distribution with mean $\delta \mathbf T^*$ and covariance equal to the inverse of the Hessian matrix of (\ref{eq:cost_func}). At convergence, $\delta \mathbf T^*$ is nearly zero, and the covariance is \begin{equation} \boldsymbol{\Sigma}_{\mathbf T} = \left( \mathbf J_{\mathbf T}^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf J_{\mathbf T} \right)^{-1} \label{eq:estiamte_cov} \end{equation} We use this covariance matrix to characterize the uncertainty of our extrinsic calibration.
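At convergence, the uncertainty (\ref{eq:estiamte_cov}) is simply the inverse of the Hessian already formed in the iteration above; a one-function sketch (ours):
\begin{verbatim}
import numpy as np

def calibration_covariance(J_T, J_w, Sigma):
    # Sigma_T = (J_T^T (J_w Sigma J_w^T)^{-1} J_T)^{-1} at convergence.
    W = np.linalg.inv(J_w @ Sigma @ J_w.T)
    Sigma_T = np.linalg.inv(J_T.T @ W @ J_T)  # 6x6 covariance
    sigma_k = np.sqrt(np.diag(Sigma_T))       # per-DoF standard deviations
    return Sigma_T, sigma_k
\end{verbatim}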
\vspace{-0.3cm}\subsection{Analysis of Edge Distribution on Calibration Result}\label{sec:analysis} The Jacobian $\mathbf J_{\mathbf T_i}$ in (\ref{eq:grad}) represents the sensitivity of the residual with respect to extrinsic variations. In the case of very few or poorly distributed edge features, $\mathbf J_{\mathbf T_i}$ could be very small, leading to a large estimation uncertainty (covariance), as shown by (\ref{eq:estiamte_cov}). In this sense, the data quality is automatically \textcolor{black}{and quantitatively} encoded by the covariance matrix in (\ref{eq:estiamte_cov}). In practice, it is often useful to have a quick and rough assessment of the calibration scene before the data collection. This can be achieved by analytically deriving the Jacobian $\mathbf J_{\mathbf T_i}$. Ignoring the distortion model and substituting the pin-hole projection model $\boldsymbol{\pi}(\cdot)$ into (\ref{eq:grad}), we obtain \begin{equation} \begin{aligned} \mathbf J_{T_{i}} &= \mathbf n_{i}^T \begin{bmatrix} \frac{-f_x X_{i}{ Y_i}}{ Z_i^2}\hspace{-0.2cm}&\hspace{-0.2cm}f_x+\frac{f_x X_i^2}{ Z_i^2}\hspace{-0.2cm}&\hspace{-0.2cm}\frac{-f_x{ Y_i}}{{Z_i}}\hspace{-0.2cm}&\hspace{-0.2cm}\frac{f_x}{{ Z_i}}\hspace{-0.2cm}&\hspace{-0.2cm}0\hspace{-0.2cm}&\hspace{-0.2cm}-\frac{f_x { X_i}}{ Z_i^2}\\ -f_y-\frac{f_y Y_i^2}{Z_i^2}\hspace{-0.2cm}&\hspace{-0.2cm}\frac{f_y {X_i} {Y_i}}{Z_i^2}\hspace{-0.2cm}&\hspace{-0.2cm}\frac{f_y { X_{i}}}{{ Z_i}}\hspace{-0.2cm}&\hspace{-0.2cm}0\hspace{-0.2cm}&\hspace{-0.2cm}\frac{f_y}{{ Z_i}}\hspace{-0.2cm}&\hspace{-0.2cm}\frac{-f_y { Y_i}}{ Z_i^2} \end{bmatrix} \end{aligned} \end{equation} where ${}^C \mathbf P_i = \begin{bmatrix} X_i & Y_i & Z_i \end{bmatrix}^T$ is the LiDAR edge point ${}^L \mathbf P_i$ represented in the camera frame (see (\ref{eq:transform})). It is seen that points projected near the center of the image (i.e., with small $X_i/Z_i, Y_i/Z_i$) lead to a small Jacobian. It is therefore beneficial to have edge features evenly distributed in the image. Moreover, since the LiDAR noise increases with distance as in (\ref{eq:meas_noise_cov}), the calibration scene should have a moderate depth. \subsection{\textcolor{black}{Initialization and Rough Calibration}} \textcolor{black}{The presented optimization-based extrinsic calibration method aims for high accuracy but requires a good initial estimate of the extrinsic parameters, which may not always be available. To widen its convergence basin, we further integrate an initialization phase into our calibration pipeline, in which the extrinsic value is roughly calibrated by maximizing the percentage of edge correspondences ($P.C.$) defined below: \begin{equation} P.C. =\frac{N_{match}}{N_{sum}} \end{equation} where $N_{sum}$ is the total number of LiDAR edge points and $N_{match}$ is the number of matched LiDAR edge points. The matching is based on the distance and direction of a LiDAR edge point (after being projected onto the image plane) to its nearest edge in the image (see Section \ref{sec:matching}). The rough calibration is performed by an alternating grid search on rotation (grid size $0.5^{\circ}$) and translation (grid size $2\,$cm) over a given range. } \vspace{-0.2cm}\section{Experiments and Results} In this section, we validate our proposed methods in a variety of real-world experiments. We use a solid-state LiDAR called Livox AVIA, which achieves high-resolution point-cloud measurements when stationary due to its non-repetitive scanning \cite{livox}, and an Intel Realsense-D435i camera (see Fig.~\ref{fig:sensor_suite}).
The camera intrinsics, including the distortion model, have been calibrated beforehand. During data acquisition, we fix the LiDAR and camera in a stable position and collect the point cloud and image simultaneously. The data acquisition time is 20 seconds, which allows the Avia LiDAR to accumulate sufficient points. \begin{figure}[t] \vspace{-0.2cm} \centering \includegraphics[width=0.8\linewidth]{sensor_suite.jpg} \caption{\textcolor{black}{Our sensor suites. Left: the Livox Avia\protect\footnotemark[1] LiDAR and the Intel Realsense-D435i\protect\footnotemark[2] camera, \textcolor{black}{used for the majority of the experiments (Sections IV.A and IV.B)}. Right: a spinning LiDAR (OS2-64\protect\footnotemark[3]) and an industrial camera (MV-CA013-21UC\protect\footnotemark[4]), \textcolor{black}{used for verification on a fixed-resolution LiDAR (Section IV.C)}. Each sensor suite has a nominal extrinsic; e.g., for the Livox AVIA it is $(0,-\frac{\pi}{2},\frac{\pi}{2})$ for rotation (ZYX Euler angles) and zero for translation. }} \label{fig:sensor_suite} \vspace{-0.5cm} \end{figure} \footnotetext[1]{https://www.livoxtech.com/avia} \footnotetext[2]{https://www.intelrealsense.com/depth-camera-d435i} \footnotetext[3]{https://ouster.com/products/os2-lidar-sensor} \footnotetext[4]{https://www.rmaelectronics.com/hikrobot-mv-ca013-21uc} \vspace{-0.2cm}\subsection{Calibration Results in Outdoor and Indoor Scenes} We test our methods in a variety of indoor and outdoor scenes shown in Fig. \ref{fig:scenarios} and validate the calibration performance as follows. \begin{figure}[ht] \vspace{-0.2cm} \centering \includegraphics[width=1\linewidth]{muti_scenes.pdf} \centering \caption{Calibration scenes.} \label{fig:scenarios} \vspace{-0.6cm} \end{figure} \subsubsection{\textcolor{black}{Robustness and Convergence Validation}}\label{sec:convergence} \textcolor{black}{To verify the robustness of the full pipeline, we test it on each of the 6 scenes individually and also on all scenes together. For each scene setting, 20 test runs are conducted, each with a random initial value uniformly drawn from a neighborhood ($\pm 5^{\circ}$ for rotation and $\pm 10$cm for translation) of the value obtained from the CAD model. Fig. \ref{fig:cost_value} shows the percentage of edge correspondences before and after the rough calibration and the convergence of the normalized optimization cost $\frac{1}{N_{match}}\mathbf r^T \left( \mathbf J_{\mathbf w} \boldsymbol{\Sigma} \mathbf J_{\mathbf w}^T \right)^{-1} \mathbf r$ during the fine calibration. It is seen that, in all 7 scene settings and all $20$ test runs of each, the pipeline converges for both the rough and the fine calibration.} \begin{figure}[h] \vspace{-0.1cm} \centering \includegraphics[width=1\linewidth]{rough_and_norm_cost.png} \caption{\textcolor{black}{Percentage of edge correspondences before and after the rough calibration (left) and cost during the optimization (right).} } \label{fig:cost_value} \vspace{-0.2cm} \end{figure} \textcolor{black}{Fig. \ref{fig:extrinsic_distrubution} shows the distribution of converged extrinsic values for all scene settings. It is seen that in each case, the extrinsic converges to almost the same value regardless of the wide spread of initial values. A visual example illustrating the difference before and after our calibration pipeline is shown in Fig. \ref{fig:projection_validation}.
Our entire pipeline, including feature extraction, matching, rough calibration and fine calibration, takes less than 60 seconds.} \begin{figure}[h] \vspace{-0.1cm} \centering \includegraphics[width=1\linewidth]{extrinsic_distribution_boxplot.pdf} \caption{\textcolor{black}{Distribution of converged extrinsic values for all scene settings. The displayed extrinsic has its nominal part removed. }} \label{fig:extrinsic_distrubution} \vspace{-0.2cm} \end{figure} \iffalse To better verify the extrinsic parameters, we perform projection verification. In Fig. \ref{fig:projection_validation}, we project LiDAR points to the camera image plane, color them by intensity, and overlay them with the camera image. It can be seen that the alignment between the LiDAR projection and the camera images is greatly improved by the optimization. Fig. \ref{fig:color_scene0} further colors the point cloud with the converged extrinsic and compares it with the actual images (Fig. \ref{fig:color_scene0} (B) and (D)) with the colored point cloud rendered at the same view angle (Fig. \ref{fig:color_scene0} (A) and (C)). It is seen that the point cloud rendering is very close to the actual images, indicating the high accuracy of our calibration. \begin{figure} \centering \includegraphics[width=1\linewidth]{color_cloud_scene6.pdf} \caption{\textcolor{black}{To be modified. Add all extrinsic values. }Colored point cloud for scene 6. A and C are locally enlarged views of the point cloud. B and D are parts of the camera image corresponding to the point cloud in A and C.} \label{fig:color_scene0} \end{figure} \fi \begin{figure} \vspace{-0.3cm} \centering \includegraphics[width=1\linewidth]{projection_contrast.jpg} \caption{\textcolor{black}{LiDAR projection image overlaid on the camera image with an initial (left) and calibrated extrinsic (right). The LiDAR projection image is colored by Jet mapping on the measured point intensity.}} \label{fig:projection_validation} \vspace{-0.4cm} \end{figure} \subsubsection{Consistency Evaluation} To evaluate the consistency of the extrinsic calibration in different scenes, we compute the standard deviation of each degree of freedom of the extrinsic: \begin{equation} \begin{aligned} \sigma_k&=\sqrt{ \left( \boldsymbol{\Sigma}_{\mathbf T} \right)_{k, k}},\ \ k\in \{ 1,2,...,6 \} \end{aligned} \label{standart deviation} \end{equation} where $\boldsymbol{\Sigma}_{\mathbf T}$ is computed from (\ref{eq:estiamte_cov}). \textcolor{black}{As shown in Fig. \ref{fig:extrinsic_distrubution}, the converged extrinsic in a scene is almost identical regardless of the initial value, so we use this common extrinsic to evaluate the standard deviation of the extrinsic calibrated in each scene. The results are summarized in Fig. \ref{fig:extrinsic and standard deviation}. It is seen that the $3\sigma$ intervals of all scenes overlap and, as expected, the calibration result based on all scenes has a much smaller uncertainty, with the corresponding extrinsic lying within the $3\sigma$ confidence interval of each of the other 6 scenes. These results suggest that our estimated extrinsic and covariance are consistent. } \begin{figure} \vspace{-0.2cm} \centering \includegraphics[width=1\linewidth]{extrinsic_distribution_with_joint.png} \caption{\textcolor{black}{Calibrated extrinsic with $3\sigma$ bounds in all 7 scene settings. The displayed extrinsic has its nominal part removed.
} } \label{fig:extrinsic and standard deviation} \vspace{-0.2cm} \end{figure} \subsubsection{Cross Validation} We evaluate the accuracy of our calibration via cross validation among the six individual scenes. To be more specific, we calibrate the extrinsic in one scene and apply it to the calibration scene itself and to the remaining five scenes. We obtain the residual vector $\mathbf r$, whose statistical information (e.g., mean, median) reveals the accuracy quantitatively. The results are summarized in Fig.~\ref{fig:cross_validation}, where the largest $20\%$ of the residuals are considered outliers and removed from the figure. It is seen that in all calibration and validation scenes (36 cases in total), around $50\%$ of the residuals, including the mean and median, are within one pixel. This validates the robustness and accuracy of our methods. \begin{figure} \centering \includegraphics[width=1\linewidth]{cross_validation_v2.png} \caption{Cross validation results. The six box plots in the $i$-th subfigure summarize the statistical information (from top to bottom: maximum, third quartile, median, first quartile, and minimum) of the residuals of the six scenes evaluated with the extrinsic calibrated from scene $i$. } \label{fig:cross_validation} \vspace{-0.4cm} \end{figure} \subsubsection{Bad Scenes} As analyzed in Section \ref{sec:analysis}, our method requires a sufficient number of properly distributed edge features. This places certain requirements on the calibration scene. We summarize the scenarios in which our algorithm does not work well in Fig.~\ref{fig:bad_scene}. The first is when the scene contains only cylindrical objects. Because the edge extraction is based on plane fitting, round objects will lead to inaccurate edge extraction. \textcolor{black}{Besides, cylindrical objects will also cause parallax issues, which reduce the calibration accuracy}. The second is an uneven distribution of edges in the scene. For example, when most of the edges are concentrated in the upper part of the image, they form poor constraints that are easily affected by measurement noise. The third is when the scene contains edges only along one direction (e.g., vertical), in which case the constraints are not sufficient to uniquely determine the extrinsic parameters. \begin{figure} \centering \includegraphics[width=1\linewidth]{bad_scenes.pdf} \caption{Examples of bad scenarios.} \label{fig:bad_scene} \vspace{-0.5cm} \end{figure} \subsection{Comparison Experiments} We compare our method with \cite{ACSC, zhou2018automatic}, which both use a checkerboard as the calibration target. The first one, ACSC~\cite{ACSC}, uses the point reflectivity measured by the LiDAR to estimate the 3D position of each grid corner on the checkerboard and computes the extrinsic by solving a 3D-2D PnP problem. The authors of \cite{ACSC} open-sourced their code and data collected on a Livox LiDAR similar to ours, so we apply our method to their data. Since each of their point clouds contains only 4 seconds of data, the accuracy of the edge extraction in our method is compromised. To compensate for this, we use three (versus 12 used for ACSC \cite{ACSC}) sets of point clouds in the calibration to increase the number of edge features. We compute the residuals in (\ref{eq:grad}) using the two calibrated extrinsics; the quantitative results are shown in {Fig. \ref{fig:method_contrast}} (a). The second method \cite{zhou2018automatic} estimates the checkerboard pose from the image by utilizing the known checkerboard pattern size.
Then, the extrinsic is calibrated by minimizing the distance from the LiDAR points (measured on the checkerboard) to the checkerboard plane estimated from the image. The method is designed for multi-line spinning LiDARs that have a much lower resolution than ours, so we adapt this method to our LiDAR and test its effectiveness. The method in \cite{zhou2018automatic} also considers depth-discontinuous edge points, which are less accurate and less reliable on our LiDAR. Therefore, to make a fair comparison, we use only the plane points in the calibration with \cite{zhou2018automatic}. To compensate for the reduced number of constraints, we place the checkerboard at more than 36 different locations and poses. In contrast, our method uses only one pair of data. Fig. \ref{fig:iros_contrast} shows the comparison results on one of the calibration scenes, with the quantitative results supplied in {Fig. \ref{fig:method_contrast}} (b). It can be seen that our method achieves a calibration accuracy similar to that of \cite{zhou2018automatic}, although it uses no checkerboard information (e.g., the pattern size). We also notice a certain board inflation (the blue points in the zoomed image of Fig. \ref{fig:iros_contrast}), which is caused by the laser beam divergence explained in Section \ref{sec:edge}. The inflated points are 1.4cm wide at a distance of 6m, corresponding to an angle of ${0.014}/{6} \approx 2.3 \times 10^{-3}\,$rad $\approx 0.13^\circ$, which agrees well with half of the vertical beam divergence angle (0.28\textdegree). \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{method_contrast.png} \caption{Comparison of residual distributions.} \label{fig:method_contrast} \vspace{-0.2cm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{contrast_iros.pdf} \caption{Colored point cloud: (a) our method; (b) Zhou \cite{zhou2018automatic}.} \label{fig:iros_contrast} \vspace{-0.6cm} \end{figure} \subsection{\textcolor{black}{Applicability to Other Types of LiDARs}} \textcolor{black}{Besides the Livox Avia, our method can also be applied to conventional multi-line spinning LiDARs, which possess a lower resolution when stationary due to their repetitive scanning. To boost the scan resolution, the LiDAR can be moved slightly (e.g., with a small pitching movement) so that the gaps between scanning lines are filled. A LiDAR(-inertial) odometry \cite{xu2021fast} can be used to track the LiDAR motion and register all points to the initial pose, leading to a much denser scan that enables the use of our method. To verify this, we test our method on another sensor platform (see Fig. \ref{fig:sensor_suite}) consisting of a spinning LiDAR (Ouster LiDAR OS2-64) and an industrial camera (MV-CA013-21UC). The point cloud registration result and detailed quantitative calibration results are presented in the supplementary material ({https://github.com/ChongjianYUAN/SupplementaryMaterials}) due to the space limit. The results show that our method converges to the same extrinsic for 20 initial values randomly sampled in the neighborhood ($\pm 3^{\circ}$ in rotation and $\pm 5$cm in translation) of the value obtained from the CAD model.} \section{Conclusion} This paper proposed a novel extrinsic calibration method for high-resolution LiDAR and camera sensors in targetless environments. We analyzed the reliability of different types of edges and edge extraction methods from the underlying LiDAR measuring principle. Based on this, we proposed an algorithm that can extract accurate and reliable LiDAR edges based on voxel cutting and plane fitting.
Moreover, we theoretically analyzed the edge constraints and the effect of the edge distribution on the extrinsic calibration. A high-accuracy, consistent, automatic, and targetless calibration method was then developed by incorporating an accurate LiDAR noise model. Various outdoor and indoor experiments show that our algorithm achieves pixel-level accuracy comparable to that of target-based methods. It also exhibits high robustness and consistency in a variety of natural scenes. \bibliographystyle{unsrt}
\section{Introduction} Since $3$D Chern-Simons theory is very successful, people are interested in its generalization to higher odd dimensions. When studying $5$D Chern-Simons theory, physicists introduced the supersymmetric Yang-Mills equation on $5$D contact manifolds (cf. \cite{MR2967134} \cite{MR3042942}). In particular, they introduced the {\it contact instanton equation} on $5$D contact manifolds as \begin{equation} \mathcal{F}=\pm\iota_{T}\ast\mathcal{F}, \end{equation} where $\mathcal{F}$ is the curvature of a connection, $T$ is the Reeb vector field and $\iota_{T}$ is the contraction with $T$. Hosomichi-Seong-Terashima \cite{MR2967134} constructed a $5$D $\mathcal{N}=1$ supersymmetric Yang-Mills theory on the five-sphere, and showed that the fields in a vector multiplet localize to contact instantons by using the localization technique. K\"{a}ll\'{e}n-Zabzine \cite{MR3042942} localized the path integral of Chern-Simons theory on circle fibrations over $4$D symplectic manifolds to contact instantons. On the other hand, Itoh \cite{Itoh2002Contact} constructed the CR twistor space over $5$D contact manifolds by using differential geometry about two decades ago. Wolf \cite{Wolf1} used the twistor method to study the contact instanton equation on contact manifolds. In the classical flat case, by complexifying $4$D Minkowski space as $\mathbb{C}^{4}$, one can construct the twistor space by complex geometry methods (cf. \cite{MR2583958} \cite{Mason1} \cite{MR1054377}). If we denote a point of $\mathbb{C}^{4}$ by $$\mathbf{y}=\left(\begin{array}{lll} y_{00'} &y_{01'}\\ y_{10'} &y_{11'} \end{array}\right),$$ an {\it $\alpha$-plane} in $\mathbb{C}^{4}$ is the set of all $\mathbf{y}$ satisfying \begin{equation} \left(\begin{array}{lll} y_{00'} &y_{01'}\\ y_{10'} &y_{11'} \end{array}\right) \left(\begin{array}{ll} \pi_{0'}\\ \pi_{1'} \end{array}\right)= \left(\begin{array}{ll} \omega_{0}\\ \omega_{1} \end{array}\right) \end{equation} for fixed $0\neq(\pi_{0'},\pi_{1'})\in\mathbb{C}^{2}$ and $\omega_{0},$ $\omega_{1}$ $\in\mathbb{C}$. The moduli space of all $\alpha$-planes is the twistor space $\mathcal{P}_{0}$, which is an open subset of $\mathbb{C}P^3$. Then we have the double fibration \begin{equation*} \xy 0;/r.22pc/: (0,20)*+{\mathbb{C}^{4}\times \mathbb{C}P^1}="1"; (-20,0)*+{ \mathcal{P}_{0} }="2"; (20,0)*+{\mathbb{C}^{4}}="3"; {\ar@{->}_{\eta}"1"; "2"}; {\ar@{->}^{\tau} "1"; "3"}; \endxy, \end{equation*} based on which there exist the Penrose correspondence between solutions of massless field equations and first cohomology groups of certain line bundles over $\mathcal{P}_{0}$, and the Penrose-Ward correspondence between solutions of the ASD Yang-Mills equation and holomorphic bundles over $\mathcal{P}_{0}$ that are trivial over the complex projective line $\hat{x}=\eta\circ\tau^{-1}(x)$ for any $x\in\mathbb{C}^{4}.$ In this paper, we consider the simplest $5$D contact manifold, the $5$D real Heisenberg group, and complexify it as $\mathbb{C}^{5}$. We will generalize this twistor theory to the $5$D Heisenberg group.
The $5$D complex {\it Heisenberg group} $\mathscr H$ is $\mathbb{C}^{5}:=\{(\mathbf{y},t)|\mathbf{y}\in\mathbb{C}^{4},t\in\mathbb{C}\}$ with the multiplication given by \begin{equation} \label{eq:w-multiplication} \begin{split}&(\mathbf{y},t ) \circ (\mathbf{y'},t')=\left ( \mathbf{y}+\mathbf{y'},t+t' + B(\mathbf{ y}, \mathbf{y'})\right), \end{split}\end{equation} where $B(\mathbf{y}, \mathbf{y'})=y_{00'}y'_{11'}-y_{01'}y'_{10'}+y_{10'}y'_{01'} -y_{11'}y'_{00'}.$ We have the left invariant vector fields on $ \mathscr H$: \begin{equation} \label{eq:Y}\begin{split} V_{00'}&:=\frac{\partial}{\partial y_{00'}}- {y}_{11'} T, \qquad V_{01'}:=\frac{\partial}{\partial y_{01'}}+ {y}_{10'} T,\\ V_{10'}&:=\frac{\partial}{\partial y_{10'}}- {y}_{01'} T,\qquad V_{11'}:=\frac{\partial}{\partial y_{11'}}+{y}_{00'} T,\qquad T:=\frac{\partial}{\partial t}. \end{split} \end{equation} It is easy to see that \begin{equation} \label{eq:YYT} [V_{00'},V_{11'} ]=[V_{10'},V_{01'} ]=2T, \end{equation} and all other brackets vanish. Consequently, for fixed $0\neq(\pi_{0'},\pi_{1'})\in\mathbb{C}^{2}$, if we denote \begin{equation}\label{keyvf}V_{A}:=\pi_{0'}V_{A0'}-\pi_{1'}V_{A1'},\quad A=0,1,\end{equation} then $$[V_{0},V_{1}]=0.$$ Namely, ${\rm span}\{V_{0},V_{1}\}$ is an abelian Lie subalgebra and an integrable distribution for fixed $0\neq(\pi_{0'},\pi_{1'})\in\mathbb{C}^{2}$. Their integral surfaces are $2$-dimensional planes (cf. (\ref{eq:lienar eq0})), which we also call {\it $\alpha$-planes}. The {\it twistor space} $\mathcal{P}$ is the moduli space of all $\alpha$-planes, which is a $4$D complex manifold. We have the following double fibration over the $5$D complex Heisenberg group: \begin{equation} \label{TOH} \xy 0;/r.22pc/: (0,20)*+{\mathcal{F}=\mathbb{C}^{5}\times \mathbb{C}P^1 }="1"; (-20,0)*+{ \mathcal{P} }="2"; (20,0)*+{\mathscr H\cong \mathbb{C}^{5}}="3"; {\ar@{->}_{\eta}"1"; "2"}; {\ar@{->}^{\tau} "1"; "3"}; \endxy. \end{equation} A connection is called {\it anti-self-dual} (briefly, ASD) if it is flat over every $\alpha$-plane. Let $\Phi=\Phi_{00'}\theta^{00'}+\Phi_{10'}\theta^{10'}+\Phi_{01'}\theta^{01'}+\Phi_{11'}\theta^{11'}+\Phi_{T}\theta$ be a $\mathfrak g$-valued connection form on $\mathscr H$, where $\{\theta^{AA'},\theta\}$ are the $1$-forms dual to $\{V_{AA'},T\}$. Then $\Phi$ is ASD if and only if it satisfies the {\it ASD Yang-Mills equation} \begin{equation}\label{eq:ASD} \left\{\begin{array}{l} V_{00'}(\Phi_{10'})-V_{10'}(\Phi_{00'})+[\Phi_{00'},\Phi_{10'}]=0,\\ V_{01'}(\Phi_{10'})+V_{00'}(\Phi_{11'})-V_{10'}(\Phi_{01'})-V_{11'}(\Phi_{00'})+[\Phi_{00'},\Phi_{11'}]+[\Phi_{01'},\Phi_{10'}]=0,\\ V_{01'}(\Phi_{11'})-V_{11'}(\Phi_{01'})+[\Phi_{01'},\Phi_{11'}]=0. \end{array} \right. \end{equation} An open subset $U$ of $\mathscr H$ is called {\it elementary} if for every $\alpha$-plane $\widetilde{Z}=\tau\circ\eta^{-1}(Z)$ ($Z\in \mathcal{P} $) the intersection $\widetilde{Z}\cap U$ is connected and simply connected. Then we have the following Penrose-Ward correspondence. \begin{thm}\label{pwc} Let $U$ be an elementary open set in $\mathscr H$. There is a one-to-one correspondence between\\ (1) gauge equivalence classes of ASD connections with the gauge group ${\rm GL}(n,\mathbb{C})$ over $U$; \\ (2) holomorphic vector bundles $E'\rightarrow \hat{U}=\eta\circ\tau^{-1}(U)$ such that $E'|_{\hat x}$ is trivial, where $\hat{x}:=\eta\circ\tau^{-1}(x)$ for each $x\in U$.
\end{thm} Define the {\it Sub-Laplacian} $$\Delta_{b}:=V_{00'}V_{11'}-V_{10'}V_{01'},$$ and the {\it partial exterior differential operators} ${\rm d}_{0}$ and ${\rm d}_{1}$ by \begin{equation}\label{d-1-2} {\rm d}_{0}f:=V_{00'}f\cdot\theta^{00'}+V_{10'}f\cdot\theta^{10'},\qquad {\rm d}_{1}f:=V_{01'}f\cdot\theta^{01'}+V_{11'}f\cdot\theta^{11'} \end{equation} for a function $f\in C^{\infty}(U,\mathbb{C})$. By using the Atiyah-Ward ansatz, we can construct a family of ASD connections. \begin{thm}\label{ASDA} (1) If $\varphi$ satisfies \begin{equation}\label{eq:compatible-} \Delta_{b}\varphi=0, \end{equation} then the connection form \begin{equation}\label{AASSDD11} \Phi =\left[\begin{array} {cc} \frac{1}{2}({\rm d}_{0}\ln{\varphi}-{\rm d}_{1}\ln{\varphi}) & V_{01'}(\ln{\varphi})\theta^{00'}+V_{11'}(\ln\varphi)\theta^{10'}\\ V_{00'}(\ln{\varphi})\theta^{01'}+V_{10'}(\ln\varphi)\theta^{11'} &\ -\frac{1}{2}({\rm d}_{0}\ln{\varphi}-{\rm d}_{1} \ln{\varphi})\end{array}\right]+\Phi_{T}\theta \end{equation} is ASD. (2) In particular, \begin{equation} \varphi:=\frac{1}{\left\|\mathbf{ y}\right\|^{4}-t^{2}},\qquad \text{where}\quad \left\|\mathbf{ y}\right\|^{2}={\rm det}\left[\begin{array}{lll} y_{00'} &y_{01'}\\ y_{10'} &y_{11'} \end{array}\right], \end{equation} is a solution to (\ref{eq:compatible-}) on $\mathscr H\setminus\{\left\|\mathbf{ y}\right\|^{4}=t^{2}\}$. \end{thm} Now we consider the $5$D real Heisenberg group $\mathscr H^{\mathbb{R}}\cong \mathbb{R}^{5}$ with the multiplication given by \begin{equation} \label{eq:w-multiplication-r} \begin{split} &(\mathbf{y},s) \circ(\mathbf{y}', s' ) =\left ( \mathbf{y}+\mathbf{y}', s+s' +\langle\mathbf{ y} , \mathbf{y}'\rangle\right), \end{split}\end{equation} where $\langle\mathbf{ y} , \mathbf{y}'\rangle=2\left(y_{1}y_{2}'-y_{2}y_{1}'-y_{3}y_{4}'+y_{4}y_{3}'\right)$, $\mathbf{ y} , \mathbf{y}'\in \mathbb{R}^{4}$ and $s, s'\in \mathbb{R}$. By the real imbedding $\mathbb{R}^{5}\longrightarrow\mathbb{C}^{5}$ given by \begin{equation}\label{eq:C-L} \left[\begin{array}{lll} y_{00'}&y_{01'}\\ y_{10'}&y_{11'} \end{array}\right]:=\left[\begin{array}{lll} y_{1}+\textbf{i}y_{2}& -y_{3}+\textbf{i}y_{4}\\ y_{3}+\textbf{i}y_{4}& \ \ y_{1}-\textbf{i}y_{2} \end{array}\right],\quad t=-\textbf{i}s, \end{equation} $\mathscr H^{\mathbb{R}}$ is a subgroup of $\mathscr H$. It is a flat model of $5$D contact manifolds. Recall the contact instanton equation on $5$D contact manifolds \cite{MR2967134} \cite{MR3042942}. For a connection $\nabla$, let us consider its Yang-Mills action $$YM(\nabla)=-\int_{\mathscr H^{\mathbb{R}}}Tr(F\wedge\ast F),$$ where $F$ is the curvature of $\nabla$ and $\ast$ is the Hodge star over $\mathscr H^{\mathbb{R}}$. Let $F_{H}$ and $F_{V}$ be the horizontal and vertical parts of $F$, which satisfy $\iota_{T}F_{H}=0$ and $\iota_{T}F_{V}\neq0$, respectively. Since $F_{H}\wedge\ast F_{V}=0$ and $F_{V}\wedge\ast F_{H}=0$, we have $$YM(\nabla)=-\int_{\mathscr H^{\mathbb{R}}}Tr(F_{H}\wedge\ast F_{H}+F_{V}\wedge\ast F_{V}).$$ Let $F^{+}_{H}$ and $F^{-}_{H}$ be the horizontal self-dual and anti-self-dual parts of $F$, which satisfy $\iota_{T}\ast F^{+}_{H}=F^{+}_{H}$ and $\iota_{T}\ast F^{-}_{H}=-F^{-}_{H}$, respectively. Connections with $F^{+}_{H}=0$ or $F^{-}_{H}=0$, together with $F_{V}=0$, are critical points of $YM(\nabla)$.
The anti-self-dual contact instanton equation \cite{MR2967134} \cite{MR3042942} is \begin{equation}\label{instan111}F^{+}_{H}=0\quad \text{and} \quad F_{V}=0,\end{equation} while the self-dual one is $$F^{-}_{H}=0\quad \text{and} \quad F_{V}=0.$$ The ASD Yang-Mills equation (\ref{eq:ASD}) restricted to $\mathscr H^{\mathbb{R}}$ is exactly $F^{+}_{H}=0$. So its solutions satisfy the first equation of (\ref{instan111}), but not necessarily $F_{V}=0.$ If $\varphi$ in (\ref{AASSDD11}) is replaced by \begin{equation*} \varphi^{\mathbb{R}}=\frac{1}{|y|^{4}+s^{2}},\quad \text{where} \ |y|=(y_{1}^{2}+y_{2}^{2}+y_{3}^{2}+y_{4}^{2})^{\frac{1}{2}}, \end{equation*} we get ASD connection forms on the $5$D real Heisenberg group $\mathscr H^{\mathbb{R}}$. Baston and Eastwood \cite{BE} generalized twistor theory to a general setting based on the double fibration \begin{equation} \label{eq:G/P} \xy 0;/r.22pc/: (0,20)*+{G/(P\cap Q) }="1"; (-20,0)*+{G/Q}="2"; (20,0)*+{G/P}="3"; {\ar@{->}_{\eta}"1"; "2"}; {\ar@{->}^{\tau} "1"; "3"}; \endxy, \end{equation} where $G$ is a complex semisimple Lie group and $P$, $Q$ are its parabolic subgroups. If we take $G={\rm SO}(6,\mathbb{C})$ and suitable subgroups $P$ and $Q$, then by using the method in \cite{Wa13} we can also write down local coordinate charts of the above homogeneous spaces and the mappings $\eta$ and $\tau$ in terms of local coordinates to obtain (\ref{TOH}). The construction of this paper is based on the fact that $V_{0}$ and $V_{1}$ in (\ref{keyvf}) span an abelian subalgebra. Its real version plays a very important role in developing the theory of the quaternionic Monge-Amp\`ere operator in \cite{2020The} and the tangential $k$-Cauchy-Fueter complex \cite{Ren1} over the Heisenberg group. In Section 2, we introduce the twistor transform for the $5$D complex Heisenberg group and derive the ASD Yang-Mills equation. In Section 3, we give the Penrose-Ward correspondence between ASD connections and holomorphic vector bundles over the twistor space that are trivial over a class of projective lines in the twistor space, and construct a family of ASD connections by using the Atiyah-Ward ansatz. In Section 4, by the real imbedding of $\mathscr H^{\mathbb{R}}$ into $\mathscr H$, we find that the ASD Yang-Mills equation coincides with the horizontal part of the contact instanton equation. In the Appendix, by constructing local coordinate charts of the homogeneous spaces in the double fibration (\ref{eq:G/P}) with ${\rm G}={\rm SO}(6,\mathbb{C})$, we reproduce the basic ingredients of the twistor method for the $5$D complex Heisenberg group. \section{The twistor transform on $5$D complex Heisenberg group} \subsection{$\alpha$-planes and the twistor space of $5$D complex Heisenberg group} Define a symmetric product $\langle\cdot, \cdot\rangle$ on $\mathbb{C}^{2}$ by \begin{equation}\label{eq:beta} \langle\mathbf{ w} , \mathbf{\widetilde{w}}\rangle:=w_{1 } \widetilde{w}_{2}+\widetilde{w}_{1 }{w}_{2 } \end{equation} for $\mathbf{ w}=(w_{1},w_{2})$, $\mathbf{\widetilde{w}}=(\widetilde{w}_{1},\ \widetilde{w}_{2})\in \mathbb{C}^{2}$.
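Since the $\alpha$-planes constructed below are precisely the integral surfaces of the abelian pair $V_{0},V_{1}$, the bracket relations (\ref{eq:YYT}) and $[V_{0},V_{1}]=0$ can be checked symbolically before we proceed. The following sympy sketch is ours and purely illustrative; it is not needed for any proof:
\begin{verbatim}
import sympy as sp

y00, y10, y01, y11, t = sp.symbols('y00 y10 y01 y11 t')
f = sp.Function('f')(y00, y10, y01, y11, t)

# Left-invariant vector fields V_{AA'} acting on an expression g.
V00 = lambda g: sp.diff(g, y00) - y11 * sp.diff(g, t)
V01 = lambda g: sp.diff(g, y01) + y10 * sp.diff(g, t)
V10 = lambda g: sp.diff(g, y10) - y01 * sp.diff(g, t)
V11 = lambda g: sp.diff(g, y11) + y00 * sp.diff(g, t)

br = lambda X, Y: sp.expand(X(Y(f)) - Y(X(f)))

print(br(V00, V11) - 2 * sp.diff(f, t))  # 0, i.e. [V_00', V_11'] = 2T
print(br(V10, V01) - 2 * sp.diff(f, t))  # 0, i.e. [V_10', V_01'] = 2T
print(br(V00, V10), br(V00, V01), br(V10, V11), br(V01, V11))  # all 0

# For fixed (pi_0', pi_1'), the fields V_A = pi_0' V_A0' - pi_1' V_A1' commute.
p0, p1 = sp.symbols('p0 p1')
V0 = lambda g: p0 * V00(g) - p1 * V01(g)
V1 = lambda g: p0 * V10(g) - p1 * V11(g)
print(br(V0, V1))  # 0
\end{verbatim}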
If we denote $\mathbf{y}=(\mathbf{y}_{0'},\mathbf{y}_{1'})$ with $\mathbf{y}_{0'}=(y_{00'},y_{10'})$ and $\mathbf{y}_{1'}=(y_{01'},y_{11'})$, the multiplication (\ref{eq:w-multiplication}) of the Heisenberg group can also be written as \begin{equation} \label{eq:w-multiplication2} \begin{split}&(\mathbf{y}_{0'}, \mathbf{ y}_{1'},t ) \circ (\mathbf{\widetilde{y}}_{0'}, \mathbf{\widetilde{y}}_{1'},\widetilde{t})=\left ( \mathbf{y}_{0'}+\mathbf{\widetilde{y}}_{0'}, \mathbf{ y}_{1'}+\mathbf{\widetilde{y}}_{1'},t+\widetilde{t} + \langle\mathbf{ y}_{0'} , \mathbf{\widetilde{y}}_{1'}\rangle-\langle\mathbf{ y}_{1'} , \mathbf{\widetilde{y}}_{0'}\rangle\right). \end{split}\end{equation} Recall the {\it left translation}: for fixed $(\mathbf{y}',t')\in\mathscr{H},$ \begin{equation}\label{l-t} \tau_{(\mathbf{y}',t')}: (\mathbf{y}, t)\mapsto(\mathbf{y}',t')\cdot(\mathbf{y},t) ,\qquad\qquad (\mathbf{y},t)\in\mathscr{H} \end{equation} and the {\it dilation}: \begin{equation} \delta_{r}:(\mathbf{y},t)\mapsto(r\mathbf{y},r^{2}t), \end{equation} on the Heisenberg group. A vector field $V$ over $\mathscr{H}$ is called {\it left invariant} if for any $(\mathbf{y}',t')\in\mathscr{H}$, we have $$\tau_{(\mathbf{y}',t')*}V=V,$$ where $\tau_{(\mathbf{y}',t')}$ is the left translation in (\ref{l-t}). Define \begin{equation} \label{eq:left-invariant} (V_{AA'}f)(\mathbf{y} ,t ):=\left.\frac{\hbox{d}}{\hbox{d}s}f((\mathbf{y} ,t )(se_{AA'})) \right|_{s=0},\qquad (Tf)(\mathbf{y} ,t ):=\left.\frac{\hbox{d}}{\hbox{d}s}f\left((\mathbf{y},t )(se_{0})\right) \right|_{s=0} \end{equation} for $A=0,1$, $A'=0',1',$ where $e_{AA'}$ is the vector in $\mathbb{C}^{5}$ with all entries vanishing except the $(AA')$-entry, which is $1$, and $e_{0}=(0,0,0,0,1)$. For example, \begin{equation} \label{eq:left-invariant2}\begin{split} V_{00'}f:=\left.\frac{\hbox{d}}{\hbox{d}s}f\left((\mathbf{y} ,t )(se_{00'})\right) \right|_{s=0}&=\left.\frac{\hbox{d}}{\hbox{d}s}f\left((\mathbf{y} ,t )(s,0,0,0,0)\right) \right|_{s=0}\\ &=\left.\frac{\hbox{d}}{\hbox{d}s}f\left(y_{00'}+s,y_{10'},y_{01'},y_{11'},t-sy_{11'}\right) \right|_{s=0}\\ &=\left(\frac{\partial}{\partial{y_{00'}}}-y_{11'}\frac{\partial}{\partial t}\right)f. \end{split} \end{equation} We can describe the $\alpha$-planes, the integral surfaces of $V_{0}$ and $V_{1}$, explicitly as follows. $\mathbb{C}^{5}\times\mathbb{C}P^{1}$ is the complex manifold obtained from two coordinate charts $\mathbb{C}^{5}\times\mathbb{C}$ and $\mathbb{C}^{5}\times\mathbb{C}$, glued by the mapping $\kappa: \mathbb{C}^{5}\times(\mathbb{C}\setminus \{0\})\rightarrow\mathbb{C}^{5}\times(\mathbb{C}\setminus \{0\})$ given by \begin{equation}\label{trantran1}(\mathbf{y},\zeta)\longmapsto(\mathbf{y},\zeta^{-1}).
\end{equation} In these nonhomogeneous coordinates, $\tau:\mathbb{C}^{5}\times\mathbb{C}P^{1}\longrightarrow\mathbb{C}^{5}$ is given by $(\mathbf{y},\zeta)\longrightarrow\mathbf{y} $ and $(\mathbf{y},\widetilde{\zeta})\longrightarrow\mathbf{y} $, and the vector field $V_{A}$ in (\ref{keyvf}) can be rewritten as $$V_{A}=\pi_{1'}V_{A}^{\zeta}=\pi_{0'}\widetilde{V}_{A}^{\widetilde{\zeta}},$$ where $\zeta=\frac{\pi_{0'}}{\pi_{1'}}$, $\widetilde{\zeta}=\frac{\pi_{1'}}{\pi_{0'}}$ and $$V_{A}^{\zeta}=\zeta V_{A0'}-V_{A1'},\qquad\widetilde{V}_{A}^{\widetilde{\zeta}}= V_{A0'}-\widetilde{\zeta}V_{A1'}.$$ Let us check that the integral surfaces of $V_{A}^{\zeta}$ and $\widetilde{V}_{A}^{\widetilde{\zeta}}$ lifted to $\mathbb{C}^{5}\times\mathbb{C}P^{1}$ by $\tau$ are the fibers of the mappings $\eta:\mathbb{C}^{5}\times\mathbb{C}\rightarrow\mathbb{C}^{4}$ and $\widetilde{\eta}:\mathbb{C}^{5}\times\mathbb{C}\rightarrow\mathbb{C}^{4}$, respectively. \begin{prop} Let $\eta:\mathbb{C}^{5}\times \mathbb{C}\rightarrow W\cong\mathbb{C}^{4}$ be the mapping given by \begin{equation}\label{eq:psi} \mathbf{\omega}=\eta(\mathbf{y},t,\zeta) = \begin{pmatrix} \eta_{0}(\mathbf{y},t,\zeta)\\ \eta_{1}(\mathbf{y},t,\zeta)\\ \eta_{2}(\mathbf{y},t,\zeta)\\ \eta_{3}(\mathbf{y},t,\zeta) \end{pmatrix} = \begin{pmatrix} y_{00'}+\zeta y_{01'}\\ y_{10'}+\zeta y_{11'}\\ t-\langle \mathbf{y}_{0'}+\zeta\mathbf{y}_{1'},\mathbf{y}_{1'}\rangle\\ \zeta \end{pmatrix}\in W. \end{equation} Then $\tau\circ\eta^{-1}(\mathbf{\omega})$ is a $2$-D plane parameterized as \begin{equation}\label{eq:lienar eq0} \left\{\begin{array}{l} y_{01'}=s_{0},\\ y_{11'}=s_{1},\\ y_{00'}=\omega_{0}-\zeta s_{0},\\ y_{10'}=\omega_{1}-\zeta s_{1},\\ t=\omega_{2}+s_{1}\omega_{0}+s_{0}\omega_{1}, \end{array} \right. \end{equation} with parameters $s_{0},s_{1}\in \mathbb{C}$. $V_{0}^{\zeta}$ and $V_{1}^{\zeta}$ are tangential to this plane, and so it is an $\alpha$-plane. \end{prop} \begin{proof} It is direct to see that $V_{AA'}(y_{BB'})=\delta_{AB}\delta_{A'B'}$ by the expression of $V_{AA'}$'s in (\ref{eq:Y}). So we have $$V_{A}^{\zeta}(\eta_{j}(\mathbf{y},t,\zeta))=0,\quad j=0,1.$$ Noting that \begin{equation}\label{currr000}\langle \mathbf{y}_{0'}+\zeta\mathbf{y}_{1'},\mathbf{y}_{1'}\rangle=y_{00'}y_{11'}+y_{10'}y_{01'}+2\zeta y_{01'}y_{11'},\end{equation} we have \begin{equation*}\begin{split} V_{0}^{\zeta}(\eta_{2}(\mathbf{y},t,\zeta))& =\zeta\left(-y_{11'}-y_{11'}\right)-\left(y_{10'}-y_{10'}-2\zeta y_{11'}\right)=0,\\ V_{1}^{\zeta}(\eta_{2}(\mathbf{y},t,\zeta))& =\zeta\left(-y_{01'}-y_{01'}\right)-\left(y_{00'}-y_{00'}-2\zeta y_{01'}\right)=0. \end{split}\end{equation*} Thus $V_{0}^{\zeta}$ and $V_{1}^{\zeta}$ are tangential to each fiber of $\eta$. Note that for a fixed point $\mathbf{\omega}=(\omega_{0},\omega_{1},\omega_{2},\zeta)$ $\in W$, $\eta^{-1}(\mathbf{\omega})$ in $\mathbb{C}^{5}\times\mathbb{C}$ has fixed last coordinate $\zeta$. So $\tau\circ\eta^{-1}(\mathbf{\omega})$ is the plane determined by \begin{equation}\label{newlss} \eta_{0}(\mathbf{y},t,\zeta)=\omega_{0},\quad\eta_{1}(\mathbf{y},t,\zeta)=\omega_{1}, \quad\eta_{2}(\mathbf{y},t,\zeta)=\omega_{2}. \end{equation} The solutions of the linear equations $ \left\{\begin{array}{l} y_{00'}+\zeta y_{01'}=\omega_{0}\\ y_{10'}+\zeta y_{11'}=\omega_{1} \end{array} \right. $ are given by $y_{01'}=s_{0},\ y_{00'}=\omega_{0}-s_{0}\zeta,\ y_{11'}=s_{1},\ y_{10'}=\omega_{1}-s_{1}\zeta$ with parameters $s_{0},s_{1}\in \mathbb{C}$. Then $t$ is determined by the third equation of (\ref{eq:psi}), which gives the last equation in (\ref{eq:lienar eq0}).
\end{proof} On the other hand, if $\pi_{0'}\neq0,$ the integral surfaces of $\widetilde{V}_{0}^{\widetilde{\zeta}}$ and $\widetilde{V}_{1}^{\widetilde{\zeta}}$ are fibers of the mapping $\widetilde{\eta}:\mathbb{C}^{5}\times\mathbb{C}\longrightarrow \widetilde{W}\cong\mathbb{C}^{4}$ given by \begin{equation}\label{eq:psi2} \widetilde{\mathbf{\omega}}=\widetilde{\eta}(\mathbf{y},t,\widetilde{\zeta}) = \begin{pmatrix} \widetilde{\eta}_{0}(\mathbf{y},t,\widetilde{\zeta})\\ \widetilde{\eta}_{1}(\mathbf{y},t,\widetilde{\zeta})\\ \widetilde{\eta}_{2}(\mathbf{y},t,\widetilde{\zeta})\\ \widetilde{\eta}_{3}(\mathbf{y},t,\widetilde{\zeta}) \end{pmatrix} = \begin{pmatrix} \widetilde{\zeta}y_{00'}+y_{01'}\\ \widetilde{\zeta}y_{10'}+y_{11'}\\ t+\langle \widetilde{\zeta}\mathbf{y}_{0'}+\mathbf{y}_{1'},\mathbf{y}_{0'}\rangle\\ \widetilde{\zeta} \end{pmatrix}\in\widetilde{W}, \end{equation} and for $\widetilde{\mathbf{\omega}}=(\widetilde{\omega}_{0},\widetilde{\omega}_{1},\widetilde{\omega}_{2},\widetilde{\zeta})\in\widetilde{W}$, the $\alpha$-plane $\tau\circ\widetilde{\eta}^{-1}(\widetilde{\mathbf{\omega}})$ is given by \begin{equation}\label{eq:lienar eq20} \left\{\begin{array}{l} y_{00'}=\widetilde{s}_{0},\\ y_{10'}=\widetilde{s}_{1},\\ y_{01'}=\widetilde{\omega}_{0}-\widetilde{\zeta}\widetilde{s}_{0},\\ y_{11'}=\widetilde{\omega}_{1}-\widetilde{\zeta}\widetilde{s}_{1},\\ t=\widetilde{\omega}_{2}-\widetilde{s}_{1}\widetilde{\omega}_{0}-\widetilde{s}_{0}\widetilde{\omega}_{1}, \end{array} \right. \end{equation} with parameters $\widetilde{s}_{0},\widetilde{s}_{1}\in \mathbb{C}$. If $(\mathbf{y},t,\zeta)$ satisfies (\ref{newlss}) with $\omega_{0}$, $\omega_{1}$, $\omega_{2}$, $\zeta=\widetilde{\zeta}^{-1}\in\mathbb{C}$, then we have \begin{equation} \widetilde{\eta}_{A}(\mathbf{y},t,\widetilde{\zeta})=\zeta^{-1}\omega_{A},\quad A=0,1, \end{equation} and \begin{equation}\label{lastt11} \begin{split} \widetilde{\eta}_{2}(\mathbf{y},t,\widetilde{\zeta})&=t+\langle\ \zeta^{-1}\mathbf{y}_{0'}+\mathbf{y}_{1'},\mathbf{y}_{0'}\rangle\\ &=t+y_{01'}y_{10'}+y_{11'}y_{00'}+2\zeta^{-1}y_{00'}y_{10'}\\ &=t-\langle \mathbf{y}_{0'}+\zeta\mathbf{y}_{1'},\mathbf{y}_{1'}\rangle+2\zeta^{-1}(y_{00'}+\zeta y_{01'})(y_{10'}+\zeta y_{11'})\\ &=\omega_{2}+2\zeta^{-1}\omega_{0}\omega_{1}, \end{split} \end{equation} by (\ref{currr000}). So $\kappa(\mathbf{y},t,\zeta)=(\mathbf{y},t,\widetilde{\zeta})$ maps a fiber of $\eta$ over $(\omega_{0},\omega_{1},\omega_{2},\zeta)$ to a fiber of $\widetilde{\eta}$ over $(\widetilde{\omega}_{0},\widetilde{\omega}_{1},\widetilde{\omega}_{2},\widetilde{\zeta})$ with the mapping \begin{equation}\label{transi1}\begin{split} \Phi:\quad W\setminus\{\zeta=0\}&\rightarrow \widetilde{W}\setminus\{\widetilde{\zeta}=0\}\\ (\omega_0,\omega_1,\omega_2,\zeta)&\mapsto(\widetilde{\omega}_{0},\widetilde{\omega}_{1},\widetilde{\omega}_{2},\widetilde{\zeta})=(\zeta^{-1}\omega_0, \zeta^{-1}\omega_1,\omega_2+2\zeta^{-1}\omega_{0}\omega_{1} ,\zeta^{-1}), \end{split}\end{equation} which glues $W$ and $\widetilde{W}$ to get a complex manifold $\mathcal{P}$. It is the moduli space of all $\alpha$-planes, which is our twistor space.
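As a sanity check (a sketch assuming Python with sympy), one can substitute the parameterization (\ref{eq:lienar eq0}) into (\ref{eq:psi}) and confirm that it sweeps out precisely the fiber of $\eta$ over $(\omega_{0},\omega_{1},\omega_{2},\zeta)$, and likewise recover the third component of the gluing map $\Phi$ in (\ref{transi1}):
\begin{verbatim}
# Sketch (Python/sympy): the alpha-plane (eq:lienar eq0) is the eta-fiber
# over (w0, w1, w2, zeta), and the chart transition matches Phi.
import sympy as sp

w0, w1, w2, z, s0, s1 = sp.symbols("w0 w1 w2 zeta s0 s1")
pair = lambda a, b: a[0]*b[1] + b[0]*a[1]

y00, y10 = w0 - z*s0, w1 - z*s1          # the plane (eq:lienar eq0)
y01, y11 = s0, s1
t = w2 + s1*w0 + s0*w1

eta = [y00 + z*y01, y10 + z*y11,
       t - pair([y00 + z*y01, y10 + z*y11], [y01, y11]), z]
assert [sp.expand(e) for e in eta] == [w0, w1, w2, z]   # s0, s1 drop out

etat2 = t + pair([y00/z + y01, y10/z + y11], [y00, y10])  # eta~_2
assert sp.simplify(etat2 - (w2 + 2*w0*w1/z)) == 0   # third entry of Phi
\end{verbatim}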
Moreover, we have the commutative diagram \begin{equation}\label{dia11}\xymatrix{ \mathbb{C}^{5}\times\mathbb{C}\ar[r]^-{\kappa}\ar[d]^{\eta}& \mathbb{C}^{5}\times\mathbb{C}\ar[d]^{\widetilde{\eta}}& \\ W\ar[r]_{\Phi}&\widetilde{W}& } \end{equation} In fact, for $\left(\mathbf{y}_{0'},\mathbf{y}_{1'},t,\zeta\right)\in\mathbb{C}^{5}\times\mathbb{C}$, \begin{equation}\begin{split} \Phi\circ\eta\left(\mathbf{y}_{0'},\mathbf{y}_{1'},t,\zeta\right) &=\Phi\left(\mathbf{y}_{0'}+\zeta\mathbf{y}_{1'},t-\langle \mathbf{y}_{0'}+\zeta\mathbf{y}_{1'},\mathbf{y}_{1'}\rangle,\zeta\right)\\ &=\left(\zeta^{-1}\mathbf{y}_{0'}+\mathbf{y}_{1'},t+\langle \zeta^{-1}\mathbf{y}_{0'}+\mathbf{y}_{1'},\mathbf{y}_{0'}\rangle,\zeta^{-1}\right)\\ &=\widetilde{\eta}\circ\kappa\left(\mathbf{y}_{0'},\mathbf{y}_{1'},t,\zeta\right), \end{split}\end{equation} by (\ref{lastt11}). Thus $\eta$ and $\widetilde{\eta}$ are glued to give the mapping $\mathbb{C}^{5}\times \mathbb{C}P^{1}\longrightarrow\mathcal{P}$ in the double fibration (\ref{TOH}). \subsection{ASD Equation} Let the $1$-forms $\{\theta^{AA'},\theta\}$ be dual to the left invariant vector fields $\{V_{AA'},T\}$ in (\ref{eq:Y}) on $\mathscr H$, i.e. $ \theta^{AA'}(V_{BB'})=\delta_{AB}\delta_{A'B'},\ \theta^{AA'}(T)=0,\ \theta(V_{BB'})=0, \ \theta(T)=1, $ where $A,B=0,1$ and $A',B'=0',1'$. Then ${\rm{d}}u= \sum_{A,A'} V_{AA'}u\cdot\theta^{AA'}+Tu\cdot\theta$ for a function $u$ on $\mathscr H$. By the expression of $V_{AA'}$ in (\ref{eq:Y}), we get that $ \theta^{AA'}={\rm{d}}y_{AA'},\ \theta={\rm{d}}t+y_{11'}{\rm{d}}y_{00'}+y_{01'}{\rm{d}}y_{10'}-y_{10'}{\rm{d}}y_{01'}-y_{00'}{\rm{d}}y_{11'}. $ Exterior differentiation gives us ${\rm{d}}\theta^{AA'}=0$ $(A=0,1, A'=0',1')$ and ${\rm{d}}\theta=-2\theta^{00'}\wedge \theta^{11'}-2\theta^{10'}\wedge \theta^{01'}.$ The curvature of the connection form $\Phi$ is $F =({\rm{d}}+\Phi)^2={\rm{d}}\Phi+\Phi\wedge \Phi$ given by \begin{equation}\label{eq:curvature}\begin{split} F(X,Y)&={\rm{d}}\Phi(X,Y)+\Phi\wedge \Phi(X,Y)\\&=X(\Phi(Y))-Y(\Phi(X))- \Phi([X,Y]) +\Phi(X)\Phi(Y)-\Phi(Y)\Phi(X) \\&=X\Phi_Y-Y \Phi_X- \Phi_{[X,Y]} +[\Phi_X,\Phi_Y]. \end{split} \end{equation}Here we use the notation $\Phi_X:=\Phi(X)$ for a vector field $X$ on $\mathscr H$. Define the $\mathfrak g$-valued differential operators associated to the connection form $\Phi$ \begin{equation}\label{eq:Dj}\begin{split}& \nabla_A =\nabla_{A1'}-\zeta \nabla_{A0'}:=(V_{A1'}+\Phi_{A1'})-\zeta(V_{A0'}+\Phi_{A0'}), \end{split}\end{equation} $A=0,1,$ for fixed $\zeta\in\mathbb{C}$. A connection on $\mathscr{H}$ is {\it ASD } if its curvature vanishes along each $\alpha$-plane, i.e. \begin{equation}\label{eq:yang} F(V_0^{\zeta},V_1^{\zeta})= 0. \end{equation} The ASD condition (\ref{eq:yang}) is equivalent to \begin{equation*} \zeta^{2}F(V_{00'},V_{10'})-\zeta(F(V_{00'},V_{11'})+F(V_{01'},V_{10'}))+F(V_{01'},V_{11'})=0. \end{equation*} Comparing the coefficients of $\zeta^{2}$, $\zeta^{1}$ and $\zeta^{0}$, we get \begin{equation}\label{eq:Y5}F( V_{00'},V_{10'})=0,\quad F( V_{00'},V_{11'})+F( V_{01'},V_{10'})=0,\quad F( V_{01'},V_{11'})=0,\end{equation} which is equivalent to (\ref{eq:ASD}) by (\ref{eq:curvature}). Here $\Phi_{[V_{00'},V_{11'}]}+\Phi_{[V_{01'},V_{10'}]}=0$ by the brackets in (\ref{eq:YYT}). \section{Penrose-Ward correspondence and Atiyah-Ward ansatz on the $5$D Heisenberg group} \subsection{ Proof of Theorem \ref{pwc}} The proof is similar to the classical case (cf. \cite{MR1054377}).
The objects in (1) and (2) are regarded as being specified modulo the usual equivalence relations. Given a ${\rm GL}(n,\mathbb{C})$ vector bundle $V$ with an ASD connection $\nabla$ over $U\subseteq\mathbb{C}^{5}$, to construct a vector bundle $E$ over $\hat{U}$, we assign a copy of the vector space $\mathbb{C}^{n}$ to each point $Z$ of $\hat{U}$ with \begin{equation}\label{NBA} E_{Z}=\{\psi:\nabla\psi|_{\widetilde{Z}\bigcap U}=0\} \end{equation} over $Z$, where $\widetilde{Z}=\tau\circ\eta^{-1}(Z).$ As $\nabla$ is an ASD connection, for fixed $\zeta\in\mathbb{C}$, the integrability condition $F(V_0,V_1)=0$ implies the existence of solutions to the equations $(V_{A}+\Phi_{A})h=0,$ $A=0,1$, on the connected and simply connected domain $\widetilde{Z}\bigcap U$. So we have $E_{Z}\neq\emptyset$. Since the whole procedure is holomorphic, we have constructed a holomorphic vector bundle $E$ over $\hat{U}$. By the construction (\ref{NBA}), the bundle $E$ is trivial when restricted to the projective line $\hat{x}$ for a point $x$ in $U$. This is because a vector $\psi\in V_{x}$ at $x$ determines a parallel field $\psi$ on each $\alpha$-plane through $x$, and hence determines a point in $E_{Z}$ for every point $Z$ on the line $\hat{x}$ in $\hat{U}$. Namely, each $\psi\in V_{x}\cong\mathbb{C}^{n}$ determines a holomorphic section of $E|_{\hat{x}}$. Therefore $n$ linearly independent $\psi$'s in $V_{x}$ give us $n$ linearly independent sections of $E|_{\hat{x}}$. So $E|_{\hat{x}}$ is trivial. This completes the proof of the first part of the theorem. Conversely, let $E'$ be a holomorphic rank-$n$ vector bundle over $\hat{U}$, such that $E'|_{\hat{x}}$ is trivial for all $x\in U$. We have to construct a connection form $\Phi$ on $U$. Since $\hat{U}$ is covered by two charts $\hat{U}\cap W$ and $\hat{U}\cap\widetilde{W}$ in (\ref{transi1}), the vector bundle $E'$ consists of two parts, $E'|_{\hat{U}\cap W}\cong (\hat{U}\cap W)\times\mathbb{C}^{n}$ and $E'|_{\hat{U}\cap\widetilde{W}}\cong (\hat{U}\cap\widetilde{W})\times\mathbb{C}^{n}$, glued by a holomorphic $n\times n$ transition matrix $F$ on the intersection $\hat{U}\cap W\cap \widetilde{W}$. The transition relation is $$\widetilde{\xi}=F\xi,$$ where $\widetilde{\xi}$ and $\xi$ are column $n$-vectors whose components serve as coordinates on the fibers of $E'$ above $\hat{U}\cap\widetilde{W}$ and $\hat{U}\cap W$, respectively. Consider the pull-back $\eta^{*}E'$, which is a bundle over $\mathcal{F}_{U}=U\times\mathbb{C}P^{1}$, where $\eta$ is given by (\ref{eq:psi}) and (\ref{eq:psi2}). For a point $Z\in\hat{U}$, the restriction of $\eta^{*}E'$ to $\eta^{-1}(Z)\subset\mathcal{F}_{U}$ is a product bundle. We define a bundle $E\longrightarrow U$ by $E_{x}=\Gamma(\hat{x},E')$, where $\Gamma$ denotes the space of holomorphic sections. Moreover, we have $\eta^{*}E'=\tau^{*}E.$ Recall that the tangent spaces of the leaves of the projection $\eta :\mathcal{F}_{U}\longrightarrow\hat{U}$ are spanned by the vector fields $V_{0}$ and $V_{1}$ on $\mathcal{F}$. We define a partial connection $\nabla$ that allows us to differentiate the sections of $\eta^{*}E'$ along the fibers, where we have $\nabla_{V_{A}}s=V_{A}s$ in the trivialization $\eta^{*}E'|_{\eta^{-1}(Z)}=\eta^{-1}(Z)\times E'_{Z}$. The sections for which $\nabla_{V_{A}}s$ vanishes are the pull-backs to $\mathcal{F}$ of local sections of $E'$. We now pick a local trivialization of $E$ over the open subset $U$.
This determines a local trivialization of $\eta^{*}E'$, in which $$\nabla_{A}=V_{A}+\Phi_{A},\quad A=0,1,$$ for some matrix-valued functions $\Phi_{A}$ of $\zeta$ and $(\mathbf{y},t)$. Note that $\eta^{*}E'|_{U\times O_0}\cong U\times O_0\times\mathbb{C}^{n}$ and $\eta^{*}E'|_{U\times O_1}\cong U\times O_1\times\mathbb{C}^{n}$, where $O_0\cong\mathbb{C}$ and $O_1\cong\mathbb{C}$ form a covering of $\mathbb{CP}^1$. Their transition function $G$ over the intersection $U\times\left(\mathbb{C}\setminus\{0\}\right)$ is given by \begin{equation*} G(\mathbf{y},t,\zeta)=F\left(\mathbf{y}_{0'}+\zeta\mathbf{y}_{1'},t-\langle \mathbf{y}_{0'}+\zeta \mathbf{y}_{1'},\mathbf{y}_{1'}\rangle,\zeta\right). \end{equation*} Now let us find the holomorphic sections of $E'|_{\hat{x}}$. Find nonsingular $n\times n$ matrices $f$ and $\widetilde{f}$, with $f$ holomorphic over $W\cap\hat{x}$ and $\widetilde{f}$ holomorphic over $\widetilde{W}\cap{\hat{x}}$, such that $F=\widetilde{f}^{-1}f$ is valid on $W\cap \widetilde{W}\cap \hat{x}$. Since $E'|_{\hat{x}}$ is trivial, by the {\it Birkhoff factorization}, such matrices $f$ and $\widetilde{f}$ must exist. Each section of $E'|_{\hat{x}}$ is then given by $\xi=f^{-1}\psi$, $\widetilde{\xi}=\widetilde{f}^{-1}\psi$, where $\psi$ is a constant $n$-vector, i.e. $\psi\in\mathbb{C}^{n}.$ Now we identify $\widetilde{f}$ and $f$ with their pull-backs to $U\times\mathbb{CP}^1$. So we have that \begin{equation}\label{kpw}\begin{split} 0&=\nabla_{A}\psi=(-V_{A1'}+\zeta V_{A0'})\psi+(-\Phi_{A1'}+\zeta \Phi_{A0'})\psi\\ &=(-V_{A1'}f+\zeta V_{A0'}f)\xi+(-\Phi_{A1'}+\zeta \Phi_{A0'})\psi\\ &=-(V_{A1'}f\cdot f^{-1}+\Phi_{A1'})\psi+\zeta(V_{A0'}f\cdot f^{-1}+\Phi_{A0'})\psi. \end{split} \end{equation} So we have $\Phi_{AA'}=-V_{AA'}f\cdot f^{-1}.$ Moreover, since $V_{A}$ is tangential to the fiber of $\eta$, we have \begin{equation}\label{kpw2}\begin{split} 0&=V_{A}G =V_{A}(\widetilde{f}^{-1}f)=-\widetilde{f}^{-1}V_{A}\widetilde{f}\cdot\widetilde{f}^{-1}f+\widetilde{f}^{-1}V_{A}f\\ &=\widetilde{f}^{-1}(V_{A}f\cdot f^{-1}-V_{A}\widetilde{f}\cdot\widetilde{f}^{-1})f. \end{split} \end{equation} Hence $V_{A}f\cdot f^{-1}=V_{A}\widetilde{f}\cdot\widetilde{f}^{-1}$. By Liouville's theorem, both sides must be of the form $-\Phi_{A1'}+ \zeta\Phi_{A0'}$ for $A=0,1$, where $\Phi_{A0'}$ and $\Phi_{A1'}$ are matrix-valued functions over $\mathbb{C}^{5}$. Therefore \begin{equation*} V_{A1'}f\cdot f^{-1}-\zeta V_{A0'}f\cdot f^{-1}=-\Phi_{A1'}+ \zeta\Phi_{A0'}, \end{equation*} i.e. \begin{equation}\label{eq:asd-eq}\begin{split}& ( V_{A1'}+\Phi_{A1'})f-\zeta ( V_{A0'}+\Phi_{A0'})f=0. \end{split}\end{equation} Thus $f$ is the solution to \begin{equation*} \nabla_{A1'}f-\zeta \nabla_{A0'}f=0, \end{equation*} $A=0,1$. Consequently, we have $F(V_{0},V_{1})=0,$ i.e. $\nabla$ is an ASD connection. Similarly, $\widetilde{f}$ is a solution to \begin{equation}\label{eq:asd-eq2}\begin{split}& ( V_{A0'}+\Phi_{A0'})\widetilde{f}-\widetilde{\zeta}( V_{A1'}+\Phi_{A1'})\widetilde{f}=0, \end{split}\end{equation} i.e. $\nabla_{A0'}\widetilde{f}-\widetilde{\zeta}\nabla_{A1'}\widetilde{f}=0, $ where $A=0,1$. From (\ref{eq:asd-eq}) and (\ref{eq:asd-eq2}), we have \begin{equation}\label{eq:asd-eq3}\begin{split} &\Phi_{A1'}=-V_{A1'}f\cdot f^{-1}|_{\zeta=0},\qquad \Phi_{A0'}=-V_{A0'}\widetilde{f}\cdot\widetilde{f}^{-1}|_{\widetilde{\zeta}=0}, \end{split}\end{equation} where $A=0,1$.
So the connection $\Phi$ has the form \begin{equation}\label{eq:asd-eq4} \Phi=-{\rm d}_{1}f\cdot f^{-1}|_{\zeta=0}-{\rm d}_{0}\widetilde{f}\cdot \widetilde{f}^{-1}|_{\widetilde{\zeta}=0}+\Phi_{T}\theta, \end{equation} which is ASD. \qed \subsection{Atiyah-Ward ansatz for ${\rm GL}(2,\mathbb{C})$ ASD connections on the 5D Heisenberg group} By the construction in the proof of Theorem \ref{pwc}, we get a ${\rm GL}(2,\mathbb{C})$ ASD connection if we find a transition matrix for a rank-2 bundle over $\hat{U}$, which is trivial along the projective line $\hat{x}=\eta\circ\tau^{-1}(x)$ for each $x\in U\subseteq\mathscr H$. Namely, we need to find a $2\times2$ matrix $$G(\mathbf{y},t,\zeta)=\left(\begin{array} {cc} \zeta&\gamma(\mathbf{y},t,\zeta)\\0&\zeta^{-1} \end{array}\right)$$ defined over $U\times \left(\mathbb{C}\setminus\{0\}\right)$ (the intersection of the two coordinate charts of $\mathbb{C}^{5}\times \mathbb{C}P^{1}$), which is constant along the fibers of $\eta$ and trivial over $\hat{\mathbf{x}}$ for each $\mathbf{x}=(\mathbf{y},t)\in U\subseteq\mathbb{C}^{5}$. It defines a function $\hat{G}$ on $\hat{U}\cap W\cap\widetilde{W}$ by $$G(\mathbf{y},t,\zeta)= \hat{G}\left(\mathbf{y}_{0'}+\zeta \mathbf{y}_{1'},t-\langle \mathbf{y}_{0'}+\zeta \mathbf{y}_{1'},\mathbf{y}_{1'}\rangle,\zeta\right).$$ This is called the {\it Atiyah-Ward ansatz}. In terms of the Laurent series in $\zeta$, we write \begin{equation}\label{cure112233000} \gamma=\sum_{i=-\infty}^{+\infty} \gamma_{-i}\zeta^{i}=\gamma_{-}+\gamma_{0}+\gamma_{+}, \end{equation} where $\gamma_{-}:=\sum_{i=1}^{\infty} \gamma_{i}\zeta^{-i}$, $\gamma_{+}:=\sum_{i=1}^{\infty} \gamma_{-i}\zeta^{i}$. Since $\gamma$ needs to be constant along each fiber of $\eta$, we have $V_{A}\gamma\equiv0$, $A=0,1,$ i.e. \begin{equation}\label{eq:induct} V_{A1'}\gamma_{i}=V_{A0'}\gamma_{i+1},\quad A=0,1,\quad i=\cdots,-1,0,1,\cdots. \end{equation} \begin{lem}\label{Poin} Suppose that $U\subseteq\mathbb{C}^{5}$ is elementary and the one-form $\omega=g_{0}\theta^{00'}+g_{1}\theta^{10'}$ (or $\omega=g_{0}\theta^{01'}+g_{1}\theta^{11'}$) with $g_{0},g_{1}\in \mathcal{O}(U)$ is $\hbox{d}_{0}$-closed (or $\hbox{d}_{1}$-closed), i.e. $\hbox{d}_{0}\omega=0$ (or $\hbox{d}_{1}\omega=0$). Then there exists a function $f\in \mathcal{O}(U)$, such that $\hbox{d}_{0}f=\omega$ (or $\hbox{d}_{1}f=\omega$). \end{lem} \begin{proof} By definition, $\hbox{d}_{0}\omega=0$ is equivalent to \begin{equation}\label{eq:Y3} V_{10'}g_{0}=V_{00'}g_{1}, \end{equation} and $\hbox{d}_{0}f=\omega$ is equivalent to \begin{equation}\label{eq:Y4} V_{00'}f=g_{0}\quad \text{and}\quad V_{10'}f=g_{1}. \end{equation} If we take the coordinate transformation $\Psi:\mathbb{C}^{5}\longrightarrow\mathbb{C}^{5}$ given by $$(y_{00'},y_{10'},y_{01'},y_{11'},t):=\Psi(z_{0},z_{1},z_{2},z_{3},z_{4})=\left(z_{0},z_{1},z_{2},z_{3},z_{4}-z_{0}z_{3} -z_{1}z_{2}\right) ,$$ we have $$\Psi_{*}\frac{\partial}{\partial{z_{0}}}=V_{00'},\qquad \Psi_{*}\frac{\partial}{\partial{z_{1}}}=V_{10'},$$ by the expression of $V_{AA'}$ in (\ref{eq:Y}). Take $G_{A}=g_{A}\circ\Psi$, $A=0,1,$ and $F=f\circ \Psi$. Then by pulling back, we need to solve \begin{equation}\label{eq:poincare} \frac{\partial F}{\partial z_{0}}=G_{0}, \quad \frac{\partial F}{\partial z_{1}}=G_{1}. \end{equation} Under the condition $\frac{\partial G_{0}}{\partial z_{1}}=\frac{\partial G_{1}}{\partial z_{0}}$, it follows from Poincar\'e's lemma that (\ref{eq:poincare}) has a solution. Therefore, $f=F\circ\Psi^{-1}$ is the solution to (\ref{eq:Y4}).
\end{proof} \begin{lem} Suppose $\gamma_{0}$ satisfies the equation (\ref{eq:compatible-}), i.e. $\Delta_{b}\gamma_{0}=0$; then (\ref{eq:induct}) is solvable. \end{lem} \begin{proof} Inductively, for fixed $i=0,-1,\cdots$, assuming that there exist $\gamma_{i}$ and $\gamma_{i-1}$ satisfying \begin{equation}\label{eq:orig} \left\{\begin{array}{l} V_{01'}\gamma_{i-1}=V_{00'}\gamma_{i},\\ V_{11'}\gamma_{i-1}=V_{10'}\gamma_{i}, \end{array} \right. \end{equation} we need to find $\gamma_{i-2}$ such that \begin{equation}\label{eq:Ward} \left\{\begin{array}{l} V_{01'}\gamma_{i-2}=V_{00'}\gamma_{i-1},\\ V_{11'}\gamma_{i-2}=V_{10'}\gamma_{i-1}. \end{array} \right. \end{equation} Denote \begin{equation*} \Lambda_{i-1}:=V_{00'}\gamma_{i-1}\theta^{01'}+V_{10'}\gamma_{i-1}\theta^{11'}. \end{equation*} Then $\Lambda_{i-1}$ is ${\rm d}_{1}$-closed, since \begin{equation*} \begin{split} {\rm d}_{1}\Lambda_{i-1}& =(V_{01'}V_{10'}-V_{11'}V_{00'})\gamma_{i-1}\theta^{01'}\wedge\theta^{11'} =(V_{10'}V_{01'}-V_{00'}V_{11'})\gamma_{i-1}\theta^{01'}\wedge\theta^{11'}\\& =(V_{10'}V_{00'}-V_{00'}V_{10'})\gamma_{i}\theta^{01'}\wedge\theta^{11'} =0, \end{split} \end{equation*} by using $[V_{00'},V_{11'} ]=[V_{10'},V_{01'} ]=T$ in (\ref{eq:YYT}) in the second identity, and (\ref{eq:orig}) and $[V_{00'},V_{10'} ]=0$ in the last identity. It follows from Lemma \ref{Poin} that there exists $\gamma_{i-2}$ such that ${\rm d}_{1}\gamma_{i-2}=\Lambda_{i-1}$, i.e. equation (\ref{eq:Ward}) is satisfied. So solving the equations (\ref{eq:induct}) for $i=\cdots,-1,0$ is reduced by induction to the single equation ${\rm d}_{1}\Lambda_{0}=0$. But \begin{equation*}\begin{split} {\rm d}_{1}\Lambda_{0}=&\left(V_{01'}V_{10'}-V_{11'}V_{00'}\right)\gamma_{0}\theta^{01'}\wedge\theta^{11'}\\ =&\left(V_{10'}V_{01'}-V_{00'}V_{11'}\right)\gamma_{0}\theta^{01'}\wedge\theta^{11'}\\ =&-\Delta_{b}\gamma_{0}\theta^{01'}\wedge\theta^{11'}=0. \end{split}\end{equation*} Hence, (\ref{eq:induct}) is solvable for $i=\cdots,-1,0$. On the other hand, for $i=0,1,\cdots$, if there exist $\gamma_{i},\gamma_{i+1}$ such that \begin{equation}\label{eq:ori} \left\{\begin{array}{l} V_{01'}\gamma_{i}=V_{00'}\gamma_{i+1},\\ V_{11'}\gamma_{i}=V_{10'}\gamma_{i+1}, \end{array} \right. \end{equation} we need to show that \begin{equation}\label{eq:Ward-A1} \left\{\begin{array}{l} V_{00'}\gamma_{i+2}=V_{01'}\gamma_{i+1},\\ V_{10'}\gamma_{i+2}=V_{11'}\gamma_{i+1} \end{array} \right. \end{equation} has a solution $\gamma_{i+2}$. Denote \begin{equation*} \widetilde{\Lambda}_{i+1}:=V_{01'}\gamma_{i+1}\theta^{00'}+V_{11'}\gamma_{i+1}\theta^{10'}. \end{equation*} As above, $\widetilde{\Lambda}_{i+1}$ is ${\rm d}_{0}$-closed, since \begin{equation*} \begin{split} {\rm d}_{0}\widetilde{\Lambda}_{i+1}& =(V_{00'}V_{11'}-V_{10'}V_{01'})\gamma_{i+1}\theta^{00'}\wedge\theta^{10'}=(V_{11'}V_{00'}-V_{01'}V_{10'})\gamma_{i+1}\theta^{00'}\wedge\theta^{10'}\\& =(V_{11'}V_{01'}-V_{01'}V_{11'})\gamma_{i}\theta^{00'}\wedge\theta^{10'} =0, \end{split} \end{equation*} by using $[V_{00'},V_{11'} ]=[V_{10'},V_{01'} ]=T$ in the second identity, and (\ref{eq:ori}) and $[V_{11'},V_{01'} ]=0$ in the third identity. By Lemma \ref{Poin} again, there exists $\gamma_{i+2}$ such that ${\rm d}_{0}\gamma_{i+2}=\widetilde{\Lambda}_{i+1}$, i.e. the equation (\ref{eq:Ward-A1}) is solvable.
The equation (\ref{eq:induct}), for $i=0,1,\cdots$, is also reduced to the equation $$0={\rm d}_{0}\widetilde{\Lambda}_{0}=(V_{00'}V_{11'}-V_{10'}V_{01'})\gamma_{0}\theta^{00'}\wedge\theta^{10'}=\Delta_{b}\gamma_{0}\theta^{00'}\wedge\theta^{10'}.$$ It holds since $\Delta_{b}\gamma_{0}=0$. The lemma is proved. \end{proof} {\it Proof of Theorem \ref{ASDA}. (1)} As in the classical case (cf. e.g. \cite[Example 10.1.2]{Mason1}), we take the Birkhoff decomposition \begin{equation}\label{cur111000} G(\mathbf{y},t,\zeta)=\left(\begin{array} {cc} \zeta&\gamma(\mathbf{y},t,\zeta)\\0&\zeta^{-1} \end{array}\right)=\widetilde{ f}^{-1}f, \end{equation} with \begin{equation}\label{cure112233}\begin{split} f&=\frac{1}{\sqrt{\varphi}}\left(\begin{array} {cc} \zeta&\varphi+\gamma_{+}\\-1&-\zeta^{-1}\gamma_{+} \end{array}\right) \in\mathcal{O}(U\times \mathbb{C}),\\ \widetilde{ f}&=\frac{1}{\sqrt{\varphi}}\left(\begin{array} {cc} 1&-{\zeta}\gamma_{-}\\-{\zeta}^{-1}&\varphi+\gamma_{-} \end{array}\right) \in\mathcal{O}(U\times \mathbb{C}), \end{split} \end{equation} where the two copies of $U\times\mathbb{C}$ are the local coordinate charts of $U\times\mathbb{C}P^{1}$. Their inverses are \begin{equation}\label{cure1122330}\begin{split} f^{-1}&=\sqrt{\varphi}\left(\begin{array} {cc} -\zeta^{-1}\varphi^{-1}\gamma_{+}&-1-\gamma_{+}\varphi^{-1}\\ \varphi^{-1}&\zeta\varphi^{-1} \end{array}\right),\\ \widetilde{ f}^{-1}&=\sqrt{\varphi}\left(\begin{array} {cc} 1+\varphi^{-1}\gamma_{-}&\zeta\gamma_{-}\varphi^{-1}\\ \varphi^{-1}\zeta^{-1}&\varphi^{-1} \end{array}\right), \end{split}\end{equation} respectively. It is direct to check (\ref{cur111000}) by $\widetilde{ f}^{-1}$ and $f$ in (\ref{cure112233})-(\ref{cure1122330}), if $\gamma_{\pm}$ satisfy (\ref{cure112233000}) and $\gamma_{0}=\varphi$. Then \begin{equation} h: =f|_{\zeta=0}=\frac{1}{\sqrt{\varphi}}\left(\begin{array} {cc} 0&\varphi\\-1&-\gamma_{-1} \end{array}\right),\qquad \widetilde{h}: =\widetilde{ f}|_{\widetilde{\zeta}=0}=\frac{1}{\sqrt{\varphi}}\left(\begin{array} {cc} 1&-\gamma_{1}\\0&\varphi \end{array}\right). \end{equation} By (\ref{d-1-2}) and (\ref{eq:induct}), we have ${\rm d}_{1}\gamma_{-1}=V_{00'}\varphi\cdot\theta^{01'}+V_{10'}\varphi\cdot\theta^{11'}$ and ${\rm d}_{0}\gamma_{1}=V_{01'}\varphi\cdot\theta^{00'}+V_{11'}\varphi\cdot\theta^{10'}$, and \begin{equation} h^{-1}=\sqrt{\varphi}\left(\begin{array} {cc} -\varphi^{-1}\gamma_{-1}&-1\\ \varphi^{-1}&0 \end{array}\right),\qquad \widetilde{h}^{-1}=\sqrt{\varphi}\left(\begin{array} {cc} 1&\gamma_{1}\varphi^{-1}\\0&\varphi^{-1} \end{array}\right). \end{equation} It is direct to check that \begin{equation}\label{ASDA1} \begin{split} \Phi & =-{\rm d}_{1}{h} \cdot h^{-1}-{\rm d}_{0}{\widetilde{h}} \cdot \widetilde{h}^{-1}+\Phi_{T}\theta\\& =\left[\begin{array} {cc} \frac{1}{2}(-{\rm d}_{1}\ln{\varphi}+{\rm d}_{0}\ln{\varphi}) & V_{01'}(\ln\varphi)\theta^{00'}+V_{11'}(\ln\varphi)\theta^{10'}\\ V_{00'}(\ln\varphi)\theta^{01'}+V_{10'}(\ln\varphi)\theta^{11'} &\ \frac{1}{2}({\rm d}_{1}\ln{\varphi}-{\rm d}_{0} \ln{\varphi})\end{array}\right]+\Phi_{T}\theta, \end{split} \end{equation} which is ASD by (\ref{eq:asd-eq4}) and Theorem \ref{pwc}.
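The factorization in part (1) can also be checked symbolically: treating $\varphi$, $\gamma_{+}$, $\gamma_{-}$ as formal symbols with $\gamma=\gamma_{-}+\varphi+\gamma_{+}$, the identity (\ref{cur111000}) is purely algebraic. A sketch (assuming Python with sympy):
\begin{verbatim}
# Sketch (Python/sympy): verify the Birkhoff factorization (cur111000)
# with gamma_-, gamma_0 = phi, gamma_+ treated as formal symbols.
import sympy as sp

z, phi, gp, gm = sp.symbols("zeta phi gamma_plus gamma_minus")
gamma = gm + phi + gp

G  = sp.Matrix([[z, gamma], [0, 1/z]])
f  = sp.Matrix([[z, phi + gp], [-1, -gp/z]]) / sp.sqrt(phi)
ft = sp.Matrix([[1, -z*gm], [-1/z, phi + gm]]) / sp.sqrt(phi)

assert sp.simplify(f.det()) == 1 and sp.simplify(ft.det()) == 1
assert sp.simplify(ft.inv() * f - G) == sp.zeros(2, 2)
\end{verbatim}
Of course, in the proof $\gamma_{\pm}$ are the specific Laurent tails of $\gamma$, which is what guarantees that $f$ and $\widetilde{f}$ are holomorphic on their respective charts.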
(2)\quad Noting that $V_{AA'}(y_{BB'})=\delta_{AB}\delta_{A'B'}$ and $\left\|\mathbf{ y}\right\|^{2}=y_{00'}y_{11'}-y_{10'}y_{01'}$, we have $$V_{00'}\left\|\mathbf{ y}\right\|^{2}=y_{11'},\quad V_{01'}\left\|\mathbf{ y}\right\|^{2}=-y_{10'},\quad V_{10'}\left\|\mathbf{ y}\right\|^{2}=-y_{01'},\quad V_{11'}\left\|\mathbf{ y}\right\|^{2}=y_{00'}.$$ It is direct to check that \begin{equation*}\begin{split} &V_{00'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})=2y_{11'}(\left\|\mathbf{ y}\right\|^{2}+t),\\ &V_{01'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})=-2y_{10'}(\left\|\mathbf{ y}\right\|^{2}+t),\\ &V_{10'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})=2y_{01'}(-\left\|\mathbf{ y}\right\|^{2}+t),\\ &V_{11'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})=2y_{00'}(\left\|\mathbf{ y}\right\|^{2}-t), \end{split}\end{equation*} and \begin{equation*}\begin{split} &V_{10'}V_{01'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})=-2\left\|\mathbf{ y}\right\|^{2}-2t+4y_{10'}y_{01'},\\ &V_{00'}V_{11'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})=2\left\|\mathbf{ y}\right\|^{2}-2t+4y_{00'}y_{11'}. \end{split}\end{equation*} Consequently, for $\varphi=\frac{1}{\left\|\mathbf{ y}\right\|^{4}-t^{2}}$, we have \begin{equation*}\begin{split} V_{10'}V_{01'}(\varphi)&=-\frac{V_{10'}V_{01'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})}{(\left\|\mathbf{ y}\right\|^{4}-t^{2})^{2}} +2\frac{V_{01'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})V_{10'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})}{(\left\|\mathbf{ y}\right\|^{4}-t^{2})^{3}}\\ &=\frac{2y_{00'}y_{11'}+2y_{10'}y_{01'}+2t}{(\left\|\mathbf{ y}\right\|^{4}-t^{2})^{2}},\\ V_{00'}V_{11'}(\varphi)&=-\frac{V_{00'}V_{11'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})}{(\left\|\mathbf{ y}\right\|^{4}-t^{2})^{2}} +2\frac{V_{11'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})V_{00'}(\left\|\mathbf{ y}\right\|^{4}-t^{2})}{(\left\|\mathbf{ y}\right\|^{4}-t^{2})^{3}}\\ &=\frac{2y_{00'}y_{11'}+2y_{10'}y_{01'}+2t}{(\left\|\mathbf{ y}\right\|^{4}-t^{2})^{2}}. \end{split}\end{equation*} Therefore $V_{10'}V_{01'}(\varphi)=V_{00'}V_{11'}(\varphi),$ i.e. $\varphi$ is a solution to (\ref{eq:compatible-}).\qed \section{The Real Case} \subsection{SD and ASD horizontal forms on the real Heisenberg group $\mathscr H^{\mathbb{R}}$} The real Heisenberg group is a flat model of contact manifolds. Let us compare the ASD connection restricted to the real $5$D Heisenberg group with contact instantons on $5$D contact manifolds introduced by physicists (cf. \cite{MR3042942} \cite{MR2967134} \cite{Wolf1}). Take a metric on $\mathscr H^{\mathbb{R}}$ as $g=\rm{d}y_{1}^{2}+\rm{d}y_{2}^{2}+\rm{d}y_{3}^{2}+\rm{d}y_{4}^{2}+ \rm{d}t^{2}.$ The relevant volume element on $\mathscr H^{\mathbb{R}}$ is $\rm{d}V:=\rm{d}y_{1}\wedge \rm{d}y_{2}\wedge \rm{d}y_{3}\wedge \rm{d}y_{4}\wedge \rm{d}t.$ For brevity, we denote $\rm{d}y_{5}:=\rm{d}t$. The {\it Hodge star} $*$ with respect to the metric $g$ on $\mathscr H^{\mathbb{R}}$ is given by $*:\Omega^{k}(\mathscr H^{\mathbb{R}})\longrightarrow \Omega^{5-k}(\mathscr H^{\mathbb{R}})$, $ w\longmapsto *w$, such that \begin{equation}\label{Hodg1} w\wedge*w=\rm{d}V. \end{equation} Namely, we have $*(\rm{d}y_{i_{1}}\wedge\cdots\wedge\rm{d}y_{i_{k}})=\varepsilon_{i_{1}\cdots i_{k}j_{1}\cdots j_{5-k}}\rm{d}y_{j_{1}}\wedge\cdots\wedge\rm{d}y_{j_{5-k}},$ where $\varepsilon_{i_{1}\cdots i_{k}j_{1}\cdots j_{5-k}}$ is the sign of the permutation taking $\{i_{1},\cdots, i_{k},j_{1},\cdots, j_{5-k}\}$ to $\{1,\cdots,5\}$. Denote by $\Omega^{2}_{H}(\mathscr H^{\mathbb{R}})$ the space of horizontal two-forms on $\mathscr H^{\mathbb{R}}$, i.e.
$\iota_{T}\omega=0$ for $\omega\in\Omega^{2}_{H}(\mathscr H^{\mathbb{R}})$, and $\Omega^{2}_{V}(\mathscr H^{\mathbb{R}})$ the space of vertical two-forms on $\mathscr H^{\mathbb{R}}$. We have the decomposition \begin{equation} \Omega^{2}(\mathscr H^{\mathbb{R}})=\Omega^{2}_{H}(\mathscr H^{\mathbb{R}})\oplus\Omega^{2}_{V}(\mathscr H^{\mathbb{R}}),\qquad \omega=\omega_{H}+\omega_{V}, \end{equation} where $\omega_{H}=\iota_{T}({\rm d}t\wedge\omega)$ and $\omega_{V}={\rm d}t\wedge(\iota_{T}\omega)$. Denote by $\Omega^{2+}_{H}(\mathscr H^{\mathbb{R}})$ $\left(\Omega^{2-}_{H}(\mathscr H^{\mathbb{R}})\right)$ the space of all horizontal (anti-)self-dual two-forms on $\mathscr H^{\mathbb{R}}$, whose elements satisfy \begin{equation}\label{hsdu}\iota_{T}\ast\omega=\omega\quad (\iota_{T}\ast\omega=-\omega).\end{equation} Then we have the decomposition \begin{equation} \Omega^{2}_{H}(\mathscr H^{\mathbb{R}})=\Omega^{2+}_{H}(\mathscr H^{\mathbb{R}})\oplus\Omega^{2-}_{H}(\mathscr H^{\mathbb{R}}),\qquad \omega_{H}=\omega^{+}_{H}+\omega^{-}_{H}, \end{equation} where $\omega^{+}_{H}=\frac{1}{2}(1+\iota_{T}\ast)\omega_{H}\in\Omega^{2+}_{H}(\mathscr H^{\mathbb{R}})$ and $\omega^{-}_{H}=\frac{1}{2}(1-\iota_{T}\ast)\omega_{H}\in\Omega^{2-}_{H}(\mathscr H^{\mathbb{R}})$. In fact, noting that $\iota_{T}\ast\iota_{T}\ast={\rm id},$ we have $\iota_{T}\ast\omega^{+}_{H}=\frac{1}{2}\iota_{T}\ast(1+\iota_{T}\ast)\omega_{H}=\frac{1}{2}(\iota_{T}\ast+1)\omega_{H}=\omega^{+}_{H},$ i.e. $\omega^{+}_{H}\in\Omega^{2+}_{H}(\mathscr H^{\mathbb{R}})$, and $\iota_{T}\ast\omega^{-}_{H}=\frac{1}{2}\iota_{T}\ast(1-\iota_{T}\ast)\omega_{H}=\frac{1}{2}(\iota_{T}\ast-1)\omega_{H}=-\omega^{-}_{H},$ i.e. $\omega^{-}_{H}\in\Omega^{2-}_{H}(\mathscr H^{\mathbb{R}})$. \subsection{Contact instantons on $\mathscr H^{\mathbb{R}}$} The double fibration over the real Heisenberg group $\mathscr H^{\mathbb{R}}$ becomes \begin{equation} \label{rfibration} \xy 0;/r.22pc/: (0,20)*+{\mathcal{F}^{\mathbb{R}} }="1"; (-20,0)*+{ \mathcal{P}^{\mathbb{R}} }="2"; (20,0)*+{ \mathscr H^{\mathbb{R}}}="3"; {\ar@{->}_{\eta}"1"; "2"}; {\ar@{->}^{\tau} "1"; "3"}; \endxy \end{equation} where $\mathcal{F^{\mathbb{R}}}=\mathscr H^{\mathbb{R}}\times \mathbb{C}P^1$ and $\mathcal{P^{\mathbb{R}}}=\eta\left(\mathscr H^{\mathbb{R}}\times\mathbb{C}P^{1}\right)$. \begin{cor} \label{prop:YX1} The fiber of the mapping $\eta:\mathbb{R}^{5}\times \mathbb{C}\rightarrow \mathcal{P}^{\mathbb{R}}$ given by \begin{equation}\label{eq:psi21}\begin{split} \omega_{0}:=\eta_{0}(\mathbf{y}_{0'}, \mathbf{ y}_{1'},s,\zeta)&=y_{1}+\mathbf{i}y_{2}+\zeta (-y_{3}+\mathbf{i}y_{4}),\\ \omega_{1}:=\eta_{1}(\mathbf{y}_{0'}, \mathbf{ y}_{1'},s,\zeta)&=y_{3}+\mathbf{i}y_{4}+\zeta (y_{1}-\mathbf{i}y_{2}),\\ \omega_{2}:=\eta_{2}(\mathbf{y}_{0'}, \mathbf{ y}_{1'},s,\zeta)& =-\mathbf{i}s-2\zeta(-y_{3}+\mathbf{i}y_{4})(y_{1}-\mathbf{i}y_{2})- y_{1}^2 - y_{2}^2 + y_{3}^2+ y_{4}^2,\\ \omega_{3}:=\eta_{3}(\mathbf{y}_{0'}, \mathbf{ y}_{1'},s,\zeta)&= \zeta, \end{split}\end{equation} is an abelian subgroup of dimension $4$, whose tangent space is spanned by $\{V_{0},V_{1}\}$. We denote $\hat{y}=(\omega_{0},\omega_{1},\omega_{2},\omega_{3})$. \end{cor} \begin{prop} For $x, y\in\mathbb{R}^{5}$, if $\hat{x}\bigcap\hat{y}\neq\emptyset$, then we have $x=y$. So $\mathcal{P}^{\mathbb{R}}$ is the trivial $\mathbb{C}P^{1}$ bundle over $\mathbb{R}^{5}$.
\end{prop} \begin{proof} If we write \begin{equation}\label{sinlonly}\begin{split} &x=(x_{00'},x_{10'},x_{01'},x_{11'},t_{1})=(x_{1}+\textbf{i}x_{2},x_{3}+\textbf{i}x_{4},-x_{3}+\textbf{i}x_{4},x_{1}-\textbf{i}x_{2},-\textbf{i}s_{1}),\\ &y=(y_{00'},y_{10'},y_{01'},y_{11'},t_{2})=(y_{1}+\textbf{i}y_{2},y_{3}+\textbf{i}y_{4},-y_{3}+\textbf{i}y_{4},y_{1}-\textbf{i}y_{2},-\textbf{i}s_{2}), \end{split}\end{equation}by the embedding (\ref{eq:C-L}) of $\mathscr H^{\mathbb{R}}$ into $\mathscr H$, then we have \begin{equation}\label{sinlonly11}\begin{split} &\hat{x}=\eta\circ\tau^{-1}(x)=\left(x_{00'}+\zeta x_{01'},x_{10'}+\zeta x_{11'},-\textbf{i}s_{1}-\langle \mathbf{x}_{0'}+\zeta\mathbf{x}_{1'},\mathbf{x}_{1'}\rangle,\zeta\right) ,\\ &\hat{y}=\eta\circ\tau^{-1}(y)=\left(y_{00'}+\zeta y_{01'},y_{10'}+\zeta y_{11'},-\textbf{i}s_{2}-\langle \mathbf{y}_{0'}+\zeta\mathbf{y}_{1'},\mathbf{y}_{1'}\rangle,\zeta\right) ,\\ \end{split}\end{equation} by (\ref{eq:psi}). If $\hat{x}\bigcap\hat{y}\neq\emptyset$, then there exists $\zeta$ such that the equations \begin{equation}\label{eq:linear eq3} \left\{\begin{array}{l} x_{00'}+\zeta x_{01'}=y_{00'}+\zeta y_{01'},\\ x_{10'}+\zeta x_{11'}=y_{10'}+\zeta y_{11'},\\ -\textbf{i}s_{1}-\langle \mathbf{x}_{0'}+\zeta\mathbf{x}_{1'},\mathbf{x}_{1'}\rangle=-\textbf{i}s_{2}-\langle \mathbf{y}_{0'}+\zeta\mathbf{y}_{1'},\mathbf{y}_{1'}\rangle \end{array} \right. \end{equation} must have a solution. So the system \begin{equation*} \begin{pmatrix} x_{00'}-y_{00'}&x_{01'}-y_{01'}\\ x_{10'}-y_{10'}&x_{11'}-y_{11'} \end{pmatrix} \begin{pmatrix} a_{1}\\ a_{2} \end{pmatrix}=0 \end{equation*} has the nontrivial solution $(a_{1},a_{2})=(1,\zeta)$. Consequently, its coefficient matrix is singular, i.e. \begin{equation*}\begin{split} 0&=\left|\begin{array}{cc} x_{00'}-y_{00'}&x_{01'}-y_{01'}\\ x_{10'}-y_{10'}&x_{11'}-y_{11'} \end{array}\right| =\left|\begin{array}{cc} x_{1}-y_{1}+\textbf{i}x_{2}-\textbf{i}y_{2}&-x_{3}+y_{3}+\textbf{i}x_{4}-\textbf{i}y_{4}\\ x_{3}-y_{3}+\textbf{i}x_{4}-\textbf{i}y_{4}&x_{1}-y_{1}-\textbf{i}x_{2}+\textbf{i}y_{2} \end{array}\right|\\ &=(x_{1}-y_{1})^{2}+(x_{2}-y_{2})^{2}+(x_{3}-y_{3})^{2}+(x_{4}-y_{4})^{2}, \end{split} \end{equation*} i.e., $x_{k}=y_{k}$ for $k=1,\cdots,4$. So $s_{1}=s_{2}$ in (\ref{sinlonly}) by the third equation of (\ref{eq:linear eq3}). This implies that $\eta$ in (\ref{rfibration}) is one-to-one, and so a diffeomorphism. Therefore, $\mathcal{P}^{\mathbb{R}}$ is a topologically trivial $\mathbb{C}P^{1}$ bundle over $\mathbb{R}^{5}$. \end{proof} Recall the real embedding (\ref{eq:C-L}) of $\mathscr H^{\mathbb{R}}$ into $\mathscr H$.
Then we have the left invariant complex vector fields over $\mathscr H^{\mathbb{R}}$ \begin{equation} \label{eq:realY}\begin{split} V_{00'} &=\frac{1}{2}(\partial_{y_{1}}-\textbf{i}\partial_{y_{2}})-\textbf{i}{(y_{1}-\textbf{i}y_{2})}\partial_{s},\quad V_{01'} =\frac{1}{2}(-\partial_{y_{3}}-\textbf{i}\partial_{y_{4}})+\textbf{i}{(y_{3}+\textbf{i}y_{4})}\partial_{s},\\ V_{10'} &=\frac{1}{2}(\partial_{y_{3}}-\textbf{i}\partial_{y_{4}})+\textbf{i}{(y_{3}-\textbf{i}y_{4})}\partial_{s},\quad V_{11'} =\frac{1}{2}{(\partial_{y_{1}}+\textbf{i}\partial_{y_{2}})}+\textbf{i}{(y_{1}+\textbf{i}y_{2})}\partial_{s},\qquad T =\textbf{i}\partial_{s}, \end{split}\end{equation} and the relevant dual forms become \begin{equation}\label{rform} \begin{split}& \theta^{00'}={\rm{d}}y_{1}+\textbf{i}{\rm{d}}y_{2},\quad\theta^{01'}=-{\rm{d}}y_{3}+\textbf{i}{\rm{d}}y_{4},\quad \theta^{10'}={\rm{d}}y_{3}+\textbf{i}{\rm{d}}y_{4},\quad \theta^{11'}={\rm{d}}y_{1}-\textbf{i}{\rm{d}}y_{2},\\& \theta=-\textbf{i}{\rm{d}}s+2\textbf{i}(y_{1}{\rm{d}}y_{2}-y_{2}{\rm{d}}y_{1}+y_{4}{\rm{d}}y_{3}-y_{3}{\rm{d}}y_{4}). \end{split} \end{equation} \begin{prop} The space of horizontal self-dual $2$-forms on $\mathscr H^{\mathbb{R}}$ is spanned by $\{S^{0'0'},S^{0'1'}$, $S^{1'1'}\}$ with \begin{equation} S^{0'0'}=\theta^{00'}\wedge\theta^{10'},\quad S^{0'1'}=\theta^{00'}\wedge\theta^{11'}-\theta^{10'}\wedge\theta^{01'},\quad S^{1'1'}=\theta^{01'}\wedge\theta^{11'}. \end{equation} The space of horizontal anti-self-dual 2-forms is spanned by $\{S^{00},S^{01},S^{11}\}$ with \begin{equation} S^{00}=\theta^{00'}\wedge\theta^{01'},\quad S^{01}=\theta^{00'}\wedge\theta^{11'}+\theta^{10'}\wedge\theta^{01'},\quad S^{11}=\theta^{10'}\wedge\theta^{11'}. \end{equation} \end{prop} The curvature of the connection form $\Phi=\Phi_{AB'}\theta^{AB'}+\Phi_{T}\theta$ is \begin{equation}\label{eq:lienar Rcurvature} \begin{split} F=&d\Phi+\Phi\wedge\Phi\\ =&d(\Phi_{AB'})\wedge\theta^{AB'}+d(\Phi_{T})\wedge\theta+\Phi_{T}d\theta+\Phi_{AB'}\Phi_{CD'}\theta^{AB'}\wedge\theta^{CD'}+[\Phi_{AB'},\Phi_{T}]\theta^{AB'}\wedge\theta\\ =&V_{CD'}(\Phi_{AB'})\theta^{CD'}\wedge\theta^{AB'}+T(\Phi_{AB'})\theta\wedge\theta^{AB'}+V_{AB'}(\Phi_{T})\theta^{AB'}\wedge\theta+\Phi_{T}d\theta\\ &+\Phi_{AB'}\Phi_{CD'}\theta^{AB'}\wedge\theta^{CD'}+[\Phi_{AB'},\Phi_{T}]\theta^{AB'}\wedge\theta\\ =&[V_{AB'}(\Phi_{CD'})+\Phi_{AB'}\Phi_{CD'}]\theta^{AB'}\wedge\theta^{CD'}+\Phi_{T}d\theta\\ &+(V_{AB'}(\Phi_{T})-T(\Phi_{AB'})+[\Phi_{AB'},\Phi_{T}])\theta^{AB'}\wedge\theta, \end{split} \end{equation} by relabeling indices. Here we use the Einstein convention of summation over repeated indices.
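Before splitting $F$ into its vertical and horizontal parts, it is easy to double-check the duality of the frame (\ref{eq:realY}) and the coframe (\ref{rform}) symbolically. In the sketch below (assuming Python with sympy; names are illustrative), vectors and forms are stored as their coefficient $5$-tuples in the coordinate frame $(y_{1},y_{2},y_{3},y_{4},s)$:
\begin{verbatim}
# Sketch (Python/sympy): the forms (rform) are dual to the fields (eq:realY).
import sympy as sp

y1, y2, y3, y4 = sp.symbols("y1 y2 y3 y4")
I, half = sp.I, sp.Rational(1, 2)

V = {  # coefficients of (d/dy1, d/dy2, d/dy3, d/dy4, d/ds)
  "00": [half, -I*half, 0, 0, -I*(y1 - I*y2)],
  "01": [0, 0, -half, -I*half,  I*(y3 + I*y4)],
  "10": [0, 0,  half, -I*half,  I*(y3 - I*y4)],
  "11": [half,  I*half, 0, 0,   I*(y1 + I*y2)],
  "T" : [0, 0, 0, 0, I]}
TH = {  # coefficients of (dy1, dy2, dy3, dy4, ds)
  "00": [1,  I, 0, 0, 0],
  "01": [0, 0, -1, I, 0],
  "10": [0, 0,  1, I, 0],
  "11": [1, -I, 0, 0, 0],
  "T" : [-2*I*y2, 2*I*y1, 2*I*y4, -2*I*y3, -I]}

for a in TH:
    for b in V:
        val = sp.simplify(sum(c*v for c, v in zip(TH[a], V[b])))
        assert val == (1 if a == b else 0)   # theta^a(V_b) = delta_{ab}
\end{verbatim}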
From (\ref{eq:lienar Rcurvature}), we have \begin{equation}\label{eq:lienar RcurvatureV} F_{V}=\theta\wedge\iota_{T}F=(V_{AB'}(\Phi_{T})-T(\Phi_{AB'})+[\Phi_{AB'},\Phi_{T}])\theta^{AB'}\wedge\theta, \end{equation} and \begin{equation}\label{eq:lienar RcurvatureH} \begin{split} F_{H}=&\iota_{T}(\theta\wedge F)=[V_{AB'}(\Phi_{CD'})+\Phi_{AB'}\Phi_{CD'}]\theta^{AB'}\wedge\theta^{CD'}+\Phi_{T}d\theta\\ =&(V_{00'}\Phi_{01'}-V_{01'}\Phi_{00'}+[\Phi_{00'},\Phi_{01'}])\theta^{00'}\wedge\theta^{01'}\\ &+(V_{00'}\Phi_{10'}-V_{10'}\Phi_{00'}+[\Phi_{00'},\Phi_{10'}])\theta^{00'}\wedge\theta^{10'}\\ &+(V_{00'}\Phi_{11'}-V_{11'}\Phi_{00'}+[\Phi_{00'},\Phi_{11'}]-2\Phi_{T})\theta^{00'}\wedge\theta^{11'}\\ &+(V_{01'}\Phi_{10'}-V_{10'}\Phi_{01'}+[\Phi_{01'},\Phi_{10'}]+2\Phi_{T})\theta^{01'}\wedge\theta^{10'}\\ &+(V_{01'}\Phi_{11'}-V_{11'}\Phi_{01'}+[\Phi_{01'},\Phi_{11'}])\theta^{01'}\wedge\theta^{11'}\\ &+(V_{10'}\Phi_{11'}-V_{11'}\Phi_{10'}+[\Phi_{10'},\Phi_{11'}])\theta^{10'}\wedge\theta^{11'}. \end{split} \end{equation} So its horizontal self-dual part is \begin{equation}\label{eq:lienar RcurvatureH^{+}} \begin{split} F_{H}^{+} =&(V_{00'}\Phi_{10'}-V_{10'}\Phi_{00'}+[\Phi_{00'},\Phi_{10'}])S^{0'0'} +(V_{01'}\Phi_{11'}-V_{11'}\Phi_{01'}+[\Phi_{01'},\Phi_{11'}])S^{1'1'}\\ &+\frac{1}{2}\left(V_{00'}\Phi_{11'}-V_{11'}\Phi_{00'}+[\Phi_{00'},\Phi_{11'}] +V_{01'}\Phi_{10'}-V_{10'}\Phi_{01'}+[\Phi_{01'},\Phi_{10'}]\right)S^{0'1'} . \end{split} \end{equation} As a result, we get the following. \begin{prop} (1) $F_{H}^{+}=0$ is equivalent to $F(V_{0},V_{1})=0$, i.e., \begin{equation}\label{eq:RASD} \left\{\begin{array}{l} V_{00'}(\Phi_{10'})-V_{10'}(\Phi_{00'})+[\Phi_{00'},\Phi_{10'}]=0,\\ V_{01'}(\Phi_{10'})+V_{00'}(\Phi_{11'})-V_{10'}(\Phi_{01'})-V_{11'}(\Phi_{00'})+[\Phi_{00'},\Phi_{11'}]+[\Phi_{01'},\Phi_{10'}]=0,\\ V_{01'}(\Phi_{11'})-V_{11'}(\Phi_{01'})+[\Phi_{01'},\Phi_{11'}]=0. \end{array} \right. \end{equation} (2) $F_{V}=0$ is equivalent to $V_{AB'}(\Phi_{T})-T(\Phi_{AB'})+[\Phi_{AB'},\Phi_{T}]=0$. \end{prop} When restricted to the real Heisenberg group $\mathscr H^{\mathbb{R}}$, the {\it sub-Laplacian} becomes $$\Delta_{b}:=X_{1}^{2}+X_{2}^{2}+X_{3}^{2}+X_{4}^{2},$$ where \begin{equation}\begin{split} X_{1} &=\frac{1}{2}\partial_{x_{1}}+ x_{2}\partial_{s},\quad X_{2} =\frac{1}{2}\partial_{x_{2}}- x_{1}\partial_{s},\quad X_{3} =\frac{1}{2}\partial_{x_{3}}+x_{4}\partial_{s},\quad X_{4} =\frac{1}{2}\partial_{x_{4}}- x_{3}\partial_{s}. \end{split}\end{equation} Note that $[X_{1},X_{2}]=-\partial_{s}$, $[X_{3},X_{4}]=-\partial_{s}$. Then \begin{equation*} \varphi=\frac{1}{|x|^{4}+s^{2}},\qquad {\rm where} \quad |x|=(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2})^{\frac{1}{2}}, \end{equation*} is a solution to the sub-Laplacian equation \begin{equation}\label{eq:lapr} \Delta_{b}\varphi=0. \end{equation} \begin{cor} If $\varphi$ is a solution to the equation (\ref{eq:lapr}), then the connection form $\Phi$ in (\ref{AASSDD11}) restricted to the real Heisenberg group $\mathscr H^{\mathbb{R}}$ satisfies the horizontal part of the ASD contact instanton equation $F_{H}^{+}=0$. \end{cor} \begin{rem}\label{5kc} By the Penrose-Ward correspondence, Wolf \cite{Wolf1} has already characterized the solutions to the horizontal part of the ``self-dual'' contact instanton equation on a contact manifold as follows. Let $M$ be a $5$D $K$-contact manifold with Cauchy-Riemann twistor space $\pi:Z\rightarrow M$ and integrable {\it Cauchy-Riemann} structure.
There is a one-to-one correspondence between \\ (i) rank-$r$ Cauchy-Riemann vector bundles $E_{Z}\rightarrow Z$ such that the restriction $E_{Z}|_{\pi^{-1}(p)}$ is holomorphically trivial for all $p\in M$, and\\ (ii) rank-$r$ complex vector bundles $E_{M}\rightarrow M$ equipped with a connection $\nabla$ whose curvature $F=\nabla^{2}$ projects onto the contact distribution as $F_{H}\in \Omega^{2}_{+}(M, {\rm End}\, E_{M})$, that is, $F_{-}=0$. \end{rem}
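Both explicit solutions appearing in this paper — $\varphi=1/(\left\|\mathbf{y}\right\|^{4}-t^{2})$ for the complex equation (\ref{eq:compatible-}) and $\varphi=1/(|x|^{4}+s^{2})$ for the real sub-Laplacian equation (\ref{eq:lapr}) — can be verified symbolically. A sketch (assuming Python with sympy; the explicit $V_{AA'}$ are those of (\ref{eq:Y})):
\begin{verbatim}
# Sketch (Python/sympy): verify the two harmonicity claims used above.
import sympy as sp

# complex case: V10'V01'(phi) = V00'V11'(phi), phi = 1/(||y||^4 - t^2),
# with V00' = d_{00'} - y11 d_t, V01' = d_{01'} + y10 d_t, etc. as in (eq:Y)
y00, y10, y01, y11, t = sp.symbols("y00 y10 y01 y11 t")
D = lambda f, v, c: sp.diff(f, v) + c*sp.diff(f, t)
phi = 1/((y00*y11 - y10*y01)**2 - t**2)
lhs = D(D(phi, y01,  y10), y10, -y01)        # V10' V01' phi
rhs = D(D(phi, y11,  y00), y00, -y11)        # V00' V11' phi
assert sp.simplify(lhs - rhs) == 0

# real case: Delta_b = X1^2 + X2^2 + X3^2 + X4^2 kills 1/(|x|^4 + s^2)
x1, x2, x3, x4, s = sp.symbols("x1 x2 x3 x4 s")
X = [lambda f: sp.diff(f, x1)/2 + x2*sp.diff(f, s),
     lambda f: sp.diff(f, x2)/2 - x1*sp.diff(f, s),
     lambda f: sp.diff(f, x3)/2 + x4*sp.diff(f, s),
     lambda f: sp.diff(f, x4)/2 - x3*sp.diff(f, s)]
psi = 1/((x1**2 + x2**2 + x3**2 + x4**2)**2 + s**2)
assert sp.simplify(sum(Xi(Xi(psi)) for Xi in X)) == 0
\end{verbatim}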
\section{\@startsection{section}{1}{\z@}% {-3ex \@plus -.3ex \@minus -.2ex}% {2.2ex \@plus.2ex}% {\normalfont\normalsize\protect\baselineskip=14.5pt plus.2pt minus.2pt\bfseries}} \def\subsection{\@startsection{subsection}{2}{\z@}% {-3ex\@plus -.2ex \@minus -.2ex}% {2ex \@plus.2ex}% {\normalfont\normalsize\protect\baselineskip=12.5pt plus.2pt minus.2pt\bfseries}} \def\subsubsection{\@startsection{subsubsection}{3}{\z@}% {-2.2ex\@plus -.21ex \@minus -.2ex}% {1.4ex \@plus.2ex} {\normalfont\normalsize\protect\baselineskip=12pt plus.2pt minus.2pt\sl}} \def{\indent \it Proof.}{{\indent \it Proof.}} \newcommand{\yyj}[1]{{\color{black}#1}} \pagestyle{fancy} \fancyhf{}% \fancyhead[RO]{\small\thepage} \fancyhead[LE]{\small\thepage} \setcounter{page}{1} \begin{document} \begin{CJK*}{GBK}{song} \thispagestyle{empty} \vspace*{-13mm} \vspace*{2mm} \title{A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint} \author{Yu-Jie Yuan$^{1,2}$, Yu-Kun Lai$^{3}$, Tong Wu$^{1,2}$, Lin Gao$^{1,2,*}$, and Ligang Liu$^{4}$} \address{1}{Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences} \address{2}{University of Chinese Academy of Sciences} \address{3}{School of Computer Science and Informatics, Cardiff University} \address{4}{University of Science and Technology of China} \let\thefootnote\relax\footnotetext{{}\\[-4mm] \indent\ $^*$Corresponding Author, E-mail: [email protected] } \noindent {\small\bf Abstract} \quad {\small 3D shape editing is widely used in a range of applications such as movie production, computer games and computer aided design. It is also a popular research topic in computer graphics and computer vision. In the past decades, researchers have developed a series of editing methods to make the editing process faster, more robust, and more reliable. Traditionally, the deformed shape is determined by the optimal transformations and weights that minimize an energy term. With the increasing availability of 3D shapes on the Internet, data-driven methods were proposed to improve the editing results. More recently, as deep neural networks became popular, many deep learning based editing methods have been developed in this field\yyj{, which are naturally data-driven}. \yyj{We mainly survey recent research works, from the geometric viewpoint to the emerging neural deformation techniques, and categorize them into organic shape editing methods and man-made model editing methods.} Both traditional methods and recent neural network based methods are reviewed. } \vspace*{3mm} \noindent{\small\bf Keywords} \quad {\small Mesh Deformation, Man-made Model Editing, Deformation Representation, Optimization, Deep Learning } \vspace*{4mm} \end{CJK*} \baselineskip=18pt plus.2pt minus.2pt \parskip=0pt plus.2pt minus0.2pt \begin{multicols}{2} \section{Introduction} 3D shapes are one of the most important types of objects in computer graphics and computer vision research. Editing or interactive deformation of 3D shapes provides an intuitive way to produce new shapes based on existing ones, which is fundamental for many applications. Methods for 3D shape editing are therefore a research hot spot.
In recent years, deep learning has been widely used, and many research fields have developed new solutions based on deep learning, such as deep generation of 3D models~\cite{gao2019sdm, yang2020dsm}, 3D deep reconstruction~\cite{chen2019learning, xu2019disn}, deep neural network based 3D shape analysis methods~\cite{gao2019prs, shi2020symmetrynet}, etc. 3D models can be generally divided into two types, namely organic shapes and man-made models. Fig.1 shows some examples of these two types. Organic shapes such as human bodies, animals, etc. are often deformable, whereas man-made objects tend to comprise a larger number of (near-)rigid components. Different techniques are therefore needed to cope with these two types of shapes. Neural network based editing methods are also emerging, although they are still at a relatively early stage and many open areas remain, which we will discuss later in the survey. Early 3D model editing methods analyze the characteristics of the model itself, and strive to keep these characteristics unchanged during the deformation. For organic shapes, common examples include human bodies and animal shapes, which are articulated, as shown in the left of Fig.1. It is possible to bind a skeleton inside the model. On the one hand, the editing of these models typically defines deformation energies to impose constraints on the deformation, such as volume-preserving deformation. On the other hand, by binding skeletons to these models, the user can manipulate the skeleton to drive the deformation of the shape. Skeleton-based deformation is often convenient and leads to good results. However, the binding of the skeleton is not only time-consuming, but also requires professional software and expertise. For man-made models, the main purpose of editing is to modify the appearance or geometric features of the models. For this purpose, the topological structure of the model is usually a feature that needs to be maintained. Such methods are referred to as structure-aware editing~\cite{mitra2014structure}. The editing of man-made models is more complicated than the deformation of organic shapes, because organic shapes are typically manifold meshes, while man-made models are often non-manifold with more complex structures. Surveys on other aspects of 3D models have recently been published, such as 3D deep generative models~\cite{chaudhuri2020learning}, 3D deep reconstruction~\cite{han2019image, jin20203d} and 3D deep representation~\cite{xiao2020survey}. However, for 3D shape editing/deformation, existing surveys~\cite{botsch2008linear, gain2008survey} were published over a decade ago, and only cover deformation methods for 3D organic shapes. Methods for editing man-made models are not reviewed in specialized surveys, and are only discussed in loosely related courses~\cite{mitra2014structure,xu2016data}. The rapid development of deep learning in recent years has also led to the emergence and growth of neural network based deformation and editing methods. It is necessary to have an extensive review to summarize the related research and discuss future directions. To this end, we present this survey, reviewing both traditional methods and methods based on deep neural networks, as well as methods applied to both organic shapes and man-made models. \yyj{The structure of this survey is as follows.
We divide the editing methods according to different analysis views, namely geometry-based (Sec.~\ref{sec:geometry}) and traditional data-driven (Sec.~\ref{sec:dataset}) methods. Although neural-based editing methods also learn from datasets, they form a new direction that is currently being actively explored and often require, and benefit from, larger amounts of data, so we introduce them separately from the traditional data-driven methods in Sec.~\ref{sec:neural}. For each of these three types, since organic shapes and man-made models differ in representation, and their editing methods also differ accordingly, we summarize the methods for organic shapes and man-made models separately. Skeleton-based and cage-based deformations rely on a handle, often called a \textit{proxy}, that is separate from the model itself, and usually require weighted interpolation of the deformation defined on the skeleton or cage to obtain the transformation of the shape itself. They are covered in Sec.~\ref{sec:proxy}; such methods can also exploit information from datasets. Finally, we will conclude with existing problems and discuss interesting future research directions (Sec.~\ref{sec:conclusion}). Fig.~\ref{fig:timeline} provides a timeline of representative shape editing methods for organic shapes and man-made models.} \begin{center} \includegraphics[width=0.45\linewidth]{img/organic.pdf} \includegraphics[width=0.45\linewidth]{img/man-made.pdf}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.1.~}Some examples of organic shapes (from~\cite{RigNet}) and man-made models (from~\cite{shapenet}). }% \end{center} \setcounter{figure}{1} \begin{figure*}[t] \centering { \includegraphics[width=0.9\linewidth]{img/timeline.pdf} } \caption{The timeline of representative 3D shape editing methods for two types of 3D models.} \label{fig:timeline} \end{figure*} \section{Attribute-based Model Editing} \label{sec:geometry} In this section, we discuss methods that analyze various attributes of the model, including geometric characteristics and semantic attributes, to define constraints that guide the editing. Organic shapes, a.k.a. deformable models, usually refer to models that are non-rigidly deformable. Human bodies and animal shapes are common examples. The editing of 3D organic shapes mostly uses interactive deformation. In these methods, organic shapes are often represented as triangular meshes. The deformation of organic shapes mainly strives to maintain the geometric details of the original shape and produce natural and reasonable results. The early deformation methods mainly analyze the geometry of the shape and define the constraints for the deformation. We summarize those methods as \emph{geometry-based mesh deformation} methods. The editing of man-made models is relatively more complicated and difficult, compared to the deformation of organic shapes. On the one hand, man-made models have diverse shapes and complex topological structures. On the other hand, the meshes of man-made models are generally not regular and consistent. This hinders the direct application of some deformation methods designed for organic shapes. To achieve the purpose of editing, one should impose constraints on 3D man-made models to ensure plausible results. One way to obtain such constraints is to maintain the structural relations between different components of the model, a form of semantic knowledge. We summarize these \emph{semantic constraints for man-made models} in Sec.~\ref{sec:semantic}.
\subsection{Geometry-based Mesh Deformation} Geometry-based deformation methods typically define energy functions which transform the deformation problem into a constrained optimization problem. The constraints are generally provided by the user by specifying control handles and their positions. Early research works were all around simulating the elastic deformation of objects. Terzopoulos et al.~\cite{terzopoulos1987elastically} proposed the classical elastic energy, or the so-called \textit{shell energy}, \yyj{which measures stretching and bending by the change of the first and second fundamental forms,} and optimized the energy to obtain deformation results. \yyj{Two follow-up works~\cite{celniker1991deformable, welch1992variational} propose to simplify the energy by replacing the first and second fundamental forms with the first and second order partial derivatives of the displacement function. In order to solve the problems of computational complexity and distortion of geometric details, many works~\cite{botsch2005efficient, shi2006fast, zorin1997interactive, kobbelt1998interactive, guskov1999multiresolution, botsch2006deformation} based on multigrid solvers or multi-resolution deformation strategies have been proposed. For those works, please refer to \cite{botsch2008linear} for a thorough introduction.} Follow-up works change the form of the energy formulation to facilitate the solution and achieve better results. The most widely used is Laplacian-based mesh editing. The well-known As-Rigid-As-Possible (ARAP) energy is also Laplacian-based, and is still applied to deformation in recent work~\cite{liu2019cubic}. We will begin with Laplacian-based methods, including ARAP and follow-ups, followed by methods using other formulations. \subsubsection{Laplacian-based Mesh Deformation} \label{sec:lap} Kobbelt et al.~\cite{kobbelt1998interactive} were the first to propose a multi-resolution Laplacian-based deformation method, which is able to approximately solve constrained mesh optimization in real time. The readers may refer to \cite{sorkine2005laplacian, sorkine2006differential}, which give an early summary of Laplacian-based mesh processing. Differential coordinates can capture local properties~\cite{alexa2003differential} used in free-form deformation~\cite{sederberg1986free}, and allow a direct detail-preserving reconstruction of the edited mesh by solving a linear least-squares system~\cite{lipman2004differential}. However, the differential coordinates are defined in a global coordinate system, and thus are not rotation-invariant, so it is necessary to introduce approximated local frames to compensate for some distortions of orientation~\cite{lipman2004differential}. Laplacian coordinates, as pointed out by \cite{lipman2004differential}, are the simplest form of the differential coordinates. Given a triangular mesh model, we denote each vertex of the mesh as $\mathbf{v}_i, i=1,\cdots,n$, where $n$ is the number of vertices. The index set of the 1-ring neighbors of $\mathbf{v}_i$ is denoted as $N(i)$. Then we can define the Laplacian coordinates of the vertex $\mathbf{v}_i$ as \begin{equation} \mathbf{l}_i = \sum_{j \in N(i)} {w_{ij} (\mathbf{v}_j-\mathbf{v}_i)} \end{equation} where $w_{ij}$ is the weight of the edge $\mathbf{e}_{ij}=\mathbf{v}_j-\mathbf{v}_i$. It can be seen that $\mathbf{l}_i$ is a weighted average of position differences between the vertex $\mathbf{v}_i$ and its adjacent vertices, so it describes the local geometry at $\mathbf{v}_i$.
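To make this concrete, the following sketch (Python with numpy/scipy; a simplification using uniform weights $w_{ij}=1/|N(i)|$ rather than the cotangent weights discussed later, with illustrative function and variable names) computes the Laplacian coordinates of a mesh and performs the soft-constrained least-squares editing described in the matrix formulation below:
\begin{verbatim}
# Sketch (Python/numpy/scipy): uniform-weight Laplacian editing -- keep
# the Laplacian coordinates delta = L V and add soft positional
# constraints on handle vertices; solve in the least-squares sense.
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

def laplacian_edit(V, neighbors, handles, w=100.0):
    """V: (n,3) vertex positions; neighbors: list of index lists;
       handles: dict {vertex index: target 3D position};
       w: soft constraint weight (assumed/illustrative default)."""
    n = len(V)
    rows, cols, vals = [], [], []
    for i, nbrs in enumerate(neighbors):
        rows.append(i); cols.append(i); vals.append(-1.0)
        for j in nbrs:                    # uniform weights 1/|N(i)|
            rows.append(i); cols.append(j); vals.append(1.0/len(nbrs))
    L = sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))
    delta = L @ V                         # Laplacian coordinates l_i

    # one row per handle: w * v'_i = w * target_i (soft constraint)
    C = sparse.csr_matrix((np.full(len(handles), w),
                           (list(range(len(handles))), list(handles))),
                          shape=(len(handles), n))
    A = sparse.vstack([L, C]).tocsc()
    B = np.vstack([delta, w*np.array([handles[i] for i in handles])])
    # sparse least-squares solve, one right-hand side per coordinate
    return np.column_stack([spla.lsqr(A, B[:, k])[0] for k in range(3)])
\end{verbatim}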
By collecting all Laplacian coordinates $\mathbf{l}_i$ and presenting them in matrix form, it can be written as $\mathbf{l}=\mathbf{LV}$, where $\mathbf{L}$ is a $3n \times 3n$ matrix with elements composed of weights $w_{ij}$. We refer to $\mathbf{L}$ as the Laplacian operator and its elements as the Laplacian coefficients. Sorkine et al.~\cite{sorkine2004laplacian} propose to minimize the differences between Laplacian coordinates before and after deformation to deform surface models, which forms a sparse linear system that can be solved in the least-squares sense. Lipman et al.~\cite{lipman2005laplacian} review the above two Laplacian-based methods~\cite{lipman2004differential, sorkine2004laplacian}, which both preserve shape details when editing mesh models. Many works have improved Laplacian coordinates or proposed other forms of differential coordinates. For example, Yu et al.~\cite{yu2004mesh} propose a gradient domain mesh editing method which deforms meshes by interpolating gradient fields derived from user constraints. Zhou et al.~\cite{zhou2005large} propose a Laplacian coordinate method based on a volumetric graph to better preserve the model volume during deformation. The above two methods require the orientation and local frames of the handles as input. So if the users only move the handles, the orientation and local frames are not changed accordingly, leading to shearing and stretching distortions caused by incompatible handle positions and orientations. Pyramid coordinates~\cite{sheffer2004pyramid} and the iterative dual Laplacian~\cite{au2006dual} are proposed to solve this problem; they preserve rotation invariance and can avoid the distortion caused by incompatible rotation and translation of handles. However, such methods cannot handle large-scale rotations. To deal with the problem that rotation information is unavailable when the handles are only translated, and that large-scale rotations cannot be handled, Fu et al.~\cite{fu2007effective} propose to use an affine matrix at each vertex which is linear w.r.t. the vertex position. They further decompose the affine matrix by polar decomposition to extract only rotation and uniform scaling to offset the impact of shearing distortion. Most of the previous gradient-based editing methods turn the problem into solving a linear system, but not all constraints can be formulated linearly; moreover, non-linear constraints are more flexible. Huang et al.~\cite{huang2006subspace} propose a solution framework that can effectively solve the deformation containing nonlinear constraints, such as skeleton constraints for skeleton-based deformation and volume constraints for volume preservation. Those constraints can be transformed into a non-linear energy minimization problem, but the minimization faces the problems of slow convergence and numerical instability. So they build a coarse cage mesh enclosing the original mesh shape, and use mean value coordinates~\cite{ju2005mean} to transfer the energy and constraints to the cage. \begin{center} \includegraphics[width=0.85\linewidth]{img/organic/arap.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.3.~}\yyj{By moving} a single position constraint, results with large deformations can be obtained~\cite{sorkine2007rigid}.}% \end{center} \textbf{As-rigid-as-possible (ARAP)} deformation is an important part of Laplacian-based methods.
\textbf{As-rigid-as-possible (ARAP)} deformation is an important part of Laplacian-based methods. The ARAP principle was first applied to shape interpolation~\cite{alexa2000rigid} and to the deformation of two-dimensional shapes~\cite{igarashi2005rigid}. Sorkine et al.~\cite{sorkine2007rigid} further propose a 3D surface model deformation method that maintains local rigidity. The method is based on minimizing an ARAP energy, which measures non-rigid distortions in the local 1-ring neighborhoods of all vertices. We denote the triangle mesh as $S$, and $N(i)$ is the index set of vertices adjacent to vertex $i$. Denote $\mathbf{v}_i \in \mathbb{R}^3$ as the position of vertex $i$ on the mesh $S$. Also assume that $S$ is to be deformed to $S'$ with the same connectivity and different vertex positions $\mathbf{v}_i'$. The overall deformation energy measuring the rigidity of the entire mesh is the sum of the distortion energies of all deformation cells $C_i$ (each consisting of vertex $i$ and its 1-ring neighbors): \begin{equation} \begin{split} E(S')&=\sum_{i=1}^{n} \bar{w}_i E(C_i, C'_i) \\ &=\sum_{i=1}^{n} \bar{w}_i \sum_{j \in N(i)} w_{ij} {\| (\mathbf{v}'_i-\mathbf{v}'_j) - \mathbf{R}_i(\mathbf{v}_i-\mathbf{v}_j)\|}^2. \end{split} \end{equation} Here, $C_i'$ denotes the deformed cell of $C_i$, $w_{ij}= \frac{1}{2} ( \cot{\alpha_{ij}} + \cot{\beta_{ij}})$ is the cotangent weight, $\alpha_{ij}$, $\beta_{ij}$ are the angles opposite the mesh edge $(i,j)$, and $\bar{w}_i$ is a cell weight that needs to be pre-determined, set to $1$ in \cite{sorkine2007rigid}. Note that $E(S')$ depends only on the geometry of $S$ and $S'$, i.e., the positions of the vertices $\mathbf{v}_i$ and $\mathbf{v}'_i$. In particular, when the source model $S$ is fixed, the only variables in $E(S')$ are the deformed vertex coordinates $\mathbf{v}_i^{'}$, because the optimal rotation matrix $\mathbf{R}_i$ is a function of $\mathbf{v}^{'}_i$. \cite{sorkine2007rigid} takes vertex $i$ and its 1-ring neighbors as a cell, and for each cell seeks the rotation matrix $\mathbf{R}_i$ that best satisfies the rigidity condition; the overlaps between the cells ensure continuous deformation. They further formulate an easy-to-implement iterative optimization framework that alternates between fitting the local rotations and solving a global linear system for the vertex positions (sketched in code below); readers can refer to \cite{sorkine2007rigid} for details. The deformation preserves details well and produces an elastic effect. Given only position constraints, reasonable deformation results can be obtained, as shown in Fig.3. Following \cite{alexa2000rigid,igarashi2005rigid,sorkine2007rigid}, many ARAP extensions have been developed. Applied to volume deformation, Chao et al.~\cite{chao2010simple} derive another discretization of the ARAP energy from its continuous form. The ARAP energy can be further enhanced by smooth rotations~\cite{levi2014smooth}, which achieves results on surface mesh models comparable to volumetric ARAP~\cite{chao2010simple} on tetrahedral meshes, as shown in Fig.4. Cuno et al.~\cite{cuno20073d} formulate ARAP deformation with a Moving Least Squares (MLS) approach. Liu et al.~\cite{liu2011rigid} extend \cite{alexa2000rigid} and propose a new morphing method for surface triangular meshes based on ARAP. Compared to \cite{alexa2000rigid}, their method does not need tetrahedral meshes to represent the shapes, which reduces computation time, and, by integrating the translation vector into the energy formulation, eliminates the need for users to specify fixed vertices when solving the equation.
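To make the local/global strategy concrete, here is a minimal sketch with uniform weights ($w_{ij}=1$) and hard handle constraints; it is an illustrative reading of \cite{sorkine2007rigid}, not the authors' implementation.
\begin{verbatim}
import numpy as np

def arap(V, neighbors, handles, iters=10):
    """V: (n,3) rest positions; neighbors: list of neighbor index lists;
    handles: dict {vertex index: (3,) pinned target position}."""
    n = V.shape[0]
    Vp = V.copy()
    for i, t in handles.items():
        Vp[i] = t
    pinned = list(handles)
    free = [i for i in range(n) if i not in handles]
    L = np.zeros((n, n))                # uniform Laplacian (w_ij = 1)
    for i, Ni in enumerate(neighbors):
        L[i, i] = len(Ni)
        L[i, Ni] = -1.0
    for _ in range(iters):
        # local step: per-cell rotation from the SVD of the covariance
        R = np.zeros((n, 3, 3))
        for i, Ni in enumerate(neighbors):
            P = (V[Ni] - V[i]).T        # rest edges as columns
            Q = (Vp[Ni] - Vp[i]).T      # deformed edges as columns
            U, _, Wt = np.linalg.svd(P @ Q.T)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Wt.T @ U.T))])
            R[i] = Wt.T @ D @ U.T       # det(R)=+1, reflections removed
        # global step: solve L v' = b for the free vertices
        b = np.zeros((n, 3))
        for i, Ni in enumerate(neighbors):
            for j in Ni:
                b[i] += 0.5 * (R[i] + R[j]) @ (V[i] - V[j])
        rhs = b[free] - L[np.ix_(free, pinned)] @ Vp[pinned]
        Vp[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)
    return Vp
\end{verbatim}
A sparse factorization of the Laplacian can be reused across iterations, since only the rotations change; this is what makes the local/global scheme fast in practice.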
ARAP deformation has also been extended to make the stiffness of the deformation controllable. Instead of using 1-ring neighborhoods, Chen et al.~\cite{chen2017rigidity} specify larger neighborhood sizes to better preserve geometric details, and also offer a parameter to adjust physical stiffness. Qin et al.~\cite{qin2020surface} replace 1-ring neighborhoods with a face-based local cell and specify a stiffness parameter for each local cell to simulate the deformation of different materials. Le et al.~\cite{le2020stiff} extend ARAP deformation to editing man-made models. They improve the stiffness of ARAP deformation by introducing an anisotropic material and a membrane model; however, the deformation focuses on local stretching. Another approach, extending ARAP to anisotropic ARAP, was proposed by Colaianni et al.~\cite{colaianni2016anisotropic, colaianni2017anisotropic}: an affine matrix $\mathbf{T}_i$ is introduced into the ARAP formulation to enable directional editing: \begin{equation} E(S')=\sum_{i=1}^{n} \bar{w}_i \sum_{j \in N(i)} w_{ij} {\| (\mathbf{v}'_i-\mathbf{v}'_j) - \mathbf{T}_i \mathbf{R}_i (\mathbf{v}_i-\mathbf{v}_j)\|}^2 \end{equation} Different forms of the matrix $\mathbf{T}_i$ realize anisotropic scaling, anisotropic shearing, or anisotropic rotation. \begin{center} \includegraphics[width=0.9\linewidth]{img/organic/srarap.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.4.~}Comparison of SR-ARAP (smooth rotation ARAP) with other deformation methods~\cite{levi2014smooth}. From left to right: source, PriMo~\cite{botsch2006primo}, ARAP surface~\cite{sorkine2007rigid}, ARAP volume~\cite{chao2010simple}, ARAP volume applied to a tetrahedral stratum, ARAP surface with an additional term for a smooth map differential, and SR-ARAP~\cite{levi2014smooth}.}% \end{center} \begin{center} \includegraphics[width=0.9\linewidth]{img/organic/cubic.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.5.~}ARAP remains in use for deformation up to the present. Cubic stylization~\cite{liu2019cubic} minimizes the ARAP formulation with an $l_1$ regularization to achieve locally isometric deformations while preserving texture attributes.}% \end{center} ARAP deformation methods are useful; however, they can only achieve interactive rates on coarse meshes~\cite{levi2014smooth}. Some research works investigate acceleration techniques for ARAP. Borosan et al.~\cite{borosan2010hybrid} combine surface-based deformation with cage-based deformation to perform hybrid mesh editing: the user deforms a simplified version of the input shape using ARAP surface modeling~\cite{sorkine2007rigid}, and the deformation is then propagated to the original shape by precomputed mean value coordinates~\cite{ju2005mean}. Manson et al.~\cite{manson2011hierarchical} also perform ARAP on simplified meshes and propose a prototype of hierarchical ARAP: they build coarse meshes using edge contraction, and reverse the edge-collapse process to add details back after deformation of the simplified mesh. Following this acceleration strategy, Liu et al.~\cite{liu2019cubic} achieve cubic stylization of models by minimizing the ARAP energy with an $l_1$ regularization. Sun et al.~\cite{sun2018bi} also achieve hierarchical ARAP by constructing a bi-harmonic surface to decompose the mesh. Zollh{\"o}fer et al.~\cite{zollhofer2012gpu} propose a GPU-based multi-resolution ARAP implementation, which accelerates the computation of ARAP and allows even high-quality meshes consisting of millions of triangles to be posed in real time.
Accelerating the optimization of ARAP has also been addressed in various recent works~\cite{kovalsky2016accelerated, peng2018anderson, rabinovich2017scalable, shtengel2017geometric, zhu2018blended}. The ARAP formulation has also been combined with other deformation methods. Zhang et al.~\cite{zhang2010skeleton, zhang2011robust} integrate a skeleton into ARAP surface modeling, effectively extending it to volume modeling. They evenly sample points on the skeleton and connect surface vertices with the sampled points to form skeleton edges, which are considered in the ARAP energy together with the surface edges. In this way, the method avoids volume loss, a common issue for surface-based deformation. Jacobson et al.~\cite{jacobson2012fast} introduce the ARAP energy into the LBS (Linear Blend Skinning) deformation method to reduce the degrees of freedom that require user specification. They further cluster the vertices based on their Euclidean distances in the skinning weight space, and use the same rotation matrix for all vertices in the same cluster. Yang et al.~\cite{yang2018biharmonic} propose to combine the ARAP energy with a data-driven energy in deformation transfer. Their method also clusters vertices; however, their clustering is based on the rotation-augmented weight matrix, which is composed of the weight matrix and the ACAP (as-consistent-as-possible) deformation feature~\cite{gao2019sparse} (see Sec.~\ref{sec:ddmeshdeform} for more details). The resulting clusters are more reasonable than those obtained using the weight matrix alone. In addition to the above combinations, the ARAP energy has also been extended for use in other applications such as parametrization~\cite{liu2008local}, data-driven interpolation~\cite{gao2013data}, shape optimization~\cite{bouaziz2012shape}, shape decomposition~\cite{huang2009shape}, mass-spring simulation~\cite{liu2013fast}, image registration~\cite{levi2012d,sykora2009rigid}, image warping~\cite{fadaifard2013image}, and video stabilization~\cite{wang2013spatially}. \begin{center} \includegraphics[width=0.9\linewidth]{img/organic/Curvature.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.6.~}Editing the shape by keeping the mean curvature while changing the Gaussian curvature~\cite{fang2020metric}. As the control parameter $\lambda$ increases, the details are preserved while the main structure is exaggerated.}% \end{center} \subsubsection{Other Surface Geometry Properties} In addition to the Laplacian-based methods that analyze the local characteristics of the mesh, there are many other geometry-based deformation methods that analyze surface mesh characteristics. For example, curvature is an important attribute of a surface: Crane et al.~\cite{crane2011spin} edit shapes by manipulating the mean curvature and boundary data. The resulting deformation is conformal and introduces little distortion. Fang et al.~\cite{fang2020metric} utilize not only the mean curvature but also the Gaussian curvature to perform editing; an example is shown in Fig.6. They also perform conformal surface deformation, which preserves local texture features. Stretching a mesh may destroy its geometric details. Alhashim et al.~\cite{alhashim2012detail} propose a shape editing method for stretching, which replicates the geometric details along the stretching direction so that they are not distorted.
They first use a base mesh to represent the general shape of the input, then use the curve skeleton extracted by \cite{au2008skeleton} to create a curvilinear grid on the desired stretching region, and project the region onto the grid to form a 2D texture. The user specifies the stretching direction by drawing a 3D curve, and new geometric details are synthesized according to the 2D texture. Liu et al.~\cite{liu2014scale} present a set of scale-invariant measures consisting of triangle angles and edge angles (dihedral angles of the edges) to represent 3D shapes. The representation is unique for a given shape. Moreover, given one edge and the orientation of one of the triangles containing this edge, the mesh shape can be uniquely reconstructed from this representation. The reconstruction is an iterative process that alternately solves for the face normals and the vertex coordinates. An ARAP-like formulation is introduced when updating the normals, and when solving for the vertex coordinates, the constraints obtained from the user's edited handles are added. The editing process preserves local details at different scales. Sparsity has also been widely used in geometry-based mesh deformation. Xu et al.~\cite{xu2015survey} review methods in geometric modeling and processing that use sparsity, with one section discussing shape deformation based on sparsity. Gao et al.~\cite{gao2012p} introduce general $l_p$ norms to shape deformation, and show that different $p$ values influence the distribution of the unavoidable distortions. Deng et al.~\cite{deng2013exploring} explore local modifications of the shape, and propose a mixed $l_2/l_1$ norm regularization which provides more local editing. Different from \cite{gao2012p}, which applies the sparsity penalty to the error function, they impose it on the displacement vectors. In addition to explicit mesh representations, implicit representations such as distance fields or level sets also provide an efficient representation for some editing operations. Museth et al.~\cite{museth2002level} propose a level set method for surface editing. They define a speed function which describes the velocity at each surface point along the surface normal. Different speed functions yield different surface editing operators, such as a cut-and-paste operator for copying, removing, and merging level set models, and a smoothing operator for smoothing the enclosed surface to a predefined curvature value. The method enables easy blending and topological changes of models thanks to the flexibility of the implicit representation. Eyiyurekli and Breen~\cite{eyiyurekli2017detail} also operate on level set representations and aim to prevent the loss of surface details caused by movements in the direction of the surface normal. Inspired by the idea of multi-resolution deformation, they extract geometric details in advance and store them in particles on the surface, then reapply the details once the deformation is completed. \subsection{Semantic Constraints for Man-made Models} \label{sec:semantic} Assuming that the input models are all meshes, the simplest way to edit a 3D model is to change the coordinates of the mesh vertices, but this approach lacks the necessary constraints and rarely produces reasonable results.
Therefore, we prefer high-level editing methods that edit multiple vertices at the same time, such as Free Form Deformation (FFD)~\cite{sederberg1986free,coquillart1990extended}, which we will discuss together with cage-based deformation in Sec.~\ref{sec:cage}. Although this method is simple and straightforward, the user is required to adjust all parameters manually, and structure is only implicitly imposed by using a few low-frequency basis functions. It should be noted that structure is an important consideration in the editing of man-made models; \cite{mitra2014structure} summarizes structure-aware shape processing methods. \begin{center} \includegraphics[width=0.9\linewidth]{img/man-made/non-homogeneous.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.7.~}Non-homogeneous resizing results of stretching a camera, which preserve structural features~\cite{kraevoy2008non}.}% \end{center} \begin{center} \includegraphics[width=1.0\linewidth]{img/man-made/iwires.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.8.~}The pipeline of iWIRES~\cite{gal2009iwires}.}% \end{center} \subsubsection{Local Adaptivity} Early work on editing 3D man-made models sought to maintain the plausibility of the 3D shape when scaling it. For example, Kraevoy et al.~\cite{kraevoy2008non} propose to estimate the ``vulnerability'' of local areas of the shape and deform the shape adaptively. This method prefers axis-aligned stretch when editing, as shown in Fig.7. Xu et al.~\cite{xu2009joint} propose a joint-based shape deformation method, which uses joints to segment a 3D shape into parts while constraining the relative spatial configuration of adjacent parts. The proposed deformation system edits the models under those joint constraints. \subsubsection{Global Relations} It is not enough to consider only local adaptivity, so some methods explore the relationships between different parts or features of the whole model, and use them as constraints to edit the model. Gal et al. propose iWIRES~\cite{gal2009iwires}, which follows an analyze-and-edit paradigm, as shown in Fig.8. Based on the observation that man-made shapes can be abstracted by special 1D line segments and their relationships, they abstract the 3D shape into a set of curves and adopt simple methods to edit shapes while retaining geometric features. Also utilizing 1D curves to represent the model structure, Li et al.~\cite{li2010analysis} extract a curve network and additional attributes as prior information to reconstruct 3D models with detailed and interleaving structures from scanned point clouds. The extracted high-level curves can be used as handles to edit the reconstructed model. Following a similar analyze-and-edit concept, Zheng et al.~\cite{zheng2011component} decompose the model into several meaningful parts, and abstract the parts into simple geometric parametric primitives, named component-wise controllers. During editing, the user manipulates one of the controllers, and the applied change is automatically propagated to the other controllers to maintain the structural relations among them, such as symmetry, coplanarity, and parallelism. The final model is reconstructed with respect to the modified controllers. Such controllers also serve as deformation handles for image-guided shape editing~\cite{xu2011photo}. Zhang et al.~\cite{zhang2019real} segment a complex input mesh into several different primitives by clustering, described by a set of shape parameters and vertex coordinates.
During editing, they add several constraints on these parameters and minimize a target energy function. The optimized parameters are then applied to the corresponding primitives to change the shape of the input mesh. Architectural models such as buildings are also important editing targets. These models are highly structured and often have many repetitive patterns, such as windows. Based on this observation, Bokeloh et al.~\cite{bokeloh2011pattern} first deform the model under user constraints with the as-rigid-as-possible deformation method~\cite{sorkine2007rigid} while maintaining continuous patterns. They find the repeated patterns in advance by sliding dockers, and measure the stretch to determine the insertion or deletion of the discrete repeated patterns after the elastic deformation. Also working with such discrete or continuous regular patterns, they~\cite{bokeloh2012algebraic} further build a novel algebraic model of shape regularity and characterize the shape as a collection of linked translational patterns. For irregular architectural models, Lin et al.~\cite{lin2011structure} propose a resizing method. The user specifies a box hierarchy and the corresponding attributes, such as replicated, scaled, and fixed. The irregular bounding boxes are automatically transformed into a set of disjoint sequences, which are processed in turn: the user-specified operations are performed on the corresponding boxes and their enclosed parts, while the remaining sequences are constrained. Milliez et al.~\cite{milliez2013mutable} decompose the model into different parts, each of which undergoes elastic deformation. They use several alternative rest states for each elastic part, so the deformation energy is computed by considering the set of alternative rest shapes. The method then performs model editing through a jigsaw-puzzle-type local replacement mechanism driven by the user's interactive operations, such as replacement, stretching and shrinking, merging, and cutting. Habbecke and Kobbelt~\cite{habbecke2012linear} linearize the constraints that ensure regional and intuitive control in the editing process, making real-time or interactive editing possible. Texture is an important attribute of a model's appearance, but it is not considered in the above methods. Cabral et al.~\cite{cabral2009structure} propose an editing method for textured models which updates the texture to maintain texture features while the geometry of the model is edited. They use directional autosimilarity, which measures the ability of a texture region to remain similar to itself under slight translation. \section{Proxy-based Deformation} \label{sec:proxy} This section focuses on deformations that are smoothly interpolated along the surface of 3D models, where a proxy is driven to deform the model. Organic shapes have hinge structures, so in addition to directly editing the vertices of the mesh, binding a skeleton to the shape and driving the surface deformation through the skeleton is also a popular research direction. We summarize these as \emph{skeleton-based mesh deformation} methods. There is also extensive research on cage-based deformation methods that utilize enclosing cages as proxies, which are suitable not only for organic shapes but also for man-made models. We summarize these \emph{cage-based deformation} methods in Sec.~\ref{sec:cage}.
\subsection{Skeleton-based Mesh Deformation} \label{sec:skeleton} The skeleton is a shape representation that can describe both the topology and the geometry of a shape~\cite{tagliasacchi20163d}. There are various types of 3D skeletons; we refer the reader to \cite{tagliasacchi20163d} for a thorough survey of the state of the art, while we mainly focus on bone-skeletons used for editing and deformation. \subsubsection{Skeleton-based Skinning} Skeleton-based deformation is most commonly used for the deformation of realistic animated characters. The user first binds a skeleton to the shape, which is termed bind time, and then manipulates the skeleton to deform the shape accordingly, which is termed pose time. Most methods propagate the handle transformations to each surface vertex through a weighted blend of the handle transformations. One of the classical methods that use a skeleton to drive the deformation of a surface mesh is linear blend skinning (LBS), also known as skeleton subspace deformation (SSD)~\cite{magnenat1988joint}. Let $ \Omega \subset \mathbb{R}^{2} $ or $ \mathbb{R}^{3} $ denote the volumetric domain enclosed by the given shape $S$. We denote the handles by $H_j \subset \Omega, j = 1, ..., n_h$. In fact, LBS is not limited to skeleton-based deformation: a handle can be a single point, a region, a skeleton bone, or a vertex of a cage. Here, we focus on skeleton bones; the other cases generalize easily. A transformation matrix $\mathbf{T}_j$ is specified by the user for each handle $H_j$. Then all vertices $\mathbf{v}_i \in \Omega $ are deformed by their weighted blends: \begin{equation} \mathbf{v}_i^{'} = \sum_{j=1}^{n_h} \mathbf{W}_{ij} \mathbf{T}_j \mathbf{v}_i \label{equ:LBS} \end{equation} where $\mathbf{v}_i^{'}$ is the vertex position after deformation, $\mathbf{v}_i$ is the vertex position before deformation, and $\mathbf{W}_{ij}$ is the skinning weight of handle $H_j$ on vertex $i$. The linear blend weights $\mathbf{W}$ are crucial to the deformation. Usually, the LBS weights are determined by manual assignment or derived from dataset analysis, which not only takes a lot of time and effort, but may also produce unnatural deformation results due to a lack of smoothness. Bang et al.~\cite{bang2018spline} propose a spline interface for users to edit skinning weights interactively. Some early works use bone heat~\cite{baran2007automatic} or an improved version, bone glow~\cite{wareham2008bone}, to assign the skinning weights. Jacobson et al.~\cite{jacobson2011bounded} propose bounded biharmonic weights (BBWs), aiming to let users work freely with the most convenient combination of handle types, making deformation design and control easier. The BBWs produce smooth and intuitive deformation results for any topology of control points, skeletons, and cages. They define the weight vector $\mathbf{W}_j$ of the $j$-th handle (consisting of the control weights at all vertices) as the minimizer of a higher-order shape-aware smoothness functional, namely the Laplacian energy: \begin{equation} \mathop{\arg\min}_{\mathbf{W}_j, j = 1, ..., n_h} \frac{1}{2} \int_{\Omega}^{} {\|\Delta {\mathbf{W}_j} \|}^2 dV \label{equ:BBW} \end{equation} subject to: $\mathbf{W}_j|_{H_k} = \delta_{jk}$, $\sum_{j=1}^{n_h} \mathbf{W}_j(\mathbf{v}) = 1$ and $0 \leq \mathbf{W}_j(\mathbf{v}) \leq 1, j = 1,...,n_h, \forall \mathbf{v} \in \Omega$, where $\delta_{jk}$ is the Kronecker delta and $\mathbf{v}$ denotes a point of the domain.
The first constraint makes different control handles independent of each other, while the remaining constraints guarantee that the deformed shape does not scale and that all handles contribute non-negatively to the deformation. To solve the Laplacian energy of Eq.~\ref{equ:BBW} conveniently, \cite{jacobson2011bounded} discretizes it using the standard linear FEM Laplacian $\mathbf{M}^{-1}\mathbf{L}$, where $\mathbf{M}$ is the lumped mass matrix and $\mathbf{L}$ is the symmetric stiffness matrix. After discretizing the continuous integral, we get: \begin{equation} \sum_{j=1}^{n_h} \frac{1}{2} \int_{\Omega}^{} {||\Delta {\mathbf{W}_j} ||}^2 dV \approx \frac{1}{2} \sum_{j=1}^{n_h} {\mathbf{W}_j}^T(\mathbf{L}\mathbf{M}^{-1}\mathbf{L})\mathbf{W}_j \label{equ:discretizing_BBW} \end{equation} Through discretization, the minimization of an integral is converted into a quadratic optimization which is easy to compute. The above constraints are all linear equations or inequalities in $\mathbf{W}_j$. Once the matrices $\mathbf{M}$ and $\mathbf{L}$ of the given shape are known, the only remaining task is to solve a quadratic program under linear constraints. We can observe from Fig.~\ref{fig:bbw} that the BBWs are smooth and local. \setcounter{figure}{8} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{img/organic/bbw.png} \caption{The bounded biharmonic weights~\cite{jacobson2011bounded} are smooth and local.} \label{fig:bbw} \end{figure*} Directly adding constant bounds to a higher-order energy leads to increasing oscillation~\cite{jacobson2012smooth}, so Jacobson et al.~\cite{jacobson2012smooth} minimize quadratic energies while avoiding spurious local extrema to suppress the oscillations. Exploiting datasets to strengthen the BBWs, Yuan et al.~\cite{yuan2019data} use data-driven, ARAP, and sparsity terms to optimize the BBWs. Deformation results using the optimized weights better reflect the deformation behavior of the example shapes in the dataset. The above methods are suitable for manifold meshes. Non-manifold meshes, such as models obtained from 3D modeling software, are often not watertight or have multiple components. One way of computing skinning weights for them is to voxelize the model~\cite{dionne2013geodesic, dionne2014geodesic}: the weights are calculated based on the geodesic distance between each voxel lying on a skeleton ``bone'' and all non-exterior voxels. \cite{dionne2013geodesic, dionne2014geodesic} also allow the user to modify weights interactively while deforming the model, to test the effect of the modification. Still, the calculated weights may be inappropriate in some regions. Eliminating the trouble of assigning weights altogether, Yan et al.~\cite{yan2006skeleton} propose to let the skeleton drive the transformation of mesh simplices (triangles in 2D and tetrahedra in 3D) instead of vertices, without requiring skinning weights. Vertex connectivity information is directly exploited in their method, since simplices include mesh connectivity information. Although LBS is straightforward, easy to implement, and has real-time performance, it can lead to well-known artifacts such as the ``collapsing elbow'' and the ``candy wrapper''. Some methods~\cite{lewis2000pose, mohr2003building, wang2007real, chen2011lattice} have been proposed to address these problems.
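To make the pose-time step concrete, here is a minimal sketch of the blend of Eq.~\ref{equ:LBS} with homogeneous $4\times4$ handle transforms; all names are illustrative, and the closing comment notes the source of the artifacts just mentioned.
\begin{verbatim}
import numpy as np

def linear_blend_skinning(V, W, T):
    """V: (n,3) rest vertices; W: (n,h) skinning weights, rows sum to 1;
    T: (h,4,4) homogeneous handle transformations."""
    n = V.shape[0]
    Vh = np.hstack([V, np.ones((n, 1))])    # homogeneous coordinates
    out = np.zeros((n, 3))
    for j in range(T.shape[0]):
        # v'_i += W_ij * (T_j v_i): transforms blended in Euclidean space
        out += W[:, [j]] * (Vh @ T[j].T)[:, :3]
    # Blending rotation matrices linearly does not stay on SO(3); this is
    # the source of the collapsing-elbow / candy-wrapper artifacts.
    return out
\end{verbatim}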
Rumman and Fratarcangeli~\cite{abu2015position} first transform the surface mesh into a tetrahedral mesh on which LBS is performed, then add stretch, tetrahedral volume, and bind constraints to eliminate the artifacts caused by LBS. The constraints are solved by a parallel position-based dynamics scheme. Performing contextual deformation, Weber et al.~\cite{weber2007context} separate surface detail information from skeleton-driven pose changes and learn the deformation of skin details from example characteristic shapes. The editing results avoid the LBS artifacts at joints such as the elbow. Shi et al.~\cite{shi2008example} also consider detailed motions (formally, secondary deformations) in skeleton-based deformation: they utilize LBS to generate primary deformations and learn the physical behaviors from example sequences. In addition to LBS, there are alternative skinning methods, such as the linear combination of dual quaternions, i.e., dual quaternion skinning (DQS)~\cite{kavan2007skinning,kavan2008geometric}. However, it suffers from more complex vertex processing~\cite{kavan2009automatic}, so Kavan et al.~\cite{kavan2009automatic} improve it by sampling the nonlinear function only at a few key locations (virtual bones), such as joint areas. Other non-linear techniques, such as log-matrix skinning (LMS)~\cite{alexa2002linear, magnenat2004modeling} and spherical blend skinning (SBS)~\cite{kavan2005spherical}, also perform volume-preserving deformation, but suffer from bulges near bent joints~\cite{le2016real}. Kim and Han~\cite{kim2014bulging} propose post-processing operations, such as modifying vertex coordinates and normals, to solve the bulging and distortion problems faced by DQS. Another choice is spline skeletons~\cite{forstmann2006fast, forstmann2007deformation, yang2006curve}: each bone is viewed as a spline, and spline deformation is introduced into skinning animation in place of guidance by transformation matrices. These methods can produce better results but are nonlinear and often fail when encountering large rotations. The differential blending method proposed by {\"O}ztireli et al.~\cite{oztireli2013differential} solves this problem. They use sketching as the interaction tool, and the selected bones deform to match the strokes drawn by the user. Jacobson et al.~\cite{jacobson2012fast} combine ARAP~\cite{sorkine2007rigid} with the original LBS formulation, unlike other methods~\cite{wang2002multi, merry2006animation, jacobson2011stretchable} which change the LBS formulation to other forms. All computations of \cite{jacobson2012fast} except the SVD are linear; when the number of vertex clusters is chosen reasonably, real-time deformation is guaranteed. Also incorporating the ARAP energy into LBS deformation, Thiery and Eisemann~\cite{thiery2018araplbs} propose a method to generate skinning weights given a 3D mesh and a corresponding skeleton. They use a variant of bone heat weights~\cite{baran2007automatic} to initialize the weights, and optimize both the weights and the skeleton joints according to the deformation quality on example shapes. Li et al.~\cite{li2011skeleton} propose an automatic implicit skinning method which binds the surface to the skeleton implicitly. The local surface surrounding a joint is used to parameterize the joint position. The deformation is achieved by a Laplacian deformation energy with volumetric constraints which prevent unnatural collapsing at the joints.
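Since several methods above build on DQS, a compact sketch of its per-vertex blending step may help; this uses the $(w,x,y,z)$ quaternion convention and is an illustrative reading of dual quaternion skinning, not the implementation of \cite{kavan2008geometric}.
\begin{verbatim}
import numpy as np

def qmul(a, b):                                # Hamilton product
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dual_quat(q, t):
    """Rigid motion (unit rotation quaternion q, translation t)
    encoded as a dual quaternion (q_r, q_d) with q_d = 0.5 (0,t) q."""
    return q, 0.5 * qmul(np.array([0.0, *t]), q)

def dqs_vertex(v, weights, dquats):
    """Blend unit dual quaternions linearly, renormalize, apply to v."""
    qr0 = dquats[0][0]
    br = np.zeros(4); bd = np.zeros(4)
    for w, (qr, qd) in zip(weights, dquats):
        s = np.sign(qr @ qr0) or 1.0    # keep quaternions on one hemisphere
        br += w * s * qr; bd += w * s * qd
    norm = np.linalg.norm(br)           # renormalize the blended result
    br, bd = br / norm, bd / norm
    conj = br * np.array([1.0, -1.0, -1.0, -1.0])
    rotated = qmul(qmul(br, np.array([0.0, *v])), conj)[1:]
    trans = 2.0 * qmul(bd, conj)[1:]    # translation t = 2 q_d q_r^*
    return rotated + trans
\end{verbatim}
Because the blend is renormalized back onto unit dual quaternions, the result is always a rigid motion, which is why DQS avoids the candy-wrapper collapse at the cost of the bulging discussed below.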
Kavan and Sorkine~\cite{kavan2012elasticity} aim to produce results visually similar to physical elastic simulations through a skeleton-based skinning method. They propose not only a new way to calculate the skinning weights but also a new skinning method based on specific deformers, which they call joint-based deformers. Le et al.~\cite{le2016real} propose to impose orthogonality constraints to prevent the artifacts near the joints suffered by LBS, DQS, LMS, and SBS, while guaranteeing real-time performance; however, they need a rest pose with skinning weights and bone transformations as input. Artifacts at the joints are often caused by surface self-contact. Physics-based methods can solve the problem of skin collision well and produce visually plausible deformations, but even after heavy optimization~\cite{mcadams2011efficient}, they only come close to real time and cannot achieve fully real-time interactive posing. Vaillant et al.~\cite{vaillant2013implicit} segment the mesh according to the skeleton bones using \cite{baran2007automatic}, then approximate each part with an implicit surface utilizing Hermite Radial Basis Functions (HRBF)~\cite{wendland2004scattered, macedo2011hermite}, and finally merge the different parts by union or other better-performing composition methods. They propose to edit the shape through these field functions together with geometric skinning methods. The rigid transformations are also applied to the field functions during deformation. The mesh vertices move along the gradient of the field function and stop when they reach the original field value or a point where the gradient is discontinuous, so surface contacts can be handled well without collision detection. Based on \cite{vaillant2013implicit}, Vaillant et al.~\cite{vaillant2014robust} further propose a new family of gradient-based composition operators for combining the implicit surfaces, which deals with surface contacts better. They also derive a tangential relaxation scheme from ARAP~\cite{sorkine2007rigid} to track the iso-surface. The deformation results are better than \cite{vaillant2013implicit}, especially for extreme character movements. Teng et al.~\cite{teng2014simulating} apply subspace simulation of articulated deformable shapes to deal with self-contact. They propose a pose-space cubature scheme to resolve collisions without detecting all collision points. Without needing an input skeleton or a prediction of the hierarchical bone structure, James and Twigg~\cite{james2005skinning} use non-parametric mean-shift clustering and least squares to establish proxy bone transformations and vertex weights to edit and animate the shape. Yoshizawa et al.~\cite{yoshizawa2007skeleton} propose to extract a skeletal mesh from a dense mesh model. The skeletal mesh is deformed by FFD~\cite{sederberg1986free}, and the deformation is back-propagated to the dense model using differential coordinates. A hierarchical framework is used to speed up the process. Xie et al.~\cite{xie2015agile} propose a shape editing method for personal fabrication applications, where the user edits the shape through the constructed skeleton. They observe that most edits made by users are local. Based on this fact, they introduce a domain decomposition method that allows the FEM system to re-assemble the sub-matrices only for the local part modified by the user, while the rest remains unchanged, avoiding unnecessary calculations and leading to fast convergence.
Xu et al.~\cite{xu2018stress}, following \cite{xie2015agile}, also use the skeleton to drive the deformation of the model and locally update the FEM system. Furthermore, they introduce multi-grid solvers into the analysis of the stress distribution. For man-made models, they introduce iWIRES~\cite{gal2009iwires} to preserve the characteristic structure of the model. \begin{center} \includegraphics[width=0.9\linewidth]{img/organic/autorig.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.10.~}Example-based rigging results of \cite{le2014robust}.}% \end{center} \subsubsection{Automatic Rigging} In addition to studying how to use a skeleton to drive shape deformation, another research direction is how to bind a skeleton to the shape. This problem is called \textit{rigging}. In the traditional workflow, this process often needs manual specification with the help of professional 3D modeling software. It usually consists of two steps: the first is to specify the joint positions and their connections, and the second is to determine the skinning weights, for which we have mentioned several methods above. Some works~\cite{au2008skeleton, jiang2013curve, wang2012robust, tagliasacchi2012mean, tagliasacchi2009curve, qin2019mass, livny2010automatic, huang2013l1} extract skeletons aiming to discover the shape topology, typically called \textit{curve-skeletons}, while we focus on another type, \textit{bone-skeletons}, which can be directly used for editing. As early work, Baran and Popovi{\'c}~\cite{baran2007automatic} propose an automatic method, called Pinocchio, to generate a skeleton and the skinning weights from a single shape. They fit a pre-defined skeleton template to the input shape, so the method may fail when the shape structure differs from that of the template. Feng et al.~\cite{feng2015avatar} transfer high-quality rigging to an input body scan with the help of the SCAPE model~\cite{anguelov2005scape}; however, they only deal with human body shapes. For multi-component characters, which are easily accessible on the Internet, Bharaj et al.~\cite{bharaj2012automatically} propose a method to automatically bind skeletons to the character models. The method builds a contact graph for the components of the input model and exploits graph clustering to obtain the target skeleton with corresponding skinning weights from the input animation skeleton. The mapping from the input skeleton to the target skeleton of the input model is achieved by a novel mapping scheme based on dynamic programming. Moreover, the quality of a skeleton extracted using information from a dataset is better than one extracted from a single shape. Most works use a set of example poses to extract a hierarchical, rigid skeleton. Schaefer et al.~\cite{schaefer2007example} use clustering to find the rigid bones of the skeleton, and then solve for the vertex skinning weights, which are further used to determine the joint positions and their connections. de Aguiar et al.~\cite{de2008automatic} bridge the gap between mesh animation and skeleton-based animation: they also first perform clustering to extract rigid bone transformations, and then estimate joint motion parameters and appropriate surface skinning weights. Different from the former methods, which extract a skeleton from examples of the same subject, Hasler et al.~\cite{hasler2010learning} estimate a rigid skeleton, including skinning weights, from examples of different subjects.
The skeletons extracted by their method represent either shape variations or pose variations; with the combination of a pose skeleton and a shape skeleton, the user can control them independently. However, Le et al.~\cite{le2014robust} point out that, on the one hand, these data-driven methods use motion-driven clustering, which does not model the skeleton structure well, so specific parameter settings are required; on the other hand, the step-by-step process causes error accumulation. They therefore adapt skinning decomposition~\cite{le2012smooth} and add soft constraints to convert unorganized bone transformations into a hierarchical skeleton structure. They over-estimate the number of skeleton bones during initialization, and exploit an iterative framework to automatically prune redundant bones and update the skinning weights, joint locations, and bone transformations. The rigging results are shown in Fig.10. \subsection{Cage-based Deformation} \label{sec:cage} Cage-based deformation is very similar to skeleton-based deformation; the difference is that the skeleton generally lies inside the model, while the cage is generally wrapped around the outside of the model. In essence, both simplify the structure of the model and provide users with handles to edit it. Free Form Deformation (FFD)~\cite{sederberg1986free} was first proposed for producing digital animation. This technique makes it possible to deform 3D shapes smoothly. Given the lattice vertices $\mathbf{v}_i, i=1,\cdots,n$, we denote the new position of a point $\mathbf{p}$ inside the lattice as $\mathbf{p}'$, and the low-frequency basis functions as $\phi_i$; then we obtain the following formulation: \begin{equation} \mathbf{p}' = \sum_{i=1}^{n} {\phi_i(\mathbf{p}) \mathbf{v}_i} \label{equ:FFD} \end{equation} However, limited by the 3D control lattice, FFD can hardly realize complicated deformations like limb movements, so it is difficult to depict articulated shapes. Cage-based deformation (CBD) is an extension of FFD: the control lattice is replaced by a polyhedral mesh which can better approximate the 3D shape, and the deformation formulation is the same as Eq.~\ref{equ:FFD}. \subsubsection{Cage Prediction} The first step of CBD is cage generation, which can be divided into two kinds, automatic and user-interactive. Automatic methods are typically purely geometric, including mesh simplification~\cite{Cohen1996simplification, Ben2009spatial, Deng2011automatic, Sacht2015nested} and voxelization~\cite{Xian2009automatic, Xian2015efficient}, but they tend to produce imperfect cages or sometimes fail. Interactive methods~\cite{Le2017interactive} allow users to add cage vertices to produce better cages for deformation, but are more time-consuming. Ju et al.~\cite{ju2008reusable} propose a data-driven method that exploits a cage template dataset created by artists for better cage selection in animation. Savoye et al.~\cite{savoye2010cageik} propose a linear cage estimation method for the target shape given the source shape and its corresponding cage, which facilitates cage extraction and animation re-editing. \begin{center} \includegraphics[width=0.9\linewidth]{img/organic/cubicMVC.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.11.~}Deformation results using curved edge networks with cubic mean value coordinates~\cite{li2013cubic}. (a) (b) are source models and (c) (d) are edited results.}% \end{center}
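Whatever coordinates are chosen (the topic of the next part), the pose-time evaluation of Eq.~\ref{equ:FFD} reduces to a single matrix product between the precomputed coordinates and the edited cage vertices; a minimal sketch follows (for Green Coordinates the blend would additionally carry per-face normal terms). All names are illustrative.
\begin{verbatim}
import numpy as np

def deform_with_cage(Phi, cage_vertices):
    """Phi: (n_points, n_cage) precomputed coordinates phi_i(p), with
    rows summing to 1; cage_vertices: (n_cage, 3) edited cage positions.
    Returns the deformed points p' = sum_i phi_i(p) v_i."""
    return Phi @ cage_vertices

# toy usage: two points expressed in a 4-vertex cage
Phi = np.array([[0.25, 0.25, 0.25, 0.25],
                [0.70, 0.10, 0.10, 0.10]])
cage = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
print(deform_with_cage(Phi, cage))
\end{verbatim}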
\subsubsection{Blending Weights Generation} The next step of cage-based deformation is to establish the relationship between the cage and the interior shape. For this purpose, Mean Value Coordinates (MVC) were first introduced by \cite{floater2003mean, floater2005mean} and applied to the deformation of triangular meshes~\cite{ju2005mean}. Hormann et al.~\cite{hormann2006mean} extend MVC to arbitrary polygon meshes. However, these coordinates have a major drawback: they can be negative, which produces unsatisfactory results. To avoid negativity, Joshi et al.~\cite{joshi2007harmonic} propose Harmonic Coordinates, which ensure positive values and produce more local deformations, but their computation is time-consuming. Lipman et al.~\cite{lipman2007gpu} improve MVC to avoid negative values by utilizing GPU visibility rendering. Langer et al.~\cite{langer2006spherical} generalize MVC and vector coordinates~\cite{ju2005geometric} to spherical barycentric coordinates, which are defined for arbitrary polygonal meshes on a sphere. These coordinates can also be integrated into existing space-based deformation frameworks. Later on, Lipman et al.~\cite{lipman2008green} find that the details of the mesh surface are not retained under large-scale deformations. In previous methods like MVC and Harmonic Coordinates, only the cage vertex positions are considered. Therefore, they suggest relating the cage's face normals to the interior vertices, and propose new coordinates called Green Coordinates. Green Coordinates are further extended to the complex domain, making the deformation better fit the user's input~\cite{weber2009complex}. Unlike the original Green Coordinates, which tie the face normals to the vertex positions, the function of the face normals and the function of the vertices in \cite{ben2009variational} are independent, providing more degrees of freedom and a larger deformation space. Yang et al.~\cite{yang2008shape} add global and local stiffness control to lattice-driven shape deformation. The global stiffness is provided by the width of overlapping lattice cells, and the local stiffness is controlled by a stiffness coefficient. The deformation of the lattice is transferred to the embedded shape by bilinear or trilinear interpolation. Manson and Schaefer~\cite{manson2010moving} propose moving least squares coordinates, which suffer the same problem on boundary edges as MVC and Hermite MVC~\cite{dyken2009transfinite} when used for deforming concave shapes. Weber et al.~\cite{weber2012biharmonic} further propose biharmonic coordinates, derived from the solutions to the biharmonic equation. They also present a thickness-preserving deformation method which performs better than the As-Similar-As-Possible (ASAP) and ARAP methods~\cite{igarashi2005rigid}. In the context of transfinite interpolation, Li et al.~\cite{li2013cubic} propose Cubic Mean Value Coordinates (CMV). Cage-based deformation is essentially a family of interpolation approaches, which interpolate the control vertices of the cage, so CMV can also be used for cage-based shape deformation, as shown in Fig.11. They show shape deformations under the control of cage networks consisting of straight and curved edges. Most barycentric coordinates are global; that is, each vertex of the deformed model is determined by a weighted sum of \textit{all} vertices of the cage, which can cause counter-intuitive deformations and a loss of control over local variations.
On the one hand, even for a moderate number of cage vertices (50-100), the computation is time-consuming and may not reach real-time rates. On the other hand, since the coordinates are decreasing functions of distance, such as Euclidean distance~\cite{ju2005mean} or geodesic distance~\cite{joshi2007harmonic}, some cage vertices may have little influence on a given vertex of the deformed mesh. Reducing the number of weights is therefore both necessary and feasible. Based on these observations, Landreneau et al.~\cite{landreneau2010poisson} propose a Poisson-based weight reduction method which can reduce the number of weights (that is, control points) affecting a single vertex to a user-specified number, while keeping the deformation results the same. They require a certain number (typically 4-6) of example poses in the optimization to achieve better results, and the energy to be minimized is obtained from a Poisson equation, solved for the weights via Lagrange multipliers. Their method is also applicable to other deformation methods that require weights, such as skeleton-based deformation. However, imposing the sparsity constraint may yield a non-optimal solution, which leads to non-smooth or even poorly approximated results; and sometimes there are exceptional vertices which are affected by more bones or control points than the preset threshold. Therefore, Le et al.~\cite{le2013two} propose a two-layer blend skinning model that performs lossy weight matrix compression to avoid imposing sparsity constraints. They add virtual bones as an intermediary between the original bones and the vertices: they first blend the transformations of the original bones to obtain the transformations of the virtual bones, and then blend up to two virtual bones to obtain the transformation of each vertex. Although their paper mainly deals with skeleton-based deformation, their method could also be used in cage-based deformation when combined with an objective function similar to the one in \cite{landreneau2010poisson}. Also aiming to enhance the locality of the deformation, Zhang et al.~\cite{zhang2014local} propose Local Barycentric Coordinates (LBC) for better local deformation. They introduce total variation (TV), originally used in image smoothing and reconstruction~\cite{chambolle2010introduction}, and minimize it under the constraints of partition of unity, reproduction, and non-negativity. Deformation using LBC realizes multi-scale, high-quality editing without further manual specification. However, LBC have no closed-form expression and require solving a time-consuming optimization problem for dense mesh models, as pointed out by \cite{tao2019fast}, who propose a new efficient solver for the LBC optimization. Some works go beyond a single cage or lattice. Instead of using a polyhedral mesh cage or control lattice, Botsch et al.~\cite{botsch2007adaptive} propose to use small voxels to enclose the model for space-based deformation. They define a nonlinear elastic energy which supports large-scale deformations; however, the discretization may cause aliasing problems. Li et al.~\cite{li2010cage} propose a method to directly interpolate points on the mesh without constructing a whole cage; instead, they interactively build an umbrella-like cell only on the part of the mesh the user is interested in.
Considering that Green Coordinates cannot ensure conformal deformations with an open cage or umbrella-like cell, they also take the local deformation differences of the cage into account. Replacing the cage with Interior Radial Basis Function (IRBF) center points, Levi et al.~\cite{levi2013shape} improve cage-based deformation methods based on the VHM method~\cite{ben2009variational}: the harmonic basis functions are replaced by IRBFs, which are defined with respect to centers on the surface of the model. They also place a set of spheres inside the model to minimize local distortions by preserving the shapes of the spheres. Aiming for multi-level detail and high-quality deformations, Garcia et al.~\cite{garcia2013cages} propose a cage-based deformation method based on star-cages instead of the single cage used by traditional methods. A star-cage consists of multiple cages that offer easier interaction than a single cage. Based on sphere-meshes, a new representation that can approximate shapes, Thiery et al.~\cite{Thiery2013sphere} use the sphere-mesh hierarchy as a deformation handle to deform shapes well. They~\cite{Thiery2016animated} further apply this representation to the approximation of animated mesh sequences, where the skinning weights obtained by skinning decomposition can guide pose editing well. In traditional FFD or cage-based deformation, once the lattice or cage is determined, the user can only deform the shape with the existing handles, unless the relation is re-parameterized. Zhang et al.~\cite{zhang2020proxy} propose a control lattice with adjustable topology, which does not need to re-parameterize the relation between the lattice and the enclosed shape after the lattice is changed. This method uses a tailored T-spline volume to create the lattice, and further uses a refinement algorithm to obtain a proxy, which is a simplified version of the lattice that fits the enclosed shape better. The user manipulates the proxy, driving the deformation of the lattice, and the deformation is then transferred to the shape. The proxy has fewer vertices than the lattice, which makes it more convenient to manipulate. However, the method is essentially based on a volumetric lattice, which is not as flexible as cage-based deformation for large-scale deformations. \section{Data-based Deformation with Numerical Model} \label{sec:dataset} With the development of 3D scanning and registration techniques, geometric shape datasets~\cite{anguelov2005scape, Bogo2014faust} are becoming more and more available on the Internet. Analyzing the existing shapes in a dataset to provide prior information for deformation has become an attractive direction; we summarize these as \emph{data-driven mesh deformation} methods. Structural and semantic knowledge of man-made models can also be obtained by analyzing multiple models; we summarize these as \emph{data-driven analysis for man-made models} methods. \subsection{Data-driven Mesh Deformation} \label{sec:ddmeshdeform} The aforementioned geometry-based methods have some essential weaknesses. On the one hand, they are prone to producing unreasonable deformation results when the user's interaction is insufficient. On the other hand, they have high requirements on the meshes, and different models tend to require different parameter settings.
To address these limitations, data-driven methods exploit plausible deformations from shape datasets and can produce more natural deformation results without manual parameter selection or a large number of user constraints. An important pioneering work is Mesh-based Inverse Kinematics (MeshIK)~\cite{sumner2005mesh}, based on which a series of works has been proposed to improve or extend the method. We group these methods according to the deformation representations of the shapes. Another active research direction is not limited to editing the model pose, but also considers the shape. \subsubsection{Blend Mesh Representation} In the data-driven deformation of mesh models, the deformation representation is important. Euclidean coordinates are the most straightforward way to represent a model, but they have obvious limitations in handling rotations. \textbf{Gradient-based representation.} The deformation gradient is a straightforward gradient-based representation, defined as the affine transformation that optimally describes the mapping of local neighborhoods (triangles or one-ring neighbors) from the source mesh to the target mesh. Sumner and Popovi{\'c}~\cite{sumner2004deformation} use deformation gradients to transfer deformations between two mesh shapes. Further, Sumner et al.~\cite{sumner2005mesh} propose MeshIK, a method based on principal component analysis (PCA) that analyzes the shape dataset and uses weighted combinations of deformation gradients to edit shapes. MeshIK produces stylized surface deformations in analogy to traditional skeleton-based inverse kinematics for posing skeletons, hence the name. Each example shape is represented using a feature vector containing the deformation gradients of the triangles, describing the deformation from a reference model. The deformation gradient has the good property of being a linear function of the mesh vertices. They further decompose the deformation gradient $\mathbf{T}_{ij}$ of the $j$-th triangle in the $i$-th shape into rotation and scaling/shear components using the polar factorization $\mathbf{T}_{ij}=\mathbf{R}_{ij} \mathbf{S}_{ij}$. Rotations do not combine linearly, so to interpolate them linearly one can map them from the rotation group $\mathbf{SO}(3)$ to the Lie algebra $so(3)$ of skew-symmetric $3 \times 3$ matrices~\cite{murray1994mathematical}. The mapping uses the matrix logarithm and can be reversed by the matrix exponential~\cite{murray1994mathematical}. Then the nonlinear span of the deformation gradient for the $j$-th triangle, given $m$ example meshes, has the following formulation: \begin{equation} \mathbf{T}_j(\mathbf{w})=\exp(\sum_{i=1}^{m} {w_i \log(\mathbf{R}_{ij})} ) \sum_{i=1}^{m} {w_i \mathbf{S}_{ij}}. \end{equation} This constitutes the nonlinear feature space, where $w_i$ is the combination weight for the deformation gradient from the $i$-th shape. As shown in Fig.12, given different example models, the editing produces different results. Der et al.~\cite{der2006inverse} propose a reduced model for inverse kinematics which is faster than MeshIK~\cite{sumner2005mesh}. They cluster the vertices according to the influence of the control parameters, and replace each cluster of vertices with a proxy vertex located at the weighted centroid of the cluster. The method takes advantage of the reduced complexity of the deformation proxies, rather than depending on the geometric complexity, to interactively edit even extremely detailed geometry models.
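A minimal sketch of evaluating this nonlinear span for a single triangle, using SciPy's polar decomposition and matrix log/exp; this is an illustrative rendering of the formula above, not the MeshIK implementation.
\begin{verbatim}
import numpy as np
from scipy.linalg import polar, logm, expm

def blend_gradients(T_examples, w):
    """T_examples: list of m (3,3) deformation gradients of one triangle;
    w: (m,) blending weights. Returns the blended (3,3) gradient."""
    log_R = np.zeros((3, 3))
    S_blend = np.zeros((3, 3))
    for wi, T in zip(w, T_examples):
        R, S = polar(T)                  # polar factorization T = R S
        log_R += wi * np.real(logm(R))   # blend rotations in so(3)
        S_blend += wi * S                # blend scale/shear linearly
    # note: logm is ambiguous for rotations near 180 degrees, the very
    # issue addressed by the ACAP features discussed below
    return expm(log_R) @ S_blend
\end{verbatim}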
Wampler~\cite{wampler2016fast} exploits the ARAP energy~\cite{sorkine2007rigid} for interpolation between a set of example shapes. The method allows spatially localized interpolation with more natural transitions. However, it suffers from potentially non-local editing, and from having to solve a complicated system of equations when a large number of examples are given. \begin{center} \includegraphics[width=0.9\linewidth]{img/organic/meshik.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.12.~}Given different examples, MeshIK~\cite{sumner2005mesh} produces different deformation results.}% \end{center} MeshIK cannot deal with large-scale deformations where rotations are larger than $180^{\circ}$. Gao et al.~\cite{gao2019sparse} propose a shape editing method based on ACAP (as-consistent-as-possible) deformation features to address this problem. The rotation at each vertex can be represented in an axis-angle form; however, the direction of the axis (one of two opposite directions) and the rotation angle (up to added multiples of $2\pi$) are ambiguous. They propose an integer optimization strategy to eliminate the ambiguities, so the proposed feature can express rotations greater than $180^{\circ}$. The method further introduces sparsity constraints into model editing, utilizing prior information from the model dataset to automatically select a small number of basis deformations. It also supports multi-scale editing with high efficiency, as shown in Fig.13. \begin{center} \centering \includegraphics[width=0.9\linewidth]{img/organic/acap_multi.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.13.~}Using ACAP features~\cite{gao2019sparse} along with sparsity constraints enables multi-scale editing. (a) is the reference model. (b) is the deformation result on the simplified mesh. (c)-(e) are the deformed results on the high-resolution mesh with both facial and body deformation. The method automatically selects suitable basis modes for both small-scale facial expression editing and large-scale pose editing.}% \end{center} \textbf{Rotation-invariant representations.} Another direction of research to tackle rotation ambiguities is to develop rotation-invariant representations. Lipman et al.~\cite{lipman2005linear} locally define linear rotation-invariant (LRI) coordinates at each vertex, which consist of two discrete forms. The discrete form coefficients are independent of the orientation and can be used to represent the mesh, facilitating detail-preserving surface editing and shape interpolation. Changing the definition domain from one-ring vertex neighborhoods to mesh patches, Baran et al.~\cite{baran2009semantic} propose patch-based rotation-invariant coordinates, which solve the noise sensitivity problem of the original LRI coordinates~\cite{lipman2005linear} and accelerate shape reconstruction. They use patch-based LRI coordinates to project the shape into the shape space and transfer semantic deformations to the target shape. The patch-based LRI representation is further used in data-driven shape interpolation and morphing~\cite{gao2017data}, which provides an interface for users to intuitively edit the morphing results. Kircher and Garland~\cite{kircher2008free} propose a differential rotation-invariant surface representation for surface editing, whose second-order differences are both rotation- and translation-invariant. Editing can operate in both time and space.
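Several of the rotation-invariant representations discussed next are built from per-edge quantities; a minimal sketch of extracting edge lengths and dihedral angles from a triangle mesh follows (orientation and sign conventions vary across the cited papers; names are illustrative).
\begin{verbatim}
import numpy as np

def edge_length_dihedral(V, F):
    """V: (n,3) vertices; F: (m,3) triangles. Returns dicts mapping each
    edge (i,j), i<j, to its length and (interior edges) dihedral angle."""
    face_normals = {}
    edge_faces = {}
    for f, (a, b, c) in enumerate(F):
        nrm = np.cross(V[b] - V[a], V[c] - V[a])
        face_normals[f] = nrm / np.linalg.norm(nrm)
        for i, j in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault((min(i, j), max(i, j)), []).append(f)
    lengths, dihedrals = {}, {}
    for (i, j), faces in edge_faces.items():
        lengths[(i, j)] = np.linalg.norm(V[j] - V[i])
        if len(faces) == 2:                       # interior edge
            cosang = np.clip(face_normals[faces[0]] @ face_normals[faces[1]],
                             -1.0, 1.0)
            # angle between the two faces; pi for a locally flat surface
            dihedrals[(i, j)] = np.pi - np.arccos(cosang)
    return lengths, dihedrals
\end{verbatim}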
Winkler et al.~\cite{winkler2010multi} use edge lengths and dihedral angles as a representation for multi-scale shape interpolation. Their method supports more than two input shapes. Further, Fr{\"o}hlich and Botsch~\cite{frohlich2011example} propose to use edge lengths and dihedral angles to represent shape deformation. However, since edge lengths cannot be negative, the method cannot handle extrapolated deformations well. Gao et al.~\cite{gao2016efficient} propose a data-driven shape editing method based on a novel rotation-invariant representation named RIMD (Rotation Invariant Mesh Difference). They decompose the deformation gradient into rotation and scaling/shear matrices, and combine the logarithm of the rotation difference of each edge with the scaling/shear matrix of each vertex to represent the shape. As shown in Fig.14, the rotation difference cancels out global rotations, making the representation rotation-invariant and thus able to handle large-scale deformations. However, when applied to data-driven deformation, it uses global principal components extracted from example shapes, so it is difficult to perform local editing. Moreover, the derivatives are computed numerically, which limits editing efficiency.
\begin{center} \includegraphics[width=0.9\linewidth]{img/organic/rimd_en.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.14.~}RIMD features can handle large-scale deformations~\cite{gao2016efficient}.}% \end{center}
Generalizing to deforming mesh sequences, Xu et al.~\cite{xu2007gradient} propose a keyframe-based mesh editing method. Once constraints are specified by users or induced from the environment, the frames with those constraints become keyframes, and the constraints and deformations are propagated to the whole mesh sequence. Instead of directly editing the input representation of the model, Sumner et al.~\cite{sumner2007embedded} propose to embed the model into a deformation graph built by uniformly sampling the model surface. Graph node $j$ is associated with an affine transformation $\mathbf{R}_j$ and a translation vector $\mathbf{t}_j$, which map a point $\mathbf{p}$ to a new position by $\mathbf{p}'= \mathbf{R}_j (\mathbf{p}-\mathbf{g}_j) + \mathbf{g}_j + \mathbf{t}_j$, where $\mathbf{g}_j$ is the position of the graph node. Assuming there are $m$ graph nodes, the final deformed position $\mathbf{v}'$ of a model vertex $\mathbf{v}$ is determined by the weighted sum of all influences:
\begin{equation}
\mathbf{v}' = \sum_{j=1}^{m} {w_j(\mathbf{v}) \left[\mathbf{R}_j(\mathbf{v}-\mathbf{g}_j)+ \mathbf{g}_j + \mathbf{t}_j\right]}.
\end{equation}
In addition to editing mesh models, this method can also perform particle simulation, but the disadvantage is that local details cannot be edited.
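As an illustration of the embedded deformation graph, the following minimal Python sketch (ours; the node data and weights are placeholder inputs) evaluates the weighted sum of per-node affine maps for a single vertex.
\begin{verbatim}
# Each graph node j carries (R_j, g_j, t_j); a vertex v is deformed
# as the weighted sum of the nodes' affine influences.
import numpy as np

def deform_vertex(v, nodes, weights):
    """nodes: list of (R, g, t); weights sum to one."""
    v_new = np.zeros(3)
    for (R, g, t), w in zip(nodes, weights):
        v_new += w * (R @ (v - g) + g + t)
    return v_new

# Toy usage: one node rotates about the origin, the other translates.
Rz90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
nodes = [(Rz90, np.zeros(3), np.zeros(3)),
         (np.eye(3), np.array([1., 0., 0.]), np.array([0., 0., 1.]))]
print(deform_vertex(np.array([0.5, 0.0, 0.0]), nodes, [0.5, 0.5]))
\end{verbatim}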
\textbf{Deformation components.} Given a dataset, we can extract the deformation components of the shapes and manipulate these bases to edit a shape. Early work~\cite{alexa2000representing} employs principal component analysis (PCA) to extract the deformation components, but the extracted components are global and therefore not convenient for users to manipulate directly. Hence, in combination with sparsity, a series of works proposes the extraction of sparse deformation components. As the first such work, Neumann et al.~\cite{neumann2013sparse} propose to decompose animated mesh sequences into sparse localized deformation components (SPLOCS). Those components are spatially localized bases which capture semantic deformations. The user can edit the shape by manipulating those components. However, they operate on vertex coordinates, which are translation- and rotation-sensitive and thus cannot handle large rotations. Huang et al.~\cite{huang2014sparse} use the deformation gradient to represent the shape, decompose the deformation gradient into rotation and scale by polar decomposition, and finally apply SPLOCS to those vector representations. However, this method still cannot handle rotations larger than $180^{\circ}$. Bernard et al.~\cite{bernard2016linear} also aim to find locally supported deformation components from the example shapes in the dataset. They use matrix factorization with sparsity and graph-based regularization terms accounting for smoothness to automatically select the position and size of the local support components. Adopting a rotation-invariant representation, Wang et al.~\cite{wang2017articulated} extend~\cite{neumann2013sparse} using the shape representation of edge lengths and dihedral angles. However, the problem remains that extrapolation may fail because edge lengths cannot be negative, and the insensitivity to scale leads to a lack of robustness to noise. The edge-length and dihedral-angle representation is also used in \cite{liu2019discrete}, which analyzes the edge-length vectors and the dihedral-angle vectors separately to extract adaptive sparse deformation components. Then, by adapting \cite{frohlich2011example}, the method allows users to directly edit vertices and produces deformation results under the guidance of the components. Based on Nonlinear Rotation-Invariant Coordinates (NRIC)~\cite{wang2012linear, sassen2020geometric}, Sassen et al.~\cite{sassen2020nonlinear} combine the advantages of principal geodesic analysis~\cite{heeren2018principal} and SPLOCS~\cite{neumann2013sparse} and propose Sparse Principal Geodesic Analysis (SPGA) on the Riemannian manifold of discrete shells.
\subsubsection{Blend Shape and Pose}
This series of methods models the human body through several parameters (often related to the shape and pose of the body), and the body can be edited through different parameter inputs. One of the pioneering and most successful works, SCAPE~\cite{anguelov2005scape}, uses the deformations of the triangular faces to represent body shape and pose separately. The follow-up work, the Skinned Multi-Person Linear model (SMPL)~\cite{loper2015smpl}, decomposes body shape into identity-dependent shape and non-rigid pose-dependent shape with a vertex-based skinning approach, such as LBS or DQBS. Given shape parameters $\mathbf{\beta} \in \mathbb{R}^{\|\mathbf{\beta}\|}$ and pose parameters $\mathbf{\theta} \in \mathbb{R}^{\|\mathbf{\theta}\|}$, they propose to represent the neutral mesh $T(\beta,\theta)$ by adding a blend shape function $B_S(\mathbf{\beta})$, which sculpts the subject identity, and a pose-dependent blend shape function $B_P(\mathbf{\theta})$ to a mean mesh template $\mathbf{\bar{T}}$,
\begin{equation}
T(\beta,\theta)=\mathbf{\bar{T}}+B_S(\mathbf{\beta})+B_P(\mathbf{\theta}).
\label{equ:smpl}
\end{equation}
The neutral pose is then deformed by a blend skinning method,
\begin{equation}
M(\beta,\theta)=W(T(\beta,\theta),J(\beta),\theta,\mathbf{W}),
\end{equation}
where $W(\cdot)$ represents a standard blend skinning function, $\mathbf{W}$ are the skinning weights, and $J(\beta)$ is a function that determines the joint locations, transforming rest vertices into rest joint locations.
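As an illustration of the additive blend-shape composition in Eq.~\ref{equ:smpl}, the following minimal Python sketch (ours) evaluates the rest mesh from hypothetical basis matrices; note that in SMPL the pose blend shape is driven by rotation-matrix features derived from $\theta$, which we abstract here as a generic feature vector.
\begin{verbatim}
# T(beta, theta) = T_bar + B_S(beta) + B_P(theta): the rest mesh is
# the template plus linear identity and pose blend shapes.
import numpy as np

def rest_mesh(T_bar, S_basis, P_basis, beta, theta_feat):
    """T_bar: (N,3); S_basis: (N,3,|beta|); P_basis: (N,3,|theta|)."""
    B_S = np.einsum('nik,k->ni', S_basis, beta)        # identity offsets
    B_P = np.einsum('nik,k->ni', P_basis, theta_feat)  # pose offsets
    return T_bar + B_S + B_P

rng = np.random.default_rng(1)
T = rest_mesh(rng.standard_normal((4, 3)),
              rng.standard_normal((4, 3, 10)),
              rng.standard_normal((4, 3, 9)),
              rng.standard_normal(10), rng.standard_normal(9))
print(T.shape)  # (4, 3)
\end{verbatim}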
Although SMPL can model the human body well, it lacks modeling of the non-rigid dynamic deformations caused by body motions. To model them, the Dyna model~\cite{pons2015dyna} proposes a second-order auto-regressive model which predicts soft-tissue deformations. Specifically, it represents the non-rigid deformation of a body, $\hat{\mathbf{T}}(\beta,\delta)$, by the combination of identity and soft-tissue deformations,
\begin{equation}
\hat{\mathbf{T}}(\beta,\delta)=\mathbf{S}(\beta)+\mathbf{D}(\delta).
\end{equation}
Further, different from SMPL, Dyna follows an idea similar to SCAPE~\cite{anguelov2005scape}, which describes different human bodies by triangle deformations. Given the edge $\hat{\mathbf{e}}_{i}$ of triangle $i$ in the template mesh, the edge $\mathbf{e}_i$ of triangle $i$ of the mesh at time $t$ can be represented as
\begin{align}
\mathbf{e}_i(\beta,\theta_t,\delta_t)&=\mathbf{R}_i(\theta_t)\hat{\mathbf{T}}_i(\beta,\delta)\mathbf{Q}_i(\theta_t)\hat{\mathbf{e}}_{i} \\ &=\mathbf{R}_i(\theta_t)(\mathbf{S}_i(\beta)+\mathbf{D}_i(\delta_t))\mathbf{Q}_i(\theta_t)\hat{\mathbf{e}}_{i}
\end{align}
where $\beta$ and $\theta$ are again the body shape coefficients and body pose parameters, respectively; $\mathbf{Q}_i(\theta_t)$ represents pose-dependent deformations, which are a linear function of $\theta$; $\mathbf{S}_i(\beta)$ represents identity-dependent transformations, which are a linear function of $\beta$; $\mathbf{R}_i(\theta_t)$ represents absolute rigid rotations; and $\mathbf{D}_i(\delta_t)$ represents dynamics-dependent deformations, which are a linear function of the coefficients $\delta_t$. Dynamic deformations are related to body motion, i.e., velocities and accelerations. Hence, the angular velocity and acceleration $(\dot{\theta}_t,\ddot{\theta}_t)$ of the body joints and the velocity and acceleration $(v_t,a_t)$ of the root of the body at time $t$ are also inputs of the model. Letting $\hat{\delta}_{t-1}$ and $\hat{\delta}_{t-2}$ be the coefficients representing the history of estimated low-dimensional dynamic deformations, the dynamic control vector of the Dyna model is $\mathbf{x}_t=\{\dot{\theta}_t,\ddot{\theta}_t,v_t,a_t,\hat{\delta}_{t-1},\hat{\delta}_{t-2}\}$ in total. Dynamic deformations also depend on body shape, i.e., the shape identity coefficients $\beta$. The dynamics-dependent deformations $\mathbf{D}_i(\delta_t)$ can thus be further specified as $\mathbf{D}_i(f(\mathbf{x}_t,\beta))$, where $f$ is a learned function that maps the dynamic control vector $\mathbf{x}_t$ and the shape coefficients $\beta$ to the low-dimensional representation $\delta_t$ of the dynamics. SMPL can also be extended to model these dynamic deformations by adding a dynamic blend shape function $B_D(\mathbf{x}_t,\beta)$ to Eq.~\ref{equ:smpl},
\begin{equation}
T(\beta,\theta_t,\mathbf{x}_t)=\mathbf{\bar{T}}+B_S(\mathbf{\beta})+B_P(\mathbf{\theta_t})+B_D(\mathbf{x}_t,\beta),
\end{equation}
where $B_D(\mathbf{x}_t,\beta)$ also predicts vertex offsets. This model is named Dynamic SMPL, abbreviated DMPL.
\begin{center} \includegraphics[width=0.9\linewidth]{img/man-made/co-constrained.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.15.~}\cite{yumer2014co} supports interactive editing through not only abstract handles but also sketches. (a) Input models. (b) User-prescribed deformation (top: translation; bottom: silhouette sketching).
(c) Constraints resolved by the system. (d) Final models.}% \end{center}
\subsection{Data-driven Analysis for Man-made Models} \label{sec:ddanalysis}
Data-driven analysis for man-made model editing learns prior information from a dataset of closely related models, e.g., models belonging to the same category or having the same style. The prior information provides plausible variations of the models and adds constraints to user editing, which ensures reasonable results. \cite{xu2016data} reviews methods for data-driven analysis and processing.
\subsubsection{Interactive Editing}
Fish et al.~\cite{fish2014meta} propose the meta-representation to capture the essence of a dataset of 3D man-made models. The representation is formulated from the correspondences between segmented model parts and encodes the arrangement rules of the parts. It can thus be viewed as a constraint guiding user editing, where models maintain their familial traits, and it enables coupled editing, where several shapes are collectively deformed by directly manipulating the distributions in the meta-representation. Yumer et al.~\cite{yumer2014co} abstract co-constrained handles for model editing. The handles are obtained from the different segmented parts through a co-abstraction method~\cite{yumer2012co}. The co-constraints are generated by clustering the different planes of the abstracted parts. This method supports interactive editing through not only abstract handles but also sketches, as shown in Fig.15. Based on this work, Yumer et al.~\cite{yumer2015semantic} further propose a semantic editing method for 3D models, where users edit 3D models through semantic attributes. This method establishes a continuous mapping between semantic attributes and model geometry through the relative scores of attributes and the geometric differences between models. Although the deformation is continuous, this method cannot add or remove parts of the model. The above methods all use datasets of certain categories to learn deformation constraints for shape editing. They take advantage of the information in the shape dataset, and the pairwise parameter constraints work well during shape editing. However, their parameter pairs are of the same kind; constraints between pairs of different kinds of parameters are not considered. Addressing this, \cite{fu2016structure} uses multivariate regression to learn the correlations between parameters. The proposed method can perform both structure-preserving and structure-varying shape editing. Laga et al.~\cite{laga2017modeling} analyze the pairwise co-variation of the geometry and structure of model parts. After the user edits one part of the model, the method automatically finds a suitable configuration for the entire model to ensure the plausibility of the edited result.
\begin{center} \centering \includegraphics[width=0.9\linewidth]{img/man-made/human.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.16.~}The model changes according to the change of the skeleton~\cite{zheng2015ergonomics}.}% \end{center}
\subsubsection{Editing for Other Purposes}
From a relatively novel perspective, Zheng et al.~\cite{zheng2015ergonomics} aim to change the model to fit an input human body.
As shown in Fig.16, the input is a model with semantic labels, and a spatial relationship graph is used to represent the model, where graph nodes represent model components and graph edges represent the spatial relationships of the components. They first establish contact constraints between the body skeleton and the model (such as buttocks and chair seat). The deformation is then cast as an optimization process, and an edit propagation algorithm deforms the model according to these constraints while maintaining the model structure. Model editing can also serve other applications. For example, Ovsjanikov et al.~\cite{ovsjanikov2011exploration} explore a shape dataset through deformations of a template shape, which abstracts the shape structure using several boxes. Ishimtsev et al.~\cite{ishimtsev2020cad} propose a data-driven mesh deformation method, named CAD-Deform, to fit retrieved synthetic CAD models to real 3D scans. The deformation energy ensures smooth deformation while keeping sharp features, and it includes a part-to-part mapping term and a nearest-neighbor mapping term. The former matches the deformed mesh and the target scan globally, while the latter makes them match more accurately once they are close enough.
\section{Neural-based Editing} \label{sec:neural}
In this section, we review recent deformation methods based on deep learning. The combination with deep learning brings new opportunities and challenges to both organic and man-made shape editing methods.
\setcounter{figure}{16} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{img/organic/rignet.png} \caption{The network architecture of RigNet~\cite{RigNet}.} \label{fig:rignet} \end{figure*}
\subsection{Organic Shape Editing}
With the availability of large human body datasets~\cite{pons2015dyna, AMASS:ICCV:2019}, deep neural networks have also been introduced into the editing of organic shapes.
\subsubsection{Editing via Learning Mesh Deformation}
Tan et al.~\cite{tan2018variational} first propose to use a variational autoencoder (VAE) to encode shape deformations. They use the RIMD deformation feature~\cite{gao2016efficient} as input and can generate different poses after learning the existing deformations in the dataset. The latent space can be used for shape exploration, which guides the user to find the specific shapes they want. However, the network is composed entirely of fully connected layers, which has a high memory footprint and thus cannot handle dense mesh models. To solve this, graph-based convolutions~\cite{defferrard2016convolutional} and mesh pooling~\cite{yuan2020mesh} have been introduced. At the same time, they~\cite{tan2018mesh} propose a convolutional mesh autoencoder utilizing the locality of the convolution operator and sparsity constraints to extract the local deformation components of deformable shapes. The deformation components can be used to synthesize new shapes. Also extracting deformation components, Yang et al.~\cite{yang2020multiscale} propose to use multi-level VAEs, which achieve better results. Qiao et al.~\cite{qiao2018learning} propose a bidirectional LSTM built from graph convolutions to generate mesh sequences. As an application of shape deformation and editing, deformation transfer applies the user's edits of one shape to another shape.
Traditional deformation transfer~\cite{sumner2004deformation, baran2009semantic, Ben2009spatial} requires manually specifying several correspondences between source and target shapes. Although Yang et al.~\cite{yang2018biharmonic} propose a method for automatically selecting appropriate key points to transfer the deformation, some candidate points still need to be specified manually. Hence, Gao et al.~\cite{gao2018automatic} propose the first fully automatic shape deformation transfer method. They train SimNet to determine the similarity of two poses of the source and target shapes. The proposed VC-GAN combines MeshVAE and CycleGAN~\cite{zhu2017unpaired} to map between the latent vectors of the two input shapes, enabling deformation transfer. Wang et al.~\cite{wang2020neural} represent 3D human meshes by a series of parameters, including shape, pose, and vertex order, to perform deformation transfer. The method first encodes these parameters of the source shape with a permutation-invariant encoder to extract a pose feature, and then uses a style-transfer decoder, conditioned on the target identity mesh, to generate the target shape in the source pose.
\subsubsection{Performing Mesh Deformation}
Bailey et al.~\cite{bailey2020fast} propose a convolutional neural network for approximating facial deformations which can handle high-frequency deformations. The method separates the process into three parts: a coarse approximation, a refined approximation, and an approximation of the rigid components of the mesh. The coarse and refined approximations are computed by two independent convolutional networks that take rig parameters as input. Segments that only undergo rigid rotations and translations are handled by a faster rigid approximation rather than convolutional networks, improving efficiency. The method also proposes a feed-forward neural network that outputs rig parameters given user-specified control points, enabling inverse kinematics. Skinning has also seen neural methods for deformation, skeleton, and weight prediction. As the first work to introduce neural networks into character deformation, Bailey et al.~\cite{bailey2018fast} split the skinning deformation into a linear and a nonlinear portion. The linear portion is approximated by a linear skinning method, while the nonlinear portion is approximated by a neural network consisting of two fully connected layers. Based on the similar idea of decomposing the deformation into linear and nonlinear parts, Li et al.~\cite{li2020densegats} propose a graph-attention-based network that predicts the nonlinear effects from input mesh graphs and linear deformations, while the linear part is computed with LBS. Liu et al.~\cite{neuroskinning2019} also propose a neural skinning method utilizing graph convolutions. They first construct a graph from the input 3D mesh and its associated skeleton hierarchy. Each graph node encodes mesh-skeleton attributes. The graph and node attributes are fed into their graph convolution network to predict the skinning weights. At almost the same time, Xu et al.~\cite{AnimSkelVolNet} propose to convert an input 3D shape into a set of geometric representations expressed in a volumetric grid. The input representation is processed through a stack of 3D hourglass modules. Each module outputs joint and bone probabilities in the volumetric grid, which are progressively refined by the following module. The final joint and bone probabilities are processed through a Minimum Spanning Tree (MST) algorithm to extract the final skeleton.
They further propose RigNet~\cite{RigNet}, which predicts a skeleton with skinning weights for the input model using the network shown in Fig.~\ref{fig:rignet}. The method first extracts geometric features from the input mesh and predicts candidate joint locations together with an attention map indicating the confidence of each candidate joint. After the joints are detected, another network learns to determine the root joint and to predict whether an edge connects two joints. Finally, a Minimum Spanning Tree algorithm generates the final skeleton, which is sent to another network to predict the skinning weights. The method also considers user input, such as the desired number of joints. The predicted skeleton and skinning weights can be used directly in editing and modeling. Vesdapunt et al.~\cite{vesdapunt2020jnr} propose a joint-based representation for 3D face models that rigs semantic joints to the face model. The specified joints add prior information, which reduces the demand for large amounts of training data. They also propose an autoencoder network to predict the skinning weights, which not only enhances the modeling capacity but also allows users to edit the model. NNWarp~\cite{luo2018nnwarp} designs a heuristic deformation feature vector, including geodesic, potential, and digression features, and warps linear elastic simulations into nonlinear elastic simulations via a DNN prediction to handle a wide range of geometrically complex bodies, which is faster than existing nonlinear methods. Fulton et al.~\cite{fulton2019latent} compress the solid dynamics deformation space into a nonlinear latent space with fewer degrees of freedom through a neural network, while achieving equivalent or even greater simulation fidelity, speed, and robustness compared to other model reduction methods. They use an autoencoder architecture and initialize the outer layer with the basis computed by PCA. Also based on an autoencoder, Santesteban et al.~\cite{Santesteban2020softsmpl} combine the nonlinear deformation subspace with a recurrent (GRU) regressor for soft-tissue deformations. They argue that soft-tissue deformations are related not only to shape and pose but also to motion, so the regressor additionally takes a motion descriptor as input. NASA~\cite{deng2019neural} and NiLBS~\cite{jeruzalski2020nilbs} condition the implicit field of articulated shapes on the skinning weights, enabling fast shape queries without extra acceleration data structures.
\subsection{Man-made Model Editing}
Some large man-made model datasets~\cite{modelnet,shapenet,Thingi10K,ABC} are also available on the Internet; they are the foundation of works that combine 3D shape editing with neural networks to realize intelligent editing of 3D shapes.
\subsubsection{Appearance Editing}
Some methods are based on volumetric representations. For example, Yumer et al.~\cite{yumer2016learning} realize semantic deformation of 3D shapes with a 3D volumetric convolutional network, predicting a deformation flow from semantic attributes. However, each semantic attribute is described by only three values ($0$, $0.5$, $1.0$, indicating decreasing, unchanged, and increasing, respectively), which limits the freedom and controllability of user editing. Liu et al.~\cite{liu2017interactive} realize interactive 3D modeling using generative adversarial networks, as shown in Fig.18.
But the edited object is a voxel shape lacking geometric detail, and the resulting shape may not match the user's intentions. As for the mesh representation, mesh models generally have inconsistent connectivity. Umetani et al.~\cite{umetani2017exploring} present a parameterization method for efficiently converting an unstructured mesh into a manifold mesh with consistent connectivity using depth information. The parameterization is then fed into an autoencoder, and the plausible deformation space is represented by the latent space of the autoencoder. Users can explore shape variations by directly manipulating the mesh vertices through an interactive interface. Also using an autoencoder to optimize on a manifold, DiscoNet~\cite{mehr2019disconet} argues that even if 3D models belong to the same category, they generally do not lie on a connected manifold. They therefore propose to use multiple autoencoders (two in their paper) to learn the different connected components of the disconnected manifold, without any supervision. Extending traditional cage-based deformation, Wang et al.~\cite{yifan2020neural} propose a neural architecture that predicts a source cage and cage offsets. Mean value coordinates are computed by a novel MVC layer, and a cage-based deformation layer produces the deformed result from the cage offsets and mean value coordinates. Also inspired by traditional deformation methods, Liu et al.~\cite{liu2021deepmetahandles} propose to use meta-handles, i.e., combinations of control points, as deformation handles, together with biharmonic coordinates~\cite{wang2015linear}, to edit 3D models. The control points are sampled by farthest point sampling, and the meta-handles are predicted by MetaHandleNet. The meta-handles reflect the correlation between control points. For example, the control points on the two armrests of a chair should maintain the symmetry of the armrests during deformation. At the same time, the plausible deformation range is predicted, and the specific deformation parameters are predicted by DeformNet to deform the source shape to match the target shape. They also propose to use a soft rasterizer~\cite{liu2019softrasterizer} and a 2D discriminator network to ensure reasonable and realistic deformations.
\begin{center} \includegraphics[width=0.9\linewidth]{img/man-made/3DV.png}\\ \vspace{2mm} \parbox[c]{8.3cm}{\footnotesize{Fig.18.~}An example of interactive neural editing of man-made 3D models. The user edits the model, the network maps it to a latent space, and a new model is generated. The result is a voxel model with little geometric detail~\cite{liu2017interactive}.}% \end{center}
DualSDF~\cite{hao2020dualsdf} uses a two-level representation to perform interactive shape editing and learns a tightly coupled latent space for the two representations with the variational autodecoder (VAD) framework~\cite{zadeh2019variational}. The editing operations are performed on a coarse primitive-based representation, and the deformation results are presented as signed distance fields. Deng et al.~\cite{deng2020deformed} propose the deformed implicit field network (DIF-Net) to represent 3D shapes and perform editing. The user freely selects one or more 3D points on the surface and specifies their new positions. The edited shape is obtained by optimizing the latent code. The editing also supports adding new structures to the given shape.
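The latent-code optimization used by such implicit-field editing methods can be sketched as follows (our illustration; \texttt{sdf\_decoder} is a hypothetical pretrained network mapping a latent code and query points to signed distances): the latent code is optimized so that the user-selected points attain zero signed distance at their new positions, with a regularizer keeping the code close to the original shape.
\begin{verbatim}
# Optimize the latent code z so the edited points lie on the surface
# (SDF = 0 there) while staying near the original shape's code.
import torch

def edit_latent(sdf_decoder, z_init, target_points,
                n_steps=200, lr=1e-2, reg=1e-3):
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        sdf_vals = sdf_decoder(z, target_points)  # (N,) distances
        loss = (sdf_vals ** 2).mean()             # pull SDF to zero
        loss = loss + reg * (z - z_init).pow(2).sum()
        loss.backward()
        opt.step()
    return z.detach()
\end{verbatim}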
Also utilizing deformation from a template SDF to represent 3D shapes, Zheng et al.~\cite{zheng2020deep} are able to manipulate shapes, but are limited to mesh stretching. Wei et al.~\cite{wei2020learning} propose an encoder-decoder network to edit shapes through semantic parameters, such as the height, depth, and width of each semantic part of man-made objects. Their method can be divided into two stages: semantic parameter encoding and deformation transfer. To provide semantic-parameter supervision for the encoder, they first generate ground-truth semantic parameters for shapes synthesized from the bounding boxes of segmented shapes in the real dataset, and they also edit these synthetic shapes to obtain the corresponding semantic parameters. After encoding the original and the deformed synthetic shapes into the semantic latent space, a decoder reconstructs both using shape-level Chamfer distance supervision. At inference time, the network encodes a realistic shape into the parameter space, edits the shape parameters in that space, and decodes both the reconstructed and the edited synthetic shapes. As for deformation transfer, the deformation field is defined as the vertex displacements on the decoded synthetic shape; each vertex of the real shape finds its $k$ nearest points on the synthetic shape and takes the weighted sum of their displacements as its own displacement. In this way, the deformation is transferred from the synthetic shape to the realistic shape. In addition, this method can easily be applied to non-rigid models by changing the definition of the semantic parameters, e.g., to pose and shape for human bodies. Sung et al.~\cite{sung2020deformsyncnet} embed shapes into an idealized latent space where points represent shapes and vectors between points represent shape deformations. A deformation vector can be decoded into a deformation action which can be applied directly to a new shape.
\subsubsection{Other Forms}
In addition to geometry, structure is also editable. Mo et al.~\cite{mo2020structedit} develop a deep neural network based on the structural shape representation StructureNet~\cite{mo2019structurenet} to embed shape differences, or deltas, into the latent space of a VAE, enabling multiple kinds of edits with both geometric and structural modifications. Representing 3D man-made models as sets of handles, Gadelha et al.~\cite{gadelha2020learning} adopt a two-branch network architecture to generate shape handles. After training the network, users can edit any handle of the handle set, and backpropagation is used to optimize the latent vector to obtain a result that preserves the overall structure. Reinforcement learning can also be integrated into model editing. For example, Lin et al.~\cite{Lin2020modeling} propose a reinforcement-learning-based method to edit mesh models. First, the Prim-Agent predicts a sequence of actions to operate the primitives to approximate the target shape, given a shape reference and pre-defined primitives. Then edge loops are added to the output primitives. Second, the Mesh-Agent takes as input the shape reference and the primitive-based representation, and predicts actions to edit the meshes to produce shapes with detailed geometry. Some methods take additional guidance as input to deform the models.
Kurenkov et al.~\cite{Kurenkov2018deform} take an image as input and retrieve a template mesh from a repository; they then deform the template 3D shape to match the input image while preserving the topology of the template using free-form deformation. \yyj{Also starting with retrieval from a given dataset, Uy et al.~\cite{uy2021joint} deform the retrieved set of source models to the target image or scan. The retrieval and deformation modules are trained jointly, in an alternating way. The deformation is part-based and structure-aware, predicted by a general MLP which takes the encoded target, global, and per-part source codes as inputs.} Wang et al.~\cite{wang20193dn} extract global features from both the source shape and the target input (an image or point cloud). These features are then input to an offset decoder which predicts per-vertex offsets that deform the source to produce a deformed shape similar to the target. Groueix et al.~\cite{groueix2019unsupervised} also perform per-vertex deformation, leveraging not only a reconstruction loss but also a cycle-consistency loss.
\setcounter{figure}{18} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{img/wholemap.pdf} \caption{The structure of the survey and some representative works in each subject.} \label{fig:wholemap} \end{figure*}
\section{Conclusions} \label{sec:conclusion}
In this survey, we have reviewed the history of 3D model editing and the exploration of deep-learning-based editing methods in recent years. We divide the editing methods into four subjects based on their data sources. In each subject, we further discuss the respective editing methods for organic shapes and man-made models. The former are generally manifold, while the latter are generally designed by artists and are non-manifold. \yyj{We show the whole map with some representative works in Fig.~\ref{fig:wholemap}.} For organic shapes or deformable models, we first discuss classical Laplacian-based methods, especially ARAP~\cite{sorkine2007rigid} and its many derivative works. In addition to these surface-based deformation methods, there are also deformation methods based on skeletons and cages. Editing methods that learn from a dataset take into account the deformation principles of the existing models in the dataset, and the deformation results are more natural. The neural methods are also explored mainly in two directions. On the one hand, they consider surface meshes and use various deformation representations as input, like traditional data-driven methods. On the other hand, they consider skeleton-based skinning deformation and provide intelligent solution strategies for skeleton rigging and weight assignment. For man-made models, keeping the structure of the model from drastic changes, i.e., maintaining the salient features of the model, is most important. Therefore, editing methods for man-made models maintain the invariance of local areas and, at the same time, analyze the correlations between different parts of the model to constrain the editing. This invariance can be obtained by analyzing a single model or a large number of models in a dataset. Neural-based editing is a promising direction.
Although some works have explored neural editing methods for 3D models, there are still many directions in which they can be improved:
\textbf{Organic shapes.} At present, most neural-network-based editing methods for organic shapes still use traditional skeleton-based or cage-based skinning methods, while neural networks are used for skeleton binding~\cite{RigNet}, cage prediction~\cite{yifan2020neural}, and weight assignment~\cite{neuroskinning2019}. Although some methods~\cite{luo2018nnwarp} explore the direct use of neural networks to predict the displacements of nonlinear deformations, experiments have only been carried out on isotropic materials, and anisotropic materials require a redesign of the framework. Therefore, on the one hand, we still need to design an end-to-end framework which takes user constraints as input, such as editing handles and handle displacements, and outputs shape transformation matrices or vertex displacements; on the other hand, we need to study how to relate the selection of the deformation handles to the deformation result, such as optimizing the selection of control points, character rigging, and the prediction of weights according to the deformation results. For the latter, reinforcement learning may be a possible solution, where candidate control points are selected by the agent and rewards are given based on the deformation results.
\textbf{Man-made models.} Neural editing of man-made models still requires both easily manipulated deformation handles and representations that can fully capture the details of the model. Existing neural editing methods either use implicit surfaces~\cite{hao2020dualsdf} or manifold surfaces~\cite{umetani2017exploring} to approximate the non-manifold model, which loses part of the model details, or directly use FFD or cage-based editing methods on the original non-manifold model, where the handles are limited, such as a semantic vector~\cite{yumer2015semantic} or global deformation through a cage~\cite{yifan2020neural}. A good handle can be an edge loop~\cite{Lin2020modeling} or a coarse primitive-based representation~\cite{hao2020dualsdf}, but relating such handles to non-manifold models still needs much work. \yyj{As a recently widely studied representation, implicit surfaces can theoretically achieve arbitrary resolution, which makes them a promising representation in various areas. In addition to further explorations in 3D model editing, \textit{neural morphing}, i.e., morphing two shapes using neural networks, and \textit{neural modeling}, i.e., modeling 3D models using neural networks, can also be regarded as possible research directions.}
\vspace{2mm}
\bibliographystyle{JCST}
\section*{Acknowledgment} The authors gratefully acknowledge financial support by the Central Innovation Programme ZF4086004LP7 of the German Federal Government.
\bibliographystyle{IEEEtran}
\subsection{Identification for control}\label{sec:methods:simultaneous}
Given the optimization framework in Sec. \ref{sec:methods:framework}, the formulation of identification for control is straightforward: the control synthesis objective \eqref{eq:methods:OCOutputCost} replaces the open-loop identification cost \eqref{eq:methods:cost}, while the constraints \eqref{eq:methods:constraint} and \eqref{eq:methods:OCOutputConstraint}, as well as the parameters $\mathcal{P}$, are merged. For linear systems, the optimization problem takes the form
\begin{subequations} \label{eq:methodsD:optimization} \begin{alignat}{2} &\!\min_{\mathcal{P}} &\qquad& ||\mathcal{R}_z(t_\infty)||,\\ &\text{subject to} & & \forall k \in [0,\infty]: \forall m: N_k y_{a,m}[k] \leq D_k \vec{\xi}, \label{eq:methodsD:constraint}\\ & & & \forall t: \mathcal{R}_\mathrm{con}(t) \subseteq \mathcal{Y}_\mathrm{c}, \end{alignat} \begin{equation*} \mathcal{P} := \lbrace A,B,C,D,E,F,\mathcal{V},\mathcal{W},A_c,B_c,C_c,D_c\rbrace, \end{equation*} \end{subequations}
which we solve using standard nonlinear programming algorithms. Although the formulation is straightforward, several points must be considered for the actual application:
\begin{itemize}
\item Constraint \eqref{eq:methodsD:constraint} requires $k$ to go up to $t_\infty/t_s$, which can be too large, leading to a high number of linear constraints. During experiments, we discovered that one can choose a $k_\mathrm{max} < t_\infty/t_s$ above which the result of the control synthesis no longer changes.
\item The set of parameters $\mathcal{P}$ is very large. In practice, we do not optimize all parameters. Rather, we make an educated selection based on engineering knowledge. In Sec. \ref{sec_evaluation}, we give many hints on what such a choice might look like for robot systems.
\item The chosen order (system dimension) of the plant model can be too low, such that certain dynamic behaviors of the system are not sufficiently covered.
\end{itemize}
The last point is a problem which we frequently encountered: a plant model whose order is too low lets the synthesis algorithm optimize for a controller that would destabilize the plant. There are two reasons for this behavior: 1) the formal explanation is that a reachset conformant plant model does not transfer stability properties of the model to the real plant \cite{Roehm2019}; rather, all unknown dynamics should be reflected within the non-deterministic parameters, including the ones which have been excited by the destabilization; and 2) the chosen test cases for identification (see Def. \ref{def:reachsetConformance}) do not sufficiently excite the relevant unknown dynamics. Regarding the first reason: it is not practical to create a model that covers all dynamics of a physical system, since the synthesis task would become increasingly complex; however, dynamically relevant behavior for the application should be considered. Nonetheless, without experience, it is hard to know a priori which dynamics are relevant. Regarding the second reason: explicitly testing unstable behavior could pose a danger to the robotic hardware as well as to the operator.
To encounter these challenges, we propose an iterative synthesis procedure, which is inspired by the one proposed in \cite{VanDenHof1995}: instead of using one model, we approach the synthesis problem with multiple model candidates of differing model orders. Infeasible model candidates are eliminated after each iteration. From the model candidates that converge to feasible solutions, we choose the best one according to our cost criterion. The iterative process is depicted in Fig. \ref{fig:methods:iterative} and goes as follows for each model candidate:
\begin{figure} \includegraphics[width=\columnwidth]{figs/iterative.pdf} \caption{Iterative procedure for simultaneous reachability-based identification and control synthesis. The iteration stops when new test data no longer lead to an updated identified model and optimal controller.} \label{fig:methods:iterative} \end{figure}
\begin{enumerate}
\item From an initial set of test cases, we solve \eqref{eq:methodsD:optimization}.
\item Using this controller, we run the robot and obtain a new set of test data.
\item If the new test data is reachset conformant, then the control synthesis has converged. Otherwise, repeat step 1 including the new data.
\end{enumerate}
\section{Discussion \& Conclusion}\label{sec_discussion}
In the case study, we have shown that reachability-based controller synthesis, together with identification, can be a powerful tool for the formal analysis of various control problems in robotics. We are able to make guarantees on the error of tracking controllers and on the estimation errors of observers, and we can compute optimal output-feedback controllers minimizing these errors. The performance of the synthesis greatly depends on the models chosen for the robot. As we examined in the case study, one should model behaviors that are relevant to the synthesis problem, such as delay and observer dynamics. Finding these, however, requires application-related experience. The rewards for accurate modeling are smaller identified non-determinisms, such that more feasible solutions can be found, and a faster convergence of the iterative synthesis approach. The number of iterations also depends on the initial dataset. If only a small amount of data is provided in the beginning, it is much more likely that non-conforming behavior is found in subsequent iterations. We provide a general formulation of the reachability-based identification and controller synthesis problem, and present a computationally efficient solution for linear systems. We especially make use of the superposition principle to reduce the number of set inclusion checks and to analyze the tracking error independently of any reference trajectory. The extension to nonlinear and hybrid systems is challenging, since superposition generally does not apply to them. We have focused mostly on computing minimal robustly positive invariant sets. Our approach can also be applied to other safe sets, such as the ones shown in \cite{Gruber2020}. Together with this paper, we provide software tools to replicate the results and to analyze further control problems of linear systems, e.g., other feedback-linearizing controllers of robots. The compositional framework also makes it easy to analyze networked linear systems. In the future, we plan to implement visual tools, such that an identification and synthesis problem can be directly derived and solved from block diagram descriptions of the system.
\section{Reachability-based methods}\label{sec:methods} This section describes the theoretical contribution, which is our methodology for reachability-based identification and control. These methods share a common optimization framework, which we introduce in Sec. \ref{sec:methods:framework}. We subsequently derive reachset conformant model identification in Sec. \ref{sec_confTest}, reachability-based control synthesis in Sec. \ref{sec:methods:controlSynthesis}, and the combination of both in Sec. \ref{sec:methods:simultaneous}.
\subsection{Reachability-based optimization framework}\label{sec:methods:framework}
In our previous work \cite{Schuermann2017b}, we showed how we can obtain a safe controller by optimizing over the reachable sets of the closed-loop dynamics. We extend this idea to more general system structures. As we will see, all problems at hand, i.e., reachset conformant identification, controller synthesis, and the combination of both, can be solved by optimizing over reachable sets. To do so, we consider the interconnected system, e.g., the closed-loop system resulting from the plant in combination with the controller, and compute its reachable sets $\mathcal{R}$. In all cases, we obtain a linear system with variable parameters, e.g., unknown model or controller parameters. Denoting this parameter set by $ \mathcal{P}, $ the optimization problem in its general form can be written as
\begin{subequations} \begin{alignat}{2} &\!\min_{\mathcal{P}} &\qquad& \texttt{cost}(\mathcal{R}\label{eq:methods:costgeneral}),\\ &\text{subject to} & & \texttt{constraints}(\mathcal{R}).\label{eq:methods:constraintgeneral} \end{alignat} \end{subequations}
The cost and constraint functions both depend on the reachable set, or on projections of the reachable set onto subspaces, and are defined in detail in the following subsections. Depending on the type of parameters, the cost and constraint functions might become nonlinear and nonconvex. In this case, we can use nonlinear programming to obtain optimal parameters. Overall, we solve all problems at hand with the same set of optimization techniques, while the incorporation of reachability analysis into the synthesis provides formal guarantees and ensures constraint satisfaction despite the presence of disturbances and uncertain measurements.
\subsection{Reachability-based controller synthesis}\label{sec:methods:controlSynthesis}
We consider a linear time-invariant controller system
\begin{align} \label{eq:methods:controller} \begin{split} \dot{\vec{x}}_c(t) &= A_c \vec{x}_c(t) + B_c \vec{u}_c(t),\\ \vec{y}_c(t) &= C_c \vec{x}_c(t) + D_c \vec{u}_c(t), \end{split} \end{align}
with subscript $c$. We denote the plant with subscript $p$ and the closed-loop linear system with subscript $z$ (see Fig. \ref{fig:methods:controllerSynthesis}). State $ \vec{x}_c $ describes the internal state of the controller, output $ \vec{y}_c $ is connected to the input of the plant, and input $ \vec{u}_c $ accepts feedback from the plant. We regard the problem of synthesizing a controller that lets the output of the closed-loop system $\vec{y}_z(t)$ optimally track the reference $\vec{y}_\mathrm{ref}(t)$, while formally guaranteeing the satisfaction of state and input constraints.
\begin{figure} \includegraphics[width=\columnwidth]{figs/controllerSynthesis.pdf} \caption{Plant and controller are subsystems of the closed-loop system.
The interconnection of $\vec{y}_\mathrm{ref}, \vec{u}_c, \vec{y}_c, \vec{y}_z$ depends on the application.} \label{fig:methods:controllerSynthesis} \end{figure}
To do so, we use techniques from \cite{Schuermann2017b,Schuermann2021a}, where we combine the controller synthesis with the reachable set computation in a single optimization problem of the form \eqref{eq:methods:costgeneral}--\eqref{eq:methods:constraintgeneral}. We use the superposition principle \cite{Schuermann2017b} to separate the problem of generating the reference input $\vec{u}_\mathrm{ref}(t)$ from synthesizing the optimal disturbance rejection provided by $\vec{y}_c(t)$, by defining the plant input as
\begin{align} \vec{u}_p(t):=\vec{u}_\mathrm{ref}(t) + \vec{y}_c(t), \end{align}
which, in turn, means that the output of the closed-loop system is the sum of the reference and the tracking error $\vec{y}_e(t)$:
\begin{align} \vec{y}_z(t)=\vec{y}_\mathrm{ref}(t) + \vec{y}_e(t). \end{align}
In the following, we discuss our choice of parameters, constraints, and cost function.
\paragraph*{Parameters} The controller is parametrized through the choice of the controller matrices $ A_c, B_c, C_c, $ and $ D_c $. Note that a fixed state feedback of the form $ \vec{u}(\vec{x})=K \vec{x} $, as regarded in our previous work in \cite{Schuermann2017b}, is a special case of \eqref{eq:methods:controller}, with $ A_c=B_c=C_c=0 $, $ D_c=K $, and $C=I$ for the plant.
\paragraph*{Cost} Analogously to \cite{Schuermann2017b}, we use the norm of the final reachable set $ ||\mathcal{R}_z(t_\infty)|| $ of the closed-loop system as the cost function of the optimization problem. To obtain the final reachable set $ \mathcal{R}_z(t_\infty) $, we compute the reachable set starting from a small initial set until it converges, i.e., until it holds that $ \mathcal{R}_z(t+\Delta t) \subseteq \mathcal{R}_z(t) $ for some $ \Delta t \ge 0 $. In this case, the reachable set has converged, and since we consider time-invariant systems, all future reachable sets will remain in $ \mathcal{R}_z(t) $. By setting $\vec{u}_\mathrm{ref}(t) := 0$, which implies $\vec{y}_\mathrm{ref}(t)=0$, the set $\mathcal{R}_z(t_\infty)$ corresponds to the minimal robust positively invariant set \cite{Gruber2020} of the tracking error. Note that, depending on the stability of the system and the numerical computation, this convergence might not happen; therefore, we apply a convergence tolerance criterion similar to \cite{Gruber2020} to terminate the computation.
\paragraph*{Constraints} We consider constraint sets $\mathcal{U}_p$ and $\mathcal{X}_p$ for the plant input and state by combining them into an output constraint set $\mathcal{Y}_\mathrm{con} = \mathcal{U}_p \times \mathcal{X}_p$, and provide an output signal $\vec{y}_\mathrm{con}(t) = [\vec{u}_p(t),\vec{x}_p(t)]^T$, such that
\begin{align} \vec{y}_\mathrm{con}(t) &\in \mathcal{Y}_\mathrm{con},~\forall t \in \mathbb{R}^+_0.\label{eq:methods:OutputConstraint} \end{align}
In order to ensure that the constraints are satisfied despite the decoupled synthesis of the tracking controller and the reference trajectory, we divide the input constraints into two parts, $ \mathcal{U}_\mathrm{ref} $ and $ \mathcal{Y}_{c} $, for the reference trajectory and for the tracking controller, respectively. We choose these sets such that
\begin{align}\label{eq:methodsC:referenceSet} \mathcal{U}_\mathrm{ref} \oplus \mathcal{Y}_{c} \subseteq \mathcal{U}_p. \end{align}
For simpler computation, we choose $ \mathcal{Y}_{c} $ as a polytope.
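As an illustration of the convergence-based evaluation of the cost $||\mathcal{R}_z(t_\infty)||$, the following is a minimal Python sketch (ours; the matrices are placeholders) for a discrete-time closed-loop error system $x[k+1] = A_\mathrm{cl} x[k] + E w[k]$ with a zonotopic disturbance set: the zonotope generators are propagated, and an interval-hull-based norm is monitored until it converges.
\begin{verbatim}
# Propagate R[k+1] = A_cl R[k] + E W (Minkowski sum of zonotopes)
# and stop when the interval-hull norm of the reachable set converges.
import numpy as np

def converged_reachset_norm(A_cl, E, G_W, tol=1e-9, k_max=10000):
    n = A_cl.shape[0]
    G = np.zeros((n, 0))                   # generators, x[0] = 0
    prev_norm = np.inf
    for _ in range(k_max):
        G = np.hstack([A_cl @ G, E @ G_W])
        norm = 2.0 * np.sum(np.abs(G))     # sum of interval-hull sides
        if abs(prev_norm - norm) < tol:
            return norm
        prev_norm = norm
    raise RuntimeError("no convergence (closed loop unstable?)")

# Toy usage: a stable 2D loop with a small box disturbance.
A_cl = np.array([[0.9, 0.1], [-0.2, 0.8]])
print(converged_reachset_norm(A_cl, np.eye(2), 0.01 * np.eye(2)))
\end{verbatim}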
Notice that, in order to analyze the reachability of $\vec{x}_p$, the plant model must also be reachset conformant regarding $\vec{x}_p$. The identification in Sec. \ref{sec_confTest} can be straightforwardly extended by considering $\vec{x}_p$ as additional plant model outputs, and by extending the output measurements $\vec{y}_m$ (see Def. \ref{def:reachsetConformance}) with an estimation of the states, e.g., through observers. However, due to the estimation errors, we introduce additional conservativeness into the identified model, as can be seen in \eqref{eq:methods:estimError}. Therefore, $\vec{y}_\mathrm{con}$ should only include the elements of $\vec{x}_p$ that are relevant for the state constraints. We formulate the resulting optimal control problem
\begin{subequations} \begin{alignat}{2} &\!\min_{A_c, B_c, C_c, D_c} &\qquad& ||\mathcal{R}_z(t_\infty)||,\label{eq:methods:OCOutputCost}\\ &\text{subject to} & & \forall t: \mathcal{R}_\mathrm{con}(t) \subseteq \mathcal{Y}_c.\label{eq:methods:OCOutputConstraint} \end{alignat} \end{subequations}
Since $ \mathcal{R}_z $ and $ \mathcal{R}_\mathrm{con}(t) $ are zonotopes, checking the constraint \eqref{eq:methods:OCOutputConstraint} only requires checking whether a zonotope is inside a polytope. As shown in \cite{Schuermann2021a}, this can be computed very efficiently. In contrast to identification, optimizing the feedback matrix, which is multiplied with the output, can no longer be expressed as a linear problem. To be formally safe, we also consider time-varying disturbances when computing an over-approximative reachable set during the optimization, which prevents us from using under-approximations like \eqref{eq:methods:reachableSetRecursive}; see Lemma~\ref{lemma:constantInput}. As discussed in \cite{Schuermann2017b}, the resulting optimization problem can be solved using standard nonlinear programming techniques.
\subsection{Reachset conformant model identification}\label{sec_confTest}
Verifying reachability-based properties of a robot requires a reachset conformant model. We apply the definition of \cite{Roehm2019} to measurable physical systems:
\begin{definition}[Reachset conformance]\label{def:reachsetConformance} Given is a physical system and its model. From the physical system, we perform a series of test cases, where the $m$-th case consists of the input $u_m(t)$, an initial state $x(0)$, and the measured output $y_m(t)$, where $t \in [0,t^*]$. From the model, we compute the reachable set $\mathcal{R}^{(m)}(t)$ for each $u_m(t)$ and $x(0)$. The model is reachset conformant in the time interval $[0,t^*]$, iff
\begin{equation*} \forall m: \forall t \in [0,t^*]: y_m(t) \subseteq \mathcal{R}^{(m)}(t), \end{equation*}
which is a set inclusion problem. \end{definition}
The task of model identification is thus to obtain an optimal set of model parameters $\mathcal{P}$, such that reachset conformance is preserved. For the general open-loop identification problem, we propose to minimize the norm of the reachable set integrated over $t \in [0,t^*]$ and over all test cases $m$:
\begin{subequations} \begin{alignat}{2} &\!\min_{\mathcal{P}} &\qquad& \sum_m \int_0^{t^*}||\mathcal{R}^{(m)}(t)|| dt,\\ &\text{subject to} & & \forall m: \forall t: y_m(t) \subseteq \mathcal{R}^{(m)}(t) . \end{alignat} \end{subequations}
This general formulation is applicable to nonlinear systems.
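For a single time step, the set inclusion of Def. \ref{def:reachsetConformance} can be checked as a linear feasibility problem when the reachable set is a zonotope: $y \in (\vec{c},G)$ iff there exists $\beta \in [-1,1]^p$ with $\vec{c} + G\beta = y$. The following is a minimal Python sketch (ours, with placeholder inputs) of this check.
\begin{verbatim}
# Point-in-zonotope test via linear programming (feasibility only).
import numpy as np
from scipy.optimize import linprog

def point_in_zonotope(y, c, G):
    p = G.shape[1]
    res = linprog(np.zeros(p), A_eq=G, b_eq=y - c,
                  bounds=[(-1.0, 1.0)] * p, method="highs")
    return res.status == 0          # feasible <=> y lies in (c, G)

def reachset_conformant(Y_meas, centers, generators):
    """Check y_m[k] in R[k] for all steps k and test cases m."""
    return all(point_in_zonotope(y, centers[k], generators[k])
               for k, ys in enumerate(Y_meas) for y in ys)
\end{verbatim}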
For the remainder of this subsection, we derive a version for linear systems with $\mathcal{P} = \lbrace A,B,C,D,E,F,\mathcal{V},\mathcal{W}\rbrace$, which is much more computationally efficient to solve. First, we show that, for linear systems, we can remove the sum $\sum_m$ and the quantifier $\forall m$ by using the superposition principle. We subtract the nominal output solution
\begin{equation} y_m^*[k] := C \left(\tilde{A}^k x[0] + \sum_{i=0}^{k-1}\tilde{A}^i \tilde{B} u_m[i]\right) + D u_m[k], \end{equation}
which is \eqref{eq:methods:discreteSystem} excluding the non-deterministic parameters, from the reachable set defined in \eqref{eq:methods:reachableSet}:
\begin{align*} \mathcal{R}_a[k] &:= \mathcal{R}^{(m)}[k] - y_m^*[k] = \bigoplus_{i=0}^{k-1}\bar{E}_i\mathcal{W} \oplus F\mathcal{V}, \end{align*}
where $\bar{E}_i = C \tilde{A}^{i}\tilde{E}$. We define the non-deterministic parameters as zonotopes $\mathcal{V} := (\vec{c}_V,G_V)$ and $\mathcal{W} := (\vec{c}_W,G_W)$, such that $\mathcal{R}_a[k]$ has a closed-form solution:
\begin{gather} \mathcal{R}_a[k] = (\vec{c}_k,G_k), \quad \vec{c}_k := \begin{bmatrix} \sum_{i=0}^{k-1}\bar{E}_i & F \end{bmatrix} \begin{bmatrix} \vec{c}_W \\ \vec{c}_V \end{bmatrix},\label{eq:methods:reachA}\\ G_k := \begin{bmatrix} \bar{E}_0 G_W & \dots & \bar{E}_{k-1} G_W & F G_V \end{bmatrix}. \end{gather}
When applying Def. \ref{def:zonotopeNorm}, we immediately see that the zonotope norm $||\mathcal{R}_a[k]|| = ||\mathcal{R}^{(m)}[k]||$ is independent of $m$ for linear systems. Also using the superposition principle, we subtract $y_m^*[k]$ from the measurement $y_m[k]$, such that for each test case, $y_{a,m}[k] := y_m[k] - y_m^*[k]$, and the following holds for linear systems:
\begin{align*} \forall m: \forall k: y_m[k] \subseteq \mathcal{R}^{(m)}[k] &\\ \iff &\forall m: \forall k: y_{a,m}[k] \subseteq \mathcal{R}_{a}[k] \\ \iff &\forall k: \bigcup_m y_{a,m}[k] \subseteq \mathcal{R}_{a}[k]. \end{align*}
Thus, we formulate the open-loop identification problem for linear systems
\begin{subequations} \begin{alignat}{2} &\!\min_{A,B,C,D,E,F,\mathcal{V},\mathcal{W}} &\qquad& \int_0^{t^*}||\mathcal{R}_a(t)|| dt,\label{eq:methods:cost}\\ &\text{subject to} & & \forall k: \bigcup_m y_{a,m}[k] \subseteq \mathcal{R}_{a}[k] .\label{eq:methods:constraint} \end{alignat} \end{subequations}
The following two lemmas present the cost and constraint functions for the above optimization problem, which then result in Theorems \ref{theorem:linearSystemUnc} and \ref{theorem:linearSystem}.
\begin{lemma}\label{lemma:linearCost} The cost \eqref{eq:methods:cost} for the identification of linear systems is linear in the scaling parameters $\alpha_W$ and $\alpha_V$ of the zonotopic non-determinisms $\mathcal{W}, \mathcal{V}$:
\begin{gather}\label{eq:methods:linearCost} \int_0^{t^*}||\mathcal{R}_a(t)||dt = \vec{\gamma} \begin{bmatrix} \alpha_W\\ \alpha_V \end{bmatrix} \\ \vec{\gamma} := \vec{1}^T \begin{bmatrix}\sum_{k=0}^{a} \left|\sum_{i=0}^{k-1} t_s \bar{E}_i G'_W\right|, & |F G'_V|\end{bmatrix}, \end{gather}
where $\vec{1}$ is a vector full of ones and $a = t^*/t_s$. Please be reminded that we use the notation $G := G'\diag(\vec{\alpha})$ here (see Def. \ref{def:zonotopeG}). \end{lemma}
\begin{proof} To compute the norm (see Def. \ref{def:zonotopeNorm}), we only require the generators of $\mathcal{R}_a[k]$. Thus, for discrete-time linear systems, $\int_0^{t^*}||\mathcal{R}_a(t)||dt =\sum_{k}t_s||\mathcal{R}_a[k]||$. Each side length of $\mathcal{I}(\mathcal{R}_a[k])$ according to Def.
\ref{def:intervalHull} is \begin{align*} \vec{\delta g}_k &= \left|\begin{bmatrix} \bar{E}_0 G_W & \dots & \bar{E}_{k-1} G_W & F G_V \end{bmatrix}\right|\vec{1} \\ &= \left|\begin{bmatrix} \bar{E}_0 G'_W & \dots & \bar{E}_{k-1} G'_W & F G'_V \end{bmatrix}\right|\begin{bmatrix} \alpha_W \\ \vdots \\ \alpha_W \\ \alpha_V \end{bmatrix} \\ &= \begin{bmatrix} \sum_{i=0}^{k-1} \left|\bar{E}_i G'_W\right| & \left|F G'_V\right| \end{bmatrix}\begin{bmatrix} \alpha_W \\ \alpha_V \end{bmatrix}. \end{align*} With $||\vec{\delta g}_k||_1 := \vec{1}^T \vec{\delta g}_k$, we obtain $\vec{\gamma}\begin{bmatrix} \alpha_W \\ \alpha_V \end{bmatrix}$ by evaluating $\sum_{k} t_s||\mathcal{R}_a[k]|| = \sum_k t_s \vec{1}^T \vec{\delta g}_k$. \end{proof} \begin{lemma}\label{lemma:linearConstraint} The constraint \eqref{eq:methods:constraint} for the identification of linear systems is linear in $\vec{\xi} = [\vec{c}_W,\vec{c}_V,\vec{\alpha}_W,\vec{\alpha}_V]^T$, if we use the halfspace representation of $\mathcal{R}_a[k]$: \begin{equation}\label{eq:methods:linearConstraint} \forall k \in \left[0,\frac{t^*}{t_s}\right]: \forall m: N_k y_{a,m}[k] \leq D_k \vec{\xi}, \end{equation} where the $j$-th rows of $N_k$ and $D_k$ describe the facets of the zonotope $\mathcal{R}_a[k]$, s.t. \begin{align} \vec{n}_{j,k}^+ &= \nX ({G'_k}^{\langle\gamma,\dots,\eta\rangle})/ ||\nX ({G'_k}^{\langle\gamma,\dots,\eta\rangle})||_2. \label{eq:methods:linearConstraint1}\\ \begin{split} \vec{d}_{j,k}^+ &= \Big[\begin{matrix} \sum_{i=0}^{k-1} \vec{n}_{j,k}^+ \bar{E}_i & \vec{n}_{j,k}^+ F \end{matrix} \\ &\qquad\qquad \begin{matrix} \sum_{i=0}^{k-1} |\vec{n}_{j,k}^+ \bar{E}_i G'_W|& |\vec{n}_{j,k}^+ F G'_V| \end{matrix}\Big]. \end{split} \\ \begin{split} \vec{d}_{j,k}^- &= \Big[\begin{matrix} -\sum_{i=0}^{k-1} \vec{n}_{j,k}^+ \bar{E}_i & -\vec{n}_{j,k}^+ F \end{matrix} \\ &\qquad\qquad \begin{matrix} \sum_{i=0}^{k-1} |\vec{n}_{j,k}^+ \bar{E}_i G'_W|& |\vec{n}_{j,k}^+ F G'_V| \end{matrix}\Big]. \end{split}\label{eq:methods:linearConstraint2} \end{align} \end{lemma} \begin{proof} Consider the halfspace representation of a zonotope $\mathcal{R}_a[k] = (\vec{c}_k,G_k)$ using Def. \ref{def:zonotopeH}. We show that $\vec{n}_{j,k}^+$ is independent of $\vec{\alpha}$ for any generator matrix: \begin{align*} \nX &(G'\diag(\vec{\alpha})) =\\ &= [\dots, (-1)^{i+1}\det(G'^{[i]}\diag(\vec{\alpha})), \dots]^T,\\ &= \det(\diag(\vec{\alpha}))[\dots, (-1)^{i+1}\det(G'^{[i]}), \dots]^T,\\ &= \left(\prod \vec{\alpha}\right) \cdot \nX (G'), \end{align*} and since $\prod \vec{\alpha}$ is a positive scalar, the two-norm \begin{align*} \big|\big| (\prod \vec{\alpha}) \cdot \nX (G')\big|\big|_2 = (\prod \vec{\alpha}) || \nX (G')||_2, \end{align*} such that $\vec{\alpha}$ completely cancels out. To obtain $D_k$, we apply the definition of $\mathcal{R}_a[k]$ in \eqref{eq:methods:reachA}. From $\Delta d_{j,k}$, we extract $\alpha_W,\alpha_V$ in a similar way as in the proof of Lemma \ref{lemma:linearCost}: \begin{equation} \Delta d_{j,k} = \begin{bmatrix} \sum_{i=0}^{k-1} |\vec{n}_{j,k}^+ \bar{E}_i G'_W| & |\vec{n}_{j,k}^+F G'_V| \end{bmatrix}\begin{bmatrix} \alpha_W \\ \alpha_V \end{bmatrix}. \end{equation} \end{proof} The following two theorems formulate the reachset conformant identification problem \eqref{eq:methods:cost} and \eqref{eq:methods:constraint} for linear systems.
\begin{theorem}[Reachset conformant identification of additive non-deterministic parameters of linear systems]\label{theorem:linearSystemUnc} Given a linear system \eqref{eq:methods:continuousSystem}, where $\mathcal{V}$ and $\mathcal{W}$ are zonotopes, the reachset conformant identification problem is a linear program, where $\mathcal{P} = \lbrace\vec{c}_W,\vec{c}_V,\vec{\alpha}_W,\vec{\alpha}_V\rbrace$ are the model parameters to be identified, \eqref{eq:methods:linearCost} is the cost, and \eqref{eq:methods:linearConstraint} are the constraints. \end{theorem} \begin{proof} See the proofs of Lemmas \ref{lemma:linearCost} and \ref{lemma:linearConstraint}. Given $\mathcal{P}$, both the cost \eqref{eq:methods:linearCost} and the constraint function \eqref{eq:methods:linearConstraint} are linear. \end{proof} \begin{theorem}[Reachset conformant identification of linear systems]\label{theorem:linearSystem} Given a linear system \eqref{eq:methods:continuousSystem}, where $\mathcal{V}$ and $\mathcal{W}$ are zonotopes, the reachset conformant identification problem is generally a nonlinear program, where $\mathcal{P} = \lbrace A,B,C,D,E,F,\mathcal{V},\mathcal{W}\rbrace$ are the variables to be identified, \eqref{eq:methods:linearCost} is the cost, and \eqref{eq:methods:linearConstraint} are the constraints. \end{theorem} \begin{proof} See the proofs of Lemmas \ref{lemma:linearCost} and \ref{lemma:linearConstraint}. \end{proof} \begin{remark} We provide some remarks on the implementation of the above theorems: \begin{itemize} \item Theorem \ref{theorem:linearSystem} can be approached in a cascading way: an inner layer solves for $\lbrace\vec{c}_W,\vec{c}_V,\vec{\alpha}_W,\vec{\alpha}_V\rbrace$ using linear programming (Theorem \ref{theorem:linearSystemUnc}), while the outer layer solves for $\lbrace A,B,C,D,E,F,G'_W,G'_V\rbrace$ through nonlinear programming. A MATLAB implementation is provided together with this paper. \item The solution space can also be reduced by estimating $\lbrace A,B,C,D\rbrace$ using the subspace method based on least-squares optimization \cite[Chapter~4.3]{Ljung1999}, although \cite{Chen2019a} has shown that such an approach is generally not optimal. \item To compute $y^*[k]$, an estimation of the initial state $x[0]$ is required, similar to other identification algorithms (e.g., subspace methods in \cite{Ljung1999}). \end{itemize} \end{remark} The influence of non-determinism on the linear system \eqref{eq:methods:continuousSystem} is modelled by the matrices $E$ and $F$. The main motivation behind this is that, for systems with high-dimensional state spaces, engineering knowledge can be applied to accurately determine where non-determinism appears, thereby reducing the number of optimization parameters. The following lemma evaluates whether $E$ and $F$ have been chosen correctly. \begin{lemma}\label{lemma:nonDeterminism} A linear system with variable $\mathcal{W},\mathcal{V}$ can capture all non-determinisms of the system, if \begin{equation} \forall k: \quad J_k=\left[\bar{E}_0, \dots, \bar{E}_{k-1}, F\right], \end{equation} has full (row) rank. If it does not have full rank for some $k$, then the signals $\vec{y}_a[k]$ must only appear in $S(J_k)$, the image (column space) of $J_k$. \end{lemma} \begin{proof} If $J_k$ has full row rank, then the linear map $J_k:~\mathcal{W} \times \mathcal{V} \rightarrow~y[k]$ in \eqref{eq:methods:reachableSet} is surjective. When $J_k$ does not have full rank, it is only surjective with respect to the image $S(J_k)$.
\end{proof} \begin{remark} To check whether $\forall k: \vec{y}_a[k] \in S(J_k)$, we can simply evaluate whether $\vec{y}_a[k] = J_k J_k^+\vec{y}_a[k]$ is satisfied, where $(\cdot)^+$ is the Moore--Penrose inverse operator \cite{James1978}. \end{remark} A thought experiment demonstrates the use of the above lemma: the initial state $x[0]$, which is required for reachability analysis, is usually not measurable and can only be estimated with an estimation error $\mathcal{X}_0$, which has not been explicitly modelled in our linear system \eqref{eq:methods:discreteSystem}. However, if the condition of Lemma \ref{lemma:nonDeterminism} is fulfilled, then $\mathcal{X}_0$ is remapped onto $\mathcal{W}$ and $\mathcal{V}$. After one time-step, \begin{equation}\label{eq:methods:estimError} \mathcal{W} \times \mathcal{V} = J_1^+J_1(\mathcal{W}^* \times \mathcal{V}^*) \oplus J_1^+C\tilde{A}\mathcal{X}_0 \end{equation} is a possible solution of the remap. An interesting side result of this example is that it allows us to evaluate the performance of state estimation algorithms: higher-performing state estimation results in a smaller $\mathcal{X}_0$, which strictly decreases the size of the identified $\mathcal{W} \times \mathcal{V}$, as shown in \eqref{eq:methods:estimError}. An additional note on extensions to nonlinear systems: since reachability algorithms for nonlinear systems are generally not closed-form, strict reachset conformance as defined in Def. \ref{def:reachsetConformance} requires an inner-approximation of reachable sets. \section{A case study on robot manipulators}\label{sec_evaluation} We demonstrate the applicability of our newly proposed methods for robot systems by studying the reachability-based design of feedback-linearizing controllers for a 6-DOF manipulator. We use reachability analysis to compute and minimize the ultimate bounds of the tracking error. We start by investigating modelling choices for optimal identification results. We subsequently examine the application of our methods to the synthesis of a state-feedback controller, a linear velocity observer, and an output-feedback controller. \subsection{Modelling choices and open-loop identification} The system at hand is a Schunk LWA-4P 6-DOF robot manipulator. Each joint has a local current controller and an encoder measuring the angular joint position. A Speedgoat Real-Time Target Machine acts as a centralized controller, which sends control currents over a CANopen fieldbus system and receives the position feedback. The sampling time of the centralized controller is $t_s = 4$ ms. The following paragraphs describe the subsystems involved in this case study. \paragraph{Robot dynamics} The rigid-body model of a robot can be described by \begin{equation}\label{eq:eval:robotDynamics} M(\vec{q})\ddot{\vec{q}} + \vec{c}(\vec{q},\dot{\vec{q}}) + \vec{g}(\vec{q}) = \vec{\tau}, \end{equation} where $\vec{q},\dot{\vec{q}},\ddot{\vec{q}}$ are the position, velocity, and acceleration of the robot joints, $M$ is the mass matrix, $\vec{c}$ are the Coriolis and centripetal forces, $\vec{g}$ are the gravity forces, and $\vec{\tau}$ are the joint torques.
The feedback linearization technique \begin{equation}\label{eq:eval:feedbackLinearization} \vec{\tau} = M(\vec{q})\vec{u} + \vec{c}(\vec{q},\dot{\vec{q}}) + \vec{g}(\vec{q}) \end{equation} implements an internal control loop with a new input $\vec{u}$, such that the system dynamics become $\ddot{\vec{q}} = \vec{u}$ through inserting \eqref{eq:eval:feedbackLinearization} into \eqref{eq:eval:robotDynamics}. From the outside, the robot behaves like a decoupled linear system. However, the feedback linearization is usually imperfect \cite{Abdallah1991}; the effects can be mitigated using disturbance observers such as \cite{Mohammadi2013b}. Nevertheless, we consider an unknown additive disturbance $\mathcal{W} \subset \mathbb{R}^2$ and an unknown position feedback error $\mathcal{V} \subset \mathbb{R}$. The resulting state-space model for each joint is: \begin{align*}\label{eq:eval:robotLinearDynamics} \dot{\vec{x}}_r &= \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \vec{x}_r + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_r + \mathcal{W}, \\ y_r &= \begin{bmatrix} 1 & 0\end{bmatrix} \vec{x}_r + \mathcal{V}, \end{align*} where $\vec{x}_r = [q,\dot{q}]^T$. The discrete-time version is obtained by applying \eqref{eq:methods:discreteSystem}. \paragraph{State-feedback control} The inverse dynamics tracking controller \cite[Section 8.5]{Siciliano2009a} is characterized by the feedback linearization in \eqref{eq:eval:feedbackLinearization} and a state-feedback term for each robot joint: \begin{equation}\label{eq:eval:robotController} u_r := y_c = \begin{bmatrix} 1 & k_p & k_d\end{bmatrix} \vec{u}_c, \end{equation} where $\vec{u}_c= [\ddot{q}_d, q_d-\hat{q}, \dot{q}_d -\dot{\hat{q}}]^T$, the values $q_d,\dot{q}_d,\ddot{q}_d$ denote the desired trajectory, and $\hat{q},\dot{\hat{q}}$ are the observed robot position and velocity. The gains $k_p, k_d$ are designed by choosing a natural frequency $\omega$ and the damping ratio $\zeta$, s.t. $k_p := \omega^2, k_d := 2\zeta\omega$ \cite{Siciliano2009a}. \paragraph{Observer} The above controller requires full state feedback; however, only the robot position is measurable. We thus require an online state estimation and therefore choose the linear high-gain observer from \cite{Nicosia1993a}. Its dynamics for each joint are \begin{align}\label{eq:eval:robotObserver} \dot{\vec{x}}_o &= \begin{bmatrix} -h_1 & 1 \\ -h_2 & 0 \end{bmatrix} \vec{x}_o + \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} u_o, \\ \vec{y}_o &= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \vec{x}_o, \end{align} where $u_o := q_m$ is the measured position, and $\vec{y}_o = [\hat{q},\dot{\hat{q}}]^T$ are the observed position and velocity. The gains are designed by selecting $\tilde{h}_1$, $\tilde{h}_2$, and an $\epsilon$, such that $h_1 := \tilde{h}_1/\epsilon, h_2 := \tilde{h}_2/\epsilon^2$. On our centralized controller, we implement the discrete-time version from \cite{Busawon2017b}. \paragraph{Delay} The amount of time delay can seldom be estimated exactly, and is often time-varying \cite{Liu2012}. We assume a delay of one sampling time in each direction: due to synchronization in the fieldbus communication, a computed control signal needs to wait until the next sampling instant to be sent. Vice versa, a position measurement is retrieved almost instantly, but has to wait for the next cycle to be sent back to the central controller.
Delay is best expressed in discrete time: \begin{align*} x_{de}[k+1] &= u_{de}[k], \\ y_{de}[k] &= x_{de}[k], \end{align*} where $u_{de}$ is the input signal, and $y_{de}$ is the signal delayed by one sampling instant. The Padé approximation provides a continuous-time model of a time delay \cite{Golub1989}. Given the subsystems introduced in the previous paragraphs, we have multiple options for choosing plant model candidates; an optimal choice is often not immediately clear. Depending on the desired order, we can omit certain subsystems, or decide between a continuous-time or discrete-time version. In this case study, we consider six different plant model candidates with increasing order: \begin{description} \item[R-] Only robot dynamics (continuous-time) \item[R+] Only robot dynamics (discrete-time) \item[RO-] Robot dynamics with observer (continuous) \item[RO+] Robot dynamics with observer (discrete) \item[RD+] Robot dynamics with delay (discrete) \item[ROD+] Robot dynamics with observer and delay (discrete) \end{description} \begin{figure} \includegraphics[width=\columnwidth]{figs/robotModels.pdf} \caption{\textit{Robot model candidates:} Block interconnection diagrams of the model structures and their system states $x_*$} \label{fig:eval:models} \end{figure} The block interconnection diagrams of the models are shown in Fig. \ref{fig:eval:models}. All candidates have the same inputs and outputs, such that we can use the same dataset to identify all models. For candidates that omit the observer, we apply an alternative measurement error $\mathcal{V}' \subset \mathbb{R}^2$ to satisfy Lemma \ref{lemma:nonDeterminism}. Since all the candidates are series interconnections of linear subsystems, their respective compositions are also linear. Initially, we evaluate the quality of the model candidates by comparing the cost \eqref{eq:methods:linearCost} of the open-loop identification of the unknown disturbances $\mathcal{W},\mathcal{V}$. To make the comparison concise, we assume zonotopes $\mathcal{W} := (0,G_W'\diag(\vec{\alpha}_W))$ and $\mathcal{V} := (0,G_V'\diag(\vec{\alpha}_V))$ for all models, where $G'_W = I$ and $G'_V = I$ are fixed. The parameter set thus only consists of $\mathcal{P} = \lbrace \alpha_W,\alpha_V\rbrace$, so that the identification problem is a linear program and can be solved using Theorem \ref{theorem:linearSystemUnc}. The initial dataset for this and all subsequent synthesis problems has been obtained from the real robot running trapezoidal and polynomial trajectories with random target positions, velocities, and accelerations\footnote{A video showing the initial tests, and the code for reproducing all experiments, are provided within the supplementary materials.}. The initial gains for the state-feedback controller and linear observer have been manually tuned to $\omega = 20, \zeta = 0.65, \tilde{h}_1 = 15, \tilde{h}_2 = 30, \epsilon = 0.01$. An automatic preselection was done to avoid trajectories that would lead to self-collision or exceed the maximum motor currents. The total duration of the initial dataset is 33 minutes and 20 seconds. We maximize the number of test cases by considering each sampling instant as the starting point of a new test case, resulting in 497,880 test cases for each joint. The initial states $x[0]$ for each model and test case can mostly be derived from the measurements and by tracking the corresponding signals on our controller.
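To make the construction of the linear program concrete, the following sketch assembles the cost vector $\vec{\gamma}$ of Lemma \ref{lemma:linearCost} for given discrete-time system matrices. It is an illustrative Python transcription (the implementation accompanying this paper is in MATLAB), and all variable names are assumptions of the sketch.
\begin{verbatim}
import numpy as np

def cost_vector(Ad, Ed, C, F, GpW, GpV, n_steps, t_s):
    """gamma such that the integrated norm equals gamma @ [alpha_W; alpha_V]."""
    ny = C.shape[0]
    # Ebar[i] = C @ Ad^i @ Ed
    Ebar, M = [], Ed.copy()
    for _ in range(n_steps):
        Ebar.append(C @ M)
        M = Ad @ M
    gam_W = np.zeros((ny, GpW.shape[1]))
    gam_V = np.zeros((ny, GpV.shape[1]))
    acc = np.zeros_like(gam_W)            # running sum_{i<k} |Ebar_i G'_W|
    for k in range(n_steps):
        gam_W += t_s * acc
        gam_V += t_s * np.abs(F @ GpV)
        acc = acc + np.abs(Ebar[k] @ GpW)
    # multiply by the all-ones vector, i.e., sum over output dimensions
    return np.concatenate([gam_W.sum(axis=0), gam_V.sum(axis=0)])
\end{verbatim}
The containment constraints of Lemma \ref{lemma:linearConstraint} enter the same program as linear inequalities in $[\vec{c}_W,\vec{c}_V,\vec{\alpha}_W,\vec{\alpha}_V]$, after which any standard LP solver yields the identified non-determinisms.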
Only for the initial robot velocities, we choose to use offline zero-phase filtering \cite{Oppenheim1999}, because it resulted in smaller identified disturbances compared to using the observed velocity, which ultimately, according to \eqref{eq:methods:estimError}, means that the offline method delivers a better velocity estimation. The resulting costs are shown in Tab. \ref{tab:eval:openLoopCost}, and the corresponding parameters are shown in Tab. \ref{tab:eval:openLoopParam}. \begin{table} \caption{Open-loop identification results: cost (Lemma \ref{lemma:linearCost})} \label{tab:eval:openLoopCost} \begin{center} \begin{tabular}{ l c c c c c c} \hline \textbf{Model} & Axis 1 & Axis 2 & Axis 3 & Axis 4 & Axis 5 & Axis 6 \\\hline R- & $0.0322$ & $0.0422$& $0.0325$& $0.0309$& $0.0505$& $0.0405$\\ R+ & $0.0322$ & $0.0422$& $0.0325$& $0.0309$& $0.0505$& $0.0405$\\ RO- & $0.0033$ & $0.0046$& $0.0023$& $0.0028$& $0.0035$& $0.0054$\\ RO+ & $0.0032$ & $0.0046$& $0.0022$& $0.0026$& $0.0035$& $0.0053$\\ RD+ & $0.0025$ & $0.0044$& $0.0022$& $0.0023$& $0.0035$& $0.0050$\\ ROD+& $0.0022$ & $0.0041$& $0.0021$& $0.0023$& $0.0032$& $0.0041$\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Open-loop identification results: non-determinisms of axis 1} \label{tab:eval:openLoopParam} \begin{center} \begin{tabular}{ l c c c c} \hline \textbf{Model} & $\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ & $\alpha_{V,2}$\\\hline R- & $0.0184$& $2.2009$& $0.0001$& $0.0298$\\ R+ & $0.0184$& $2.2009$& $0.0001$& $0.0298$\\ RO- & $0.0386$& $1.4494$& $0.0000$& $-$\\ RO+ & $0.0401$& $1.7321$& $0$ & $-$\\ RD+ & $0.0127$& $1.7762$& $0.0001$& $0.0207$\\ ROD+& $0.0434$& $0.7556$& $0$& $-$\\ \hline \end{tabular} \end{center} \end{table} The open-loop identification results clearly show that the cost decreases with increasing model order for every robot axis. A decrease is also visible for $\alpha_{W,2}$, which corresponds to the size of the non-determinism of the robot acceleration. A significant difference between discrete-time and continuous-time model candidates in terms of cost is not visible. The computation times for all models are within seconds when using MATLAB, regardless of the model order. This evaluation indicates that the ROD+ model is the best model candidate, with the smallest reachable set and the least amount of non-determinism. \subsection{State-feedback control synthesis} \label{sec:eval:statefeedback} \begin{figure} \includegraphics[width=\columnwidth]{figs/stateFeedback.pdf} \caption{\textit{Simultaneous state-feedback synthesis and identification:} we minimize $\mathcal{R}_{q_r,\dot{q}_r}$ while $\mathcal{R}_{u_r}$ is constrained. Variables are shown in \textbf{bold}.} \label{fig:eval:stateFeedback} \end{figure} In this section of our case study, we apply our iterative synthesis approach from Sec. \ref{sec:methods:simultaneous} to the problem of designing the state-feedback controller in \eqref{eq:eval:robotController}. The feedback linearization, which decouples the dynamics of the robot joints, is a major simplification, since it allows us to synthesize the controller for each joint separately. The synthesis goal is to reduce the reachable set of the tracking error taking into account the limited motor capabilities, while simultaneously identifying reachset conformant disturbances of the robot. We evaluate the same model candidates as in the previous experiment. The block diagram of the closed-loop system is shown in Fig. \ref{fig:eval:stateFeedback}.
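Conceptually, the terminal reachable set used as the synthesis cost can be obtained by propagating the disturbance zonotope through the stable closed-loop dynamics until its norm stagnates. The following sketch illustrates this for a discrete-time closed loop; the matrices \texttt{A\_cl}, \texttt{E\_cl}, and \texttt{G\_W} are placeholders for the composed closed-loop model, the zonotope center is assumed to be zero, and no generator-order reduction is performed.
\begin{verbatim}
import numpy as np

def zono_norm(G):
    # sum of the side lengths of the interval hull (Def. zonotope norm)
    return np.sum(np.abs(G))

def terminal_reachset(A_cl, E_cl, G_W, tol=1e-9, max_steps=5000):
    n = A_cl.shape[0]
    G = np.zeros((n, 0))                        # start from the origin
    prev = 0.0
    for _ in range(max_steps):
        # R[k+1] = A_cl R[k] (+) E_cl W, as a generator concatenation
        G = np.hstack([A_cl @ G, E_cl @ G_W])
        cur = zono_norm(G)
        if abs(cur - prev) < tol:               # norm stagnates: R(t_inf)
            return G
        prev = cur
    return G
\end{verbatim}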
The reference for each axis is the desired trajectory $\vec{y}_\mathrm{ref} := [q_d,\dot{q}_d]^T$ and $u_\mathrm{ref} = \ddot{q}_d$, the outputs of the closed-loop system are $\vec{y}_z = \vec{y}_p$ and $\vec{y}_\mathrm{con} = u_r$, where $\vec{y}_p$ consists of the output position and velocity of the robot (see Fig. \ref{fig:eval:models}), and $u_r$ is the plant input. The synthesis goal is to reduce the position and velocity error of the terminal reachable set: \begin{subequations} \label{eq:eval:stateOptimization} \begin{alignat*}{2} &\!\min_{\mathcal{P}} &\qquad& ||\mathcal{R}_{q_r,\dot{q}_r}(t_\infty)||,\\ &\text{subject to} & & \eqref{eq:methods:linearConstraint},\\ & & & \forall t: \mathcal{R}_\mathrm{con}(t) \subseteq \mathcal{Y}_c, \end{alignat*} and \begin{equation*} \mathcal{P} := \lbrace \omega,\zeta,\alpha_W,\alpha_V \rbrace. \end{equation*} \end{subequations} To find the appropriate $\mathcal{Y}_c$ according to \eqref{eq:methodsC:referenceSet}, we reserve $\ddot{q}_d \in \mathcal{U}_\mathrm{ref} = [-3,3]$ rad/s$^2$. We then derive, for each axis $i$, the upper limit of the input $u_r \in \mathcal{U}_{p,i}$ from the peak torques of the motors, which are $\vec{\tau}_\mathrm{max} = [75.5,75.5,75.5,75.5,20,20]^T$~Nm. We fit the largest intervals $\mathcal{U}_{p,i}$ for each joint that adhere to $\vec{\tau} \leq \vec{\tau}_\mathrm{max}$ by evaluating \eqref{eq:eval:feedbackLinearization} with $\vec{u} := \mathcal{U}_{p,1} \times ... \times \mathcal{U}_{p,6}$ and randomly sampled $q,\dot{q}$, as sketched below. We determined that $\mathcal{U}_{p,2} =[-7.27,7.27]$ rad/s$^2$ for axis 2, and $\mathcal{U}_{p,i}=[-20,20]$ rad/s$^2$ for all other axes, are admissible intervals. Thus, by applying \eqref{eq:methodsC:referenceSet}, $\mathcal{Y}_c=[-4.27,4.27]$ for axis 2 and $\mathcal{Y}_c=[-17,17]$ for the other axes. The iterative synthesis is performed for each axis individually. The initial dataset for the first iteration is the same as in the open-loop identification. For subsequent iterations, we run a validation trajectory to obtain new data. The results of the synthesis are shown in Tab. \ref{tab:eval:stateFeedback}.
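The interval fitting mentioned above can be implemented by a simple sampling loop, which checks a candidate acceleration box against the peak torques; the sketch below is illustrative, and \texttt{inverse\_dynamics} is a hypothetical placeholder for an implementation of \eqref{eq:eval:feedbackLinearization}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
TAU_MAX = np.array([75.5, 75.5, 75.5, 75.5, 20.0, 20.0])  # peak torques [Nm]

def torque_ok(bound, n_samples=10_000):
    """True if the box [-bound, bound]^6 respects TAU_MAX on random states."""
    for _ in range(n_samples):
        q   = rng.uniform(-np.pi, np.pi, 6)      # assumed joint ranges
        qd  = rng.uniform(-2.0, 2.0, 6)          # assumed velocity ranges
        u   = rng.uniform(-bound, bound)         # sample from candidate box
        tau = inverse_dynamics(q, qd, u)         # M(q) u + c(q, qd) + g(q)
        if np.any(np.abs(tau) > TAU_MAX):
            return False
    return True
\end{verbatim}
A bisection over \texttt{bound} per axis then returns the largest admissible intervals $\mathcal{U}_{p,i}$.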
\begin{table*} \caption{State feedback control synthesis results for all candidate models} \label{tab:eval:stateFeedback} \setlength\tabcolsep{5pt} \begin{center} \begin{tabular}{ r r r r r r r r r r r r r r r r r } \hline && \multicolumn{7}{c}{\textbf{Iteration 1}} && \multicolumn{7}{c}{\textbf{Iteration 2}} \\ \textbf{Model} &Ax.& cost & $\omega$ & $\zeta$&$\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ & $\alpha_{V,2}$ && cost & $\omega$ & $\zeta$ & $\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ & $\alpha_{V,2}$\\ \hline \multirow{6}{*}{R-} &$1$&$0.16$ & $100.00$ & $0.90$ & $0.00$ & $2.15$ & $0.00$ & $0.02$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$\\ &$2$&$1.07$ & $7.12$ & $0.74$ & $0.00$ & $2.05$ & $0.00$ & $0.09$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\ &$3$&$0.22$ & $100.00$ & $0.76$ & $0.00$ & $1.50$ & $0.00$ & $0.04$&& $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\ &$4$&$0.22$ & $97.05$ & $0.80$ & $0.00$ & $2.85$ & $0.00$ & $0.03$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\ &$5$&$0.28$ & $78.81$ & $0.75$ & $0.00$ & $3.23$ & $0.00$ & $0.04$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\ &$6$&$0.43$ & $47.75$ & $0.86$ & $0.00$ & $4.93$ & $0.00$ & $0.05$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\[0.1cm] \multirow{6}{*}{R+} &$1$&$0.21$ & $98.29$ & $1.00$ & $0.01$ & $2.25$ & $0.00$ & $0.02$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\ &$2$&$1.77$ & $3.81$ & $1.00$ & $0.06$ & $2.05$ & $0.00$ & $0.09$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\ &$3$&$0.30$ & $75.71$ & $0.85$ & $0.03$ & $1.54$ & $0.00$ & $0.04$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\ &$4$&$0.30$ & $73.75$ & $0.89$ & $0.02$ & $3.01$ & $0.00$ & $0.03$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\ &$5$&$0.36$ & $61.44$ & $0.88$ & $0.02$ & $3.32$ & $0.00$ & $0.04$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\ &$6$&$0.49$ & $43.78$ & $0.91$ & $0.01$ & $5.02$ & $0.00$ & $0.05$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\[0.1cm] \multirow{6}{*}{RO-}&$1$&$\mathit{0.24}$ & $\mathit{21.18}$ & $\mathit{1.00}$ & $\mathit{0.05}$ & $\mathit{0.00}$ & $\mathit{0.00}$ & $-$ && $-$ & $-$ & $-$& $-$ &$-$ & $-$ & $-$ \\ &$2$&$0.27$ & $\mathit{16.31}$ & $\mathit{1.00}$ & $\mathit{0.05}$ & $\mathit{0.00}$ & $\mathit{0.00}$ & $-$ && $-$ & $-$ & $-$& $-$ &$-$ & $-$ & $-$\\ &$3$&$0.18$ & $42.16$ & $1.00$ & $0.04$ & $0.00$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\ &$4$&$0.12$ & $40.02$ & $1.00$ & $0.02$ & $0.00$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\ &$5$&$0.18$ & $42.48$ & $1.00$ & $0.04$ & $0.00$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\ &$6$&$0.21$ & $43.83$ & $1.00$ & $0.04$ & $0.00$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\[0.1cm] \multirow{6}{*}{RO+}&$1$&$0.29$ & $40.03$ & $1.00$ & $0.03$ & $1.63$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\ &$2$&$\mathit{1.22}$ & $\mathit{4.73}$ & $\mathit{1.00}$ & $\mathit{0.09}$ & $\mathit{1.91}$ & $\mathit{0.00}$ & $-$ && $-$ & $-$ & $-$& $-$ &$-$ & $-$ & $-$ \\ &$3$&$0.30$ & $37.83$ & $1.00$ & $0.03$ & $1.33$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\ &$4$&$0.35$ & $46.69$ & $1.00$ & $0.04$ & $2.48$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ &$-$ \\ &$5$&$0.35$ & $47.97$ & $1.00$ & $0.03$ & $3.44$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ &$-$ \\ &$6$&$0.48$ & $38.31$ & $1.00$ & $0.04$ & $4.52$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ &$-$ \\[0.1cm] \multirow{6}{*}{RD+}&$1$&$\mathbf{0.31}$ & $\mathit{35.46}$ & $\mathit{0.79}$ & $\mathit{0.02}$ & $\mathit{1.88}$ & $\mathit{0.00}$ & $\mathit{0.02}$&& 
$-$&$-$&$-$&$-$&$-$&$-$&$-$\\ &$2$&$\mathit{1.74}$ & $\mathit{3.96}$ & $\mathit{1.00}$ & $\mathit{0.08}$ & $\mathit{1.99}$ & $\mathit{0.00}$ & $\mathit{0.08}$&& $-$&$-$&$-$&$-$&$-$&$-$&$-$\\ &$3$&$\mathit{0.38}$ & $\mathit{29.65}$ & $\mathit{0.80}$ & $\mathit{0.03}$ & $\mathit{1.38}$ & $\mathit{0.00}$ & $\mathit{0.03}$&& $-$&$-$&$-$&$-$&$-$&$-$&$-$\\ &$4$&$\mathbf{0.42}$ & $\mathit{35.48}$ & $\mathit{0.79}$ & $\mathit{0.03}$ & $\mathit{2.61}$ & $\mathit{0.00}$ & $\mathit{0.03}$&& $-$&$-$&$-$&$-$&$-$&$-$&$-$\\ &$5$&$0.51$ & $35.48$ & $0.79$ & $0.03$ & $3.17$ & $0.00$ & $0.03$ && $\mathit{0.56}$ & $\mathit{38.24}$ & $\mathit{0.85}$ & $\mathit{0.03}$ & $\mathit{4.08}$ & $\mathit{0.00}$ & $\mathit{0.03}$ \\ &$6$&$0.58$ & $36.51$ & $0.90$ & $0.03$ & $4.71$ & $0.00$ & $0.03$ && $\mathit{0.71}$ & $\mathit{29.76}$ & $\mathit{1.00}$ & $\mathit{0.02}$ & $\mathit{6.59}$ & $\mathit{0.00}$ & $\mathit{0.03}$ \\[0.1cm] \multirow{6}{*}{ROD+}&$1$&$\mathit{0.37}$ & $\mathit{19.28}$ & $\mathit{1.00}$ & $\mathit{0.03}$ & $\mathit{1.37}$ & $\mathit{0.00}$& $-$&&$-$&$-$&$-$&$-$&$-$& $-$&$-$\\ &$2$&$\mathbf{1.15}$ & $\mathit{4.92}$ & $\mathit{1.00}$ & $\mathit{0.09}$ & $\mathit{1.74}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\ &$3$&$\mathbf{0.37}$ & $\mathit{18.67}$ & $\mathit{1.00}$ & $\mathit{0.04}$ & $\mathit{1.18}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\ &$4$&$\mathit{0.50}$ & $\mathit{20.85}$ & $\mathit{1.00}$ & $\mathit{0.04}$ & $\mathit{2.14}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\ &$5$&$\mathbf{0.54}$ & $\mathit{19.28}$ & $\mathit{1.00}$ & $\mathit{0.05}$ & $\mathit{2.09}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\ &$6$&$\mathbf{0.67}$ & $\mathit{21.75}$ & $\mathit{1.00}$ & $\mathit{0.04}$ & $\mathit{3.79}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\\hline \multicolumn{14}{c}{\textbf{bold}: best model candidate for this axis, \textit{italic}: converged values, $*$: infeasible solution, $-$: not evaluated} \end{tabular} \end{center} \end{table*} The ROD+ model is the only one that converges after one iteration, meaning that, when running the validation trajectory with the optimized values, the robot did not produce new data for which the identified model was not reachset conformant. We can see that the R and RO models are not suitable for controller synthesis: the first iteration produced control gains that were too high and for which the real robot became unstable. Thus, the second iteration did not yield feasible solutions, since the identified non-determinisms were too large. Only RD+ and ROD+ produced converging solutions, because they modelled the delay dynamics. This helped the reachability analysis predict the instability when using high gains: the reachable sets would grow very large, letting the optimization avoid such gains. \subsection{Observer synthesis} \begin{figure} \includegraphics[width=\columnwidth]{figs/observer.pdf} \caption{\textit{Simultaneous observer synthesis and identification (Approach 1):} we minimize $\mathcal{R}_{\dot{\hat{e}}}$. Variables are shown in \textbf{bold}.} \label{fig:eval:observer} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figs/observer2.pdf} \caption{\textit{Observer synthesis (Approach 2):} we minimize the transient duration of a set of step responses. Variables are shown in \textbf{bold}.} \label{fig:eval:observer2} \end{figure} We study the problem of designing the linear observer in \eqref{eq:eval:robotObserver}.
In this paper, we propose two different reachability-based approaches that can be realized within our framework. \begin{itemize} \item \textbf{Approach 1}: we minimize the velocity estimation error, considering the closed-loop system as depicted in Fig.~\ref{fig:eval:observer}. We obtain an invariant set, towards which the reachable maximal velocity estimation error converges. \item \textbf{Approach 2}: given a set of step responses, we minimize the duration of the transient, as well as the reachable steady-state error of the observer system. This approach is inspired by the work in \cite{Prasov2013}, where the authors formally proved the boundedness of high-gain observers with measurement errors. \end{itemize} Like the state-feedback example, we formulate Approach 1 for each joint as an iterative synthesis problem (Sec. \ref{sec:methods:simultaneous}), since it involves identifying the robot plant. Approach 2 only analyses the known observer system, so that the reachability-based control synthesis method of Sec. \ref{sec:methods:controlSynthesis} is sufficient. \subsubsection*{Approach 1} The reference input and output are the same as for the state-feedback example: $\vec{y}_\mathrm{ref} := [q_d,\dot{q}_d]^T$, and $u_\mathrm{ref} = \ddot{q}_d$. The output of the closed-loop system is the estimation error $y_z = \dot{\hat{e}} := \dot{q}-\dot{\hat{q}}$. Notice that, because the unmeasurable 'true' velocity $\dot{q}$ of the robot system is analysed here, we need to include an estimate (e.g., an observed value) within the test data for the identification. This does not conflict with the synthesis, because the test data is only relevant for the identification, while Approach 1 optimizes the dynamics resulting from a new observer. For brevity, we omit the input constraints. The overall synthesis is formalized in the following optimization problem: \begin{subequations} \label{eq:eval:observerOptimization} \begin{alignat*}{2} &\!\min_{\mathcal{P}} &\qquad& ||\mathcal{R}_{\dot{\hat{e}}}(t_\infty)||,\\ &\text{subject to} & & \eqref{eq:methods:linearConstraint}, \end{alignat*} and \begin{equation*} \mathcal{P} := \lbrace \tilde{h}_1,\tilde{h}_2,\alpha_W,\alpha_V \rbrace, \end{equation*} \end{subequations} where $\mathcal{R}_{\dot{\hat{e}}}(t_\infty)$ is a positively invariant set, towards which the velocity estimation error converges. Since $\epsilon$ is a redundant parameter, we fix it at $\epsilon = 0.01$. Like in the state-feedback synthesis, we optimize the scaling parameters $\alpha_W,\alpha_V$ of the robot non-determinism. Since the observer is implemented in discrete time, we only consider the discrete-time robot model candidates R+ and RD+. The results of the synthesis are shown in Tab. \ref{tab:eval:observer}. The iterative synthesis converged after one iteration for both model candidates. In contrast to the open-loop identification and state-feedback synthesis, we see that the R+ model led to the smallest final reachable set. Despite the varying non-determinisms of the robot, the optimal observer parameters are very similar across all axes, which indicates that $\mathcal{W},\mathcal{V}$ do not influence the observer dynamics much, but affect the final reachable set. \begin{table} \caption{Observer synthesis results (Approach 1)} \label{tab:eval:observer} \begin{center} \setlength\tabcolsep{5pt} \begin{tabular}{ r r r r r r r r r } \hline && \multicolumn{7}{c}{\textbf{Iteration 1}} \\ \textbf{Model} & Ax.
& cost & $\tilde{h}_1$ & $\tilde{h}_2$ & $\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ & $\alpha_{V,2}$\\ \hline \multirow{6}{*}{R+}&$1$&$\mathbf{0.12}$ & $10.55$ & $29.73$ & $0.03$ & $1.58$ & $0.00$ & $0.10$ \\ &$2$&$\mathbf{0.16}$ & $11.72$ & $45.85$ & $0.02$ & $5.89$ & $0.00$ & $0.03$ \\ &$3$&$\mathbf{0.13}$ & $10.55$ & $29.73$ & $0.03$ & $1.32$ & $0.00$ & $0.09$ \\ &$4$&$\mathbf{0.15}$ & $10.55$ & $29.73$ & $0.04$ & $2.41$ & $0.00$ & $0.10$ \\ &$5$&$\mathbf{0.14}$ & $10.55$ & $29.73$ & $0.03$ & $3.07$ & $0.00$ & $0.07$ \\ &$6$&$\mathbf{0.20}$ & $10.55$ & $29.73$ & $0.04$ & $4.11$ & $0.00$ & $0.08$ \\[0.1cm] \multirow{6}{*}{RD+} &$1$&$0.21$ & $7.99$ & $21.79$ & $0.04$ & $2.19$ & $0.00$ & $0.02$ \\ &$2$&$0.39$ & $7.99$ & $21.81$ & $0.06$ & $5.68$ & $0.00$ & $0.05$ \\ &$3$&$0.21$ & $7.99$ & $21.82$ & $0.03$ & $2.26$ & $0.00$ & $0.02$ \\ &$4$&$0.25$ & $7.99$ & $21.81$ & $0.04$ & $3.37$ & $0.00$ & $0.02$ \\ &$5$&$0.29$ & $7.99$ & $21.81$ & $0.04$ & $3.77$ & $0.00$ & $0.03$ \\ &$6$&$0.31$ & $7.99$ & $21.81$ & $0.04$ & $4.15$ & $0.00$ & $0.04$ \\[0.1cm] \hline \multicolumn{7}{c}{\textbf{bold}: best model candidate for this axis} \end{tabular} \end{center} \end{table} \subsubsection*{Approach 2} The goal of the synthesis is to minimize the transient response time, as well as the steady-state error caused by the measurement error. As \cite{Prasov2013} points out, these are conflicting goals for high-gain observers, because a faster transient leads to noise amplification, while a slower transient attenuates noise. To resolve this conflict, we consider the transient response time as the sole cost, while we regard the maximum steady-state error as a constraint. The overall synthesis is formalized in the following optimization problem: \begin{subequations} \label{eq:eval:observerOptimization2} \begin{alignat*}{2} &\!\min_{\mathcal{P}} &\qquad& t_\infty,\\ &\text{subject to} & & \mathcal{R}_{\hat{q},\dot{\hat{q}}}(t_\infty) \in \mathcal{Y}_s, \end{alignat*} and \begin{equation*} \mathcal{P} := \lbrace \tilde{h}_1,\tilde{h}_2 \rbrace, \end{equation*} \end{subequations} where $\mathcal{R}_{\hat{q},\dot{\hat{q}}}(t_\infty)$ is the positively invariant set representing the steady-state error, towards which the system converges, and $t_\infty$ is the time of the convergence, which we consider as the transient response time. We set the measurement error at $\mathcal{V} = [-1,1]$ millidegrees, and consider a set of step responses by starting the reachability analysis of the observer system with a non-empty set $x(0) \in \mathcal{X}_0 = [-0.1,0.1] \times [-0.1,0.1 ]$, while keeping the reference signal $q_r=0$. We constrain the steady-state error of $\dot{\hat{q}}$ to $\mathcal{Y}_s = [-0.005,0.005]$. We only analyse the discrete-time observer, since this is the one that is implemented on the real controller. The results are shown in Tab. \ref{tab:eval:observer2}. Interestingly, the optimal observer gains obtained through Approach 2 are similar to the ones obtained through Approach 1. In addition, we discover that, by increasing the gains for discrete-time observers, the transient response time $t_\infty$ decreases at first, but increases again, because the system starts to oscillate due to discretization effects. Therefore, the $t_\infty$ in Tab. \ref{tab:eval:observer2} is the actual minimum without violating $\mathcal{Y}_s$. We additionally show this behavior in Fig. \ref{fig:eval:observer2conv} by varying $\epsilon$: for $\epsilon=0.02$, the steady-state error is small, but $t_\infty=0.112$ s is large.
For $\epsilon=0.01$, the steady-state error is still small, and $t_\infty=0.064$ s is the smallest. For $\epsilon=0.005$, the steady-state error is large, and so is $t_\infty=0.128$ s. \begin{figure} \includegraphics[width=\columnwidth]{figs/observer2conv.pdf} \caption{\textit{Observer synthesis (Approach 2):} Comparison of transient response time $t_\infty$ for $\epsilon=0.005,\epsilon=0.01$, and $\epsilon=0.02$} \label{fig:eval:observer2conv} \end{figure} \begin{table} \caption{Observer synthesis results (Approach 2)} \label{tab:eval:observer2} \begin{center} \setlength\tabcolsep{5pt} \begin{tabular}{ r r r } \hline transient response time [s]& $\tilde{h}_1$ & $\tilde{h}_2$ \\\hline $0.064$ & $10.13$ & $25.69$\\ \hline \end{tabular} \end{center} \end{table} \subsection{Output-feedback control synthesis} \begin{figure} \includegraphics[width=\columnwidth]{figs/outputfeedback.pdf} \caption{\textit{Simultaneous output-feedback synthesis and identification:} we minimize $\mathcal{R}_{\hat{q},\dot{\hat{q}}}$ while $\mathcal{R}_{u_r}$ is constrained. Variables are shown in \textbf{bold}.} \label{fig:eval:outputfeedback} \end{figure} Merging the linear observer and the state-feedback controller, the overall mechanism becomes an output-feedback controller. We briefly show in this section of the case study that we can also synthesize the controller and the observer at the same time. The block diagram is shown in Fig. \ref{fig:eval:outputfeedback}, and the optimization problem is the same one as in Sec. \ref{sec:eval:statefeedback}, except that the parameter set is now $\mathcal{P} := \lbrace \omega,\zeta,\tilde{h}_1,\tilde{h}_2,\alpha_W,\alpha_V \rbrace$. For brevity, we only evaluate RD+ as the model candidate; the results are shown in Tab. \ref{tab:eval:outputfeedback}. Because the variable parameter set has now grown, the nonlinear programming algorithms often reached local minima. We restarted the synthesis with differing initial values until no better solution could be found. The costs for each axis are all smaller than the corresponding costs of the ROD+ model in Tab. \ref{tab:eval:stateFeedback}, for which the reason is obvious: in the previous experiment, $\tilde{h}_1$ and $\tilde{h}_2$ were manually tuned and fixed; in this experiment, the output-feedback synthesis has found superior values. The observer gains for axes 2 and 6 are significantly larger than the rest, but resulted in smaller reachable sets and did not lead to unstable robot behavior. \begin{table} \caption{Output-feedback synthesis results} \label{tab:eval:outputfeedback} \setlength\tabcolsep{3.2pt} \begin{center} \begin{tabular}{ r r r r r r r r r r } \hline && \multicolumn{8}{c}{\textbf{Iteration 1}} \\ \textbf{Model} & Ax.
& cost & $\omega$ & $\zeta$ &$\tilde{h}_1$ & $\tilde{h}_2$ & $\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ \\ \hline \multirow{6}{*}{RD+}&$1$&$\mathbf{0.37}$ & $18.72$ & $1.00$ & $11.38$ & $27.36$ & $0.04$ & $1.23$ & $0.00$ \\ &$2$&$\mathbf{0.85}$ & $6.37$ & $1.00$ & $59.05$ & $37.20$ & $0.09$ & $1.52$ & $0.00$ \\ &$3$&$\mathbf{0.37}$ & $18.06$ & $1.00$ & $10.77$ & $25.00$ & $0.04$ & $1.18$ & $0.00$ \\ &$4$&$\mathbf{0.48}$ & $20.09$ & $1.00$ & $11.51$ & $27.59$ & $0.05$ & $1.91$ & $0.00$ \\ &$5$&$\mathbf{0.48}$ & $17.96$ & $1.00$ & $11.31$ & $27.25$ & $0.05$ & $1.58$ & $0.00$ \\ &$6$&$\mathbf{0.64}$ & $22.31$ & $1.00$ & $119.82$ & $360.36$ & $0.06$ & $3.23$ & $0.00$ \\[0.1cm] \hline \end{tabular} \end{center} \end{table} \section{Conclusion}\label{sec_conclusion} \section{Introduction}\label{sec_introduction} \subsection{Motivation} \begin{itemize} \item Formal methods are mathematical techniques for reasoning about systems, their requirements, and their guarantees \cite{Kress-Gazit2018}. \item Formal synthesis refers to frameworks where tasks are specified in a precise language and automatically transformed into correct-by-construction robot controllers. \end{itemize} \subsection{Statement of contributions} \noindent In the following, we list the contributions of this work: \begin{itemize} \item We formulate a unified optimal control framework for reachability-based model identification, controller synthesis, and the combination of both. \item We propose a model identification method for non-deterministic systems, which preserves reachset conformance with respect to the test data of the real system. Computationally efficient solutions for continuous-time and discrete-time linear systems are presented using zonotopes as set representation. \item We extend reachability-based controller synthesis to general linear controller systems. Using our unified framework, we combine controller synthesis with model identification and propose an iterative method to generate optimal controllers with formal guarantees for real systems. \item We extensively study the application of reachability-based methods to feedback-linearizing tracking controllers of robots. We use our approaches to obtain formal guarantees on the tracking error, the velocity estimation error, and whether input constraints can be met. \item We provide software in the form of a reachability-based identification toolbox written in MATLAB. The underlying foundation is the COntinuous Reachability Analyzer (CORA) \cite{Althoff2015}. \end{itemize} \subsection{Literature overview}\label{sec_survey} Traditionally, system identification and model-based control design for continuous dynamical systems have been regarded as two separate disciplines \cite{VanDenHof1995}: a nominal model of a robot is identified based on an optimality criterion, e.g., minimizing a least-squares error \cite{Atkeson1986a,Ljung1999}; control design and stability analysis are then applied assuming that the model is an exact representation of the physical dynamics \cite{An1988model}. With the advance of robust control, it became clear that determining an exact model of physical systems might be infeasible, and that instead, uncertainties should be included in the control design \cite{Abdallah1991}. Such uncertainties can be additive, multiplicative \cite{VanDenHof1995}, or parametric \cite{Swevers1997,Ramdani2005}. The main criterion for the identification of such uncertainties has been their size.
However, small model errors do not necessarily lead to good robust control, and large model errors do not necessarily lead to bad control performance, as \cite{Skelton1989} points out. Therein lies the motivation for \textit{identification for control}, in which the model uncertainties are determined in a way such that they are optimal for the control goal \cite{VanDenHof1995}. Model errors can be divided along two main axes: stochastic bounds vs. set bounds, and frequency-domain vs. time-domain uncertainties. A discussion on frequency-domain uncertainties for robust control can be found in \cite{Douma2005}. Stochastic aspects of model errors are treated in large detail in \cite{Ljung1999}. In \cite{Santolaria2013}, the stochastic uncertainty of the parameters of robot kinematics is identified through Monte-Carlo sampling. In the following paragraphs, we focus on set bounds for time-domain uncertainties. Most of the previous literature belongs to set-membership identification \cite{Vicino1996,Milanese2004,Kieffer2006,Bravo2006,Ramdani2005}, which usually refers to works based on finding a feasible solution set (FSS). Given an unknown model and a set of measurements, the goal is to identify an FSS that is consistent with all measurements of the system. The general technique \cite{Vicino1996} is to model measurements, including their a priori assumed errors, as strips in parameter space. The FSS is then the intersection of all strips, such that the unknown parameter is guaranteed to be inside. It is important to note that the FSS does not actually represent non-deterministic parameters; rather, the approach seeks to find the 'true' deterministic parameter value by narrowing the FSS down as much as possible. The non-determinism, in fact, must be assumed a priori. For example, in \cite{Zhang2020}, the non-deterministic disturbance of a linear system must be known a priori to identify the FSS of the system matrix parameters. As \cite{Ramdani2005} showed on real robot manipulators, the set-membership identification technique frequently returns an empty FSS, such that the a priori non-determinisms have to be manually increased. Also, the authors exclude data considered as 'outliers' if the measurement strip is far away from the FSS. The work in \cite{Reppa2008} proposes to use the outliers for fault detection of robots. The work in \cite{Bravo2006} presents a set-membership approach that aims to track time-varying parameters. In contrast to these works, we are interested in identifying bounds of time-varying and non-deterministic parameters, for which the mentioned FSS approaches are not suitable. Since formal methods are increasingly applied to robotic systems, the question arises of how far verification results obtained for a model are actually transferable to the real system. This problem is also known as \textit{model conformance} and has been treated in-depth in \cite{Roehm2019}. Most literature on set-based identification is based on the \textit{simulation relation}, since it allows a transfer of, e.g., temporal logic properties for the entire state space. The model can be a coarse-grained abstraction of the state space into a discrete automaton (e.g., for the navigation of mobile robots \cite{Kress-Gazit2018}), or differential equations \cite{Chen2019a,Sadraddini2018} with non-deterministic disturbance. Chen et al. \cite{Chen2019a} identify a linear system with non-determinism such that all state measurements are within a polytopic reachable set.
Sadraddini and Belta \cite{Sadraddini2018} identify piece-wise affine models using Mixed-Integer Linear Programming, also establishing a simulation relation between measured states and hyperrectangular reachable sets. However, if a system is high-dimensional, but only few outputs are relevant for verification, then the simulation relation can be too restrictive and conservative. Thus, \textit{trace} and \textit{reachset conformance} have been proposed to relax the formal relation only to the output of a system \cite{Roehm2019}. In \cite{Schurmann2018}, the authors apply trace conformance by reconstructing disturbance traces for a real autonomous vehicle. The set of non-deterministic disturbances is then taken as the outer bounds of all disturbance traces. \textit{Reachset conformance}, on the other hand, is a further relaxation which only requires that the output traces of a system must be within the reachable set of the model, instead of matching individual traces. The main advantage is that a user can now more freely choose the internal structure, as long as the output is conformant, enabling more flexible model-order reduction \cite{Althoff2012a}, or even the application of black-box identification methods \cite{Wang2021}. Although the set of transferable properties reduces to reachability-based ones only, this is only a supposed disadvantage: many verification problems in formal methods are actually based on reachability, such as the computation of control invariant sets \cite{Gruber2020} and verifying collision avoidance \cite{Althoff2019}. First works on the identification of robot manipulators based on reachset conformance can be found in \cite{Liu2018,Giusti2021}. A different view on set-based identification is to formulate it as a synthesis problem. The authors in \cite{Dang2019,Batt2007} are able to incorporate additional model knowledge as temporal logic constraints to improve identification. \input{sections/Literature_Formal_Synthesis.tex} Finally, we relate this work to the area of robust control for robots. The approach of this paper can be directly applied to the reachability analysis of feedback-linearizing robust linear controllers, where--similarly to our work--an uncertainty of the linear system due to an imperfect model is assumed \cite{Sage1999}. Robustness analysis involves bounding uncertain parameters of the model; e.g., in \cite[Section 8.5.3]{Siciliano2009a}, the mass matrix and other nonlinear terms of the robot dynamics are bounded to prove uniform boundedness of the tracking error. The approach in \cite{Zenieh1997} discusses a control scheme for robots that achieves a desired tracking performance with a specified convergence rate. Uniform ultimate boundedness of the computed-torque controller (which we analyse in our work) despite system uncertainties has already been shown in previous work \cite{Qu1991}. Further works on robust control for robots are surveyed in \cite{Sage1999}. $\mathcal{H}_\infty$-synthesis (e.g., in \cite{Kim2015,Makarov2016}) generates controllers that minimize the influence of disturbances on the system dynamics expressed in the frequency domain. Often, such as in \cite{Makarov2016}, the validation of these approaches is done in the time domain through a Monte-Carlo simulation of the uncertain parameters to analyse the system's reachability. In contrast, our work computes the reachable set directly to evaluate control performance.
In fact, reachability analysis can be interpreted as a direct evaluation of robust properties such as uniform ultimate boundedness. \subsection{Structure of this paper} In Sec. \ref{sec_preliminaries} we introduce zonotopes and reachability analysis of linear systems. The reachability-based methods are presented in Sec. \ref{sec:methods}. We then address the application of these methods to the tracking control problem of robot systems in Sec. \ref{sec_evaluation}. This paper concludes in Sec. \ref{sec_discussion}. \subsection{Reachability-based control synthesis} Based on the identified uncertain model from the last section, we want to compute a controller which minimizes the resulting reachable sets while formally guaranteeing the satisfaction of state and input constraints. To do so, we use techniques from \cite{Schuermann2017b,Schuermann2021a}, where we combine the controller synthesis with reachable set computation in a single optimization problem. Since we have a linear system, we use a classical linear trajectory tracking controller which has the form \begin{align} u_{ctrl}(x[k])=u_{ref}[k] + K(x[k]-x_{ref}[k]). \end{align} Here, $ x_{ref}[\cdot] $ denotes a reference trajectory and $ u_{ref}[\cdot] $ the corresponding reference input, and $ K $ is the feedback matrix which we use to track this reference trajectory. We consider constraint sets for the states and the inputs of the form \begin{align} x[k] &\in \mathcal{X}, \label{eq:method:StateConstraint}\\ u[k] &\in \mathcal{U},\label{eq:method:InputConstraint} \end{align} $ \forall k \in \mathbb{N}^+_0. $ Due to the linearity of the system dynamics, we can use the superposition principle to independently consider the problems of finding a reference trajectory $ x_{ref}[\cdot] $ and a feedback matrix $ K. $ We use an optimization problem to find the optimal feedback matrix $ K $ offline once and use this feedback matrix to track any (online) generated reference trajectory. In order to decouple these two control problems, we divide the state constraints into two parts $ \mathcal{X}_{ref} $ and $ \mathcal{X}_{fb} $ for the reference trajectory and for the feedback controller, respectively. We do the same for the input constraints with $ \mathcal{U}_{ref} $ and $ \mathcal{U}_{fb} $. We choose these sets such that \begin{align} \mathcal{X}_{ref} \oplus \mathcal{X}_{fb} \subseteq \mathcal{X},\\ \mathcal{U}_{ref} \oplus \mathcal{U}_{fb} \subseteq \mathcal{U}. \end{align} For simpler computation, we choose $ \mathcal{X}_{fb} $ and $ \mathcal{U}_{fb} $ as polytopes. We obtain the feedback control law by solving the following optimal control problem \begin{subequations} \begin{alignat}{2} &\!\min_{K} &\qquad& \texttt{size}(\mathcal{R}),\\ &\text{subject to} & & \mathcal{R}\subseteq \mathcal{X}_{fb},\label{eq:method:OCStateConstraint}\\ & & & \forall t: K \mathcal{R}_y(t) \subseteq \mathcal{U}_{fb}.\label{eq:method:OCInputConstraint} \end{alignat} \end{subequations} Since $ \mathcal{R} $ and $ \mathcal{R}_y(t) $, and therefore also $ K \mathcal{R}_y(t), $ are zonotopes, checking the constraints \eqref{eq:method:OCStateConstraint}--\eqref{eq:method:OCInputConstraint} only requires checking whether a zonotope is inside a polytope. As shown in \cite{Schuermann2021a}, this can be computed very efficiently.
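For completeness, this containment check reduces to comparing the support function of the zonotope with the polytope offsets: a zonotope $(\vec{c},G)$ lies in $\lbrace x \mid Nx \leq \vec{d}\rbrace$ iff, for every facet $j$, $\vec{n}_j^T\vec{c} + \sum_h |\vec{n}_j^T\vec{g}^{(h)}| \leq d_j$. A minimal sketch (with illustrative names):
\begin{verbatim}
import numpy as np

def zonotope_in_polytope(ctr, G, N, d):
    """True iff the zonotope (ctr, G) is contained in {x | N x <= d}."""
    # support function of the zonotope along each facet normal
    support = N @ ctr + np.sum(np.abs(N @ G), axis=1)
    return bool(np.all(support <= d))
\end{verbatim}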
In contrast to the optimization problem in the previous subsection, optimizing the feedback matrix, which is multiplied with the output, can no longer be expressed as a linear problem. As discussed in \cite{Schuermann2017b}, the optimization problem can be solved using standard nonlinear programming techniques. The result of the optimization problem is a feedback matrix $ K $ which minimizes the reachable set in which the state is guaranteed to stay around a reference trajectory, while ensuring the satisfaction of the state and input constraints. During application, one only has to find a reference trajectory with respect to $ \mathcal{X}_{ref} $ and $ \mathcal{U}_{ref} $, and it is guaranteed that the closed-loop system satisfies the actual constraints \eqref{eq:method:StateConstraint}--\eqref{eq:method:InputConstraint}. The combined tracking controller then works similarly to a tube-based robust MPC approach, with the advantage of the optimal feedback controller from a reachability-based controller synthesis. \section{Preliminaries}\label{sec_preliminaries} \subsection{Zonotopes} The advantage of zonotopes as a set representation is that they scale well to high dimensions. In addition, algebraic operations are closed-form, i.e., the result of an operation involving zonotopes is again a zonotope. In the following definitions, we define their generator-space and halfspace representations, as well as the algebraic operations. We denote sets in calligraphic letters (e.g., $\mathcal{A}$), matrices with upper case letters (e.g., $A$), vectors by $\vec{\cdot}$, and scalar values by lower case letters (e.g., $a$). The $n$-dimensional identity matrix is denoted by $I_n$. \begin{definition}[Zonotope: generator-space representation \cite{Althoff2010a}]\label{def:zonotopeG} A zonotope $\mathcal{Z}$ is defined by a center $\vec{c}$; a generator matrix $G$, where $\alpha^{(h)}\vec{g}^{(h)}$ is its $h$-th column; and $\alpha^{(h)}>0$, which is a scaling factor determining the length of each generator: \begin{align*} \mathcal{Z} &= (\vec{c},G) :=\left\lbrace \vec{x}=\vec{c} + \sum_{h=1}^{p}\beta_h \vec{g}^{(h)} \Bigg\vert \beta_h \in [-\alpha^{(h)},\alpha^{(h)}] \right\rbrace \\ &=\left\lbrace \vec{x}=\vec{c} + \sum_{h=1}^{p}\beta_h \alpha^{(h)} \vec{g}^{(h)} \Bigg\vert \beta_h \in [-1,1] \right\rbrace = (\vec{c},G'\diag(\vec{\alpha})), \end{align*} where the columns of $G'$ are the unscaled generators $\vec{g}^{(h)}$. \end{definition} \begin{definition}[Zonotope: halfspace representation \cite{Althoff2010a}]\label{def:zonotopeH} A zonotope $(\vec{c},G)$ with $p$ generators has $2{p \choose n-1}$ facets. The generators that span a facet are obtained by cancelling $p-n+1$ generators from the $G$-matrix. This is denoted by $G^{\langle\gamma,\dots,\eta\rangle}$, where $\gamma,\dots,\eta$ are the $p-n+1$ indices of the generators that are taken out of $G$.
The halfspace representation of a zonotope is $N \cdot \vec{x} \leq \vec{d}$, where \begin{equation*} N = \begin{bmatrix} N^+ \\ -N^+ \end{bmatrix},\quad \vec{d} = \begin{bmatrix} \vec{d}^+\\ \vec{d}^- \end{bmatrix}, \end{equation*} and the $j$-th row, $j \in \{1,\dots,{p \choose n-1}\}$, of $N^+$, $\vec{d}^+$, and $\vec{d}^-$ are: \begin{align*} \vec{n}_j^+ &:= \nX (G^{\langle\gamma,\dots,\eta\rangle})/ ||\nX (G^{\langle\gamma,\dots,\eta\rangle})||_2 \\ d_j^+ &:= \vec{n}_j^{+T} \cdot \vec{c} + \Delta d_j \\ d_j^- &:= -\vec{n}_j^{+T} \cdot \vec{c} + \Delta d_j \\ \Delta d_j &:= \sum_{\nu=1}^{p}|\vec{n}_j^{+T} \cdot g^{(\nu)}|\\ \nX(H) &:= [\dots, (-1)^{j+1}\det(H^{[j]}), \dots]^T. \end{align*} \end{definition} \begin{definition}[Minkowski sum of zonotopes] The Minkowski sum of sets is defined as $\mathcal{A} \oplus \mathcal{B} = \lbrace \vec{a} + \vec{b} \mid \vec{a} \in \mathcal{A}, \vec{b} \in \mathcal{B}\rbrace$. For zonotopes, their Minkowski sum has a closed-form solution in generator space \begin{align*} \mathcal{Z}_1 \oplus \mathcal{Z}_2 = (\vec{c}_1,G_1) \oplus (\vec{c}_2,G_2) = (\vec{c}_1+\vec{c}_2,[G_1,G_2]). \end{align*} \end{definition} \begin{definition}[Linear transformation of zonotopes] Zonotopes are closed under linear transformation: $A \mathcal{Z} = (A\vec{c},AG)$. \end{definition} \begin{definition}[Interval hull of zonotopes]\label{def:intervalHull} The interval hull $\mathcal{I}(\mathcal{Z}) = \lbrace\vec{i}^-,\vec{i}^+\rbrace$ is a tight outer-approximation of a zonotope $\mathcal{Z} = (\vec{c},[\dots,\vec{g}^{(h)},\dots])$, which is defined by \begin{gather*} \vec{i}^- := \vec{c} - \vec{\delta g}, \qquad \vec{i}^+ := \vec{c} + \vec{\delta g}, \qquad \vec{\delta g} := \sum_{h=1}^{p} |\vec{g}^{(h)}|. \end{gather*} \end{definition} \begin{definition}[Norm of zonotopes]\label{def:zonotopeNorm} We define the norm of a zonotope as the sum of the side lengths of its interval hull: $||\mathcal{Z}|| := ||\vec{\delta g}||_1$, where $||.||_1$ is the (scalar) 1-norm. \end{definition} \subsection{Reachability analysis of linear time-invariant systems} This paper mainly considers linear systems $S$ with uncertainties described by the following differential inclusion \begin{align}\label{eq:methods:continuousSystem} \begin{split} \dot{\vec{x}}(t) &\in A\vec{x}(t) + B\vec{u}(t) \oplus E\mathcal{W}, \\ \vec{y}(t) &\in C\vec{x}(t) + D\vec{u}(t) \oplus F\mathcal{V}. \end{split} \end{align} If the input $\vec{u}(t)$ and the uncertainties $\mathcal{V},\mathcal{W}$ are constant within one sampling time $\Delta t$, then we can formulate a discrete-time version $\tilde{S}$, where the integer $k = t/\Delta t$: \begin{align} \label{eq:methods:discreteSystem} \begin{split} \vec{x}[k+1] &\in \tilde{A} \vec{x}[k] + \tilde{B} \vec{u}[k] \oplus \tilde{E} \mathcal{W},\\ \vec{y}[k] &\in C \vec{x}[k] + D \vec{u}[k] \oplus F\mathcal{V}, \end{split} \end{align} where the system matrices are \begin{align*} \tilde{A} &= e^{A\Delta t}, \quad \tilde{B} = \int_{0}^{\Delta t}e^{A(\Delta t-\tau)}B\,d\tau, \\ \tilde{E} &= \int_{0}^{\Delta t}e^{A(\Delta t-\tau)}E\,d\tau. \end{align*} The \textit{reachable set} $\mathcal{R}$ of a linear system $\tilde{S}$ after one time-step is computed through a set-based evaluation of \eqref{eq:methods:discreteSystem}: \begin{multline} \mathcal{R}[k+1] = C \tilde{A} \mathcal{X}[k] \oplus C \tilde{B} \vec{u}[k] \\ \oplus C\tilde{E} \mathcal{W} \oplus D \vec{u}[k] \oplus F\mathcal{V}, \label{eq:methods:reachableSet} \end{multline} where $\mathcal{X}[k]$ is the current set of states.
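A set-based evaluation of \eqref{eq:methods:reachableSet} is straightforward to implement with the zonotope operations defined above; the following Python sketch (with illustrative names, and zonotopes stored as center--generator pairs) propagates the state set by one step and maps it to the output:
\begin{verbatim}
import numpy as np

def reach_step(cX, GX, Ad, Bd, Ed, C, D, F, u, cW, GW, cV, GV):
    """One-step propagation of the state and output reachable sets."""
    # state set: X[k+1] = Ad X[k] + Bd u  (+)  Ed W
    cX1 = Ad @ cX + Bd @ u + Ed @ cW
    GX1 = np.hstack([Ad @ GX, Ed @ GW])   # Minkowski sum in generator space
    # output set: R[k+1] = C X[k+1] + D u  (+)  F V
    cR = C @ cX1 + D @ u + F @ cV
    GR = np.hstack([C @ GX1, F @ GV])
    return (cX1, GX1), (cR, GR)
\end{verbatim}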
If an initial state $\vec{x}[0]$ is given, then the reachable set at step $k$ can be computed by recursively applying \eqref{eq:methods:reachableSet}: \begin{multline} \mathcal{R}[k] = C \tilde{A}^k \vec{x}[0] \oplus \sum_{i=0}^{k-1} C \tilde{A}^{k-1-i} \tilde{B} \vec{u}[i] \\ \oplus \bigoplus_{i=0}^{k-1}C \tilde{A}^{i}\tilde{E}\mathcal{W} \oplus D \vec{u}[k] \oplus \mathcal{V}. \label{eq:methods:reachableSetRecursive} \end{multline} Since \eqref{eq:methods:reachableSetRecursive} only involves the Minkowski sum and linear transformations, the resulting reachable set is closed-form and exact for linear systems $\tilde{S}$. It is an inner-approximation of the reachable sets of linear systems $S$, as shown by the following lemma. \begin{lemma}\label{lemma:constantInput} By moving the set $\mathcal{W}$ out of the convolution integral of the particular solution of a linear time-invariant system, i.e., assuming $w$ is constant, the result is an inner-approximation of the time-varying $w(\tau)$ case: \begin{multline*} \left\{\int_{0}^{t}e^{A(t-\tau)} d\tau w \bigg| w \in \mathcal{W} \right\} \subseteq\\ \left\{\int_{0}^{t}e^{A(t-\tau)} w(\tau) d\tau \bigg| \forall \tau: w(\tau) \in \mathcal{W}\right\}. \end{multline*} This holds because every constant $w \in \mathcal{W}$ is a special case of an admissible time-varying signal $w(\tau) \in \mathcal{W}$; hence, every element of the left-hand side is also contained in the right-hand side. $\square$ \end{lemma} For nonlinear systems of the form $\dot{x} \in f(x,u,\mathcal{W})$, the solution is generally not available in closed form. Further works on outer- and inner-approximations of reachable sets are surveyed in \cite{Althoff2020}. In this work, we consider the compositional analysis of linear subsystems. We define the operators $\texttt{series}(S_1,S_2)$ and $\texttt{feedback}(S_1,S_2)$, which refer to the series and feedback interconnections of linear multi-input, multi-output subsystems $S_1$ and $S_2$, for which the results are also linear systems. The derivation is straightforward, and details can be found in the supplied software code and in \cite{Duke1986}.
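As a sketch of how such interconnections can be built, the series operator can be realized by stacking the subsystem states and substituting $\vec{u}_2 = \vec{y}_1$; the following Python fragment (our own rendering of the standard block-composition formulas, cf. \cite{Duke1986}, not the supplied software) illustrates this for systems given as state-space tuples $(A,B,C,D)$:
\begin{verbatim}
import numpy as np

def series(S1, S2):
    """Series interconnection u -> S1 -> S2 -> y of (A, B, C, D) systems.

    With stacked state x = [x1; x2] and u2 = y1 = C1 x1 + D1 u,
    the composite system matrices follow directly.
    """
    A1, B1, C1, D1 = S1
    A2, B2, C2, D2 = S2
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1,      np.zeros((n1, n2))],
                  [B2 @ C1, A2]])
    B = np.vstack((B1, B2 @ D1))
    C = np.hstack((D2 @ C1, C2))
    D = D2 @ D1
    return A, B, C, D
\end{verbatim}
The feedback operator can be derived analogously by closing the loop between the two subsystems.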
\section{Introduction} \label{Introduction} With the enormous rise in computational power and molecular simulation methods in the last decades, atomistic modeling is increasingly becoming the method of choice~\cite{Hollingsworth2018, Phillips2005, HANSSON2002190, ABRAHAM201519, rapaport2004art, Plattner2015, Krylova2020, Wu2019, Wolf2019}. Applications range from the study and prevention of corrosion~\cite{Kallikragas2018, DorMohammadi2018, Obot2019} to protein folding~\cite{Best2012}, unfolding~\cite{Xiao2018}, and self-assembly~\cite{Meneksedag-Erol2019}. Efficient molecular modelling can be done using empirical force fields (FFs): computing interactions between atoms and molecules using predefined functional forms. Unfortunately, when it comes to accuracy, such simulations leave a lot to be desired; they are, for example, not able to describe chemical changes or capture many-body effects in a reliable way. When both accuracy and efficiency are a concern, machine learning force fields (MLFFs) are becoming the method of choice. In contrast to empirical FFs, ML models can potentially reproduce any functional form of interatomic and intermolecular interactions, leading to reliable descriptions of potential energy surfaces (PES) of arbitrary complexity. Many successes have been achieved in this domain in recent years, with a multitude of methods being able to predict the behaviour of small- to medium-sized molecules and more~\cite{Noe2020, sgdml, Schutt2017, Montavon2013, Behler2016, Schutt2017a, Hansen2015, Mardt2018}. These methods were used to calculate the stability of molecules with chemical accuracy~\cite{Bartok2017}, predict the formation energy of crystals at the level of density functional theory~\cite{Faber2016}, or even reconstruct phase diagrams~\cite{Artrith2018}, to name a few examples. Despite those achievements, the data-driven nature of ML has its downsides: the quality of the ML results is at the mercy of the availability of ``good'' initial data. Collecting such data and choosing ``good'' training points is a nontrivial problem requiring a deep understanding of the nature of the data, which relies on human intuition. This puts into question the unbiased nature of the ML approaches, eliminating one of their main advantages over the human-designed FFs. For instance, for applications in molecular dynamics (MD) simulations, the training data are generally parts of molecular trajectories extracted from a reference \textit{ab initio} simulation with the desired level of accuracy. ML models are then frequently trained to have the best overall prediction across the entire dataset. This, however, skews the ML models toward more common (close-to-equilibrium) molecular configurations, as poorly predicted but rare (out-of-equilibrium) configurations hardly impact the overall statistics. Hence, the usage of such ML models is unreliable and unpredictable in long MD simulations, where out-of-equilibrium configurations are significantly more important for constructing the partition function. This will always be the case when the data distributions in the target simulations are different from those in the reference datasets. Examples include studying nuclear quantum effects (such as proton transport) using an MLFF trained on classical MD trajectories, simulating phase transitions based on information collected only in stable phases, computing reaction rates with ML models trained on meta-dynamics, etc.
In all these cases, minimizing the prediction error on the reference dataset does not guarantee good prediction quality across all important configurations, making the results of the simulations questionable. In this work, we address the issue outlined above by ``flattening the error'' of ML models; i.e., we ensure that the predictive accuracy of the MLFF is equally reliable for out-of-equilibrium structures or rare events as for common configurations, thus enhancing the stability of the model regardless of its use case. To accomplish this, we propose a novel method to optimise the training of ML models, leading to unbiased molecular FFs with almost constant accuracy across the entire reference dataset. This method is equally applicable to any ML model and is available in our free open-source MLFF package~\cite{github}. We showcase its application on small organic molecules (uracil, salicylic acid, ethanol, toluene) as well as a larger molecule (alanine tetrapeptide) using GAP~\cite{GAP} models with the SOAP~\cite{SOAP} descriptor and sGDML~\cite{sgdml} as representative kernel-based approaches and SchNet~\cite{Schutt2017} as a representative neural-network-based approach. Comparing our improved models to default models of equal training set size reveals an error reduction on rare/out-of-equilibrium configurations by a factor of up to 2 for a negligible sacrifice in mean error. While most standard approaches involve calculating an average error across an entire dataset, we are able to compute the prediction error of a model for different regions of the reference data (configurational space for our examples), leading to a detailed view of the domain of applicability of the model. The presented approach can also be applied as an outlier detection method, effectively finding rare processes or out-of-equilibrium mechanisms inside a dataset in an automated way. As an example, the developed approach enabled us to reveal the fingerprints of the proton-transfer mechanism between the hydroxyl and the carboxylic group in the salicylic acid molecule database~\cite{sgdml} generated using classical MD simulations. This process is represented by only a few hundred configurations within more than 320k molecular geometries. Nevertheless, the developed approach is sensitive enough to separate this subset of geometries into an individual cluster. The structure of the article is as follows. In the \textit{Theory} section, we explain the developed methodology for outlier detection and the improved training technique. The \textit{Practical Application} section contains a best-practice example, where we explain in detail how to use the proposed method for outlier detection and improved training on the example of the salicylic acid molecule. In the \textit{Results} section, we apply our method to reconstruct the FFs of small organic molecules and alanine tetrapeptide, which serves as a representative case for how the method performs on larger molecules. The \textit{Conclusions} section presents a summary and an outlook. \section{Theory} \label{Theory} The method described in this section allows for an in-depth error analysis of an ML model and outlier detection, as well as an improved training technique resulting in ``equally'' reliable predictions for all parts of the entire reference dataset. We apply the developed approach to constructing accurate MLFFs for molecules consisting of a few tens of atoms, but it can be generalized to any regression problem.
An overview of the methodology can be found in Figure~\ref{fig:roadmap}. \begin{figure*}[h!] \centering \includegraphics[width=0.9\textwidth]{MP_roadmap.pdf} \caption{Overview of the improved learning method. A dataset is clustered into subsets and the error of an initial ML model is assessed on each individual cluster. High-error clusters are re-clustered finely and representative configurations are extracted from each and added to the training set. This is repeated until a given number of training points is reached.} \label{fig:roadmap} \end{figure*} The method can be subdivided into three main steps. In the ``Initial Clustering'' step, we split molecular configurations into groups based on similarities in geometric and energetic properties using a combination of clustering techniques. Namely, we employ agglomerative clustering~\cite{Ward} to group configurations of similar geometries and further split the groups into different energy brackets using KMeans~\cite{Sculley2010}. We then apply a given ML model to each individual cluster and compute the respective mean prediction error, as illustrated in the ``Outlier detection'' step. Clusters with high prediction errors represent regions of configurational space (CS) the model is ill-adapted to, restricting its range of applicability. This poor prediction can arise from two main sources: significant differences in physical/chemical properties compared to common configurations, and/or poor representation of specific regions of CS in the training set. We address the latter in the ``Improving model'' step, where the combination of all poorly predicted areas is considered and subdivided into a larger number of clusters, providing a fine grid of the problematic regions. The numerous clusters allow us to filter out well-predicted configurations that the initial clustering had previously misrepresented, as well as find problematic clusters on a finer scale. Extracting representative geometries (largest error, cluster centroid, random, ...) from poorly predicted fine clusters and adding them to the initial training set improves the model's performance on the problematic regions of CS. Repeating the described procedure with re-trained models results in a final model with an optimised training set of a given size, capable of producing comparable errors across all reference data. In order to subdivide tens of thousands of unlabeled data points into just a handful of broad initial clusters, similarities between molecular configurations were defined based on the descriptors of CS used for training the ML models. In this paper, the descriptor of choice is the vector of pairwise atomic distances. Differences between configurations were defined using the Euclidean distance in the descriptor space. An agglomerative approach was chosen to cluster the dataset into configurations with similar geometries, as the algorithm avoids merging rare but geometrically unique configurations with large groups of common ones. With $\vec{x_i^a}$ the Cartesian position of atom $i\in[1,N]$ of data point $a\in[1,M]$, the descriptor $\vec{z^a}$ is given by: \begin{align} \vec{z^a}&= [...,z^a_{i,j},...], \; j<i \\ z^a_{i,j}&=||\vec{x_i^a}-\vec{x_j^a}||_2 \end{align} where $||\cdot||_2$ denotes the Euclidean norm. Distances in the descriptor space are then: \begin{align} d(\vec{z^a},\vec{z^b})= ||\vec{z^a}-\vec{z^b}||_2 \end{align} Since Euclidean distances are not a natural metric of our chosen descriptor space, clusters produced this way often contained large variations in potential energy.
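As a minimal illustration (our own sketch, assuming the configurations are held in a \texttt{positions} array of shape $M \times N \times 3$ of Cartesian coordinates), the descriptors and the descriptor-space distances used for the initial clustering can be computed as follows:
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist, squareform

def descriptors(positions):
    """positions: (M, N, 3) array of M configurations of N atoms.
    Returns an (M, N(N-1)/2) array of pairwise-distance descriptors z^a."""
    return np.stack([pdist(conf) for conf in positions])

Z = descriptors(positions)    # z^a_{i,j} = ||x_i^a - x_j^a||_2
D = squareform(pdist(Z))      # (M, M) matrix of d(z^a, z^b)
\end{verbatim}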
To avoid this problem, a further distinction between different energy levels was made using the KMeans method. The combination of both clustering techniques helped distinguish between possible degenerate states as well as geometrically `similar' configurations with significant energy differences. After successfully splitting the dataset both by geometries and energies, an initial ML model was applied to every individual cluster, and the average error between the predicted and actual forces was computed on all configurations therein. The root mean squared error (RMSE) was chosen as a way to emphasize large differences. Ordering the clusters by their average prediction error led to a simple way to identify outliers for the given dataset and model. Poor predictions on specific clusters were commonly caused by the training set containing too few examples from the relevant region of CS. Often, this arose as a simple consequence of a non-optimal training set choice: out-of-equilibrium geometries are naturally rarer and thus less represented in datasets born from physical simulations. As such, a random choice of training points --- even if according to some statistical distribution --- is very unlikely to contain those important out-of-equilibrium points. In other cases, clusters contained configurations whose physicochemical properties deviate from the rest of the dataset. In such cases, even small changes in geometry can lead to large differences in forces, hence the need to include a sizable contribution of outlying configurations in the training set for accurate predictions. The outlier detection described above also enabled an improved method to choose the training set. To this end, poorly predicted initial clusters were recombined and re-clustered more finely by applying the agglomerative approach as before but with a larger number of clusters. This increased the resolution at which problematic regions of CS were identified, allowing for (a) filtering out of well-predicted configurations, previously buried in overly broad clusters, and (b) a finer distinction between all sub-regions of CS that include the configurations problematic for our initial model. Systematically adding data points from the worst-predicted fine clusters to the model's training set ensured that all subregions of CS were sufficiently represented. Several methods were explored to choose which data points to add from the fine clusters to the training set. Selecting random points from the clusters already led to improvements, but to a lesser extent than centroids (in the descriptor space) or points with the highest prediction error within their cluster. Both of the latter methods performed similarly; however, as previous steps already required the computation of prediction errors for every point, the highest-prediction-error criterion proved to be more efficient and is the default method for this paper. An alternative to the above scheme could be considered: to skip the initial clustering and simply continue the process using all configurations whose prediction error exceeds some minimum value. However, this would require calculating the prediction for every single data point, whereas our method allows evaluating only a subset of each cluster to obtain a representative error for the whole, saving valuable computational cost. As the datasets in this work are of limited size, the latter was not necessary, but it will become important when scaling to larger systems.
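For concreteness, with $\mathcal{C}_c$ denoting the set of configurations in cluster $c$ and $N$ the number of atoms per configuration, the per-cluster force error used for this ordering can be written (in our notation) as
\begin{equation*}
\mathrm{RMSE}_c = \sqrt{\frac{1}{3N|\mathcal{C}_c|}\sum_{a\in\mathcal{C}_c}\sum_{i=1}^{N}\left\lVert \vec{F}_i^{\,a,\mathrm{pred}}-\vec{F}_i^{\,a,\mathrm{ref}}\right\rVert_2^2}\,,
\end{equation*}
and the clusters are then sorted by $\mathrm{RMSE}_c$.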
Furthermore, many well-predicted clusters still contain singular configurations associated with high errors. As opposed to poorly predicted configurations inside poorly predicted geometry clusters, the former are victims of the limitations of the ML model rather than of the training set. Including those in the training set in an attempt to improve their prediction comes at a significant cost in accuracy in their otherwise well-predicted cluster. Including entire clusters rather than singular points of high error anchors the aforementioned exceptions to their respective low-error cluster, thus giving the algorithm room to favour configurations that are representative of entire poorly predicted subregions of the CS. One could also think of skipping the initial clustering step and immediately proceeding to creating fine clusters from which to extract new training points. However, our clustering method of choice---agglomerative clustering---has a time complexity of $O(n^3)$ and a memory requirement of $O(n^2)$, making it inadequate for handling large datasets in one go; instead, in a first step, a subset of the data is chosen and clustered. The remaining (unclustered) data points are then iteratively added to an existing cluster based on the smallest average distance, mimicking agglomerative clustering while bypassing computational limitations. This approach reduces the quality of the clustering scheme, but is still able to exclude well-predicted regions of CS in broad strokes. The combination of all remaining clusters represents a subset much smaller than the original dataset; thus most, if not all, of the remaining data points can now be clustered in a single agglomerative step, leading to fine clusters of higher quality. Thanks to the fine grid of problematic configurations provided by our clustering algorithm, new data points could be added to the training set such as to address the model's poor predictions in a targeted way. The complete training set for a given dataset was created in an iterative manner, successively computing prediction errors, targeting problematic configurations to add to the training set, and re-training ML models. In the end, the resulting models were trained on all the data points necessary to produce comparable prediction errors across all of CS within a dataset. This extends the MLFF application range beyond near-equilibrium simulations, providing reliable results even for out-of-equilibrium computations such as finding reaction rates or transition pathways. \section{Practical application of clustering algorithm to salicylic acid ML force field} \label{BestPractices} In this section, we describe in detail each step of the outlier detection and improved training process on the example of the salicylic acid molecule~\cite{sgdml}. All the results shown within this article were obtained using exactly the same procedure and settings as explained here unless specified otherwise. As a first step, the atomic positions of each reference configuration are converted to the more appropriate pairwise-distance descriptor. This descriptor is used to split the dataset into 10 clusters through agglomerative clustering with Ward~\cite{Ward} linkage and the Euclidean metric. Then, an additional clustering step is performed on each individual cluster using the KMeans algorithm with the Euclidean metric and kmeans++ initialization~\cite{kmeans++}. This step splits each previous cluster into 5, for a total of 50 clusters.
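A minimal scikit-learn sketch of this two-stage $10 \times 5$ clustering (our own illustration; the MLFF package~\cite{github} provides its own implementation) could look as follows:
\begin{verbatim}
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

def two_stage_clustering(Z, energies, n_geo=10, n_energy=5):
    """Z: (M, d) pairwise-distance descriptors; energies: (M,) energies.
    Returns (M,) labels in [0, n_geo * n_energy); assumes every
    geometric cluster contains at least n_energy configurations."""
    geo = AgglomerativeClustering(n_clusters=n_geo,
                                  linkage="ward").fit_predict(Z)
    labels = np.empty(len(Z), dtype=int)
    for g in range(n_geo):
        idx = np.where(geo == g)[0]
        sub = KMeans(n_clusters=n_energy, init="k-means++",
                     n_init=10).fit_predict(energies[idx].reshape(-1, 1))
        labels[idx] = g * n_energy + sub
    return labels
\end{verbatim}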
For purposes of outlier detection, an sGDML model is trained on the salicylic acid database~\cite{sgdml} with 1000 training points using the default training scheme implemented within the sGDML package. This model is subsequently used to predict forces for all 320k configurations of the reference dataset. Then, force prediction errors are computed for each individual cluster, and the clusters are reordered by their cluster RMSE (see Figure~\ref{fig:CE_SAL}). This outlier detection can be done automatically for sGDML using our MLFF software~\cite{github} with the following call on default settings: \begin{center} \begin{verbatim} python run.py cluster_error -d <dataset_file> -i <model_file> \end{verbatim} \end{center} \begin{figure*}[h!] \centering \includegraphics[width=0.49\textwidth]{BP_ce_sal.png} \caption{Force prediction root mean squared error (bars) on all 50 clusters (x-axis) of the salicylic acid dataset, ordered by ascending error. Relative population of each cluster is also indicated (solid blue line, arbitrary units). A representative structure of the highest-error cluster is shown (red box).} \label{fig:CE_SAL} \end{figure*} The above will provide the user with a graph similar to Figure~\ref{fig:CE_SAL}, as well as the indices of every cluster and the prediction errors on each one, respectively. It is worth noting that each cluster corresponds to a qualitatively different set of configurations. Hence, the proposed scheme detects poorly predicted regions of CS for a given model. An example geometry from the worst-predicted cluster is shown in the red box of Figure~\ref{fig:CE_SAL}: this configuration has a clear fingerprint of a hydrogen shared between the carboxylic and hydroxyl groups. This process is a rare event in the reference database obtained by employing classical MD simulations and can easily be missed by visualization of the trajectory or other human analyses. In contrast, the proposed clustering approach can easily separate such nontrivial configurations (a few hundred) from the overwhelming number (above 300 thousand) of simple fluctuations around the equilibrium geometry. In order to create improved models, we instead start with a smaller sGDML model trained on only 200 configurations using the default training scheme. The same error analysis is performed, giving us a rough idea of which parts of CS the current model is struggling with. Every cluster whose error exceeds the overall error by a given factor ($1.1$ here) is merged and re-clustered to distinguish between finer subparts of CS. The fine clustering step created a total of 200 clusters based solely on pairwise atomic distances, using the same agglomerative approach as for the initial clustering. Finally, one representative configuration is extracted from half of the fine clusters (prioritising high-error clusters). The extracted configuration corresponds to the highest error within the respective cluster. Overall, 100 points are extracted and added to the training set before the sGDML model is re-trained using the new combined training set (containing 300 configurations in total). The model's errors are then re-assessed on the same initial broad clusters, after which new fine clusters are created and a new subset of 100 training points is extracted. This process is repeated 8 times in total, resulting in an optimized training set of 1000 reference molecular configurations and the corresponding improved sGDML model.
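Schematically, the iterative procedure just described can be summarized as follows (a pseudocode-level sketch in Python; \texttt{initial\_selection}, \texttt{train}, \texttt{predict\_errors}, and \texttt{fine\_cluster} stand in for the model- and package-specific routines and are not actual functions of the sGDML or MLFF packages):
\begin{verbatim}
import numpy as np

def improved_training(data, clusters, n_init=200, n_add=100,
                      n_iter=8, factor=1.1):
    train_idx = list(initial_selection(data, n_init))   # default scheme
    for _ in range(n_iter):
        model = train(data, train_idx)
        err = predict_errors(model, data)   # per-configuration force error
        # merge all broad clusters whose mean error exceeds the threshold
        bad = np.concatenate([c for c in clusters
                              if err[c].mean() > factor * err.mean()])
        fine = fine_cluster(bad, n_clusters=200)
        # take the highest-error configuration from half of the fine
        # clusters, prioritising those with the largest mean error
        worst = sorted(fine, key=lambda c: err[c].mean(),
                       reverse=True)[:n_add]
        train_idx += [c[np.argmax(err[c])] for c in worst]
    return train(data, train_idx)
\end{verbatim}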
Using our software, one can simply execute the following command: \begin{center} \begin{verbatim} python run.py train -d <dataset_file> -n 8 -i 200 -s 100 \end{verbatim} \end{center} \begin{figure*}[h!] \centering \includegraphics[width=1\textwidth]{sal_training.pdf} \caption{Force prediction root mean squared error (solid lines) on all 50 clusters (x-axis) of the salicylic acid dataset, ordered by ascending error. Different colours correspond to varying sizes of training set, using the default sGDML training method (left) and the improved method (right).} \label{fig:SAL_TRAIN} \end{figure*} The above command will produce the improved model with 1000 training points as per the procedure described above, as well as the indices of the broad clusters. The results are shown in Figure~\ref{fig:SAL_TRAIN} for the initial, final, and one intermediate step. For comparison, we add the results of the default training scheme as implemented in the sGDML package for models of equal size. One can see a noticeable flattening of the error curve with every iteration step of the improved training method. In contrast, increasing the number of training points for the default sGDML model leads mainly to an overall decrease in prediction errors, keeping entire parts of the CS poorly predicted. \section{Results} \label{Results} The developed methodology was used to perform a detailed error analysis of three state-of-the-art MLFF models, namely sGDML~\cite{sgdml}, GAP~\cite{GAP} using the SOAP~\cite{SOAP} descriptor, and SchNet~\cite{Schutt2017}. The reference datasets used are of ethanol, salicylic acid, and uracil~\cite{gdml}. From Figure~\ref{fig:CE_combined} (note the different scales of the ordinate axes), it is clear that these different models show varying performance on the molecular datasets; however, all of them are consistently inaccurate for out-of-equilibrium geometries. We show that by employing default training techniques, the prediction error on some physically relevant configurations can exceed the overall root mean squared error (RMSE) by up to a factor of 3. In contrast, our improved training method alleviates the problem by creating models with significantly flattened errors across all configurations. This was applied to the previously mentioned datasets as well as toluene~\cite{gdml} and an alanine tetrapeptide dataset. In many cases, the RMSE of initially poorly predicted configuration clusters is reduced twofold, while well-predicted clusters suffer a marginal increase in RMSE, largely within the error margin of the original dataset. Furthermore, for most molecule/model combinations, our improved learning method results in an overall RMSE decrease despite the focus of the method on rarer configurations. \subsection{Outlier detection}\label{OutlierDetection} Before improving the models, it is necessary to find and demonstrate the underlying problems in the default training methods. For this, we used our outlier detection method on multiple datasets for different molecules. The MLFF models of choice were sGDML, SchNet, and GAP/SOAP; we applied each model to the same molecular datasets. We clustered the datasets of salicylic acid, uracil, and ethanol into 50 different regions of CS and computed the root mean squared force prediction error for every cluster. The results are plotted in Figure~\ref{fig:CE_combined}. A very large disparity between the per-cluster errors and the overall error can be observed, with some clusters presenting an error 3 times higher than the mean.
The difference between them and the cluster of lowest error is, of course, even higher. It is clear that in these cases, a single overall error value is a very poor metric to quantify how well the ML model works for out-of-equilibrium geometries. This is in direct contradiction with the idea of the MLFF being comparable to the underlying \textit{ab initio} method, as entire regions of CS present an accuracy significantly worse than that of the reference calculations. There are two possible reasons for the observations above. The trivial one is that the poorly predicted regions contain large fluctuations of molecular geometries, which are not well represented in the training set. This lack of information then renders the ML models unable to learn them. Many applications mainly deal with close-to-equilibrium molecular configurations, where the PES of the molecule is constrained within a given region of CS. There, higher prediction errors out of equilibrium might not significantly impact the results of simulations. However, for studying molecular stability, configurational changes, or chemical reactions, to name a few, large prediction errors on such fluctuations may greatly affect the final results. The second reason is non-trivial and can have a significant impact on the reliability of the MLFF. The poorly predicted areas of the CS can represent physics or chemistry missing in the majority of the configurations in the reference dataset. This is showcased in our previous example of salicylic acid, where the cluster with the most significant error corresponds to a proton shared between the carboxylic and hydroxyl groups (for details, see the \textit{Practical Application} section). An accurate simulation of this process would require a proper account of nuclear quantum effects; hence, the corresponding configurations are a negligible minority (a few hundred) in the salicylic acid dataset containing over 320k molecular geometries of a classical MD run. Even if corresponding reference data were added to the MD dataset, they would mainly be ignored within the standard training schemes due to their relatively high energies. All in all, this points to default MLFFs being inapplicable for studying the proton-sharing effect for our given dataset. In contrast, the developed method is designed to alleviate this problem by widening the model's applicability range to the fullest capability of the dataset. \begin{figure*}[ht!] \centering \includegraphics[width=.9\textwidth]{CE_combined_graph.pdf} \caption{Force prediction RMSE for sGDML, SchNet and GAP/SOAP (with 12 radial and 6 angular functions) models on the same ethanol, uracil and salicylic acid datasets (y-axis, scale adapted for each model for better visibility), split into 50 clusters of similar configurations (x-axis) ordered by ascending error. RMSE (bars) is given on a per-cluster basis in contrast to the RMSE over the entire dataset (solid horizontal black line). Relative cluster populations are also indicated (solid blue line, arbitrary units).} \label{fig:CE_combined} \end{figure*} \subsection{Improved models}\label{Improved_models} \begin{figure*} \centering \includegraphics[width=.90\textwidth]{IL_combined_graph_nooverlap.png} \caption{Force prediction RMSE for sGDML and SchNet default models compared to the improved models (orange/blue bars, y-axis scale adapted for each model for better visibility).
RMSE was computed on a per-cluster basis on the ethanol, uracil, and salicylic acid datasets, split into 50 clusters of similar configurations (x-axis) ordered by ascending error.} \label{fig:IL_combined} \end{figure*} To resolve the problem of nonuniform prediction accuracy, we applied the improved training techniques developed in this work, using both sGDML and SchNet as our FF models once again. The molecules explored include all the datasets from the previous subsection as well as toluene. First, we performed the outlier detection by computing the root mean squared force prediction error on 50 clusters for an initial model with 200 training points. After that, 100 training points were added at every step for a total of 8 steps, resulting in models with 1000 training points each. All the details of the improved training procedure can be found in the \textit{Practical Application} section. The comparison between the default and improved models of the same size is shown in Figure~\ref{fig:IL_combined}. In contrast to the default models, the improved versions present a more constant accuracy, with most models reaching a maximal cluster error of less than half that of their default counterparts. The results shown in Figure~\ref{fig:IL_combined} represent the quality of MD simulations performed with the default and the improved models with respect to the reference method. Since forces are the variables entering the equations of motion, their errors are directly related to the deviations between the reference and ML trajectories (more so than the energies). Of course, when computing properties that are mainly defined by the most common configurations in the reference dataset---such as average energies at reasonably low temperatures---both types of models would lead to nearly identical results. On the other hand, processes involving broad parts of the PES or regions underrepresented in the reference dataset will be much better described using the proposed improved ML models. The goal of the improved models is to present a more stable prediction error across all of configurational space. They do so by explicitly including more out-of-equilibrium/rare configurations in their training set, at the expense of the more common/in-equilibrium configurations in the dataset. Despite this, the overall RMSE across the entire dataset does not change significantly, and even sees some decrease for many of the molecules shown below (see Table~\ref{table:1}). This further highlights the importance and usefulness of choosing the training set in a careful and meticulous way beyond just the overall RMSE. \begin{table}[h!] \centering \caption{Overall RMSE for sGDML and SchNet models, comparing default and improved versions. All numbers are given in $kcal/\left(mol\,\AA\right)$} \begin{ruledtabular} \begin{tabular}{ccccc} molecule & def. sGDML & imp. sGDML & def. SchNet & imp. SchNet \\ \hline uracil & 0.38 & 0.32 & 0.77 & 0.65 \\ salicylic acid & 0.44 & 0.39 & 0.99 & 1.03 \\ toluene & 0.21 & 0.20 & 0.78 & 0.67 \\ ethanol & 0.51 & 0.50 & 0.57 & 0.47 \\ \end{tabular} \end{ruledtabular} \label{table:1} \end{table} \subsection{Application to larger molecule} So far, we have applied the developed methods only to rather small molecules, demonstrating significant improvements in the resulting MLFFs. In this subsection, we extend the applications to noticeably larger molecules, using as an example alanine tetrapeptide (AcAla3NHMe).
This peptide is large enough to exhibit several incipient secondary-structure motifs akin to biological peptides and proteins. It is important to note that, especially for larger molecules, our training method can only lead to improvements if the base model has acceptable accuracy in the first place; hence, we use SchNet as our model in this subsection, since SchNet can be employed with much larger datasets than kernel-based methods (sGDML or GAP) far more easily. Our reference dataset was constructed via \textit{ab initio} molecular dynamics at 500~K with the FHI-aims software~\cite{FHIaims} wrapped with the i-PI package~\cite{IPI}, using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional~\cite{MadeSimple} with tight settings and the Many-Body Dispersion (MBD) method~\cite{Ambrosetti2014, ATVDW} to account for van der Waals interactions. The dataset contains over 80k data points and covers at least three local minima. Our ML models were trained on 6k points, a number large enough to fully represent all physically different configurations given the intrinsic correlation between neighboring steps within MD trajectories. Figure~\ref{fig:AcAla3} shows the performance of two equal-size, well-converged SchNet models trained using the default and improved training schemes for identical architectures. The details of the improved training procedure are similar to those in the \textit{Practical Application} section, with two main differences: first, the number of training points added at each step is 500; second, 150 fine clusters are created and several training points are extracted from each, where the number of points is weighted by the size of the fine cluster. \begin{figure*}[h!] \centering \includegraphics[width=1\textwidth]{AcAla3.pdf} \caption{Energy (left) and force (right) prediction RMSE on different clusters for AcAla3NHMe SchNet models: default model (orange bars) compared to the improved model (blue bars). Each model consists of 6000 training points with identical training procedures and architecture.} \label{fig:AcAla3} \end{figure*} Importantly, our method concentrates only on the improvement of the force predictions, where the RMSE of the default model for the worst clusters is about twice as large as that for the best one. The minor improvement in energy demonstrated in Figure~\ref{fig:AcAla3} is a welcome accompanying effect which we were not aiming for. Although the overall RMSE for forces and energies drops only from $0.89$ to $0.80$~$kcal/(mol\,\AA)$ and from $0.54$ to $0.49$~$kcal/mol$, respectively, the flattening of the errors for the force prediction can have noticeable consequences in practice. Improving the force predictions along with learning the energy within the SchNet model (or any other ML model) would usually require the employment of mixed loss functions, where the errors for energy and forces are minimized together. Such mixed loss functions are less efficient than pure ones, since optimizing two competing functions leads to sub-optimal results for each component. In contrast, the proposed method decouples the problem of improving the predictions for energy and forces. We use an energy-based loss function in the SchNet model while the forces are improved by manipulating the training set. As a result, we have an ML model with equally reliable energies and forces. Moreover, the errors in energy predictions also decrease, which would not happen if mixed loss functions were used.
Alternatively, if our method instead focused on flattening the energy predictions, the improvements in the latter would be significantly more noticeable. However, this would come at the expense of the force predictions, which would largely worsen as a result. Figure~\ref{fig:AcAla3} also reveals challenges for the developed method when applied to large systems. The per-cluster force prediction errors of the default model do not present a variance quite as large as for the previously explored molecules. The main reason is that the high-dimensional descriptor space (AcAla3NHMe has 42 atoms, i.e., 861 pairwise atomic distances) makes our clustering algorithms significantly weaker. Any distance metric loses meaning as the number of dimensions increases, and clustering algorithms rely on such metrics to subdivide datasets. As a consequence, our clusters are ill-defined and contain larger overlaps between qualitatively different configurations. This reduces the resolution between well- and poorly predicted parts of CS, decreasing the efficiency of the proposed method. We expect that reducing the size of the descriptors by making use of dimensionality reduction techniques (such as kernel principal component analysis) would improve the efficiency of the clustering schemes and, in turn, make the developed approach reliable for systems containing hundreds and thousands of atoms. Nevertheless, even without the aforementioned additional step, significant improvement can be found when applying our training methods to alanine tetrapeptide: both energy and force prediction errors are reduced for almost every cluster. Importantly, achieving the same improvement within the default training scheme would require adding more reference data to the training set. Figure~\ref{fig:AcAla3train} shows the energy and force prediction accuracy of the SchNet models trained with different sizes of the training set. Depending on the size of the training set, one can observe three qualitatively different behaviors of the resulting SchNet models: (a) whenever the training set contains insufficient data (the model with 3k points), the constructed MLFF demonstrates low accuracy across the entire CS for both energy and forces. In this limit, the force-based improved training method proposed in this work does little to improve the FF, since the starting model cannot distinguish between poorly and well-predicted areas of the CS. (b) The training set contains enough data for the ML model to accurately learn the PES, but the forces are poorly predicted across CS (the model with 6k points), akin to the previous examples (see the \textit{Improved Models} subsection). This is precisely the scenario for which the proposed improved training technique has been developed. By comparing the default and improved models with 6k training points, one can see a significant boost in accuracy for forces accompanied by a slight improvement of the PES reconstruction (here, an improved model with 6k points is comparable to a default model with 7.5--8k points). As such, the proposed training method gives an optimal compromise between data-efficiency and accuracy of ML models. (c) Finally, a training set overloaded with reference data (the model with 9k points) leaves little room for improvement. Indeed, in this case, the training set contains all relevant configurations in the dataset (and by extension the validation set), such that the choice of training points becomes insignificant. \begin{figure*}[h!]
\centering \includegraphics[width=1\textwidth]{AcAla3_training.pdf} \caption{Energy (left) and force (right) prediction RMSE on different clusters for AcAla3NHMe SchNet default (orange) and improved (blue) models, comparing different training set sizes: 3000, 6000, and 9000 points for the default models and 6000 points for the improved one.} \label{fig:AcAla3train} \end{figure*} \newpage To compare the performance of the improved and default models, we ran constant-temperature MD simulations at 300~K and 400~K using the SchNet FF model. The time step was set to 0.5~fs to accurately reproduce the fast hydrogen fluctuations in the molecule. Due to the size and high flexibility of the peptide, obtaining well-converged average energies requires MD trajectories of more than four million steps, equivalent to two nanoseconds. Simulations of this size come at prohibitively expensive computational costs for any accurate \textit{ab initio} method; MLFFs are the only way to perform them in practice. Note that our improved training procedure does come with higher computational costs (due to training the model multiple times), but the time spent on training is still very low compared to that of actually running the MD. At 300~K, both models converge without any issues, with a difference in average total energies of only 0.5~$kcal/mol$. The latter is within the accuracy of the ML models (see Figure~\ref{fig:AcAla3train}), meaning that both simulations give identical results. This is exactly what should be expected for a well-trained ML model in its comfort zone. At 400~K, the situation changes drastically: the average total energy as a function of simulation time is shown in Figure~\ref{fig:AcAla3_400K}. One can see that while the improved model remains stable (as the zero energy level, we use the lowest potential energy in the reference dataset), the default one fails to reproduce the dynamics of the molecule at 400~K. The monotonic decay of the red curve demonstrates the unreliability of the default training scheme for high-temperature simulations. As a result of wrong predictions, the molecule escapes the applicability range of the MLFF and we observe nonphysical results. Note that the training set was generated at 500~K, and thus contains all the information needed for 400~K MD simulations. Importantly, increasing the temperature to generate new reference data would require broader sampling of parts of CS with computationally expensive \textit{ab initio} methods --- an unacceptable scenario for growing molecule sizes. Hence, the developed improved training scheme not only leads to quantitatively better predictions, but also qualitatively increases the applicability range of the ML models by boosting their reliability. \begin{figure*}[h!] \centering \includegraphics[width=.5\textwidth]{E_400.png} \caption{Average total energy as a function of simulation time for the default (red) and the improved (blue) SchNet models for the AcAla3NHMe molecule. The constant-temperature MD simulations have been done at 400~K with a 0.5~fs time step.} \label{fig:AcAla3_400K} \end{figure*} \section{Conclusions} \label{Conclusions} By leveraging supervised and unsupervised ML, we proposed a new strategy for improved training set selection for the construction of molecular machine learning force fields.
We developed an automatic outlier detection method that exposed a noticeable bias in the predictive accuracy of the models towards common/in-equilibrium configurations at the expense of rarer/out-of-equilibrium ones, leading to entire regions of CS with significantly higher-than-average prediction errors. Our procedure is able to extract tiny subsets of molecular configurations representing nontrivial physical or chemical processes from an overwhelming amount of reference data. For example, a few hundred configurations with fingerprints of a shared proton in the salicylic acid molecule were found within 300k+ classical fluctuations around the equilibrium state. The developed error analysis helped us optimise the training set choice, resulting in largely improved accuracy of ML models across all of CS---effectively ``flattening'' the prediction error curve throughout input space. During the training process, we iteratively selected poorly predicted training points from different parts of CS to add to the training set. This ensured that it contained sufficient representation from every qualitatively different type of configuration in the reference dataset. Models born from this approach proved more reliable than those with training sets in line with the dataset's inherent distributions, and guarantee ``chemical accuracy'' for the entire sampled CS. With the examples of small organic molecules and an alanine tetrapeptide, we demonstrated that the developed training method leads to an optimal compromise between data-efficiency and accuracy of MLFFs, avoiding the need to generate extensive amounts of computationally expensive, highly accurate reference data for training sets. Along with quantitative reductions in prediction errors, the ML models trained on the optimised training sets offer qualitative improvements in reliability for practical applications. This is demonstrated on the example of high-temperature MD simulations for the alanine tetrapeptide. Future plans include combining the developed approach with dimensionality reduction techniques to extend the applicability range to systems consisting of hundreds and thousands of atoms. While this paper focused on improving three specific ML models (GAP with the SOAP descriptor and sGDML as kernel-based approaches and SchNet as a neural network), all methods can easily be extended to any ML field presenting similar training problems. The code for the outlier detection and improved training is available in the open-source software MLFF on GitHub~\cite{github}. \begin{acknowledgments} We acknowledge financial support from the Luxembourg National Research Fund (FNR) under the AFR project 14593813 and C19/MS/13718694/QML-FLEX, FNR DTU-PRIDE MASSENA, and the European Research Council (ERC-CoG grant BeStMo). \end{acknowledgments} \section{Data availability} The data that support the findings of this study are openly available on the sGDML \cite{gdml, sgdml} official website and in the supplementary material.
\section{INTRODUCTION} Recently, unmanned aerial vehicles (UAVs) have become increasingly important and applicable in various fields, including structural inspection\cite{jung2018multi,jung2020bridge}, environment monitoring\cite{jung2017development,kim2016image}, and surveillance\cite{scherer2020multi}. It is crucial that a UAV be able to estimate its state accurately in real time for an autonomous flight system. Therefore, there has been a massive effort to develop precise state estimation algorithms. However, the UAV system has limitations in terms of size, payload, and power, which are problems commonly encountered in the field of computer vision and robotics. Visual odometry (VO) has been addressing these issues using vision sensors. The most widely used vision sensor for the VO method is a monocular camera. Unlike other sensors, monocular cameras are economical, compact, and power efficient; hence, they can be easily mounted on a UAV. However, it is impossible to obtain the absolute scale of the traveled path using only monocular images captured by the camera. In the field of computer vision and robotics, this scaling problem has been solved in various ways. The RGB-D sensor\cite{rgbd,whelan2013robust}, deep learning-based methods\cite{deep-depth-1,deep-depth-2}, and stereo vision\cite{stereo,gomez2016robust} have been used to obtain the depth information needed to infer the absolute scale. Another commonly used approach is to combine additional sensors with the camera to obtain additional information for measuring the movement of the camera attached to the rigid body of the robot. \begin{figure}[t] \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=4.2cm]{figure/drone.pdf}\hfil \caption{} \label{fig:uav} \end{subfigure} \begin{subfigure}{.23\textwidth} \centering \includegraphics[width=\textwidth]{figure/lab.pdf} \caption{} \label{fig:lab} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=7.5cm]{figure/figure1_GT.pdf} \caption{} \label{fig:gt_total} \end{subfigure} \caption{Experiment setup: (a) The UAV platform \textcircled{1} Intel Realsense D435i \textcircled{2} Pixhawk4 mini \textcircled{3} Jetson TX2 with a carrier board \textcircled{4} Reflective marker (b) Test environment (c) Ground truth trajectories of the KAIST VIO dataset} \label{fig:fig1} \vspace{-0.5cm} \end{figure} Visual-inertial odometry (VIO) algorithms are a representative example of the latter approach. A combination of inertial measurement units (IMUs) and the camera can solve the odometry problem more accurately and efficiently, complementing the imperfections present in both technologies. Numerous methodologies have been proposed for this combination, and several applications have been developed\cite{kneip2011robust,li2013high},\cite{forster2014svo,leutenegger2015keyframe}. A recent trend is to combine deep learning with VO/VIO methods or to use a GPU-accelerated front-end for those methods. To achieve this, the hardware platform on which the algorithm runs should have sufficient resources. Therefore, NVIDIA Jetson boards equipped with graphics processing units (GPUs) are used, as they have the potential to serve as a basic hardware platform in the future. Jetson boards are hardware modules released by NVIDIA and are developed to run software for autonomous machines.
They are used as companion computers for numerous autonomous robotic platforms, especially UAVs, as they consume little power and overcome the limitations of the UAV platform in terms of size and weight. In addition, UAV applications that require real-time deep learning processes, such as object detection and tracking as well as drone racing, use lightweight network structures with Jetson boards installed. To avoid the installation of an additional embedded board for state estimation, the latest VIO algorithms should work well on these Jetson boards while sharing computing resources with other processes. However, few studies have been conducted on the performance evaluation of VIO algorithms on various Jetson boards. This study aims to comprehensively analyze the feasibility and evaluate the performance of VIO algorithms that are open source and widely used on various NVIDIA hardware configurations. We test three mono-VIO (VINS-Mono\cite{vins-mono}, ROVIO\cite{rovio-2}, and ALVIO\cite{alvio}), two stereo-VO (ORB-SLAM2 stereo\cite{orb2} and VINS-Fusion w/o IMU\cite{vins-fusion-1}), and four stereo-VIO (VINS-Fusion w/ IMU\cite{vins-fusion-1}, VINS-Fusion w/ GPU\cite{vinsfugpu}, Stereo-MSCKF\cite{msckf-vio}, and Kimera\cite{kimera}) algorithms and benchmark them on NVIDIA Jetson TX2, Xavier NX, and AGX Xavier boards. Furthermore, we conduct benchmark tests on the proposed dataset. The KAIST VIO dataset includes four different trajectories, namely \texttt{circle}, \texttt{infinity}, \texttt{square}, and \texttt{pure\_rotation}, with normal speed, high speed, and head rotation (Fig.~\ref{fig:fig1}(c)). Each sequence contains a pair of stereo images, one RGB image, and IMU data with accurate ground truth acquired by a motion capture system during UAV flight. It is crucial to resolve the vulnerabilities caused by the estimation error in visual-inertial state estimation that occurs during pure rotation\cite{olson2001stereo}. The dataset in this study consists of several rotation situations; hence, it is suitable for evaluating the performance or resistance of each algorithm in cases that are hard for VIO. The main contributions of this study are as follows: \begin{itemize} \item This study presents a feasibility analysis and performance evaluation of various visual(-inertial) odometry algorithms on several NVIDIA Jetson boards, including the latest model, the ``Xavier NX''. \item We propose a novel \textbf{KAIST VIO dataset} with different sets of sequences containing many rotations. The comparisons shown in this paper provide a performance index for each algorithm and Jetson board on motion trajectories with specific geometric and physical characteristics. The full dataset is available at: \url{https://github.com/zinuok/kaistviodataset}. \end{itemize} The rest of the paper is organized as follows: Section \ref{sec:related} reviews related works. Section \ref{sec:datasets} describes the proposed dataset. Section \ref{sec:exp} benchmarks the VIO algorithms with the dataset, and Section \ref{sec:result} analyzes the results in detail. Finally, Section \ref{sec:cons} summarizes our contributions and future works. \section{DATASET} \label{sec:datasets} The main contributions of the KAIST VIO dataset are as follows: \begin{itemize} \item It includes pure-rotational and harsh motions for VIO that were not covered well in other datasets.
\item Each trajectory sequence is subdivided into three types, normal/fast/head, to ensure that benchmarking for each motion type is possible. \end{itemize} The data are recorded in a 3.15 $\times$ 3.60 $\times$ 2.50 m indoor laboratory, as shown in Fig.~\ref{fig:fig1}(b). This environment has sufficient image features to run various VO/VIO algorithms. The KAIST VIO dataset provides four types of paths with different geometrical properties. To acquire accurate geometric characteristics for each trajectory, the drone (Fig.~\ref{fig:fig1}(a)) used for data collection is automatically flown as programmed. \subsection{Sensor Setup}\label{sec:sensor_setup} The sensors and reference systems used for data collection are shown in Table \ref{table:sensor_setup}. Fig.~\ref{fig:fig1}(a) shows the camera and IMU mounted on the drone body. \vspace{-0.2cm} \begin{table}[t] \renewcommand{\arraystretch}{0.7} \renewcommand{\tabcolsep}{0.67mm} \caption{Sensor setup} \label{table:sensor_setup} \begin{center} \begin{tabular}{c|c|c|c} \hline Sensor & Type & Data & Rate\\ \hline \multirow{2}{*}{Camera} & \multirow{2}{*}{D435i} & IR 1,2 (640$\times$480) & 30 Hz\\ & & RGB (640$\times$480) & 30 Hz\\ \multirow{2}{*}{IMU} & \multirow{2}{*}{Pixhawk 4 mini} & 3-axes accel., & 100 Hz\\ & & 3-axes gyro.& 100 Hz\\ Ground Truth & OptiTrack Mocap & Ground Truth & 50 Hz\\ \hline \end{tabular} \end{center} \vspace{-0.7cm} \end{table} \noindent \newline\textbf{Camera } Images with 640 $\times$ 480 resolution are obtained at a rate of 30 Hz by an Intel Realsense D435i mounted at the front of the drone so that it looks forward. The rolling shutter and global shutter deliver RGB images and infrared (IR) images with the emitter turned off, respectively. In this study, only the IR images were used for benchmarking the VO/VIO algorithms, as the global shutter is more suitable for the rapid motion in this dataset. \noindent \textbf{IMU } IMU data are logged at a rate of 100 Hz using a Pixhawk4 mini mounted at the center of the drone. A VI sensor unit consists of this IMU and the D435i camera. Kalibr\cite{kalibr} is used to obtain spatial and temporal calibration data. To accomplish this, the VI sensor unit records a unique pattern (AprilTag) with smooth 6-DOF motions. Kalibr uses a temporal basis function to calculate the time offset between the camera and the IMU. In addition, temporal synchronization is performed using high-rate IMU data accumulated and interpolated for each camera frame. Furthermore, the noise parameter values of the Pixhawk4 mini are calculated using Kalibr\_allan\cite{kalibr_allan}. This allows more accurate calibration data to be obtained, in addition to optimal parameter tuning for the VO/VIO algorithms. \noindent \textbf{Ground truth } To obtain an accurate ground truth, an OptiTrack Prime\textsuperscript{X} 13 motion capture system\cite{mocap} consisting of six cameras is used. This motion capture system captures 6-DOF motion information by tracking the motion capture markers mounted on top of the drone. The information is recorded at a rate of 50 Hz with millimeter accuracy during the flight. Additionally, a transformation matrix for aligning the offset between the origin of the ground truth, defined by five markers, and the VI sensor unit is included in the dataset format. \subsection{Dataset Format} This dataset has two sub-directories, the \texttt{config} and \texttt{data} directories, as shown in Fig. \ref{fig:tree}.
\vspace{-0.15cm} \begin{figure}[h] \centering \includegraphics[width=62mm, height=60mm]{figure/dirtree.pdf} \caption{KAIST VIO dataset structure } \label{fig:tree} \vspace{-0.3cm} \end{figure} \noindent \textbf{\texttt{config} directory } The config directory contains three YAML files. \texttt{trans-mat.yaml} contains the transformation matrix for correcting the offset described in Section \ref{sec:datasets}.\textit{A}. This offset has already been applied to the ground truth of the Robot Operating System (ROS) bag data but is included for reference. \texttt{imu-params.yaml} contains four noise parameter estimates for the Pixhawk 4 mini: white noise of the gyroscope, white noise of the accelerometer, random walk of the gyroscope, and random walk of the accelerometer. These values are obtained based on \cite{kalibr_allan}. \texttt{cam-imu.yaml} contains the calibrated data from the VI sensor unit. \noindent \textbf{\texttt{data} directory } Each set of data is recorded as a bag file, a file format commonly used in ROS. Each file stores the sensor information required to run the algorithms, acquired from the camera and the IMU. Additionally, the ground-truth 6-DOF pose information of the drone, acquired using the motion capture system, is saved. All the data in each file are recorded in the form of ROS topics during flight. There are a total of four sub-directories with different geometric classifications of the motion trajectories: \texttt{circle}, \texttt{infinity}, \texttt{square}, and \texttt{pure\_rotation} (see Fig.~\ref{fig:fig1}(c)). Furthermore, each sub-directory contains several types of data: \texttt{normal} (normal speed with fixed heading), \texttt{fast} (high speed with fixed heading), and \texttt{head} (normal speed with rotational motion). For details, please refer to our dataset link.
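For reference, the following minimal Python sketch shows how the ROS topics of a bag file from the \texttt{data} directory can be read with the \texttt{rosbag} API; the file and topic names used here are illustrative assumptions, not the exact names used in the dataset.
\begin{verbatim}
# Minimal sketch: iterate over IMU and IR-image messages in a bag file.
# File/topic names below are illustrative assumptions.
import rosbag

bag = rosbag.Bag('circle_normal.bag')
for topic, msg, t in bag.read_messages(
        topics=['/imu', '/camera/infra1/image_raw']):
    if topic == '/imu':
        accel = msg.linear_acceleration  # 3-axis accelerometer (100 Hz)
        gyro = msg.angular_velocity      # 3-axis gyroscope (100 Hz)
    else:
        pass  # sensor_msgs/Image: 640x480 IR frame (30 Hz)
bag.close()
\end{verbatim}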
\section{EXPERIMENTS}\label{sec:exp} \subsection{Compared Hardware Platforms}\label{sec:jetson} NVIDIA Jetson boards are used as the hardware platforms for performance comparison. The Jetson platforms used in this study are the Jetson TX2, Jetson AGX Xavier, and the recently released Jetson Xavier NX. A brief description of each board is as follows, and the detailed specifications for each platform are shown in Table \ref{table:jetson_platform}: \begin{itemize} \item Jetson TX2: TX2 is widely used as a companion computer for UAV systems owing to its better CPU and GPU performance as well as larger memory than those of the Nano and TX1. \item Jetson Xavier NX: Xavier NX is a module recently released by NVIDIA. Owing to its small size and low weight, similar to the Jetson Nano, it is suitable for robotic systems with significant physical limitations. \item Jetson AGX Xavier: AGX Xavier has decent performance and can serve as a workstation for an autonomous system. It is mainly used for industrial robots and large UAV systems. \end{itemize} \subsection{Compared Algorithms} Table \ref{table:algorithm} shows all algorithms compared in this paper. \vspace{-0.1cm} \begin{table}[h] \renewcommand{\arraystretch}{0.9} \caption{Compared algorithms: monocular and stereo} \begin{center} {\scriptsize \begin{tabular}{c|c c} \hline Monocular & \multicolumn{2}{c}{Stereo}\\ w/ IMU & w/o IMU & w/ IMU\\ \hline VINS-Mono\cite{vins-mono} & VINS-Fusion\cite{vins-fusion-1} & VINS-Fusion-gpu\cite{vinsfugpu}\\ ALVIO\cite{alvio} & ORB-SLAM2\cite{orb2} & VINS-Fusion-imu\cite{vins-fusion-1}\\ ROVIO\cite{rovio-2} & & Stereo-MSCKF\cite{msckf-vio}\\ & & Kimera\cite{kimera}\\ \hline \end{tabular} } \end{center} \label{table:algorithm} \vspace{-0.55cm} \end{table} \subsection{Evaluation} All Jetson boards are set to the maximum CPU clock mode, consuming maximum power, to thoroughly compare the potential performance of each algorithm. The performance evaluation is based on the resource usage and the Absolute Trajectory Error (ATE) of each algorithm and platform. For the usage of resources such as CPU, memory, and GPU, all algorithms are measured on the \texttt{infinity\_fast} path, where no algorithm diverges and the motion is sufficiently dynamic. To obtain more accurate measurements, only the necessary processes are run, i.e., those intended by the authors of each algorithm for obtaining the trajectory estimate. The total resource usage is recorded every 0.1 seconds. For calculating the ATE, the origin alignment method \cite{grupp2017evo} is used to align the ground truth and the estimated odometry. \subsubsection{Setup} Ubuntu 18.04 with ROS Melodic was set up on all platforms. Jetpack 4.2 was installed on TX2 and 4.4 on AGX Xavier and Xavier NX. The benchmark evaluation uses the data sequences described in Section \ref{sec:datasets}. \subsubsection{Parameter setting} For each algorithm, the trade-off between resource usage and accuracy is considerably different, and this study aims to present a performance comparison considering both for a general UAV system. Therefore, considering this trade-off, parameter values are tuned while maintaining the widely used preset values recommended by the author(s) of each algorithm. The parameter settings of the used algorithms are as follows: \noindent \textbf{VINS-Mono}: The maximum number of features is set to 150, and the minimum distance between two features is set to 25. The loop closure is disabled. \noindent \textbf{ALVIO}: ALVIO (adaptive line visual-inertial odometry) is an algorithm that additionally introduces line features to the existing VINS-Mono and overcomes the failure of line tracking through an optical-flow-based method. The parameter setting is similar to VINS-Mono. \noindent \textbf{ROVIO}: The maximum number of features is set to 20 and the size of the patch is set to 6 pixels. The maximum distance penalized during the bucketing process is set to 100 pixels. \noindent \textbf{ORB-SLAM2 stereo}: The maximum number of features per frame is set to 1200. \noindent \textbf{VINS-Fusion}: The maximum number of features is set to 350 and the minimum distance between two features is set to 30. The IMU and loop closure are disabled. \noindent \textbf{VINS-Fusion-gpu}: Same as VINS-Fusion except that the GPU and IMU are enabled. \noindent \textbf{VINS-Fusion-imu}: Same as VINS-Fusion except that the IMU is enabled. \noindent \textbf{Stereo-MSCKF}: The minimum and maximum number of features per grid cell (a 3 $\times$ 4 grid divides each image frame) are set to 3 and 4, respectively. The patch size is set to 15.
\noindent \textbf{Kimera}: The maximum number of features is set to 800, and the minimum distance between two features is set to 8. The IMU pre-integration type is the non-combined IMU factor method. \section{RESULTS ANALYSIS}\label{sec:result} \begin{figure*}[t] \centering \includegraphics[width=17.5cm, height=11cm]{figure/figure3_resource_usage.pdf} \caption{Statistical comparison of CPU, memory, and GPU usage for all possible algorithm-platform combinations on the \texttt{infinity\_fast} sequence: (a) TX2 (VINS-Fusion and VINS-Fusion-imu fail on all sequences) (b) NX (c) Xavier} \label{fig:resource} \vspace{-0.4cm} \end{figure*} \subsection{Analysis of Resource Usage} The CPU, memory, and GPU usages are shown in Fig. \ref{fig:resource}. On TX2, the GPU-accelerated version of VINS-Fusion (VINS-Fusion-gpu) was used because VINS-Fusion and VINS-Fusion-imu do not run, owing to insufficient memory and CPU performance. Considering the CPU usage, Kimera and ORB-SLAM2 stereo were loosely bounded and had relatively higher values than the other algorithms on all Jetson platforms. They needed a larger number of features per frame than the other algorithms, and the variation in CPU usage was considerable, depending on the number of detected features (0 to 800 and 0 to 1200, respectively). The CPU usage of ROVIO was the lowest on all Jetson platforms, as ROVIO tracks the patch extracted from each detected feature, which reduces the computation compared with the other algorithms. Except for ROVIO, all algorithms showed more than 100\% CPU usage on each platform because of multi-core processing. There was no significant difference between the mono and stereo algorithms. The memory usage of the stereo VO/VIO algorithms was higher than that of the monocular VIO algorithms on all Jetson platforms. The memory usage of Stereo-MSCKF was similar to that of the monocular-based algorithms, as the number of features used per frame is small (3 or 4 features per grid cell) and it is a filtering-based method. Furthermore, among the stereo-based methods, VINS-Fusion and VINS-Fusion-imu showed higher usage rates than the other algorithms except Kimera. This tendency was relatively significant on Xavier NX, which has a lower CPU performance than AGX Xavier. The memory usage of Kimera was considerably higher than that of the other algorithms on all Jetson platforms, as Kimera requires numerous computations per keyframe. On TX2, which lacks CPU performance, this difference was more noticeable. When comparing the Jetson platforms, AGX Xavier has significantly lower memory usage than the rest, as it has the largest memory (32 GB). VINS-Fusion-gpu had the highest GPU usage on all Jetson platforms because it is the only algorithm that uses GPU acceleration. GPU usage did not differ significantly across the Jetson platforms, nor between the stereo and monocular-based systems on each platform. Considering the overall results, all three platforms can run the GPU-accelerated algorithm without any constraints. \subsection{Analysis of ATE RMSE} \begin{table*}[t] \caption{RMSE (Unit: m) of the Absolute Trajectory Error (ATE) for all data sequences. We aligned the estimated trajectory to the ground truth trajectory according to the origin alignment method. The best performing combinations in each sequence on each platform are highlighted in \textbf{bold}. `\ding{53}' denotes a diverged run and `-' denotes a failed run, respectively.
\newline\small{(cir: \texttt{circle}, inf: \texttt{infinity}, squ: \texttt{square}, rot: \texttt{rotation}, and n: \texttt{normal}, f: \texttt{fast}, h: \texttt{head})}} \centering \renewcommand{\arraystretch}{0.85} \renewcommand{\tabcolsep}{0.9mm} {\scriptsize \begin{tabular}{c|ccccccccc|ccccccccc|ccccccccc} \hline & \multicolumn{9}{c}{\textbf{\tiny{TX2}}} & \multicolumn{9}{c}{\textbf{\tiny{NX}}} & \multicolumn{9}{c}{\textbf{\tiny{Xavier}}}\\ & \rotatebox[origin=c]{90}{\tiny{VINS-Mono}} &\rotatebox[origin=c]{90}{\tiny{ALVIO}} &\rotatebox[origin=c]{90}{\tiny{ROVIO}} &\rotatebox[origin=c]{90}{\tiny{Kimera}} &\rotatebox[origin=c]{90}{\tiny{VINS-Fusion}} &\rotatebox[origin=c]{90}{\tiny{VINS-Fusion-gpu}} &\rotatebox[origin=c]{90}{\tiny{VINS-Fusion-imu}} &\rotatebox[origin=c]{90}{\tiny{ORB-SLAM2}} &\rotatebox[origin=c]{90}{\tiny{S-MSCKF}} & \rotatebox[origin=c]{90}{\tiny{VINS-Mono}} &\rotatebox[origin=c]{90}{\tiny{ALVIO}} &\rotatebox[origin=c]{90}{\tiny{ROVIO}} &\rotatebox[origin=c]{90}{\tiny{Kimera}} &\rotatebox[origin=c]{90}{\tiny{VINS-Fusion}} &\rotatebox[origin=c]{90}{\tiny{VINS-Fusion-gpu}} &\rotatebox[origin=c]{90}{\tiny{VINS-Fusion-imu}} &\rotatebox[origin=c]{90}{\tiny{ORB-SLAM2}} &\rotatebox[origin=c]{90}{\tiny{S-MSCKF}} & \rotatebox[origin=c]{90}{\tiny{VINS-Mono}} &\rotatebox[origin=c]{90}{\tiny{ALVIO}} &\rotatebox[origin=c]{90}{\tiny{ROVIO}} &\rotatebox[origin=c]{90}{\tiny{Kimera}} &\rotatebox[origin=c]{90}{\tiny{VINS-Fusion}} &\rotatebox[origin=c]{90}{\tiny{VINS-Fusion-gpu}} &\rotatebox[origin=c]{90}{\tiny{VINS-Fusion-imu}} &\rotatebox[origin=c]{90}{\tiny{ORB-SLAM2}} &\rotatebox[origin=c]{90}{\tiny{S-MSCKF}}\\ \hline cir-n & 0.10 &\textbf{0.07} &\ding{53} &0.08 &- &0.08 &- &0.10 &0.14 &0.13 &0.09 &\ding{53} &0.12 &\textbf{0.06} &0.09 &0.11 &0.09 &0.12 &0.12 &0.12 &\ding{53} &0.08 &\textbf{0.07} &0.09 &0.08 &0.08 &0.11\\ cir-f &\textbf{0.07} &0.13 &0.79 &0.13 &- &0.14 &- &0.28 &0.12 &0.15 &\textbf{0.05} &0.83 &0.07 &0.12 &0.13 &0.10 &0.11 &0.19 &0.14 &0.12 &0.80 &\textbf{0.08} &0.16 &0.13 &0.13 &0.12 &0.23\\ cir-h &0.24 &\ding{53} &2.11 &0.25 &- &\textbf{0.10} &- &0.19 &\textbf{0.10} &0.43 &0.45 &2.12 &0.28 &\textbf{0.08} &0.11 &0.13 &0.13 &0.21 &0.41 &0.49 &2.11 &0.26 &\textbf{0.06} &0.11 &0.07 &0.15 &0.20\\ \hdashline inf-n & 0.14 &0.11 &\ding{53} &\textbf{0.09} &- &\textbf{0.09} &- &0.35 &0.10 &0.10 &0.12 &1.32 &\textbf{0.05} &\textbf{0.05} &0.09 &0.08 &0.08 &0.32 &0.24 &0.12 &1.19 &0.09 &\textbf{0.07} &0.09 &\textbf{0.07} &\textbf{0.07} &0.09\\ inf-f & 0.11 &\textbf{0.06} &0.44 &{0.19} &- &\textbf{0.06} &- &0.22 &0.20 &0.08 &0.07 &0.41 &0.14 &0.09 &\textbf{0.05} &{0.08} &0.10 &0.17 &0.10 &0.09 &0.44 &{0.13} &{0.07} &\textbf{0.05} &{0.07} &0.07 &0.12\\ inf-h & 1.19 &\ding{53} &\ding{53} &0.94 &- &\textbf{0.13} &- &1.50 &1.11 &0.50 &1.10 &\ding{53} &1.08 &\textbf{0.12} &0.14 &\textbf{0.12} &\textbf{0.12} &0.60 &0.57 &0.48 &\ding{53} &1.09 &\textbf{0.09} &0.14 &0.12 &\textbf{0.09} &0.87\\ \hdashline squ-n & 0.20 &0.16 &0.46 &\textbf{0.08} &- &0.12 &- &0.44 &0.18 &0.17 &0.16 &0.46 &0.17 &0.17 &0.12 &0.21 &\textbf{0.09} &0.10 &0.11 &\textbf{0.10} &0.47 &0.13 &0.17 &\textbf{0.10} &0.15 &0.12 &0.15\\ squ-f &\textbf{0.07} &0.13 &0.56 &0.14 &- &0.10 &- &0.29 &0.17 &0.14 &\textbf{0.07} &0.56 &0.19 &\textbf{0.07} &0.11 &0.13 &0.09 &0.30 &0.12 &0.10 &0.56 &0.14 &\textbf{0.08} &0.11 &0.10 &0.14 &0.17\\ squ-h & \ding{53} &\ding{53} &\ding{53} &0.18 &- &\textbf{0.15} &- &0.16 &0.40 &0.34 &\ding{53} &\ding{53} &1.57 &0.19 &\textbf{0.15} &0.20 &0.16 &0.30 &0.30 &0.36 &\ding{53} &1.50 &0.18 
&\textbf{0.15} &0.18 &0.17 &0.50\\ \hdashline rot-n & 0.83 &\ding{53} &\ding{53} &0.16 &- &\textbf{0.12} &- &0.31 &0.16 &\ding{53} &\ding{53} &\ding{53} &0.17 &0.11 &0.12 &0.16 &0.17 &\textbf{0.10} &\ding{53} &0.81 &\ding{53} &0.18 &0.11 &0.12 &0.11 &0.16 &\textbf{0.07}\\ rot-f & \ding{53} &\ding{53} &2.74 &0.85 &- &\textbf{0.11} &- &0.18 &0.29 &0.40 &\ding{53} &\ding{53} &0.74 &0.28 &0.11 &\textbf{0.10} &0.21 &0.29 &0.89 &0.72 &\ding{53} &0.90 &0.26 &0.11 &\textbf{0.07} &0.18 &0.19\\ \hline \end{tabular} } \label{table:overall_rmse} \end{table*} \begin{figure*}[ht] \vspace{-0.25cm} \centering \includegraphics[width=17cm, height=10cm]{figure/figure4_overall_error_big.pdf} \caption{Boxplots of translational and yaw errors on the \texttt{infinity\_fast} sequence. RMSE errors were calculated using \cite{grupp2017evo}: (a) Monocular VIO (b) Stereo VO/VIO} \label{fig:error} \vspace{-0.7cm} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=17.5cm]{figure/figure5_rotation_withstand.pdf} \caption{Resulting trajectories of VO/VIO tests on the \texttt{rotation\_normal} sequence running on the Jetson NX board. We aligned the estimated trajectory to the ground truth trajectory according to the origin alignment method. (a) Monocular VIO (b) Stereo VO (c) Stereo VIO} \label{fig:rotation} \vspace{-0.6cm} \end{figure*} The ATE RMSE (RMSE of the Absolute Trajectory Error) for all trajectories, Jetson boards, and algorithms is shown in Table~\ref{table:overall_rmse}. On each platform, the algorithm that exhibits the smallest error for each trajectory sequence is highlighted in \textbf{bold}. All 11 sequences were recorded in the same environment. Therefore, the feature displacement between two consecutive frames differs for each path, and it is necessary to analyze the error by considering the motion characteristics of each path. In the KAIST VIO dataset, a representative sequence with no rotational motion and only rapid translational motion is \texttt{infinity\_fast}. Translational and yaw errors for each algorithm and platform on this sequence are shown in Fig. \ref{fig:error}. For translational errors, stereo methods generally performed better than monocular-based methods on all platforms. VINS-Mono and ALVIO showed excellent performance similar to the stereo methods, and ALVIO was better than VINS-Mono. By adding line features (multi-pixel) to point features (single-pixel), ALVIO can precisely track the extracted features without losing them. This robustness was also shown for the yaw error, except on TX2, whose performance is insufficient to run ALVIO well. The \texttt{rotation\_normal} and \texttt{rotation\_fast} sequences, which have little translational movement, provide harsh paths for VO/VIO. Hence, the estimated odometry often diverged for many algorithms. For the \texttt{rotation\_normal} sequence, the x, y, z, and yaw errors of each algorithm executed on Xavier NX are shown in Fig. \ref{fig:rotation}. The first algorithm to diverge was ROVIO, which mostly diverged in sequences with rotational motion: \texttt{infinity\_head}, \texttt{square\_head}, \texttt{rotation\_normal}, and \texttt{rotation\_fast}. ROVIO is weak against rotational motion because its multi-level patches are not properly extracted or tracked during rapid scene transitions. The overall results showed robustness against rotation in the order of stereo VIO, stereo VO, and mono VIO.
However, all three methods showed excellent performance for yaw errors. Comparing VINS-Fusion, VINS-Fusion-imu, and VINS-Fusion-gpu on the \texttt{rotation} sequences, the following two tendencies were observed. In \texttt{rotation\_normal}, VINS-Fusion showed a smaller error than VINS-Fusion-imu and VINS-Fusion-gpu. In \texttt{rotation\_fast}, the errors of VINS-Fusion-imu and VINS-Fusion-gpu were smaller than that of VINS-Fusion (see Table \ref{table:overall_rmse}). This is because the IMU is specialized in detecting rapid motion, and the camera is specialized in detecting relatively slow motion. Moreover, the VINS-Fusion series is considerably affected by the IMU, as IMU measurements are locally integrated with its pre-integration model, and its estimator refines the extrinsic parameters between the camera and IMU online at the start of the flight. Therefore, for rotational motion, the VINS-Fusion series requires precise tuning of the IMU parameters. Although the same algorithm with a fixed parameter setting was run on the same sequence, the error on each board was different. The statistical characteristics of these differences are shown in Fig. \ref{fig:error}. For mono methods, ROVIO did not show any significant difference among the boards in either translation or yaw errors. This implies that each board has sufficient computing resources to run ROVIO smoothly. Similarly, for VINS-Mono and ALVIO, no significant difference was observed among the boards in translation error. For stereo methods, VINS-Fusion-gpu, which mainly depends on GPU operation, did not show any significant difference among the boards in either translation or yaw errors. This means that the GPU resources of each board are sufficient to run VINS-Fusion-gpu smoothly. For ORB-SLAM2 (stereo), S-MSCKF, and Kimera, translation/yaw errors were the highest on TX2, followed by Xavier NX and AGX Xavier. This is because these algorithms are particularly limited by the computational performance of the board, and the computational performance of TX2 and Xavier NX is inferior to that of AGX Xavier for running these algorithms smoothly. This is consistent with the differences in the number of cores and the performance of the CPU/GPU mounted on each board, as shown in Table \ref{table:jetson_platform}. Similarly, for VINS-Fusion and VINS-Fusion-imu, the translation/yaw error range was higher on Xavier NX than on AGX Xavier. On the TX2 platform, VINS-Fusion-gpu showed the best performance for the trajectories with rotational motion. This is because VINS-Fusion-gpu is the only algorithm that uses the GPU to compensate for the insufficient computational performance of the CPU of TX2. Stereo methods, which perform computations using only the CPU without the GPU, have a larger error than monocular-based methods owing to the limitation of per-frame processing time. Unlike on the other platforms, the monocular-based algorithms performed better than the stereo algorithms on TX2, except for the cases that diverge on trajectories with rotational motion. On NX and Xavier, which have better CPU and memory performance than TX2, stereo methods were better than monocular-based ones. The overall error for each path was lower on Xavier than on NX. This is because Xavier has a better CPU and memory than NX, and the per-frame processing time is shorter than on NX.
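To make the evaluation protocol used throughout this section concrete, the following Python sketch computes the origin-aligned ATE RMSE from timestamp-associated position pairs. It is a simplified illustration of the metric: the paper uses \cite{grupp2017evo}, and rotational alignment of the first pose is omitted here for brevity.
\begin{verbatim}
# Sketch: ATE RMSE with (simplified) origin alignment.
# est_xyz, gt_xyz: (N, 3) arrays of timestamp-associated positions.
import numpy as np

def ate_rmse_origin_aligned(est_xyz, gt_xyz):
    est = est_xyz - est_xyz[0]          # shift both trajectories so
    gt = gt_xyz - gt_xyz[0]             # they start at the origin
    err = np.linalg.norm(est - gt, axis=1)
    return np.sqrt(np.mean(err ** 2))   # RMSE over all poses
\end{verbatim}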
\section{CONCLUSIONS} \label{sec:cons} This study presented the novel KAIST VIO dataset, which has harsh trajectories for VO/VIO, and evaluated the overall performance of various VO/VIO algorithms (mono, stereo, and stereo + IMU) on the NVIDIA Jetson TX2, Xavier NX, and AGX Xavier platforms. The goal of this study was to benchmark well-known VO/VIO algorithms using the proposed dataset, which has considerable rotational movement, on hardware that has limited computing power, is compact, and has GPU cores. In summary, monocular VO/VIO is suitable for use on TX2. For stereo VO/VIO, a GPU-accelerated algorithm is appropriate on TX2. For a UAV system with physical limitations (payload, dimensions, etc.), Xavier NX is appropriate. In the absence of such limitations, AGX Xavier is the better choice. For rotational motion, the stereo VIO methods are robust to rapid rotation, and stereo VO is suitable for relatively slow rotation. The error under pure rotational movement is a major challenge that VO/VIO must overcome. Therefore, the KAIST VIO dataset includes various pure-rotational trajectories to serve as a benchmark for solving this problem. These results and the dataset presented in this paper can be used as an index for determining a suitable pair of platform and algorithm for UAV systems that fly along predefined paths with certain motion characteristics. Please refer to our official link, which has the descriptions of our dataset and the setup instructions on how to run each algorithm on Jetson boards. We encourage researchers to run their VO/VIO algorithms on NVIDIA Jetson boards with our dataset to test their robustness to rotational motion. \section{INTRODUCTION} Recently, owing to large-scale accidents arising from safety issues, public awareness of the importance of safety management has risen. Structural inspection and maintenance of large structures are becoming increasingly important in the prevention of structural collapse and safety accidents that may cause human casualties as well as economic loss. Conventional structural inspection and maintenance are limited by the low reliability and objectivity of the diagnosis results. It is also difficult to perform efficient internal and external inspections, and the time and cost required for professionals to perform visual inspections or inspections using non-destructive inspection equipment are high. Therefore, it is necessary to develop an unmanned system for the efficient inspection and maintenance of structures. \begin{figure}[t] \begin{center} \centering \framebox{\parbox{7.5cm}{\includegraphics[width=7.5cm]{pics/intro.pdf} }} \caption{Simulated high-rise structure 3D model to be inspected: The 100m$\times$20m$\times$20m dimension of Big Ben\cite{bigben} (\textit{left}). The proposed method: Slicing the target structure into multi-layers and planning the path in each layer (\textit{right}).} \label{fig1} \end{center} \end{figure} A large proportion of the increasing demand for autonomous systems in the field of structural inspection is directed towards unmanned aerial vehicles (UAVs) that can replace human labor.
For example, there have been various autonomous inspection research endeavors using UAVs, such as the inspection of bridges\cite{ohgraph}, wind turbines\cite{jung2015mechanism}, aircraft, and tunnels\cite{ozaslan2015inspection}. In this paper, we target only the inspection of high-rise structures. For the automation of inspections using a UAV, the strategy applied to inspect the structure in a fast and safe manner without missing any surface is one of the critical factors. Taking into account the sensor limitations and operational restrictions of the UAV, the UAV should inspect the entire surface of the target structure with a coverage path planning algorithm that can quickly calculate an efficient and practical path. In this work, we present a new 3D coverage path planning algorithm for the inspection of high-rise structures using a UAV. Our approach assumes that a prior map is available and that it is represented as a 3D volumetric map using Octomap\cite{hornung2013octomap}. First, the target structure is divided into several layers for extracting efficient and reasonable viewpoints. Then, in each layer, the method samples initial viewpoints, generated by calculating the normal vectors of every center point of the voxels, and down-samples these to the essential viewpoints. Next, it connects the selected viewpoints and calculates the optimal path with the lowest cost. With the result of the first layer, it re-samples viewpoints in the next layer by detecting duplicated coverage of the target surface. Finally, the whole tour path, which must be `\textit{spiral}' in form, is obtained by connecting all the layers. Fig. \ref{fig1} illustrates a 3D model used to verify the results and the proposed strategy in this work. The main contributions are as follows: \romannum{1}) To the best of our knowledge, this paper describes the first attempt to propose a Multi-layer Coverage Path Planner (ML-CPP) which extracts the viewpoints in a layered way and plans the path in a layer-by-layer manner for 3D structural inspection, to generate an efficient and smooth tour path. \romannum{2}) To minimize the UAV's energy cost, it iterates the viewpoint re-sampling process in every layer while checking for duplication of the area to be inspected. The rest of the paper is organized as follows: Section \ref{sec:rw} overviews the related works. Section \ref{sec:pd} defines the problem to be considered, and Section \ref{sec:pa} describes the proposed approach in detail. Section \ref{sec:er} shows the simulation results. Finally, Section \ref{sec:cs} summarizes our contributions and points to future work. \section{RELATED WORK} \label{sec:rw} Coverage path planning (CPP) is the task of deciding a path that fully covers all the points or surfaces of a target area with a set of selected viewpoints\cite{Almadhoun2016}. CPP methods are usually categorized as model-based or non-model-based\cite{scott2003view}. Generally, the former is performed with prior knowledge of a model of the target structure, whereas the latter entails exploring without any prior information about the environment and plans the paths online. They can also be classified as either off-line or online. This paper proposes a complete, off-line planning algorithm assuming full knowledge of the environment and map, because for the autonomous inspection of high-rise structures, operating a UAV without a pre-made map is likely to cause unexpected problems such as crashes and collisions.
\subsection{Non-model-based Planning} When it comes to non-model-based methods (or exploration), which assume unknown environments or maps, most researchers have dealt with the next-best-view (NBV) problem\cite{connolly1985determination}. In earlier works, Yamauchi \cite{yamauchi1997frontier} introduced the frontier-based method. This method tries to find paths using frontier cells, which form the boundary between the known space and unmapped space. Bircher \textit{et al.} \cite{Bircher2016b} proposed a receding horizon NBV planner for 3D exploration. It finds the best branch of a rapidly-exploring random tree (RRT) by considering the information gain. Since these approaches are \textit{greedy} strategies, an advanced version of this scheme was presented in \cite{song2017online}. It introduced an efficient sampling strategy to model a 3D environment completely by employing a streaming set cover algorithm\cite{emek2016semi} which incrementally reduces the sampling range. \subsection{Model-based Planning} A number of contributions to model-based path planning have been made in the literature. Hover \textit{et al.}\cite{hover2012advanced} presented a method for the full coverage of a ship hull using a polygonal mesh to optimize a path. Englot and Hover\cite{Galceran2014} proposed a sampling-based method for the inspection of 3D underwater structures that re-plans in real time with prior knowledge of a bathymetric map. Cheng \textit{et al.} \cite{cheng2008time} described complete path planning for 3D urban structure coverage based on simplified, abstract models, and they generated spiral trajectories based on the sensor placement. Alexis \textit{et al.} \cite{alexis2015uniform} presented a Uniform Coverage 3D structure Inspection Path Planner (UC3D–IPP) which provides full coverage of the mesh model and ensures uniform focus on the geometrical details by appropriately selecting the inspection distance. It also iterates to improve the inspection path, benefiting from re-meshing techniques. In \cite{bircher2015structural}, the authors also generated a tour path for structural inspection using a triangular mesh model obtained with third-party software (MeshLab\cite{Meshlab}). Similar to our work, the path was generated by solving a Travelling Salesperson Problem (TSP) using the Lin-Kernighan heuristic (LKH) solver\cite{helsgaun2000effective}. Their method has shown good performance; however, in some cases, unreasonable viewpoints were randomly generated and the whole path became inefficient and untidy. The overview of our entire research on autonomous structural inspection is shown in Fig.~\ref{fig2}. In our previous papers, we have shown significant results related to the localization and inspection process\cite{ohgraph},\cite{jeon2017high}. Due to the necessity of deceleration and acceleration, the efficiency of the planned path can be measured by the number of turns in it \cite{mazo2004robust}. A spiral path can be flown at high speed by minimizing the deviation due to the inertia of the UAV. Also, the UAV will pass through the local environment from the same direction and look for the same features from the same side, which can improve localization. As a result, this novel path planner generates a `\textit{spiral}' coverage path that is efficient and tidy.
\begin{figure}[t] \centering \framebox{\parbox{7cm}{\includegraphics[width=7cm]{pics/flowchart.pdf} }} \caption{The main processes of autonomous structural inspection using a UAV: (1) Coverage path planning, which is the main theme of this work, is the first process. It starts with manual prior map generation using a 3D LiDAR sensor. Then, with the 3D model, the proper viewpoints are generated, and the planning step proceeds by solving the TSP problem and re-sampling. Finally, the coverage completeness is evaluated. (2) Development of an autonomous flight system with low-level UAV control and global position-based control. (3) Multi-sensor-based localization process with Simultaneous Localization and Mapping (SLAM) techniques. (4) An actual online inspection using various inspecting sensors.} \label{fig2} \end{figure} \newpage \section{PROBLEM DESCRIPTION} \label{sec:pd} The main problem of coverage path planning for structural inspection in this work is to find the optimal path that guarantees the full coverage of the high-rise structure in a bounded 3D space $V\subset\mathbb{R}^3$ under the limitations of payload, sensor range, and flight time of the vehicle. The primary aim of planning is to minimize the missed space $V_{mis} (\subset V)$ where $V_{mis} = V - (V_{seen} \cup V_{free})$, $V_{seen} (\subset V )$ is the seen space, and $V_{free} (\subset V )$ is the free space. In order to minimize unexplored surfaces such as ceiling surfaces or inclined planes, the initial mapping process is performed with a 3D LiDAR sensor attached to a quadrotor-type UAV in as much detail as possible. We assume that the aerial vehicle configuration is a flat state $\xi=(x,y,z,\psi)^T$ composed of the position $(x,y,z)$ and yaw angle $\psi$. For accurate inspection, its attitude must be close to the hover state, where the roll and pitch angles are small. Also, $v_{max}$, denoting the translational speed limit, and $\dot{\psi}_{max}$, denoting the rotational speed limit, are assumed to be small and constant. The orientation of the attached camera is fixed relative to the vehicle and the viewing direction is always horizontal (assuming the UAV is close to hovering). In addition, for a safe inspection, the environment should not contain any external disturbances or unexpected obstacles, since we already have a map. \section{PROPOSED APPROACH} \label{sec:pa} \begin{algorithm}[t] \small \caption{ML-CPP} \label{pp} \begin{algorithmic}[1] \REQUIRE Dist2Struct, FieldOfView, VoxelSize, StartPoint, \\NumOfLayers \STATE OctoMap based mapping \STATE Calculate a surface normal vector ($\vec{n}_1,\vec{n}_2,\ldots,\vec{n}_N$) of every center point ($C_{1\sim N}$) \STATE Divide the normal vectors of the structure with $K$ layers by height\\ \WHILE{$i<K$} \STATE Sample initial viewpoints ($v_1,v_2,\ldots,v_N$) at $i$-th layer\ \STATE Down-sample essential viewpoints ($\hat{v}_1,\hat{v}_2,\ldots,\hat{v}_n$)\ \STATE Solve the Traveling Salesman Problem using LKH at $i$-th layer\ \STATE Update viewpoints in $(i+1)$-th layer by detecting $C$ which are duplicated in $i$-th layer (Fig. \ref{vp_resample})\ \STATE Connect $i$-th layer and $(i+1)$-th layer\ \STATE $i\leftarrow i+1$ \ENDWHILE \STATE Return TourLength, Time \end{algorithmic} \end{algorithm} The algorithm in this paper focuses on covering the structure fully, which is the most important factor in the inspection operation, and on minimizing the total length of the tour path. Before planning the path, we need a 3D map of the target structure in advance.
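As an aside, step 2 of Algorithm \ref{pp}, estimating a surface normal for every voxel centroid, can be sketched in a few lines of Python. This is only an illustration of the covariance/eigenvector computation made precise below: the $k$-neighborhood search is assumed to be given, and disambiguating the outward orientation of the normal is omitted for brevity.
\begin{verbatim}
# Sketch: PCA surface normal of a centroid from its k-neighborhood.
import numpy as np

def surface_normal(neighbors):
    # neighbors: (k, 3) array of centroid points around C_i
    d = neighbors - neighbors.mean(axis=0)
    cov = d.T @ d / len(neighbors)   # covariance of the neighborhood
    w, v = np.linalg.eigh(cov)       # eigenpairs, ascending eigenvalues
    return v[:, 0]                   # smallest-eigenvalue eigenvector
\end{verbatim}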
With the 3D LiDAR sensor on the UAV, the structure is represented by a 3D volumetric map. Every voxel in the voxelized map has a centroid point ($C_i$) and each point forms a surface along with the neighboring points. The surface normal can be estimated as follows: \begin{equation} \mathrm{Cov}=\frac{1}{k}\sum_{i=1}^{k} (C_i - \overline{C})\cdot (C_i - \overline{C})^T \\ \end{equation} \begin{equation} \mathrm{Cov} \cdot \vec{v}_j = \lambda_j \cdot \vec{v}_j, \ j \in \{0,\ 1,\ 2\} \\ \end{equation} where $\mathrm{Cov}$ denotes the covariance matrix, $k$ is the number of neighboring points considered in the neighborhood of $C_i$ (the $k$-neighborhood), $\overline{C}$ denotes the centroid of the nearest neighbors, and $\lambda_j$ and $\vec{v}_j$ are the $j$-th eigenvalue and eigenvector of the covariance matrix, respectively. Given a certain surface, the direction of the normal at any point on the surface can be obtained as a vector perpendicular to the surface at that point. In other words, if there are $N$ center points on the voxelized structure, there are also $N$ surface normal vectors, which are sampled as the initial viewpoints. Then, the whole structure is sliced into several layers according to its height to make the tour path efficient and smooth by planning the paths layer-by-layer. However, because there are too many viewpoints and the observed surfaces overlap, the initial viewpoints are reduced: the space is divided into discrete cells and all points within a voxel are replaced by their centroid, a step termed `down-sampling' using a voxel grid filter. With the minimized essential viewpoints ($\hat{v}_{1 \sim N}$), the shortest path connecting the viewpoints is computed using the LKH solver. Finally, the local paths in each layer are connected to form one tour path. Algorithm \ref{pp} shows the overall steps of the proposed multi-layer coverage path planner (ML-CPP). \begin{figure*}[tp] \centering \framebox{\parbox{17cm}{\includegraphics[width=17cm]{pics/vp_resample.pdf} }} \caption{A description of viewpoint update and re-sampling. (a) Initial sampling of viewpoints after down-sampling in each layer. (b) With the down-sampled viewpoints in the first layer, the TSP problem is solved using the LKH solver and, after checking the overlapped area, the viewpoints in the second layer are updated. Consequently, the duplicated voxels are reduced and an efficient tour path is obtained. The red circled numbers are the updated viewpoints and the blue dashed line is a local path in the layer. Dark gray rectangles indicate the image plane with the overlapped area. (c)$-$(d) The same procedure is repeated until the $K$-th layer is reached. After we get the local paths in each layer, we connect all the layers.} \label{vp_resample} \end{figure*} \subsection{Implementation details} The overall layer-by-layer process of re-sampling the viewpoints is described in Fig. \ref{vp_resample}. Fig. \ref{vp_resample}(a) shows the initial viewpoint-sampling step after down-sampling using a voxel grid filter in each layer. In Fig. \ref{vp_resample}(b), with the viewpoints in the first layer, the method solves the TSP problem using the LKH solver and, by checking the overlapped area, updates the viewpoints in the second layer. Here, we set the overlap ratio to $0.1$, which means that two image planes (in gray) should not share more than 10\% of their voxels in the rectangles. Otherwise, the upper one is re-sampled and updated to the new local path.
Consequently, it is possible to reduce the duplicated voxels and to obtain an efficient tour path. The red circled numbers are the updated viewpoints and the blue dashed line is a local path in the layer. The same procedure is followed until it reaches the $K$-th layer. After we get the local paths ($\xi_1,...,\xi_K$) in each layer, we connect all the layers into one global path $\xi_T$. The global inspection path $\xi_T$ is computed by connecting the shortest local paths from $\xi_1$ to $\xi_K$ with the TSP solver. Two cost functions, for the path in the $j$-th layer ($Q_j$) and for the path connecting two adjacent layers ($Q_c$), can be defined as follows: \begin{dmath} Q_j = \sum_{i=1}^{N_j-1}\Big(\sqrt{(x^j_{i+1}-x^j_i)^2}+\sqrt{(y^j_{i+1}-y^j_i)^2}+\sqrt{(z^j_{i+1}-z^j_i)^2}\Big) \end{dmath} \begin{dmath} Q_c = \sum_{j=1}^{K-1}\Big(\sqrt{(x^s_{j+1}-x^e_j)^2}+\sqrt{(y^s_{j+1}-y^e_j)^2}+\sqrt{(z^s_{j+1}-z^e_j)^2}\Big) \end{dmath} where $N_j$ is the number of essential viewpoints, ($x_i^j,y_i^j,z_i^j$) is the coordinate of the $i$-th point in the $j$-th layer, and $K$ is the number of layers. $x^s_{j+1}$ and $x^e_j$ denote the start point in the $(j+1)$-th layer and the end point in the $j$-th layer, respectively. We do not treat $Q_c$ as a TSP because it is clearly efficient to connect the layers in order from the bottom. With the cost functions above, we get the total cost function $Q_T$: \begin{equation} Q_T = \Big(\sum_{j=1}^{K}Q_j\Big)+Q_c \end{equation} \begin{equation} \xi_T = \arg\min_\xi Q_T \end{equation} It is clear that by minimizing the Euclidean distance in each layer with LKH, the sum of the local paths $\xi_j$ is minimized and the best tour path $\xi_T$ can be extracted. \subsection{Completeness of coverage} \begin{figure}[t] \centering \framebox{\parbox{7.5cm}{\includegraphics[width=7.5cm]{pics/completeness.pdf} }} \caption{The relationship between a viewpoint and the surface normal vector of a centroid point, used to assess the completeness of our algorithm and to reduce the overlapped area by setting proper $D_{obs}$ and $\theta_{thres}$, which denote the observable inspection limit and the threshold on the angle between the normal vector of the viewpoint $\vec{n}_{vp}$ and the normal vector of the center point $\vec{n}_N$, respectively. Here, when $\theta_{N}<\theta_{thres}<\theta_{N+1}$, $C_N$ is acceptable, but $C_{N+1}$ is not qualified as an acceptable one. } \label{completeness} \end{figure} In this work, the completeness of the proposed ML-CPP is evaluated by quantifying the number of observed center points of the target structure. Generally, an image sensor (e.g., a mono-camera) has several constraints, such as a limited field of view and a maximum detection range, which compose a view frustum. Using these constraints, the qualified surfaces or, in other words, the observed points can be identified. Also, we can decide whether to re-sample or not by calculating the observability. Simply, the completeness can be obtained as described below: \begin{equation} \mathrm{Completeness}(\%)=\Big(1-\frac{\mathrm{missed\ voxels}\ (V_{mis})}{\mathrm{number\ of\ voxels}\ (V)}\Big)\times 100 \end{equation} Fig. \ref{completeness} shows the relationship between a viewpoint and the surface normal vector of the center point, in order to find the missed voxels and to check for duplication while inspecting.
When the camera (or the body of the UAV) is facing the center point $C_N$, there is an angle $\theta_N$ between $\vec{n}_{vp}$ and $\vec{n}_{N}$, which denote the normal vector of the viewpoint and of the center point, respectively. If the angle $\theta_N$ is less than or equal to a specified threshold $\theta_{thres}$ and the distance between the viewpoint $v_i$ and $C_N$ is smaller than the observable inspection limit $D_{obs}$, the voxel is accepted as a qualified one for objective structural inspection. As a result, we can determine which voxels are overlapped and which have not been observed yet. \begin{table}[t] \caption{Parameter settings in experiments} \label{param} \begin{center} \begin{tabular}{c|c||c|c} \hline \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} \\ \hline Voxel resolution($m$) & 0.5 & FoV & [60,90]\textdegree\\ \hdashline Num. of layers & 0, 5, 12 & Num. of voxels & 19,935\\ \hdashline Threshold angle ($\theta_{thres}$) & 60\textdegree & Overlap ratio & 0.1 \\ \hdashline Maximum distance($m$) & 10${\sim}$15 & $D_{obs}$($m$) & 15\\ \hline \end{tabular} \end{center} \end{table} \begin{figure*}[t]% \centering \begin{mdframed} \SetFigLayout{4}{2} \subfigure[\label{fig:ex3-a}] {\includegraphics[height=7cm]{pics/bircher.pdf}}\hfill% \subfigure[\label{fig:ex3-b}] {\includegraphics[height=7cm]{pics/nolayer.pdf}}\hfill% \subfigure[\label{fig:ex3-c}] {\includegraphics[height=7cm]{pics/5layer.pdf}}\hfill% \subfigure[\label{fig:ex3-d}] {\includegraphics[height=7cm]{pics/12layer.pdf}}\hfill \\% \subfigure[\label{fig:ex3-e}] {\includegraphics[height=3cm]{pics/bircher_top.pdf}}\hfill% \subfigure[\label{fig:ex3-f}] { \includegraphics[height=3cm]{pics/nolayer_top.pdf}}\hfill% \subfigure[\label{fig:ex3-g}] {\includegraphics[height=3cm]{pics/5layer_top.pdf}}\hfill% \subfigure[\label{fig:ex3-h}] { \includegraphics[height=3cm]{pics/12layer_top.pdf}\hfill}% \end{mdframed} \caption[A set of four subfigures.]{An illustration of the Big Ben simulations: \subref{fig:ex3-a} SIPP \cite{bircher2015structural} \subref{fig:ex3-b} ML-CPP with no layers \subref{fig:ex3-c} ML-CPP with 5 layers (sliced every 20$m$ in height) \subref{fig:ex3-d} ML-CPP with 12 layers (sliced every 8$m$ in height) \subref{fig:ex3-e} Top view of SIPP \subref{fig:ex3-f} Top view of no-layer \subref{fig:ex3-g} Top view of 5-layer \subref{fig:ex3-h} Top view of 12-layer}% \label{fig:ex3}% \end{figure*} \begin{table}[t] \caption{Experimental results and comparisons} \label{comparisonA} \begin{center} \begin{tabular}{c||c|c|c|c} \hline & \textbf{SIPP\cite{bircher2015structural}} & \textbf{No-Layer} & \textbf{5-Layer} & \textbf{12-layer}\\ \hline \textbf{Dist. to target} & 10$\sim$50 &10 & 10 & 10\\ \hdashline \textbf{Num. of VP} & 526 & 83 & 95 & 102\\ \hdashline \textbf{Sampling time(s)} & - & 296.7 & 79.6 & 15.2 \\ \hdashline \textbf{TSP time(s)} & 24.8 & 4.84 & 1.07 & 0.07 \\ \hdashline \textbf{VP update time(s)} & - & 134.6 & 4.3 & 5.3 \\ \hdashline \textbf{Total time(s)} & $\approx$ 30 & 436.1 & 84.9 & 20.5\\ \hdashline \textbf{Tour length(m)} & $\approx$ 2000 & 3505.1 & 1943.6 & 2165.7 \\ \hdashline \textbf{Completeness(\%)} & \multirow{3}{*}{-} & 98.4 & 99.2 & 99.8 \\ \textbf{(missed voxel} & & (311 & (156 & (36 \\ \textbf{/total voxel)} & & /19935) & /19935) & /19935) \\ \hline \multicolumn{2}{c}{-: not mentioned or does not exist} \end{tabular} \end{center} \end{table} \newpage \section{EXPERIMENTAL RESULTS} \label{sec:er} In this section, the results are verified with simulation experiments.
The simulations are conducted in the Hector quadrotor simulation environment\cite{meyer2012comprehensive} with Gazebo \cite{koenig2004design}. After importing the 3D model into Gazebo, a UAV with a 3D LiDAR and a camera flies around the structure to generate a 3D voxelized map. Table \ref{param} summarizes the parameters used in the simulation. The proposed algorithm is compared with the Structural Inspection Path Planner (SIPP)\cite{bircher2015structural}. The proposed ML-CPP and the SIPP algorithm were implemented on an Intel Core i7 CPU with 16 GB of memory. In the scenario, a 3D model of Big Ben\cite{bigben}, whose dimensions are $100m\times20m\times20m$, is used as shown in Fig. \ref{fig1}. The ML-CPP algorithm proceeds from the bottom to the top. For each step and layer, we determine the computation time for sampling, TSP solving, and re-sampling; the length of the path; and the missed voxels. Note that the parameter settings are slightly different, and there may be a minor difference between the mesh model used in Bircher \textit{et al.} \cite{bircher2015structural} and the point cloud model used in this work. As shown in Table \ref{comparisonA}, when the sampling time is included, SIPP performs much better than our no-layer method in terms of computation time and tour length. Compared with the 5-layer method, the computation time of SIPP is approximately three times lower and the total path length is similar. However, the 12-layer ML-CPP is 1.5 times faster than SIPP with a comparable tour length. Fig. \ref{fig:ex3} illustrates the results visually. Fig. \ref{fig:ex3-a} shows the result of SIPP, which fluctuates continually because of a few odd viewpoints at the bottom of the mesh model. On the contrary, our methods with multiple layers show a relatively more efficient and neat path. The highest computation time consumption of ML-CPP occurs in sampling. The time for solving the TSP problem shows a notable difference. That is because LKH has a computational complexity of $\mathcal{O}(N^{2.2})$, where $N$ is the number of viewpoints\cite{helsgaun2000effective}. Unlike SIPP, which solves the TSP problem with all viewpoints, ML-CPP applies the TSP solver in each layer with a small number of viewpoints. Therefore, the complexity becomes $\mathcal{O}(K\cdot(\frac{N}{K})^{2.2})$, where $K$ denotes the number of layers and $N=n_1+\cdots+n_K$, resulting in a very low computation time. More precisely, the overall complexity of ML-CPP with $K$ layers by LKH is $\mathcal{O}({n_1}^{2.2})+\cdots+\mathcal{O}({n_K}^{2.2})$. As for the completeness of coverage, the more layers there are, the higher the completeness. With 12 layers, it missed 36 voxels out of 19,935 voxels, which corresponds to a coverage of 99.8\%.
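To make the complexity argument concrete, the following small numeric sketch compares the two bounds for the 12-layer case (102 viewpoints, Table \ref{comparisonA}), under the simplifying assumption of equal layer sizes $n_k = N/K$:
\begin{verbatim}
# Sketch: layered vs. global TSP cost under the O(N^2.2) model of LKH.
N, K = 102, 12                     # viewpoints and layers (12-layer case)
cost_global = N ** 2.2             # one TSP over all viewpoints
cost_layered = K * (N / K) ** 2.2  # K small TSPs, equal layer sizes
print(cost_global / cost_layered)  # = K^1.2, roughly a 19.7x reduction
\end{verbatim}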
\section{CONCLUSION} \label{sec:cs} In this paper, we presented a novel coverage path planning algorithm for the inspection of high-rise structures, such as buildings or towers, using a UAV. For efficient and practical planning, we employed a multi-layer-based method which plans the local path and re-samples viewpoints in each sliced layer to find a global inspection path. Since the proposed method is a model-based approach, the prior map is prepared as a 3D volumetric model, which is obtainable with a 3D LiDAR sensor. The method is verified with simulations involving a rotary-wing-type UAV. The aim of this work is to cover the structure as completely as possible. In summary, in the Big Ben experiment, 99.8\% of the surface of the structure (19,899 voxels out of 19,935) is inspected by the camera on the UAV, which is better than the other state-of-the-art method with respect to computation time as well as coverage completeness. In addition, due to our viewpoint-sampling and re-sampling procedure, the final tour path becomes considerably smooth and neat. As future work, we will apply this strategy to real 3D structures and test it with various sensors attached to a UAV for actual structural health monitoring (SHM). \bibliographystyle{IEEEtran} \section{Related works} \label{sec:related} \subsection{Benchmark Comparison of VO/VIO} Numerous studies have been conducted on the benchmarking of VO and VIO methods. Delmerico and Scaramuzza\cite{vio-benchmark} presented overall benchmark comparisons of the state-of-the-art VIO algorithms on several hardware platforms (laptop, Intel NUC, UP Board, and ODROID) using the EuRoC dataset\cite{euroc}. However, the benchmark included only monocular visual-inertial methods, and not stereo VO algorithms. Choi\cite{open-source-benchmark} presented a benchmark comparison of open-source methods based on \cite{euroc} and the TUM VI dataset\cite{tum_vi}; however, a comparison of various algorithms was absent. Similarly, the authors in \cite{monovo} presented a benchmark of vision-based odometry using their own dataset; however, only monocular VO methods were compared. For vision-based methods that require image-processing tasks, an embedded system with a GPU might be an appropriate solution to accelerate the processing time. Giubilato \textit{et al.}\cite{vio-tx2-1} compared well-known VO and SLAM methods on the Jetson TX2 platform. \subsection{Benchmark Comparison of Jetson Boards} There are several studies that compared the performance of Jetson boards. Using a deep-CNN algorithm, S{\"u}zen \textit{et al.}\cite{jetson-bench-1} compared the Jetson TX2, Jetson Nano, and Raspberry Pi boards with respect to accuracy and resource consumption. Ullah and Kim\cite{jetson-bench-2} presented performance benchmarks of the Jetson Nano, Jetson TX1, and Jetson AGX Xavier running deep learning algorithms that require complex computations, in terms of resource consumption (CPU, GPU, and memory usage) and processing time. Jo \textit{et al.}\cite{gpu-benchmark} also described a set of CNN benchmark comparisons of the Jetson TX2, Jetson Nano, GTX 1060, and Tesla V100. In existing studies, the performance of the various Jetson boards on VO/VIO methods has not been clearly evaluated. Furthermore, the most recent Jetson NX board has not been included in the comparisons. \subsection{Benchmark Dataset in Harsh Environment} It is crucial to prove that VO/VIO methods work well in a real environment with several harsh cases. Zu{\~n}iga-No{\"e}l \textit{et al.}\cite{corner-case-1} proposed an in/outdoor dataset that includes low-texture scenes and scenes with dynamic illumination. These conditions are difficult cases for vision-based odometry. Kasper \textit{et al.}\cite{corner-case-2} also presented a dataset of scenes with dynamic motion blur, various degrees of illumination, and low camera exposure. Another study\cite{corner-case-3} analyzed the effect of photometric calibration, motion bias, and the rolling-shutter effect on the performance of vision-based methods. Pfrommer \textit{et al.}\cite{corner-case-4} introduced a dataset similar to \cite{corner-case-2},\cite{corner-case-3}.
Their dataset partially includes rapid rotational motion; however, it constitutes only a part of the entire path. It therefore remains necessary to compare the performance of existing methods on rotational movement itself, using a rotation-only trajectory.
\section{Introduction}\label{intro} On a smooth manifold equipped with a Riemannian metric, the basic objects for geometry are the canonical Levi-Civita connection and the corresponding Riemann curvature tensor. Conformal geometry is partly the study of transformations preserving angles, and partly the study of polynomials in the curvature and its derivatives which transform in simple ways under conformal changes of the metric, i.e., under multiplication of the metric by a positive smooth function. Already the decomposition of the curvature into the Weyl tensor, the scalar curvature, and the trace-free Einstein tensor indicates the role of conformal geometry, since the Weyl tensor transforms just by multiplication by a power of the conformal factor. H. Weyl himself introduced the first gauge theory in physics precisely via the local change of scale given by a conformal factor, and related it to the Maxwell equations in relativistic field theory. \index{$\mathcal{W}_2$ \quad conformal Willmore functional} Given a hypersurface in a Riemannian manifold, one may in addition consider invariants coming from the embedding, i.e., not only from the metric on the hypersurface induced by the metric of the ambient space, but also from the normal geometry of the hypersurface. In the classical Gauss theory of surfaces in three-dimensional Euclidean space, the product of the principal curvatures is the intrinsic Gauss/scalar curvature, whereas the arithmetic mean of the principal curvatures is the extrinsic mean curvature. A celebrated conformal invariant is the integral of the square of the mean curvature over the surface, the so-called Willmore energy, which is also relevant for the physical theory of surfaces. It is closely related to the conformally invariant integral $$ \mathcal{W}_2 = \int_{M^2} |\lo|^2 dvol $$ of the squared norm of the trace-free second fundamental form. In general, curvature invariants of a hypersurface consist of intrinsic invariants, coming from the induced metric, and extrinsic invariants, coming from the second fundamental form and the ambient metric. Whereas for a given manifold it is known how to describe the conformally invariant scalar curvature quantities using the Fefferman-Graham ambient metric \cite{FG-final}, an analogous description of scalar conformal invariants of a hypersurface is not known. Such a classification would also be of interest in physics \cite{Sol}. Recent years have seen attempts to embed the theory of the Fefferman-Graham ambient metric and of the related Poincar\'e-Einstein metric into a wider framework. For instance, Albin \cite{Albin} extended parts of the theory to Poincar\'e-Lovelock metrics, including applications to $Q$-curvature. In another direction, Gover et al. \cite{Gover-AE,GW-announce,GW-LNY,GGHW} developed a tractor calculus approach to the problem of constructing higher-order generalizations of the Willmore functional $\mathcal{W}_2$. Here a central role is played by the singular Yamabe problem, which replaces the Einstein condition. In \cite{ACF}, it was discovered that the obstruction to the {\em smooth} solvability of the singular Yamabe problem of a hypersurface of dimension $n$ is a scalar conformal invariant $\B_n$. The observation of Gover et al. that $\B_2=0$ is the Euler-Lagrange equation of $\mathcal{W}_2$ was the starting point of their theory. More generally, \cite{Graham-Yamabe} identified the equation $\B_n=0$ as the Euler-Lagrange equation of a conformally invariant functional which he termed the Yamabe energy.
This energy is an analog of the integrated (critical) renormalized volume coefficient of a Poincar\'e-Einstein metric which in turn is related to the integrated (critical) Branson $Q$-curvature. Notably, this connection to $Q$-curvature also extends to the present setting \cite{GW-reno}, \cite{JO}. Formulas for the conformally invariant obstruction in terms of classical curvature data are not known for $n\ge 4$. But for $n=3$, such a formula for $\B_3$ was derived in \cite{GGHW} from a general tractor calculus formula in \cite{GW-LNY}. In the present paper, we shall take a classical perspective and derive formulas for $\B_3$ directly from its very definition. This approach is independent. We only apply standard linear algebra and tensor calculations. It confirms and partly corrects results in the literature. As technical tools, we also employ some differential identities involving $L$. Some of these are classical, such as those found by J. Simons, and some are less well-known. Finally, we apply classical-style arguments to relate $\B_3$ to the variation of the conformally invariant functional $$ \mathcal{W}_3 = \int_M (\tr(\lo^3) + (\lo,\W)) dvol $$ \index{$\mathcal{W}_3$ \quad higher Willmore functional} which can be viewed as a natural generalization of the classical Willmore functional. This fits with Graham's theorem \cite[Theorem 3.1]{Graham-Yamabe}. Our arguments replace a technique introduced and exploited in \cite{GGHW} for the same purpose. The formulation of the main result requires some notation. Let $L$ be the second fundamental form, $\lo$ its trace-free part and $H$ the mean curvature. Let $\overline{W}$ be the Weyl tensor of the background metric. We also define two contractions $\overline{W}_{0}$ and $\W$ of $\overline{W}$ on $M$ by inserting a unit normal vector $\partial_0$ at the last and at the first and the last slot, respectively. We let the operator $\LOP$ act on trace-free symmetric bilinear forms $b$ on $M$ by $\LOP (b) = \delta \delta (b) + (\Rho,b)$. $\LOP$ maps trace-free symmetric bilinear forms to $C^\infty(M)$. It is conformally invariant in the sense that $e^{4\varphi} \hat{\LOP} (b) = \LOP (b)$ for $\varphi \in C^\infty(M)$. The Levi-Civita connections on $X$ and $M$ are denoted by $\bar{\nabla}$ and $\nabla$. For more details, see Section \ref{not}. For the definition of the obstructions $\B_n$, we refer to Section \ref{SYP}. \index{$\LOP$} \begin{thm}\label{main1} Let $\iota: M^3 \hookrightarrow (X^4,g)$ be a smooth embedding. Then it holds \begin{align}\label{B3-g-final} 12 \B_3 & = 6 \LOP ((\lo^2)_\circ) + 2 |\lo|^4 + 2 \LOP (\W) \notag \\ & - 2 \lo^{ij} \bar{\nabla}^0(\overline{W})_{0ij0} - 4 \lo^{ij} \nabla^k \overline{W}_{kij0} - 4 H(\lo,\W) + 16 (\lo^2,\W) + 4 |\W|^2 + 2 |\overline{W}_{0}|^2. \end{align} \end{thm} For a conformally flat background, Theorem \ref{main1} reduces to the identity \begin{equation}\label{B3-CF} 6 \B_3 = 3 \LOP ((\lo^2)_\circ) + |\lo|^4 = \Delta (|\lo|^2) - |\nabla \lo|^2 + 3/2 |\delta(\lo)|^2 - 2 \J |\lo|^2 + |\lo|^4. \end{equation} The second equality follows from Lemma \ref{NEW3a}. We recall the well-known fact that for an odd-dimensional $M$, there are Poincar\'e-Einstein metrics $g_+$ such that the conformal compactification $g=r^2 g_+$ is smooth \cite{FG-final}. For such a metric, it holds $\lo=0$, $\W = 0$ and even $\overline{W}_{0} = 0$ \cite[Proposition 4.3]{Gover-AE}. In particular, for $n=3$, the above formula confirms that $\B_3 = 0$. If $\lo=0$, then $$ 6 \B_3 = \LOP(\W) + 2 |\W|^2 + |\overline{W}_{0}|^2.
$$ Formula \eqref{B3-g-final} confirms the conformal invariance of $\B_3$ (of weight $-4$). In fact, the conformal invariance of $\LOP$ implies that $\LOP((\lo^2)_\circ)$ and $\LOP(\W)$ are individually conformally invariant. Furthermore, the sum of the first three terms in the second line of \eqref{B3-g-final} is conformally invariant. In fact, it holds $$ \lo^{ij} ( \bar{\nabla}^0(\overline{W})_{0ij0} + 2 \nabla^k \overline{W}_{kij0} +2 H \W_{ij}) = (\lo, B) + (\lo^2,\W) + \lo^{ij} \lo^{kl} \overline{W}_{kijl} $$ (see Lemma \ref{Bach-relation}) with a conformally invariant tensor $B$ of weight $-1$, i.e., $e^\varphi \hat{B} = B$, introduced in \cite[Lemma 2.1]{GGHW} and termed the hypersurface Bach tensor. All remaining terms in \eqref{B3-g-final} are individually conformally invariant. In the course of the proof of Theorem \ref{main1}, we shall derive a number of equivalent formulas for $\B_3$ which are of interest in special cases. In particular, we find that \begin{equation}\label{B3-flat-back} 12 \B_3 = \Delta (|\lo|^2) + 6 (\lo,\Hess(H)) + 6 H \tr(\lo^3) + |\lo|^4 + 12 |dH|^2 \end{equation} for a flat background (Corollary \ref{B3-inter-corr}). In \cite[Section 13.7]{JO}, we derived this formula in a different way, as a consequence of a general expression for singular Yamabe obstructions \cite[Theorem 6]{JO}. One may also derive the general case of the above formula along these lines. In \cite{JO}, we derived \eqref{B3-CF} by combining the conformal invariance of $\B_3$ with \eqref{B3-flat-back} and the Simons identity. The heat kernel asymptotics of elliptic boundary value problems for Laplace-type operators are another rich source of polynomials in the covariant derivatives of the curvature tensor and of the second fundamental form; in \cite{BG}, explicit formulas are given for the first five (integrated) heat coefficients. It is a natural question to determine the polynomials of this nature which are conformally invariant. The paper is organized as follows. In Section \ref{second}, we derive identities for $\delta \delta (\lo^2)$ and $\Delta (|\lo|^2)$ which are crucial for later calculations and may also be of independent interest. They are closely related to some identities of J. Simons \cite{Simons}. In Section \ref{SYP}, we define the singular Yamabe problem and the resulting obstructions in general dimensions. In Section \ref{B2-cl}, we derive a formula for $\B_2$ and connect it with the Willmore equation. Section \ref{B3-general} is devoted to the derivation of formulas for $\B_3$ in terms of standard curvature quantities. The starting point will be a formula in terms of the volume expansion of $g$ in geodesic normal coordinates. Along the way, we derive several equivalent formulas for $\B_3$ which might be of interest under specific additional assumptions on the background metric and the embedding. The proof of the main result, Theorem \ref{main1}, is contained in the last subsections. It is here that we need the material of Section \ref{second}. In the final section, we derive the right-hand side of \eqref{B3-g-final} by variation of the functional $\mathcal{W}_3$ under normal variations of the embedding, reproving a result in \cite{GGHW}, \cite{Graham-Yamabe}. Finally, we note that the main result is equivalent to \cite[Proposition 1.1]{GGHW} in the arXiv-version, but differs from its printed version; we clarify that issue in Remark \ref{GGHW-wrong}.
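For orientation, we make the weight count behind the conformal invariance of $\mathcal{W}_3$ explicit; the following is only a standard bookkeeping sketch using the classical transformation rules of $\lo$ and of the Weyl tensor. Under a conformal change $\hat{g} = e^{2\varphi} g$ of the background metric (with the unit normal rescaled as $\hat{\partial}_0 = e^{-\varphi} \partial_0$), we have $\hat{h} = e^{2\varphi} h$, $\hat{\lo} = e^{\varphi} \lo$ and $\hat{\W} = \W$. Hence $$ \tr_{\hat{h}}(\hat{\lo}^3) = e^{-3\varphi} \tr_h(\lo^3), \quad (\hat{\lo},\hat{\W})_{\hat{h}} = e^{-3\varphi} (\lo,\W)_h \quad \mbox{and} \quad dvol_{\hat{h}} = e^{3\varphi} dvol_h, $$ so that the integrand of $\mathcal{W}_3$ has total weight $0$, i.e., $\mathcal{W}_3$ is a conformal invariant of the embedding.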
In view of the possible applications to physics, such as string and membrane theory, and the principles of holography, we have tried to be very explicit throughout. \section{Notation}\label{not} All manifolds $X$ are smooth. For a manifold $X$, $C^\infty(X)$ and $\Omega^p(X)$ denote the respective spaces of smooth functions and smooth $p$-forms. Let $\mathfrak{X}(X)$ be the space of smooth vector fields on $X$. Metrics on $X$ are usually denoted by $g$. $dvol_g$ is the Riemannian volume element defined by $g$. The Levi-Civita connection of $g$ is denoted by $\nabla_X^g$ or simply $\nabla_X$ for $X \in \mathfrak{X}(X)$ if $g$ is understood. In these terms, the curvature tensor $R$ of the Riemannian manifold $(X,g)$ is defined by $R(X,Y)Z =\nabla_X \nabla_Y (Z) - \nabla_Y \nabla_X (Z) - \nabla_{[X,Y]}(Z)$ for vector fields $X,Y,Z \in \mathfrak{X}(X)$. The components of $R$ are defined by $R(\partial_i,\partial_j)(\partial_k) = {R_{ijk}}^l \partial_l$. We also set $\nabla_X (u) = \langle du,X \rangle$ for $X \in \mathfrak{X}(X)$ and $u \in C^\infty(X)$. $\Ric$ and $\scal$ are the Ricci tensor and the scalar curvature of $g$. On a manifold $(X,g)$ of dimension $n$, we set $2(n-1) \J = \scal$ and define the Schouten tensor $\Rho$ of $g$ by $(n-2)\Rho = \Ric - \J g$. Let $W$ be the Weyl tensor. \index{$dvol_g$ \quad volume element of $g$} \index{$\Omega^p$ \quad space of $p$-forms} \index{$\nabla$ \quad Levi-Civita connection} \index{$\grad (u)$ \quad gradient field of $u$} \index{$\delta$ \quad divergence operator} \index{$\Delta$ \quad Laplacian} \index{$R$ \quad curvature tensor} \index{$W$ \quad Weyl tensor} \index{$\Ric$ \quad Ricci tensor} \index{$\scal$ \quad scalar curvature} \index{$\Rho$ \quad Schouten tensor} \index{$\J$} For a metric $g$ on $X$ and $u \in C^\infty(X)$, let $\grad_g(u)$ be the gradient of $u$ with respect to $g$, i.e., it holds $g(\grad_g(u),V) = \langle du,V \rangle$ for all vector fields $V \in \mathfrak{X}(X)$. $g$ defines pointwise scalar products $(\cdot,\cdot)$ and norms $|\cdot|$ on $\mathfrak{X}(X)$, on forms $\Omega^*(X)$ and on general tensors. Then $|\grad (u)|^2 = |du|^2$. In these definitions, we use the metric as a subscript if needed for clarity. $\delta^g$ is the divergence operator on differential forms or symmetric bilinear forms. On forms, it coincides with the negative adjoint $-d^*$ of the exterior differential $d$ with respect to the Hodge scalar product defined by $g$. Let $\Delta_g = \delta^g d$ be the non-positive Laplacian on $C^\infty(X)$. On the Euclidean space $\R^n$, it equals $\sum_i \partial_i^2$. In addition, $\Delta$ will also denote the Bochner-Laplacian (when acting on $L$). A metric $g$ on a manifold $X$ with boundary $M$ induces a metric $h$ on $M$. In such a setting, the curvature quantities of $g$ and $h$ will be distinguished by adding a bar to those of $g$. In particular, the covariant derivative, the curvature tensor and the Weyl tensor of $(X,g)$ are $\bar{\nabla}$, $\bar{R}$ and $\overline{W}$. Similarly, $\overline{\Ric}$ and $\overline{\scal}$ are the Ricci tensor and the scalar curvature of $g$. \index{$\bar{\nabla}$ \quad Levi-Civita connection} \index{$\overline{\Ric}_{0}$} \index{$\overline{W}$ \quad Weyl tensor} \index{$\overline{W}_0$} \index{$\W$} \index{$\overline{\scal}$ \quad scalar curvature} \index{$\bar{\J}$} A hypersurface is usually given by an embedding $\iota: M \hookrightarrow X$. Accordingly, tensors on $X$ are pulled back by $\iota^*$ to $M$. In formulas, we often omit this pull back.
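As an elementary illustration of these normalizations (included here merely as a consistency check of the conventions), consider the round sphere $S^n$, $n \ge 3$, with the metric $g$ of constant sectional curvature $1$. Then $\Ric = (n-1) g$ and $\scal = n(n-1)$, so that $$ \J = \frac{n}{2} \quad \mbox{and} \quad \Rho = \frac{1}{2} g $$ by $2(n-1)\J = \scal$ and $(n-2)\Rho = \Ric - \J g$.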
For a hypersurface $\iota: M \hookrightarrow X$ with the induced metric $h = \iota^*(g)$ on $M$, the second fundamental form $L$ is defined by $L(X,Y)= - g (\nabla^g_X(Y), N)$ for vector fields $X, Y \in \mathfrak{X}(M)$ and a unit normal vector field $\partial_0 = N$. We set $n H = \tr_h(L)$ if $M$ has dimension $n$. $H$ is the mean curvature of $M$. Let $\lo = L - H h$ be the trace-free part of $L$. We often identify $L$ with the shape operator $S$ defined by $h(X,S(Y)) = L(X,Y)$. We use metrics as usual to raise and lower indices. In particular, we set $(L^2)_{ij} = L_i^k L_{kj} = h^{lk} L_{il} L_{kj}$ and similarly for higher powers of $L$. We always sum over repeated indices. The $1$-form $\overline{\Ric}_{0} \in \Omega^1(M)$ is defined by $\overline{\Ric}_{0}(X) = \overline{\Ric}(X,\partial_0)$ for $X \in \mathfrak{X}(M)$. Similarly, we write $b_0$ for the analogous $1$-form defined by a bilinear form $b$ and we let $\overline{W}_0$ be the $3$-tensor on $M$ with components $\overline{W}_{ijk0}$, i.e., we always insert $\partial_0$ into the last slot. Moreover, we set $\W_{ij} = \overline{W}_{0ij0}$. \index{$H$ \quad mean curvature} \index{$\lo$ \quad trace-free part of $L$} \index{$L$ \quad second fundamental form} \index{$L^2$, $L^3$, $L^4$} \index{$\partial_0$ \quad unit normal vector} \section{Some second-order identities involving the second fundamental form}\label{second} In the present section, we derive formulas for $\delta \delta (\lo^2)$ and $\Delta (|\lo|^2)$ in terms of the geometry of the background and the intrinsic geometry of $M$. The second of these formulas is closely related to a well-known formula of Simons. The main results are Lemma \ref{NEW3a} and Lemma \ref{diff-key-g}. The latter result will play an important role in Section \ref{B3-general}. \begin{lem}\label{Id-basic} For $n=3$, it holds \begin{equation}\label{Id-2} \delta \delta (\lo^2) = 2 \lo_{jk} \nabla^j \delta(\lo)^k + |\nabla \lo|^2 + \frac{1}{2} |\delta(\lo)|^2 - \frac{1}{2} |\overline{W}_0|^2 + \kappa_1 \end{equation} with \begin{equation}\label{kappa-def} \kappa_1 \st (\nabla^i \nabla^j \lo^k_i - \nabla^j \nabla^i \lo^k_i) \lo_{kj}. \end{equation} \end{lem} \begin{proof} First, we calculate \begin{align*} \delta \delta (\lo^2) & = \nabla^i \nabla^j \lo^2_{ij} = (\nabla^i \nabla^j \lo_i^k) \lo_{kj} + (\nabla^j \lo_i^k)(\nabla^i \lo_{kj}) + (\nabla^i \lo_i^k) (\nabla^j \lo_{kj}) + \lo_i^k (\nabla^i \nabla^j \lo_{kj}) \\ & = (\nabla^i \nabla^j \lo^k_i) \lo_{kj} + (\nabla^j \lo_i^k)(\nabla^i \lo_{kj}) + \delta(\lo)^k \delta(\lo)_k + \lo_i^k \nabla^i \delta (\lo)_k. \end{align*} In the first term, we interchange covariant derivatives. This generates the curvature term \begin{equation*} \kappa_1 \st (\nabla^i \nabla^j \lo_i^k) \lo_{kj} - (\nabla^j \nabla^i \lo_i^k) \lo_{kj}. \end{equation*} In the second term, we apply the Codazzi-Mainardi equation: $$ \nabla_j L_{ik} - \nabla_i L_{jk} = \bar{R}_{ijk0}. $$ Its trace-free part gives \begin{equation}\label{CM-trace-free} \nabla^j \lo^k_i - \nabla_i \lo^{jk} = - \frac{1}{2} \delta(\lo)^j h_i^k + \frac{1}{2} \delta(\lo)_i h^{jk} + {{\overline{W}_i}^{jk}}_0. \end{equation} In particular, we get $$ (\nabla^j \lo_i^k - \nabla_i \lo^{jk}) \nabla^i \lo_{jk} = - \frac{1}{2} (\delta(\lo),\delta(\lo)) + {{\overline{W}_i}^{jk}}_0 \nabla^i \lo_{jk}.
$$ But \begin{equation*}\label{add-1} \overline{W}_{ijk0} \nabla^i \lo^{jk} = \frac{1}{2} \left(\overline{W}_{ijk0} - \overline{W}_{jik0} \right) \nabla^i \lo^{jk} = \frac{1}{2} \overline{W}_{ijk0} \left(\nabla^i \lo^{jk} - \nabla^j \lo^{ki} \right) = -\frac{1}{2} |\overline{W}_0|^2, \end{equation*} where $|\overline{W}_0|^2 \st \overline{W}_{ijk0} \overline{W}^{ijk0}$. Thus \begin{align*} (\nabla^j \lo_i^k) (\nabla^i \lo_{kj}) & = (\nabla^j \lo_i^k - \nabla_i \lo^{jk}) \nabla^i \lo_{kj} + (\nabla_i \lo^{jk})(\nabla^i \lo_{kj}) \\ & = (\nabla_i \lo^{jk}) (\nabla^i \lo_{kj}) - \frac{1}{2} (\delta(\lo),\delta(\lo)) - \frac{1}{2} |\overline{W}_{0}|^2 \\ & = (\nabla \lo, \nabla \lo) - \frac{1}{2} (\delta(\lo),\delta(\lo)) - \frac{1}{2} |\overline{W}_{0}|^2. \end{align*} These observations show that \begin{equation}\label{dd-eval} \delta \delta (\lo^2) = 2 \lo_{jk} \nabla^j \delta(\lo)^k + |\nabla \lo|^2 + \frac{1}{2} |\delta(\lo)|^2 - \frac{1}{2} |\overline{W}_{0}|^2 + \kappa_1 \end{equation} with $\kappa_1$ as defined in \eqref{kappa-def}. \end{proof} \begin{lem}\label{kappa-1a} \begin{equation*}\label{K1} \kappa_1 = 3 (\lo^2,\Rho) + \J |\lo|^2. \end{equation*} \end{lem} \begin{proof} By definition, we have $$ \kappa_1 = (\nabla^i \nabla^j \lo^k_i - \nabla^j \nabla^i \lo^k_i) \lo_{kj} = \Curv^{ij} (\lo)_i^k \lo_{kj}, $$ where $\Curv$ denotes the curvature operator of $M$. We also observe that $$ \Curv^{ij} (\lo)_i^k \lo_{kj} = \Curv^{ij} (L)_i^k L_{kj}. $$ Hence by $$ \Curv_{ij} (L)_{kl} = - L_l^{m} R_{ijkm} - L^m_k R_{ijlm} $$ and the decomposition \begin{equation}\label{KN} R_{ijkl} = -\Rho_{ik} h_{jl} + \Rho_{jk} h_{il} - \Rho_{jl} h_{ik} + \Rho_{il} h_{jk} \end{equation} (the Weyl tensor vanishes in dimension $3$) we get (see also Remark \ref{kappa1-2}) \begin{align}\label{kappa-curv} \kappa_1 & = - (L^m_i {R^{ij}}_{km} + L^{km} {{R^{ij}}_{im}}) L_{kj} \\ & = L^m_i (\Rho^i_k h^j_m - \Rho_k^j h_m^i + \Rho^j_m h_k^i - \Rho_m^i h_k^j) L_{kj} + L^{km} \Ric^j_m L_{kj} \notag \\ & = 2 (L^2,\Rho) - 6 H (L,\Rho) + (L^2,\Ric) \notag \\ & = 3 (L^2,\Rho) - 6 H (L,\Rho) + \J |L|^2 \notag \\ & = 3 (\lo^2,\Rho) + \J |\lo|^2. \notag \end{align} Alternatively, combining Simons' identity \eqref{S-I} with the Gauss formula for the curvature endomorphism yields the first identity in \eqref{kappa-curv}. This completes the proof. \end{proof} \begin{example}\label{kappa1-flat} For flat backgrounds, it holds $$ \kappa_1 = 3 H \tr(L^3) - |L|^4 = 3 H \tr(\lo^3) + 3 H^2 |\lo|^2 - |\lo|^4. $$ \end{example} \begin{proof} The Gauss identity gives $$ \J = -\frac{1}{4} |\lo|^2 + \frac{3}{2} H^2 $$ and the identity \begin{equation}\label{Fial} \JF \st \iota^* \bar{\Rho} - \Rho + H \lo + \frac{1}{2} H^2 h \stackrel{!}{=} \lo^2 - \frac{1}{4} |\lo|^2 h + \W \end{equation} for the conformally invariant Fialkov tensor $\JF$ of weight $0$ \cite[Lemma 6.23.3]{J1} implies $$ \Rho = - \lo^2 + \frac{1}{4} |\lo|^2 h + H \lo + \frac{1}{2} H^2 h. $$ Hence $$ 3 (\lo^2,\Rho) + \J |\lo|^2 = - 3 \tr(\lo^4) + \frac{1}{2} |\lo|^4 + 3 H \tr(\lo^3) + 3 H^2 |\lo|^2 $$ and it suffices to apply the identity $2 \tr(\lo^4) = |\lo|^4$ (Corollary \ref{trace-id}). \end{proof} Now let \begin{equation}\label{M2} \kappa_2 \st (\lo,\Delta (\lo)) - \frac{3}{2} \lo^{ij} \nabla_j \delta(\lo)_i. \end{equation} \begin{lem}\label{diff-simple} \begin{equation*} \kappa_1 - \kappa_2 = \lo^{ij} \nabla^k \overline{W}_{kij0}.
\end{equation*} \end{lem} \begin{proof} The trace-free part of the Codazzi-Mainardi equation reads \begin{equation}\label{tf-CM} \nabla_i \lo_{kj} - \nabla_k \lo_{ij} - \frac{1}{2} \delta(\lo)_k h_{ij} + \frac{1}{2} \delta(\lo)_i h_{kj} = \overline{W}_{kij0}. \end{equation} Hence $$ \nabla^k \nabla_i \lo_{kj} - \nabla^k \nabla_k \lo_{ij} - \frac{1}{2} \nabla^k \delta(\lo)_k h_{ij} + \frac{1}{2} \nabla^k \delta(\lo)_i h_{kj} = \nabla^k \overline{W}_{kij0}. $$ We commute the covariant derivatives in the first term and obtain $$ \lo^{ij} \nabla_i \delta(\lo)_j + \kappa_1 - (\lo, \Delta (\lo)) + \frac{1}{2} \lo^{ij} \nabla_j \delta(\lo)_i = \lo^{ij} \nabla^k \overline{W}_{kij0}. $$ Therefore, we get $$ \frac{3}{2} \lo^{ij} \nabla_i \delta(\lo)_j - (\lo, \Delta(\lo)) + \kappa_1 = \lo^{ij} \nabla^k \overline{W}_{kij0}. $$ In other words, we have $$ \kappa_1 - \kappa_2 = \lo^{ij} \nabla^k \overline{W}_{kij0}. $$ The proof is complete. \end{proof} One should compare this result with \cite[(2.12)]{GGHW}. \begin{cor}\label{kappa-2-form} \begin{equation}\label{K2} \kappa_2 = 3(\lo^2,\Rho) + \J |\lo|^2 - \lo^{ij} \nabla^k \overline{W}_{kij0}. \end{equation} \end{cor} \begin{cor}\label{Laplace-L} \begin{equation*} (\lo,\Delta(\lo)) = 3 (\lo,\Hess(H)) + 3 (\lo^2,\Rho) + \J |\lo|^2 + 3 \lo^{ij} \nabla_i (\bar{\Rho}_0)_j - \lo^{ij} \nabla^k \overline{W}_{kij0}. \end{equation*} \end{cor} \begin{proof} We calculate \begin{align*} (\lo,\Delta(\lo)) & = \frac{3}{2} \lo^{ij} \nabla_j \delta(\lo)_i + \kappa_2 \qquad \mbox{(by \eqref{M2})} \\ & = 3 (\lo,\Hess(H)) + 3 \lo^{ij} \nabla_j \bar{\Rho}_{0i} + \kappa_2 \qquad \mbox{(by Codazzi-Mainardi)}. \end{align*} Now we apply Corollary \ref{kappa-2-form}. \end{proof} Alternatively, we outline how Corollary \ref{Laplace-L} can be derived by using a Simons type formula. First, we prove \begin{lem}\label{pre-Simons} In general dimensions, it holds $$ \nabla_k \nabla_l (L)_{ij} = \nabla_i \nabla_j (L)_{kl} - \nabla_i \bar{R}_{kjl0} - \nabla_k \bar{R}_{lij0} + {{R_{ki}}^m}_l L_{mj} + {{R_{ki}}^m}_j L_{lm}. $$ \end{lem} \begin{proof} We start with the Codazzi-Mainardi equation $$ \nabla_i(L)_{lj} - \nabla_l (L)_{ij} = \bar{R}_{lij0}. $$ Differentiation gives $$ \nabla_k \nabla_i (L)_{lj} - \nabla_k \nabla_l (L)_{ij} = \nabla_k \bar{R}_{lij0}. $$ Now we commute the derivatives in the first term using $$ \nabla_k \nabla_i (L)_{lj} - \nabla_i \nabla_k (L)_{lj} = {{R_{ki}}^m}_l L_{mj} + {{R_{ki}}^m}_j L_{lm}. $$ Hence \begin{equation}\label{CM1} \nabla_i \nabla_k (L)_{lj} - \nabla_k \nabla_l (L)_{ij} = \nabla_k \bar{R}_{lij0} - {{R_{ki}}^m}_l L_{mj} - {{R_{ki}}^m}_j L_{lm}. \end{equation} Similarly, we differentiate the Codazzi-Mainardi equation $$ \nabla_j (L)_{kl} - \nabla_k (L)_{jl} = \bar{R}_{kjl0} $$ and obtain \begin{equation}\label{CM2} \nabla_i \nabla_j (L)_{kl} - \nabla_i \nabla_ k (L)_{jl} = \nabla_i \bar{R}_{kjl0}. \end{equation} Adding \eqref{CM1} and \eqref{CM2} proves the assertion. \end{proof} Lemma \ref{pre-Simons} implies \begin{align*} (\nabla^i \nabla^j (L)_{ik} - \nabla^j \nabla^i (L)_{ik}) L^k_j & = (\nabla^i \nabla^j (L)_{ik} - \nabla^k \nabla^i (L)_{ij}) L^k_j \quad \mbox{(by the symmetry of $L$)} \\ & = R_{ikmj} L^{mi} L^{kj} + {R_{ikm}}^i L^{jm} L^k_j. \end{align*} One can easily check that this identity also holds if $L$ is replaced by $\lo$. This reproves \eqref{kappa-curv}. 
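For a flat background, all terms involving the ambient curvature vanish, and Corollary \ref{Laplace-L} combined with Example \ref{kappa1-flat} specializes to $$ (\lo,\Delta(\lo)) = 3 (\lo,\Hess(H)) + 3 H \tr(\lo^3) + 3 H^2 |\lo|^2 - |\lo|^4. $$ We record this identity here (it is only a reformulation of results already proved) for later comparison with the Simons identities below.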
Now taking a trace in Lemma \ref{pre-Simons} gives \begin{lem}\label{pre-Simons-trace} In general dimensions, it holds $$ \Delta (L)_{ij} = n \Hess_{ij}(H) - \nabla^k \bar{R}_{kij0} + \nabla_i (\overline{\Ric}_0)_j + {{R_{ik}}^k}_m L_j^m - R_{kijm} L^{km}. $$ \end{lem} In particular, this gives a \begin{proof}[Second proof of Corollary \ref{Laplace-L}] Lemma \ref{pre-Simons-trace} and $(L,\Delta (L)) = (\lo,\Delta(\lo)) + 3 H \Delta (H)$ imply \begin{align*} (\lo,\Delta(\lo)) & = 3 (\lo,\Hess (H)) + L^{ij} \nabla_i (\overline{\Ric}_0)_j - L^{ij} \nabla^k \bar{R}_{kij0} + L^{ij} {{R_{ik}}^k}_m L_j^m - L^{ij} R_{kijm} L^{km} \\ & = 3 (\lo,\Hess (H)) + (L^2)^{im} {{R_{ik}}^k}_m - L^{ij} L^{kl} R_{kijl} + L^{ij} \nabla_i (\overline{\Ric}_0)_j - L^{ij} \nabla^k \bar{R}_{kij0}. \end{align*} Now \begin{equation*} (L^2)^{im} {{R_{ik}}^k}_m = (L^2)^{ij} \Ric_{ij} = (L^2,\Rho) + \J |L|^2, \end{equation*} and \eqref{KN} implies \begin{align*} L^{ij} L^{kl} R_{kijl} & = - 2 (L^2,\Rho) + 6 H (L,\Rho). \end{align*} Hence \begin{align*} (\lo,\Delta(\lo)) = 3 (\lo,\Hess (H)) + 3 (L^2,\Rho) + \J |L|^2 - 6 H (L,\Rho) + 2 L^{ij} \nabla_i (\bar{\Rho}_0)_j - L^{ij} \nabla^k \bar{R}_{kij0}. \end{align*} In order to simplify this formula, we note that $$ 3 (L^2,\Rho) + \J |L|^2 - 6 H (L,\Rho) = 3 (\lo^2,\Rho) + \J |\lo|^2 $$ and $$ \nabla^k \bar{R}_{kij0} = \nabla^k (\bar{\Rho}_0)_k h_{ij} - \nabla_j (\bar{\Rho}_0)_i + \nabla^k \overline{W}_{kij0}. $$ Hence \begin{align*} (\lo,\Delta(\lo)) = 3 (\lo,\Hess (H)) + 3 (\lo^2,\Rho) + \J |\lo|^2 + 3 L^{ij} \nabla_i (\bar{\Rho}_0)_j - 3H \nabla^k (\bar{\Rho}_0)_k - L^{ij} \nabla^k \overline{W}_{kij0}. \end{align*} The last term is unchanged if we replace $L$ by $\lo$. This completes the proof. \end{proof} Combining Lemma \ref{pre-Simons} with some arguments using the Gauss formula relating the curvature tensors of $X$ and $M$ leads to the following well-known identities, which are due to Simons \cite{Simons,SchSY,HP,V}. \begin{prop}\label{S-Id-g} For any hypersurface $M^n \hookrightarrow X^{n+1}$ with the second fundamental form $L$, it holds \begin{align}\label{S-I} \nabla_i \nabla_j (L)_{kl} & = \nabla_k \nabla_l (L)_{ij} + L_{ij} L^2_{kl} - L_{kl} L^2_{ij} + L_{il} L^2_{jk} - L_{jk} L^2_{il} \notag \\ & - L_i^m \bar{R}_{jklm} - L_j^m \bar{R}_{iklm} + L_k^m \bar{R}_{lijm} + L_l^m \bar{R}_{kijm} \notag \\ & + L_{ij} \bar{R}_{0kl0} - L_{kl} \bar{R}_{0ij0} + \bar{\nabla}_i (\bar{R})_{kjl0} + \bar{\nabla}_k (\bar{R})_{lij0}. \end{align} Taking a trace gives \begin{align}\label{S-II} \Delta (L)_{ij} & = n \Hess_{ij}(H) + n H L^2_{ij} - L_{ij} |L|^2\notag \\ & + L^s_j \bar{R}_{ikks} + L_i^s \bar{R}_{jkks} - 2 L^{rs} \bar{R}_{rijs} \notag \\ & + n H \bar{R}_{0ij0} - L_{ij} \overline{\Ric}_{00} + \bar{\nabla}_k (\bar{R})_{ikj0} + \bar{\nabla}_i (\bar{R})_{jkk0}. \end{align} \end{prop} For flat backgrounds, Proposition \ref{S-Id-g} specializes to \begin{prop}\label{S-Id-flat} For any hypersurface $M^n \hookrightarrow \R^{n+1}$ with the second fundamental form $L$, it holds \begin{equation}\label{S-1} \nabla_i \nabla_j (L)_{kl} = \nabla_k \nabla_l(L)_{ij} + L_{ij} L^2_{kl} - L_{kl} L^2_{ij} + L_{il} L^2_{kj} - L_{kj} L^2_{il}. \end{equation} Hence \begin{equation}\label{S-2} \Delta (L) = n \Hess (H) +n H L^2 - L |L|^2 \end{equation} and \begin{equation}\label{S-2a} \frac{1}{2} \Delta (|L|^2) = n (L,\Hess (H)) + |\nabla L|^2 + n H \tr (L^3) - |L|^4.
\end{equation} \end{prop} \begin{rem}\label{kappa1-2} The first part of Proposition \ref{S-Id-g} again confirms Lemma \ref{kappa-1a}. In fact, by the symmetry of $L$, we obtain $$ (\nabla^i \nabla^j (L)_{ki} - \nabla^j \nabla^i (L)_{ki}) L^k_j = (\nabla^i \nabla^j (L)_{ki} - \nabla^k \nabla^i (L)_{ji}) L^k_j. $$ In this identity one can replace $L$ by $\lo$. Hence \eqref{S-I} implies $$ \kappa_1 = (L^2)^{kl} \bar{R}_{kiil} - L^{il} L^{jk} \bar{R}_{kilj}. $$ By the Gauss identity, we obtain \begin{align*} \kappa_1 & = 3 H \tr(L^3) - |L|^4 + (L^2)^{kl} R_{kiil} - L^{il} L^{jk} R_{kilj} \\ & + (L^2)^{kl} (L_{ki} L_{il} - L_{kl} L_{ii}) - L^{il} L^{jk} (L_{kl} L_{ij} - L_{kj} L_{il}) \\ & = (L^2)^{kl} R_{kiil} - L^{il} L^{jk} R_{kilj}. \end{align*} The remaining arguments are as in the proof of Lemma \ref{kappa-1a}. \end{rem} \begin{rem}\label{Simons-flat} For flat backgrounds, it holds $\kappa_2 = \kappa_1 = 3 H \tr(L^3) - |L|^4$ (Example \ref{kappa1-flat}) and the above results yield \begin{equation*}\label{Id-1-flat} \frac{1}{2} \Delta (|\lo|^2) = 3 (\lo,\Hess(H)) + |\nabla \lo|^2 + 3 H \tr(L^3) - |L|^4 \end{equation*} and \begin{equation*}\label{Id-2-flat} \delta \delta (\lo^2) = 4 (\lo,\Hess(H)) + |\nabla \lo|^2 + 2 |dH|^2 + 3H \tr(L^3) - |L|^4. \end{equation*} As a consequence, we find the difference formula \begin{equation}\label{basic-div} \frac{1}{2} \Delta (|\lo|^2) - \delta \delta (\lo^2) = - (\lo,\Hess(H)) - 2 |dH|^2. \end{equation} \end{rem} Now combining Lemma \ref{Id-basic} with \eqref{M2}, we obtain \begin{align*} \delta \delta (\lo^2) & = \frac{4}{3} (\lo,\Delta (\lo)) + |\nabla \lo|^2+ \frac{1}{2} |\delta (\lo)|^2 - \frac{1}{2} |\overline{W}_{0}|^2 - \frac{4}{3} \kappa_2 + \kappa_1. \end{align*} Hence \begin{align}\label{NEW} \delta \delta ((\lo^2)_\circ) & = \delta \delta (\lo^2) - \frac{1}{3} \Delta (|\lo|^2) \notag \\ & = \delta \delta (\lo^2) - \frac{2}{3} |\nabla \lo|^2 - \frac{2}{3} (\lo,\Delta (\lo)) \notag \\ & = \frac{2}{3} (\lo,\Delta (\lo)) + \frac{1}{3} |\nabla \lo|^2 + \frac{1}{2} |\delta (\lo)|^2 - \frac{1}{2}|\overline{W}_{0}|^2 - \frac{1}{3} \kappa_1 + \frac{4}{3}(\kappa_1-\kappa_2). \end{align} Thus, using $$ (\Rho,(\lo^2)_\circ) = (\Rho,\lo^2) - \frac{1}{3} \J |\lo|^2, $$ Lemma \ref{kappa-1a} and Lemma \ref{diff-simple}, we obtain \begin{lem}\label{NEW3a} \begin{align*} \delta \delta ((\lo^2)_\circ) + (\Rho,(\lo^2)_\circ) & = \frac{2}{3} (\lo,\Delta \lo) + \frac{1}{3} |\nabla \lo|^2 + \frac{1}{2} |\delta (\lo)|^2 - \frac{2}{3} \J |\lo|^2 \\ & - \frac{4}{3}\lo^{ij} \nabla^k (\overline{W}_0)_{kij} - \frac{1}{2} |\overline{W}_{ikj0}|^2. \end{align*} \end{lem} Lemma \ref{NEW3a} confirms \cite[Proposition 2.4]{GGHW} up to the sign of the term $|\overline{W}_{0}|^2$. The following result extends the difference formula \eqref{basic-div} to general backgrounds. It will play an important role in Section \ref{B3-general}. \begin{lem}\label{diff-key-g} It holds \begin{equation}\label{basic-diff-g} \Delta (|\lo|^2) - 2 \delta \delta (\lo^2) = - 2 (\lo,\Hess(H)) - 2(\lo,\nabla (\bar{\Rho}_0)) - |\delta(\lo)|^2 - 2 \lo^{ij} \nabla^k \overline{W}_{kij0} + |\overline{W}_{0}|^2. 
\end{equation} \end{lem} \begin{proof} We recall that \begin{align*}\label{Simons-I} \Delta (|\lo|^2) & = 2 (\lo,\Delta(\lo)) + 2 |\nabla (\lo)|^2 \notag \\ & = 6 (\lo,\Hess(H)) + 2 \kappa_1 + 6 (\lo,\nabla (\bar{\Rho}_{0})) - 2 \lo^{ij} \nabla^k \overline{W}_{kij0} + 2 |\nabla (\lo)|^2 \end{align*} (by Lemma \ref{kappa-1a} and Corollary \ref{Laplace-L}) and \begin{equation*}\label{Simons-II} 2 \delta \delta (\lo^2) = 8 (\lo, \Hess(H)) + 8 (\lo,\nabla (\bar{\Rho}_0)) + 2 |\nabla (\lo)|^2 + |\delta(\lo)|^2 - |\overline{W}_{0}|^2 + 2 \kappa_1 \end{equation*} (by \eqref{dd-eval} and $\delta(\lo) = 2 dH + 2 \bar{\Rho}_0$ (Codazzi-Mainardi)). The difference of both sums equals \begin{equation}\label{diff-ex} -2 (\lo,\Hess(H)) - 2 (\lo,\nabla (\bar{\Rho}_0)) - |\delta(\lo)|^2 - 2 \lo^{ij} \nabla^k \overline{W}_{kij0} + |\overline{W}_{0}|^2. \end{equation} The proof is complete. \end{proof} Note that the left-hand side of \eqref{basic-diff-g} is a total divergence, i.e., integrates to $0$ on a closed $M$. The fact that the sum of the first three terms on the right-hand side of \eqref{basic-diff-g} is a total divergence follows by partial integration and the Codazzi-Mainardi equation. In fact, for closed $M$, we calculate \begin{align*} & \int_M -2 (\lo,\Hess(H)) - 2(\lo,\nabla (\bar{\Rho}_0)) - |\delta(\lo)|^2 dvol_h \\ & = \int_M 2 (\delta(\lo),dH) + 2( \delta(\lo),\bar{\Rho}_0) - |\delta (\lo)|^2 dvol_h = 0 \end{align*} by $2dH + 2\bar{\Rho}_0 = \delta(\lo)$. The fact that the additional terms on the right-hand side of \eqref{basic-diff-g} also form a total divergence can be seen as follows. Partial integration gives \begin{align*} - 2 \int_M \lo^{ij} \nabla^k \overline{W}_{kij0} dvol_h = 2 \int_M \nabla^k (\lo)^{ij} \overline{W}_{kij0} dvol_h. \end{align*} By the trace-free part of the Codazzi-Mainardi equation \label{total} $$ \nabla_k (\lo)_{ij} - \nabla_i (\lo)_{kj} - \frac{1}{2} \delta(\lo)_i h_{kj} + \frac{1}{2} \delta(\lo)_k h_{ij} = \overline{W}_{ikj0} = - \overline{W}_{kij0} $$ and partial integration, this integral equals \begin{align*} & 2 \int_M \nabla^i (\lo)^{kj} \overline{W}_{kij0} dvol_h - 2 \int_M \overline{W}^{kij0} \overline{W}_{kij0} dvol_h \\ & = - 2 \int_M \lo^{kj} \nabla^i \overline{W}_{kij0} dvol_h - 2 \int_M |\overline{W}_{kij0}|^2 dvol_h. \end{align*} Hence $$ \int_M (-4 \lo^{ij} \nabla^k \overline{W}_{kij0} + 2 |\overline{W}_{kij0}|^2) dvol_h = 0. $$ This proves the claim. \section{The singular Yamabe problem and the obstruction}\label{SYP} The material in this section rests on \cite{ACF} and \cite{GW-LNY}. Let $(X^{n+1},g)$ be a compact manifold with boundary $M$ of dimension $n$. The singular Yamabe problem asks for a defining function $\sigma$ of $M$ such that \begin{equation}\label{syp} \scal (\sigma^{-2}g) = -n(n+1). \end{equation} The conformal transformation law of scalar curvature shows that $$ \scal(\sigma^{-2}g) = -n(n+1) |d\sigma|_g^2 + 2n \sigma \Delta_g(\sigma) + \sigma^2 \scal(g). $$ Following \cite{GW-LNY}, we write this equation in the form $$ \scal(\sigma^{-2}g) = -n(n+1) \SC(g,\sigma), $$ where \index{$\SC(g,\sigma)$} $$ \SC(g,\sigma) \st |d\sigma|_g^2 + 2 \rho \sigma, \quad (n+1) \rho \st - \Delta_g(\sigma) - \sigma \J \quad \mbox{and} \quad 2n \J = \scal(g). $$ In these terms, $\sigma$ is a solution of \eqref{syp} iff $\SC(g,\sigma)=1$. Although such a $\sigma$ exists and is unique, in general it is not smooth up to the boundary.
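Before we turn to the formal solution, we recall the standard model case; the following elementary example is included only as a sanity check of the normalizations. \begin{example} Let $X = \{x \in \R^{n+1} : |x| \le 1\}$ with the flat metric $g$ and let $\sigma = \frac{1}{2} (1-|x|^2)$. Then $\J = 0$ and $\Delta_g (\sigma) = -(n+1)$, so that $(n+1) \rho = n+1$, i.e., $\rho = 1$. Hence $$ \SC(g,\sigma) = |d\sigma|_g^2 + 2 \rho \sigma = |x|^2 + (1-|x|^2) = 1, $$ and $\sigma^{-2} g$ is the Poincar\'e metric of constant scalar curvature $-n(n+1)$ on the open unit ball. In this case, the solution $\sigma$ is smooth (indeed polynomial) up to the boundary, and accordingly the obstruction vanishes. \end{example}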
The smoothness is obstructed by a locally determined conformally invariant scalar function on $M$ which is called the singular Yamabe obstruction. In order to describe the structure of a solution $\sigma$ of the singular Yamabe problem more precisely, we use geodesic normal coordinates. Let $r$ be the distance function of $M$ for the background metric $g$. Then there are uniquely determined coefficients $\sigma_{(k)} \in C^\infty(M)$ for $2 \le k \le n+1$ so that the smooth defining function \begin{equation}\label{sigma-finite} \sigma_F \st r + \sigma_{(2)} r^2 + \dots + \sigma_{(n+1)} r^{n+1} \end{equation} satisfies \begin{equation}\label{Yamabe-finite} \SC(g,\sigma_F) = 1 + R r^{n+1} \end{equation} with a smooth remainder term $R$. The coefficients are recursively determined. In geodesic normal coordinates, the metric $g$ takes the form $dr^2 + h_r$ with a one-parameter family $h_r$ of metrics on $M$. The condition \eqref{Yamabe-finite} is equivalent to $$ |d\sigma_F|_g^2 - \frac{2}{n+1} \sigma_F \Delta_g(\sigma_F) - \frac{1}{n(n+1)} \sigma_F^2 \scal(g) = 1 + R r^{n+1}. $$ We write the left-hand side of this equation in the form \begin{align}\label{Y-F} & \partial_r(\sigma_F)^2 + h_r^{ij} \partial_i (\sigma_F) \partial_j (\sigma_F) \notag \\ & - \frac{2}{n+1} \sigma_F \left (\partial_r^2 (\sigma_F) + \frac{1}{2} \tr (h_r^{-1} h_r') \partial_r (\sigma_F) + \Delta_{h_r} (\sigma_F) \right) - \frac{1}{n(n+1)} \sigma_F^2 \scal(g) \end{align} and expand this sum into a Taylor series in the variable $r$. Then the vanishing of the coefficient of $r^k$ for $k \le n$ is equivalent to an identity of the form $$ (k-1-n) \sigma_{(k+1)} = LOT, $$ where $LOT$ involves only lower-order Taylor coefficients of $\sigma$. The latter relation also indicates that there is a possible obstruction to the existence of an improved solution $\sigma_F'$ which contains a term $\sigma_{(n+2)} r^{n+2}$ and satisfies $\SC(g,\sigma_F') = 1 + R r^{n+2}$. Following \cite{ACF}, we define the {\em singular Yamabe obstruction} by \index{$\B_n$ \quad singular Yamabe obstruction} \begin{equation}\label{B-def} \B_n \st \left( r^{-n-1} (\SC(g,\sigma_F) - 1) \right)|_{r=0} . \end{equation} Since $\sigma_F$ is determined by $g$, $\B_n$ is a functional of $g$. It is a key result that $\B_n$ is a conformal invariant of $g$ of weight $-(n+1)$. More precisely, we write $\hat{\B}_n$ for the obstruction defined by $\hat{g}=e^{2\varphi} g$ with $\varphi \in C^\infty(X)$. Then \begin{lem}\label{B-CTL} $e^{(n+1) \iota^*(\varphi)} \hat{\B}_n = \B_n$. \end{lem} \index{$S_k$} Let us be a bit more precise about the above algorithm. We set $S_k = \sum_{j=1}^k \sigma_{(j)} r^j$ with $\sigma_{(1)} = 1$, so that $S_{n+1} = \sigma_F$. Then the coefficients of $\sigma_F$ are recursively determined by the conditions \begin{equation*} \SC(S_k) = 1 + O(r^k). \end{equation*} More precisely, we recursively find $$ \SC(S_k) = 1 + r^{k-1} (c (n-k+2) \sigma_{(k)} + \cdots) + \cdots $$ with $c = 2k/(n+1)$. Then the condition $\SC(S_k) -1 = O(r^k)$ with an unknown coefficient $\sigma_{(k)}$ is satisfied iff the coefficient of $r^{k-1}$ in this expansion vanishes. This can be solved for $\sigma_{(k)}$ if $k =2,3,\dots,n+1$. In the case $k=n+1$, we obtain $$ \SC(S_{n+1}) = 1 + O(r^{n+1}) $$ and the restriction of $r^{-(n+1)}(\SC(S_{n+1})-1)$ to $r=0$ is the obstruction $\B_n$. In the following, we shall need explicit formulas for the coefficients $\sigma_{(k)}$ for $k \le 4$. First, we consider flat backgrounds.
We approximately solve the equation $\SC(g,\sigma_F) = 1$ for the flat metric $g$ by differentiation of the relation \begin{align}\label{Y-F-flat} \partial_r(\sigma_F)^2 + h_r^{ij} \partial_i (\sigma_F) \partial_j (\sigma_F) - \frac{2}{n+1} \sigma_F \left (\partial_r^2 (\sigma_F) + \frac{1}{2} \tr (h_r^{-1} h_r') \partial_r (\sigma_F) + \Delta_{h_r} (\sigma_F) \right) = 0 \end{align} in the variable $r$. Then, for general $n \ge 3$, we find the solution \begin{equation}\label{sigma-F-flat} \sigma_F = r + \frac{r^2}{2} H - \frac{r^3}{3(n-1)} |\lo|^2 + r^4 \sigma_{(4)} + \cdots \end{equation} with the coefficient \begin{equation}\label{sigma4-flat} \sigma_{(4)} = \frac{1}{24(n-2)} \left(6 \tr(\lo^3) + \frac{7n-11}{n-1} H |\lo|^2 + 3 \Delta(H)\right). \end{equation} Note that $\sigma_{(3)}$ is singular for $n=1$ and $\sigma_{(4)}$ is singular for $n=2$. In particular, we have \begin{equation*} \sigma_F = r + \frac{r^2}{2} H - \frac{r^3}{6} |\lo|^2 + \frac{r^4}{24} \left( 6 \tr (\lo^3) + 5 H |\lo|^2 + 3 \Delta (H) \right) + \cdots \end{equation*} if $n=3$ (\cite[(2.16)-(2.18)]{GG}). These results are determined by the conditions \begin{align*} \SC(S_2) = 1 + O(r^2), \quad \SC(S_3) = 1 + O(r^3), \quad \SC(S_4) = 1 + O(r^4). \end{align*} In particular, the obstructions $\B_2$ and $\B_3$ are the restrictions of the remainder terms in the second and the third expansions. More precisely, for $\B_2$ ($n=2$), we find \begin{equation}\label{B2-new} \B_2 = (r^{-3}(\SC(S_3) -1))|_0 = -\frac{1}{3} (H |\lo|^2 + \Delta (H)) - \frac{2}{3} \tr(\lo^3). \end{equation} Since for $n=2$ the term $\tr(\lo^3)$ vanishes, we get $$ \B_2 = -\frac{1}{3} (H |\lo|^2 + \Delta(H)) . $$ Similarly, for $n=3$, we get \begin{align*} \B_3 & = (r^{-4}(\SC(S_4) -1))|_0 \\ & = \frac{1}{12} (|\lo|^4 - 6 H \Delta(H) + \Delta (|\lo|^2) + 6 H \tr (\lo^3) + 3 |dH|^2 - 3 \Delta' (H)), \end{align*} where $\Delta_{h_r} = \Delta_h + r \Delta'_h + \cdots$. By the variation formula $\Delta'(u) = -2 (L,\Hess(u)) - 3 (dH,du)$ (see the proof of Lemma \ref{last-line}), this leads to \begin{equation}\label{B3-flat} 12 \B_3 = \Delta (|\lo|^2) + 6 (\lo,\Hess(H)) + |\lo|^4 + 6 H \tr(\lo^3) + 12 |dH|^2. \end{equation} For a general background, we shall express the coefficients $\sigma_{(k)}$ in terms of the volume coefficients $v_k$ of $h_r$, which are defined by the expansion $$ v(r) = \sum_{k\ge 0} r^k v_k $$ of $$ v(r) \st dvol(h_r)/dvol(h) = ( \det (h_r)/\det(h) )^{\frac{1}{2}}. $$ This is convenient since the identity \begin{equation}\label{trace-vol} \frac{v'(r)}{v(r)} = \frac{1}{2} \tr (h_r^{-1} h_r') \end{equation} provides natural formulas for the expansion of the coefficient $\tr (h_r^{-1} h_r')$ in \eqref{Y-F}. Note that for the background $\R^{n+1}$ it holds \begin{equation}\label{vol-flat} v(r) = \det (\id + r L) \end{equation} (see \cite[Section 3.4]{Gray}). Hence $v_{n+1} = 0$ in this case. For a general background, however, the volume coefficient $v_{n+1}$ does not vanish. \index{$v_k$ \quad volume coefficients} The calculation of the remainder term in the expansion of $\SC(S_k)$ requires $k$ normal derivatives of the equation \eqref{Y-F-flat}. Since the expansion of $S_k$ has a vanishing zeroth-order term, this amounts to taking $k-1$ derivatives of the trace term. In turn, this involves volume coefficients of the metric $h_r$ up to order $k$. In general, we find $$ \B_n \st (r^{-(n+1)}(\SC(\sigma_F) -1))|_0 = (\cdots) - 2v_{n+1}. $$ In particular, $\B_2$ involves the coefficient $v_3$ and $\B_3$ involves $v_4$.
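As a quick plausibility check of the flat formulas (added here for the reader's convenience), consider a round sphere $S^n \hookrightarrow \R^{n+1}$. Then $\lo = 0$ and $H$ is constant, so \eqref{B2-new} and \eqref{B3-flat} give $$ \B_2 = 0 \quad \mbox{and} \quad \B_3 = 0, $$ in accordance with the explicit smooth solution of the singular Yamabe problem on the unit ball discussed in Section \ref{SYP}.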
Now, the above algorithm shows that, for general backgrounds and in general dimensions, the coefficients $\sigma_{(k)}$ are given by the formulas \begin{align}\label{Y-sol-g} \sigma_{(2)} & = \frac{1}{2n} v_1, \notag \\ \sigma_{(3)} & = \frac{2}{3(n-1)} v_2 - \frac{1}{3n} v_1^2 + \frac{1}{3(n-1)} \bar{\J}, \notag \\ \sigma_{(4)} & = \frac{3}{4(n-2)} v_3 - \frac{9n^2-20n+7}{12n(n-1)(n-2)} v_1 v_2 + \frac{6n^2-11n+1}{24n^2(n-2)} v_1^3 \notag \\ & + \frac{2n-1}{6n(n-1)(n-2)} v_1 \bar{\J} + \frac{1}{4(n-2)} \bar{\J}' + \frac{1}{4(n-2)} \Delta (\sigma_{(2)}). \end{align} The observation that $\sigma_{(4)}$ has a (formal) pole at $n=2$ reflects the fact that there is no approximate solution $\sigma_F$ up to order $r^4$ in that dimension. Similarly, $\sigma_{(5)}$ has a pole at $n=3$; we shall not display an explicit formula for $\sigma_{(5)}$, however. The obstruction to the existence of a smooth solution in $n=3$ is defined in terms of $\SC(S_4)$. In the flat case, the identity \eqref{vol-flat} implies that the volume coefficients $v_k$ are given by the elementary symmetric polynomials $\sigma_k(L)$ in the eigenvalues of the shape operator defined by $L$. Hence Newton's formulas show that \begin{align}\label{vol-flat-N} v_1 & = n H, \notag \\ v_2 & = \frac{1}{2} (n H)^2 -\frac{1}{2} |L|^2, \notag \\ v_3 & = \frac{1}{6} (n H)^3 - \frac{1}{2} n H |L|^2 + \frac{1}{3} \tr(L^3). \end{align} A combination of these formulas with \eqref{Y-sol-g} reproduces the expressions in \eqref{sigma-F-flat}, \eqref{sigma4-flat}. \section{The singular Yamabe obstruction $\B_2$}\label{B2-cl} In this section, we derive an explicit formula for the obstruction $\B_2$ from its definition in Section \ref{SYP}. This reproves a result in \cite{ACF}. We also briefly recall the relation to the conformal Willmore functional $\mathcal{W}_2$. Let $n=2$. The formula for $\B_2$ in terms of volume coefficients reads \begin{equation}\label{B2-new-g} \B_2 \st (r^{-3}(\SC(S_3) -1))|_0 = - 2 v_3 - \frac{1}{12} v_1^3 + \frac{1}{3} v_1 v_2 - \frac{2}{3} \Delta (\sigma_{(2)}) - \frac{2}{3} v_1\bar{\J} - \frac{2}{3} \bar{\J}'. \end{equation} We recall that, for $n=2$, the coefficient $v_3$ vanishes in the flat case but not in the curved case. In the flat case, this formula reduces to \eqref{B2-new}. The proof follows easily using $v_1 = 2H$, $v_2 = H^2 - |\lo|^2/2$ (see \eqref{vol-flat-N}) and $\sigma_{(2)} = H/2$. In the general case, the formulas for the volume coefficients in Lemma \ref{v-coeff-3} imply \begin{align*} \B_2 & = \left( \frac{1}{3} \bar{\nabla}_0(\overline{\Ric})_{00} - \frac{1}{6} \overline{\scal}' \right) - \frac{1}{3} H \overline{\scal} + H \overline{\Ric}_{00} - \frac{2}{3} (\lo,\bar{\G}) - \frac{1}{3} \Delta (H) - \frac{1}{3} H |\lo|^2 \end{align*} using $\tr(\lo^3)=0$ in dimension $n=2$. Now the second Bianchi identity implies $$ \frac{1}{3} \bar{\nabla}_0(\overline{\Ric})_{00} - \frac{1}{6} \overline{\scal}' = - \frac{1}{3} \delta (\overline{\Rho}_0) + \frac{1}{3} (\lo,\bar{\Rho}) - H \overline{\Ric}_{00} + \frac{1}{3} H \overline{\scal} $$ (see \cite[(13.6.5)]{JO}). Hence, using $(\lo,\bar{\G}) = (\lo,\bar{\Rho})$, we find \begin{equation}\label{B2-g} \B_2 = -\frac{1}{3} ( \Delta (H) + H |\lo|^2 + \delta (\overline{\Rho}_0) + (\lo,\overline{\Rho}) ). \end{equation} By the Codazzi-Mainardi relation $dH = \delta(\lo) - \overline{\Ric}_0$, this formula is equivalent to $$ \B_2 = -\frac{1}{3} (\delta \delta (\lo) + H |\lo|^2 + (\lo,\bar{\Rho})). $$ The latter formula for the obstruction was first derived in \cite[Theorem 1.3]{ACF}.
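For the reader's convenience, we spell out the elementary step behind this last equivalence; the computation only uses the conventions fixed in Section \ref{not}. Since the ambient space is three-dimensional, we have $\overline{\Ric} = \bar{\Rho} + \bar{\J} g$ and hence $\overline{\Ric}_0 = \overline{\Rho}_0$. Therefore, the Codazzi-Mainardi relation $dH = \delta(\lo) - \overline{\Ric}_0$ yields $$ \Delta (H) + \delta (\overline{\Rho}_0) = \delta (dH) + \delta (\overline{\Ric}_0) = \delta \delta (\lo), $$ which transforms \eqref{B2-g} into the last display.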
\begin{rem}\label{residue} The coefficient $\sigma_{(4)}$ has a simple (formal) pole in $n=2$. Moreover, \eqref{Y-sol-g} implies $$ \res_{n=2}(\sigma_{(4)}) = \frac{3}{4} v_3 + \frac{1}{32} v_1^3 - \frac{1}{8} v_1 v_2 + \frac{1}{4} v_1 \bar{\J} + \frac{1}{4} \bar{\J}' + \frac{1}{4} \Delta (\sigma_{(2)}). $$ This implies the residue formula \begin{equation*}\label{res-f2} \res_{n=2}(\sigma_{(4)}) = - \frac{3}{8} \B_2, \end{equation*} which is a special case of \cite[Lemma 16.3.9]{JO}. \end{rem} Let $K$ be the Gauss curvature of a surface $M \hookrightarrow \R^3$. By $2(H^2-K) = |\lo|^2$, the equation $\Delta(H) + H|\lo|^2 = 0$ is equivalent to $$ \Delta(H) + 2 H(H^2-K) = 0. $$ This equation is well-known as the Willmore equation for a surface $M$. It is the Euler-Lagrange equation of the Willmore functional $$ \mathcal{W}_2 = \int_M |\lo|^2 dvol_h $$ for variations of the embedding of $M$ \cite[Section 7.4]{Will}. In other words, $\B_2$ provides the Euler-Lagrange equation of $\mathcal{W}_2$. This fact extends to the curved case (for details see \cite[Section 13.9]{JO}). \index{$\mathcal{W}_2$ \quad conformal Willmore functional} \section{The singular Yamabe obstruction $\B_3$}\label{B3-general} In this section, we determine explicit formulas for the obstruction $\B_3$. We shall start by expressing the definition \eqref{B-def} in terms of volume coefficients of the background metric and two normal derivatives of the scalar curvature. We simplify that result by repeated applications of the second Bianchi identity. A sequence of further transformations finally leads to Theorem \ref{main1}. \subsection{$\B_3$ in terms of volume coefficients}\label{B3-vol} For $n=3$, the formulas in \eqref{Y-sol-g} read \begin{align*} \sigma_{(2)} = \frac{1}{6} v_1, \quad \sigma_{(3)} = \frac{1}{9} (3 v_2 - v_1^2) + \frac{1}{6} \bar{\J} \end{align*} and \begin{equation*} \sigma_{(4)} = \frac{1}{108} (81 v_3 -42 v_1 v_2 + 11 v_1^3) + \frac{5}{36} v_1 \bar{\J} + \frac{1}{4} \bar{\J}' + \frac{1}{4} \Delta (\sigma_{(2)}). \end{equation*} These quantities define $S_4$. We also recall the expansion $\Delta_{h_r} = \Delta_h + r \Delta_h' + \cdots$. In these terms, we obtain \index{$\Delta'$} \begin{lem}\label{B3-volume} It holds \begin{align}\label{B3-start} \B_3 \st (r^{-4}(\SC(S_4) -1))|_0 & = - 2v_4 + \frac{1}{2} v_1 v_3 + \frac{1}{3} v_2^2 - \frac{7}{18} v_1^2 v_2 + \frac{2}{27} v_1^4 \notag \\ & - \frac{1}{3} \bar{\J} v_2 - \frac{5}{12} \bar{\J}' v_1 - \frac{1}{4} \bar{\J}'' \notag \\ & - \frac{1}{2} \Delta (\sigma_{(3)}) - \frac{1}{3} v_1 \Delta(\sigma_{(2)}) - \frac{1}{2} \Delta' (\sigma_{(2)}) + |d\sigma_{(2)}|^2. \end{align} \end{lem} This result follows by direct evaluation of the definition of $\B_3$. We omit the details. In the remaining part of this section, we evaluate this formula. First of all, we calculate the last line in \eqref{B3-start}. \begin{lem}\label{last-line} It holds \begin{align*} & - \frac{1}{2} \Delta (\sigma_{(3)}) - \frac{1}{3} v_1 \Delta(\sigma_{(2)}) - \frac{1}{2} \Delta' (\sigma_{(2)}) + |d\sigma_{(2)}|^2 \\ & = \frac{1}{12} \Delta (|\lo|^2) + \frac{1}{2} (\lo,\Hess(H)) + \frac{1}{6} \Delta (\bar{\Rho}_{00}) + \frac{1}{2} (dH,\overline{\Ric}_0) + |dH|^2. \end{align*} \end{lem} \begin{proof} We recall that $v_1 = 3 H$.
By \begin{align*} \sigma_{(2)} = \frac{1}{2} H \quad \mbox{and} \quad \sigma_{(3)} = \frac{1}{6} (-|\lo|^2 - 2 \bar{\Rho}_{00}), \end{align*} we obtain \begin{align*} & - \frac{1}{2} \Delta (\sigma_{(3)}) - \frac{1}{3} v_1 \Delta(\sigma_{(2)}) - \frac{1}{2} \Delta' (\sigma_{(2)}) + |d\sigma_{(2)}|^2 \\ & = \frac{1}{12} \Delta (|\lo|^2) + \frac{1}{6} \Delta (\bar{\Rho}_{00}) - \frac{1}{2} H \Delta H - \frac{1}{4} \Delta' (H) + \frac{1}{4} |dH|^2. \end{align*} Now the variation formula \cite[Proposition 1.184]{Besse} \begin{align}\label{vDelta} (d/dt)|_0(\Delta_{g+th}(u)) = - (\nabla^g (du),h)_g - (\delta_g(h),du)_g + \frac{1}{2} (d (\tr_g(h)),du)_g \end{align} for the Laplacian implies (for $h = 2L$ and $g=h$) $$ \Delta' (u) = - 2 (L,\Hess(u)) - 2 (\delta(L), du) + 3 (dH,du). $$ By Codazzi-Mainardi, it holds $\delta(L) = 3dH + 2 \bar{\Rho}_0 = 3 dH + \overline{\Ric}_0$. Hence $$ \Delta'(u) = - 2 (L,\Hess(u)) - 3 (dH,du) - 2 (\overline{\Ric}_0,du). $$ These results imply the assertion. \end{proof} \subsection{The volume coefficients}\label{vol-c} The volume coefficients $v_j$ can be expressed in terms of the Taylor coefficients of $h_r$. These relations follow by Taylor expansion of the identity \eqref{trace-vol} in the variable $r$ and solving the resulting relations for $v_j$. We find \begin{align*} 2 v_1 & = \tr (h_{(1)}), \\ 8 v_2 & = \tr (h_{(1)})^2 + 4 \tr (h_{(2)}) - 2 \tr (h_{(1)}^2), \\ 48 v_3 & = \tr (h_{(1)})^3 + 12 \tr (h_{(1)}) \tr (h_{(2)}) + 24 \tr (h_{(3)}) - 6 \tr(h_{(1)}) \tr (h_{(1)}^2) \\ & - 24 \tr(h_{(1)} h_{(2)}) + 8 \tr (h_{(1)}^3) \end{align*} and \begin{align*} 384 v_4 & = \tr (h_{(1)})^4 + 24 \tr (h_{(1)})^2 \tr (h_{(2)}) + 48 \tr(h_{(2)})^2 + 96 \tr(h_{(1)}) \tr (h_{(3)}) + 192 \tr (h_{(4)}) \\ & - 12 \tr(h_{(1)})^2 \tr (h_{(1)}^2) - 48 \tr(h_{(2)}) \tr(h_{(1)}^2) + 12 \tr(h_{(1)}^2)^2 - 96 \tr(h_{(1)}) \tr (h_{(1)} h_{(2)}) \\ & - 192 \tr(h_{(1)} h_{(3)}) - 96 \tr(h_{(2)}^2) +32 \tr(h_{(1)}) \tr (h_{(1)}^3) + 192 \tr(h_{(1)}^2 h_{(2)}) - 48 \tr(h_{(1)}^4). \end{align*} These formulas are valid in general dimension. In order to evaluate them, we apply the following results for the coefficients $h_{(k)}$ for $k\le 3$ \cite{GG}, \cite[Proposition 13.2.1]{JO}. \begin{lem}\label{h-coeff} In general dimensions, it holds $$ h_{(1)} = 2 L, \quad h_{(2)} = L^2 - \bar{\G} \quad \mbox{and} \quad 3 (h_{(3)})_{ij} = -\bar{\nabla}_0 (\bar{R})_{0ij0} - 2 L_i^k \bar{\G}_{jk} - 2 L_j^k \bar{\G}_{ik}, $$ where $\bar{\G}_{ij} \st \bar{R}_{0ij0}$. \index{$\bar{\G}$} \end{lem} As consequences, we find explicit formulas for the volume coefficients $v_k$ for $k \le 3$. \begin{lem}\label{v-coeff-3} In general dimensions, it holds \begin{align*} v_1 & = n H, \\ 2 v_2 & = - \overline{\Ric}_{00} - |\lo|^2 + n(n\!-\!1) H^2 = \overline{\Ric}_{00} + \scal - \overline{\scal}, \\ 6 v_3 & = - \bar{\nabla}_0(\overline{\Ric})_{00} + 2 (\lo,\bar{\G}) - (3n\!-\!2) H \overline{\Ric}_{00} + 2 \tr(\lo^3) - 3 (n\!-\!2) H |\lo|^2 + n(n\!-\!1)(n\!-\!2) H^3. \end{align*} \end{lem} These formulas coincide with the corresponding terms in the expansion of the volume form in \cite[Theorem 3.4]{AGV}. Note that this is obvious for $v_1$ and $v_2$ but requires applying the Gauss identities \begin{align*} \overline{\scal}-\scal & = 2 \overline{\Ric}_{00} + |L|^2- n^2 H^2, \\ \overline{\Ric} - \Ric & = \bar{\G} - n H L - L^2 \end{align*} for $v_3$. Equivalent formulas can be found in \cite[Section 2]{GG}. The coefficient $v_4$ is more involved. It depends on $h_{(k)}$ for $k\le 3$ and $\tr(h_{(4)})$.
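We note in passing (an observation implicit in the formulas above) that $h_{(4)}$ enters the formula for $384 v_4$ only through the term $192 \tr(h_{(4)})$, i.e., $$ v_4 = \frac{1}{2} \tr (h_{(4)}) + F(h_{(1)},h_{(2)},h_{(3)}) $$ with a polynomial $F$ in traces of products of the lower-order Taylor coefficients. This explains why a formula for the full coefficient $h_{(4)}$ will not be needed in what follows.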
We shall not discuss an explicit formula for $h_{(4)}$. For our purpose, it will be enough to prove the following formula for the quantity $\tr(h_{(4)})$. \begin{lem}\label{trace-h4} In general dimensions, it holds \begin{equation}\label{trace-4} 12 \tr (h_{(4)}) = - \bar{\nabla}_0^2 (\overline{\Ric})_{00} - 6 L^{ij} \bar{\nabla}_0 (\bar{R})_{0ij0} - 4 (L^2,\bar{\G}) + 4 (\bar{\G},\bar{\G}). \end{equation} \end{lem} \begin{proof} A calculation of Christoffel symbols shows that \begin{equation}\label{R-formula} \bar{R}_{0jk0} = \frac{1}{4} g^{ab} g_{aj}' g_{bk}' - \frac{1}{2} g_{jk}'' \end{equation} \cite[(13.2.5)]{JO}. The assertion then follows by evaluating the second-order derivative in $r$ of this equation, followed by contraction with $h^{jk}$. Here are the details. Differentiating \eqref{R-formula} twice in $r$ at $r=0$ yields \begin{align*} \partial_r^2 (\bar{R}_{0jk0}) & =\frac{1}{4} (g^{ab})'' g_{aj}' g_{bk}' + \frac{1}{4} g^{ab} (g_{aj})''' g_{bk}' + \frac{1}{4} g^{ab} g_{aj}' (g_{bk})''' \\ & + \frac{1}{2} (g^{ab})' (g_{aj})'' g_{bk}' + \frac{1}{2} (g^{ab})' (g_{aj})' (g_{bk})'' + \frac{1}{2} g^{ab} (g_{aj})'' (g_{bk})'' - \frac{1}{2} g_{jk}'''' \\ & = 2 (3 L^2 + \bar{\G})^{ab} L_{aj} L_{bk} + 3 h^{ab} (h_{(3)})_{aj} L_{bk} + 3 h^{ab} L_{aj} (h_{(3)})_{bk} \\ & - 4 L^{ab} (L^2-\bar{\G})_{aj} L_{bk} - 4 L^{ab} L_{aj} (L^2-\bar{\G})_{bk} + 2 h^{ab} (L^2- \bar{\G})_{aj} (L^2-\bar{\G})_{bk} - 12 (h_{(4)})_{jk} \\ & = 2 (3 L^2 + \bar{\G})^{ab} L_{aj} L_{bk} + h^{ab} (-\bar{\nabla}_0(\bar{R})_{0aj0} - 2 (L \bar{\G})_{aj} - 2 (\bar{\G} L)_{aj}) L_{bk} \\ & + h^{ab} L_{aj} (-\bar{\nabla}_0(\bar{R})_{0bk0} - 2 (L \bar{\G})_{bk} - 2 (\bar{\G} L)_{bk}) \\ & - 4 L^{ab} (L^2-\bar{\G})_{aj} L_{bk} - 4 L^{ab} L_{aj} (L^2-\bar{\G})_{bk} + 2 h^{ab} (L^2- \bar{\G})_{aj} (L^2-\bar{\G})_{bk} - 12 (h_{(4)})_{jk} \end{align*} using $(h_r^{-1})^{ij} = h^{ij} - 2 L^{ij} r + (3(L^2)^{ij} + \bar{\G}^{ij}) r^2 + \cdots$. Hence $$ h^{jk} \partial_r^2(\bar{R}_{0jk0}) $$ equals \begin{equation}\label{exp-1} - 2 L^{ij} \bar{\nabla}_0(\bar{R})_{0ij0} -2 (L^2,\bar{\G}) + 2 (\bar{\G},\bar{\G}) - 12 \tr (h_{(4)}). \end{equation} On the other hand, we calculate \begin{align*} h^{jk} \partial_r^2 (\bar{R}_{0jk0}) & = \partial_r^2 ((h_r^{-1})^{jk} \bar{R}_{0jk0}) - 2 ((h_r^{-1})')^{jk} \partial_r (\bar{R}_{0jk0}) - ((h_r^{-1})'')^{jk} \bar{R}_{0jk0} \\ & = \partial_r^2 (\overline{\Ric}_{00}) + 4 L^{jk} \partial_r (\bar{R}_{0jk0}) - 2(3 L^2 + \bar{\G},\bar{\G}). \end{align*} Therefore, the relations \begin{align*} \partial_r^2 (\overline{\Ric}_{00}) & = \bar{\nabla}_0^2(\overline{\Ric})_{00}, \\ \partial_r(\bar{R}_{0jk0}) & = \bar{\nabla}_0(\bar{R})_{0jk0} + (L \bar{\G} + \bar{\G} L)_{jk} \end{align*} imply \begin{equation}\label{exp-2} h^{jk} \partial_r^2 (\bar{R}_{0jk0}) = \bar{\nabla}_0^2(\overline{\Ric})_{00} + 4 L^{jk} \bar{\nabla}_0(\bar{R})_{0jk0} + 8 (L^2,\bar{\G}) - 6 (L^2,\bar{\G}) - 2(\bar{\G},\bar{\G}). \end{equation} Now combining \eqref{exp-1} and \eqref{exp-2} proves the assertion. \end{proof} \begin{example}\label{PE-trace-test} Let $n \ge 3$ be general.
Assume that $g = r^2 g_+$ for a Poincar\'e-Einstein metric $g_+ = r^{-2} (dr^2 + (h - \Rho r^2 + \Rho^2 r^4/4))$ with conformally flat conformal infinity $h$. $\Rho$ is the Schouten tensor of $h$. $r$ is the distance in the metric $g$ from the hypersurface $r=0$. The formula for $g$ shows that $\tr(h_{(4)}) = 1/4 |\Rho|^2$. Comparing the coefficients of $r$ and $r^2$ in the expansions of $h_r$ shows that $L=0$ and $\bar{\G} = \Rho$. Hence the above formula reduces to $$ 12 \tr (h_{(4)}) = - \bar{\nabla}_0^2 (\overline{\Ric})_{00} + 4(\Rho,\Rho) = - \partial_r^2 (\overline{\Ric}_{00}) + 4 (\Rho,\Rho). $$ But \cite[Lemma 6.11.2]{J1} shows that $\bar{\Rho} = -1/(2r) \partial_r ({h}_r)$. Hence $\bar{\Rho} = \Rho - 1/2 r^2 \Rho^2$. It follows that $\partial_r^2 (\bar{\Rho}) = - \Rho^2$. Therefore, $\partial_r^2(\bar{\Rho}_{00}) = 0$ and we conclude that $\partial_r^2(\overline{\Ric}_{00}) = \partial_r^2 (\bar{\J}) \stackrel{!}{=} |\Rho|^2$ (for $r=0$) using \cite[Lemma 6.11.1]{J1}. Hence the right-hand side gives $3 |\Rho|^2$, i.e., we have reproduced the result $4 \tr(h_{(4)}) = |\Rho|^2$. For general $h$, the Poincar\'e-Einstein metric also involves the Bach tensor. But since the Bach tensor is trace-free, we still have $\tr(h_{(4)}) = 1/4 |\Rho|^2$ and we get the same conclusion. \end{example} The above results imply the following formula for $v_4$. \begin{lem}\label{v4-form} It holds \begin{align}\label{v4-L} 24 v_4 & = - \bar{\nabla}_0^2(\overline{\Ric})_{00} + 2 L^{ij} \bar{\nabla}_0 (\bar{R})_{0ij0} - 4 n H \bar{\nabla}_0 (\overline{\Ric})_{00} \notag \\ & + 3 (\overline{\Ric}_{00})^2 - 2 (\bar{\G},\bar{\G}) + 8 n H (L,\bar{\G}) - 8 (L^2,\bar{\G}) + 6 |\lo|^2 \overline{\Ric}_{00} - 6 n(n\!-\!1) H^2 \overline{\Ric}_{00} \notag \\ & + 24 \sigma_4(L) \end{align} or, equivalently, \begin{align*} 24 v_4 & = -\bar{\nabla}_0^2(\overline{\Ric})_{00} + 2 \lo^{ij} \bar{\nabla}_0 (\bar{R})_{0ij0} - (4n\!-\!2) H \bar{\nabla}_0 (\overline{\Ric})_{00} \notag \\ & + 3 (\overline{\Ric}_{00})^2 - 2 (\bar{\G},\bar{\G}) + 8(n\!-\!2) H (\lo,\bar{\G}) - 8 (\lo^2,\bar{\G}) \notag \\ & - 2(n\!-\!1)(3n\!-\!4) H^2 \overline{\Ric}_{00} + 6 |\lo|^2 \overline{\Ric}_{00} \notag \\ & + 24 \sigma_4(L). \end{align*} In particular, for $n=3$, we find \begin{align}\label{v4-L-3} 24 v_4 & = - \bar{\nabla}_0^2(\overline{\Ric})_{00} + 2 \lo^{ij} \bar{\nabla}_0 (\bar{R})_{0ij0} - 10 H \bar{\nabla}_0 (\overline{\Ric})_{00} \notag \\ & + 3 (\overline{\Ric}_{00})^2 - 2 (\bar{\G},\bar{\G}) + 8 H (\lo,\bar{\G}) - 8 (\lo^2,\bar{\G}) - 20 H^2 \overline{\Ric}_{00} + 6 |\lo|^2 \overline{\Ric}_{00} \end{align} using $\sigma_4(L)=0$. \end{lem} \begin{proof} This is a direct calculation. We omit the details. \end{proof} The three terms in the first line of \eqref{v4-L} coincide with the corresponding terms in the formula for $v_4$ in \cite[Theorem 3.4]{AGV}. However, the remaining terms in both formulas are expressed in different ways. Note that Newton's identity $$ 24 \sigma_4 (L) = \tr(L)^4 - 6 \tr(L)^2 |L|^2 + 3 |L|^4 + 8 \tr(L) \tr(L^3) - 6 \tr(L^4) $$ gives $$ 24 \sigma_4(L) = n(n-1)(n-2)(n-3) H^4 - 6 (n-2)(n-3) H^2 |\lo|^2 + 8(n-3) H \tr(\lo^3) + 3 (|\lo|^4 - 2 \tr(\lo^4)). $$ \begin{cor}\label{trace-id} Let $n=3$. Then $|\lo|^4 = 2 \tr(\lo^4)$. \end{cor} This result generalizes the fact that $\tr(\lo^3)=0$ for $n=2$. \begin{example} Assume that $g = r^2 g_+$ for a Poincar\'e-Einstein metric $g_+$ with conformal infinity $h$ (as in Example \ref{PE-trace-test}).
By $L=0$, the formula \eqref{v4-L} reads \begin{align*} 24 v_4 & = - \bar{\nabla}_0^2(\overline{\Ric})_{00} + 3 (\overline{\Ric}_{00})^2 - 2 (\bar{\G},\bar{\G}). \end{align*} As above, we obtain $$ 24 v_4 = -|\Rho|^2 + 3 \J^2 - 2 |\Rho|^2 $$ using the fact that $\bar{\G} = \Rho$ implies $\overline{\Ric}_{00} = \J$. This yields the well-known formula $$ v_4 = \frac{1}{8} (\J^2 - |\Rho|^2). $$ \end{example} \subsection{Evaluation I}\label{E1} Now we combine the formula \eqref{B3-start} for $\B_3$ with the results in Section \ref{vol-c}. A calculation yields the following result. \begin{lem}\label{B3-inter-1} $12 \B_3$ equals the sum of \begin{align}\label{B3-a} & \left(\bar{\nabla}_0^2(\overline{\Ric})_{00} - \frac{1}{2} \overline{\scal}''\right) + 5 H \left(\bar{\nabla}_0(\overline{\Ric})_{00} - \frac{1}{2} \overline{\scal}'\right) + 2 H \bar{\nabla}_0(\overline{\Ric})_{00} \notag \\ & + 2 |\bar{\G}|^2 - 2 (\overline{\Ric}_{00})^2 + 2 \overline{\Ric}_{00} \bar{\J} + 8 H^2 \overline{\Ric}_{00} - 12 H^2 \bar{\J} + 6 (dH, \overline{\Ric}_0) + 2 \Delta (\overline{\Rho}_{00}) , \end{align} \begin{equation}\label{B3-b} - 2 \lo^{ij} \bar{\nabla}_0(\bar{R})_{0ij0} - 2 H (\lo,\bar{\G}) + 8 (\lo^2,\bar{\G}) - 4 |\lo|^2 \overline{\Ric}_{00} + 2 |\lo|^2 \bar{\J} \end{equation} and \begin{equation}\label{B3-c} \Delta (|\lo|^2) + 6 (\lo,\Hess(H)) + 6 H \tr(\lo^3) + |\lo|^4 + 12 |dH|^2. \end{equation} \end{lem} Since the terms in \eqref{B3-a} and \eqref{B3-b} vanish for the flat metric, we immediately reproduce formula \eqref{B3-flat}. For later reference, we formulate that result as \begin{cor}\label{B3-inter-corr} For a hypersurface $M$ in the flat background $\R^4$, the obstruction $\B_3$ is given by \begin{equation}\label{B3-flat-new} 12 \B_3 = \Delta (|\lo|^2) + 6 (\lo,\Hess(H)) + 6 H \tr(\lo^3) + |\lo|^4 + 12 |dH|^2. \end{equation} \end{cor} We continue with the discussion of the curved case. Next, we simplify the sum \eqref{B3-a} using the second Bianchi identity. This step is analogous to the use of the second Bianchi identity in Section \ref{B2-cl}. \index{$\bar{G}$ \quad Einstein tensor} Let $\bar{G} \st \overline{\Ric} - \frac{1}{2} \overline{\scal} g$ be the Einstein tensor of $g$. The second Bianchi identity implies $2 \delta^g (\overline{\Ric}) = d \overline{\scal}$. Hence \begin{align*} & \bar{\nabla}_0(\overline{\Ric})(\partial_0,\partial_0) \\ & = \delta^g(\overline{\Ric})(\partial_0) - g^{ij} \bar{\nabla}_{\partial_i}(\overline{\Ric})(\partial_j,\partial_0) \\ & = \frac{1}{2} \langle d \overline{\scal},\partial_0 \rangle - g^{ij} \partial_i (\overline{\Ric}(\partial_j,\partial_0)) + g^{ij} \overline{\Ric} (\bar{\nabla}_{\partial_i}(\partial_j),\partial_0) + g^{ij} \overline{\Ric}(\partial_j,\bar{\nabla}_{\partial_i}(\partial_0)) \\ & = \frac{1}{2} \langle d \overline{\scal},\partial_0 \rangle - h_r^{ij} \partial_i (\overline{\Ric}(\partial_j,\partial_0)) + h_r^{ij} \overline{\Ric} (\nabla^{h_r}_{\partial_i}(\partial_j) - (L_r)_{ij} \partial_0,\partial_0) + h_r^{ij} \overline{\Ric}(\partial_j,\bar{\nabla}_{\partial_i}(\partial_0)) \\ & = \frac{1}{2} \langle d \overline{\scal},\partial_0 \rangle - \delta^{h_r} (\overline{\Ric}_0) - n H_r \overline{\Ric}_{00} + h_r^{ij} \overline{\Ric}(\partial_j,\bar{\nabla}_{\partial_i}(\partial_0)) \end{align*} on any level surface of $r$. Here $\delta^{h_r}$ denotes the divergence operator for the induced metric on the level surfaces of $r$. Similarly, $L_r$ and $H_r$ are the second fundamental form and the mean curvature of these level surfaces.
Therefore, using $\bar{\nabla}_{\partial_i}(\partial_0) = (L_r)_{ia} h_r^{ak} \partial_k$, we obtain \begin{align*} \bar{\nabla}_0 (\bar{G})_{00} & = -\delta^{h_r} (\overline{\Ric}_0) - n H_r \overline{\Ric}_{00} + h_r^{ij} h_r^{ak} (L_r)_{ia} \overline{\Ric}_{jk}, \end{align*} i.e., we have proved the relation \begin{equation}\label{Bianchi} \bar{\nabla}_0(\bar{G})_{00} = - \delta^{h_r} (\overline{\Ric}_0) - n H_r \overline{\Ric}_{00} + (L_r,\overline{\Ric})_{h_r} \end{equation} on any level surface of $r$. Differentiating the identity \eqref{Bianchi} (for $n=3$) with respect to $r$ at $r=0$ gives a formula for the term $$ \bar{\nabla}_0^2(\overline{\Ric})_{00} - \frac{1}{2} \overline{\scal}'' = \bar{\nabla}_0^2 (\bar{G})_{00} = \partial_r (\bar{\nabla}_0 (\bar{G})_{00}) $$ for $r=0$. For that purpose, we use the variation formulas \begin{align}\label{BV} 3 H' & = - |L|^2 - \overline{\Ric}_{00}, \notag \\ L' & = L^2 - \bar{\G} \end{align} for the variation of these quantities under the normal exponential map. Here we denote the derivative in $r$ by a prime. We recall that in normal geodesic coordinates the metric $g$ takes the form $dr^2 +h_r$ with $h_r = h + 2r L + \cdots$. Moreover, let ${\delta}' \st (d/dr)|_0(\delta^{h_r})$. Then \begin{align}\label{vdelta} {\delta}' (\omega) = - 2 (L, \nabla (\omega))_h - 2 (\delta (L),\omega)_h + 3 (dH,\omega)_h \end{align} for $\omega \in \Omega^1(M^3)$ \cite[(1.185)]{Besse}. Note that the latter identity fits with the variation formula \eqref{vDelta}. Now differentiating \eqref{Bianchi} (for $n=3$) implies \begin{align*} \bar{\nabla}_0^2 (\bar{G})_{00} & = - {\delta}' (\overline{\Ric}_0) - \delta (\partial_r(\overline{\Ric}_0)) - 3 {H}' \overline{\Ric}_{00} - 3 H \bar{\nabla}_0(\overline{\Ric})_{00} \\ & + ({L}', \overline{\Ric}) + L^{ij} \partial_r (\overline{\Ric}_{ij}) - 4 (L^2,\overline{\Ric}), \end{align*} where $L' = (d/dr)|_0(L_r)$. Note that the last term comes from the differentiation of $h_r$. Hence \begin{align*} \bar{\nabla}_0^2 (\bar{G})_{00} & = 2 (L, \nabla (\overline{\Ric}_0)) + 2 (\delta(L), \overline{\Ric}_0) - 3 (dH,\overline{\Ric}_0) \notag \\ & - \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) - \delta ((L \overline{\Ric})_{0}) + |L|^2 \overline{\Ric}_{00} + (\overline{\Ric}_{00})^2 - 3 H \bar{\nabla}_0 (\overline{\Ric})_{00} \notag \\ & - 3 (L^2, \overline{\Ric}) - (\bar{\G}, \overline{\Ric}) + (L, \bar{\nabla}_0(\overline{\Ric})) + 2 (L^2, \overline{\Ric}) \end{align*} at $r=0$. Here we used the relations \begin{align*} \bar{\nabla}_0(\overline{\Ric})_{0} = \partial_r (\overline{\Ric}_{0}) - (L \overline{\Ric})_{0} \quad \mbox{and} \quad \bar{\nabla}_0(\overline{\Ric})_{ij} = \partial_r (\overline{\Ric}_{ij}) - (L \overline{\Ric} + \overline{\Ric} L)_{ij}. \end{align*} Now, separating the trace-free part of $L$ in some terms, we obtain \begin{align*} \bar{\nabla}_0^2 (\bar{G})_{00} & = 2 (\lo,\nabla (\overline{\Ric}_0)) + 2 H \delta (\overline{\Ric}_0) + 2 (\delta(\lo),\overline{\Ric}_0) - (dH,\overline{\Ric}_0) \\ & - \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) - \delta ((\lo \overline{\Ric})_{0}) - \delta (H \overline{\Ric}_0) \\ & + |L|^2 \overline{\Ric}_{00} + (\overline{\Ric}_{00})^2 - 3 H \bar{\nabla}_0 (\overline{\Ric})_{00} \\ & - (L^2, \overline{\Ric}) - (\bar{\G}, \overline{\Ric}) + (\lo, \bar{\nabla}_0(\overline{\Ric})) + H \overline{\scal}' - H \bar{\nabla}_0(\overline{\Ric})_{00}. \end{align*} This leads to the following result.
\begin{lem}\label{Nabla-2G} It holds \begin{align*} \bar{\nabla}_0^2 (\bar{G})_{00} = & - 4 H \bar{\nabla}_0 (\overline{\Ric})_{00} + H \overline{\scal}' \\ & + 2 (\lo, \nabla (\overline{\Ric}_0)) - \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) + (\lo, \bar{\nabla}_0(\overline{\Ric})) \\ & + H \delta (\overline{\Ric}_0) - 2 (dH,\overline{\Ric}_0) + 2 (\delta(\lo),\overline{\Ric}_0) - \delta ((\lo \overline{\Ric})_{0}) \\ & + |L|^2 \overline{\Ric}_{00} - (L^2, \overline{\Ric}) + (\overline{\Ric}_{00})^2 - (\bar{\G}, \overline{\Ric}). \end{align*} \end{lem} Lemma \ref{Nabla-2G} enables us to replace second-order normal derivatives in Lemma \ref{B3-inter-1} by first-order normal and tangential derivatives. More precisely, it follows that \eqref{B3-a} equals \begin{align*} & 3 H \bar{\nabla}_0(\bar{G})_{00} + 2 (\lo, \nabla (\overline{\Ric}_0)) - \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) + (\lo, \bar{\nabla}_0(\overline{\Ric})) \\ & + H \delta (\overline{\Ric}_0) + 4 (dH,\overline{\Ric}_0) + 2 (\delta(\lo),\overline{\Ric}_0) - \delta ((\lo \overline{\Ric})_{0}) \\ & + |L|^2 \overline{\Ric}_{00} - (L^2, \overline{\Ric}) + (\overline{\Ric}_{00})^2 - (\bar{\G}, \overline{\Ric}) \\ & + 2 |\bar{\G}|^2 - 2 (\overline{\Ric}_{00})^2 + 2 \overline{\Ric}_{00} \bar{\J} + 2 \Delta (\overline{\Rho}_{00}) + 8 H^2 \overline{\Ric}_{00} - 12 H^2 \bar{\J}. \end{align*} Now a second application of the second Bianchi identity \eqref{Bianchi} enables us to replace the first-order normal derivative of the Einstein tensor in this formula by tangential derivatives. Hence \eqref{B3-a} equals the sum \begin{align*} & -3 H \delta (\overline{\Ric}_0) - 9 H^2 \overline{\Ric}_{00} + 3 H (L,\overline{\Ric}) \\ & + 2 (\lo, \nabla (\overline{\Ric}_0)) - \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) + (\lo, \bar{\nabla}_0(\overline{\Ric})) \\ & + H \delta (\overline{\Ric}_0) + 4 (dH,\overline{\Ric}_0) + 2 (\delta(\lo),\overline{\Ric}_0) - \delta ((\lo \overline{\Ric})_{0}) \\ & + |L|^2 \overline{\Ric}_{00} - (L^2, \overline{\Ric}) + (\overline{\Ric}_{00})^2 - (\bar{\G}, \overline{\Ric}) \\ & + 2 |\bar{\G}|^2 - 2 (\overline{\Ric}_{00})^2 + 2 \overline{\Ric}_{00} \bar{\J} + 2 \Delta (\overline{\Rho}_{00}) + 8 H^2 \overline{\Ric}_{00} - 12 H^2 \bar{\J}. \end{align*} A slight reordering and simplification of this sum shows that the sum \eqref{B3-a} equals \begin{align}\label{H1} & - \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) - 2 H \delta (\overline{\Ric}_0) +4 (dH, \overline{\Ric}_0) \notag \\ & + (\lo, \bar{\nabla}_0(\overline{\Ric})) + 2 (\lo, \nabla (\overline{\Ric}_0)) + 2 (\delta(\lo),\overline{\Ric}_0) - \delta ((\lo \overline{\Ric})_{0}) \notag \\ & + |L|^2 \overline{\Ric}_{00} - (L^2, \overline{\Ric}) + 3 H (L,\overline{\Ric}) \notag \\ &- (\overline{\Ric}_{00})^2 - (\bar{\G}, \overline{\Ric}) + 2 |\bar{\G}|^2 + 2 \overline{\Ric}_{00} \bar{\J} + 2 \Delta (\overline{\Rho}_{00}) - H^2 \overline{\Ric}_{00} - 12 H^2 \bar{\J} . \end{align} We continue by further simplifying the sum \eqref{H1}. First of all, we observe \begin{lem}\label{van-term} Let $n=3$. Then $$ - (\overline{\Ric}_{00})^2 - (\bar{\G}, \overline{\Ric}) + 2 |\bar{\G}|^2 + 2 \overline{\Ric}_{00} \bar{\J} = 2 (\bar{\Rho},\W) + 2 |\W|^2. $$ \end{lem} \begin{proof} We recall that $\bar{\G} _{ij} = \bar{\Rho}_{ij} + \bar{\Rho}_{00} h_{ij} + \W_{ij}$. Therefore, we get $\overline{\Ric}_{ij} = 2 \bar{\Rho}_{ij} + \bar{\J} h_{ij} = 2 \bar{\G}_{ij} - 2 \W_{ij} - (2 \bar{\Rho}_{00} - \bar{\J}) h_{ij}$.
Thus $ (\bar{\G},\overline{\Ric}) = 2 (\bar{\G},\bar{\G}) - 2 (\bar{\G},\W) - (2 \bar{\Rho}_{00} - \bar{\J}) \overline{\Ric}_{00}. $ This relation implies $$ 2 |\bar{\G}|^2 - (\bar{\G}, \overline{\Ric}) = (2 \bar{\Rho}_{00} -\bar{\J}) \overline{\Ric}_{00} + 2 (\bar{\G},\W) = (\overline{\Ric}_{00})^2 - 2\overline{\Ric}_{00} \bar{\J} + 2 (\bar{\Rho},\W) + 2 |\W|^2. $$ The proof is complete. \end{proof} Next, we have the following identities. \begin{lem}\label{help-2} Let $n=3$. Then $$ |L|^2 \overline{\Ric}_{00} - (L^2, \overline{\Ric}) = |\lo|^2 \overline{\Ric}_{00} - (\lo^2,\overline{\Ric}) - 2H (\lo,\overline{\Ric}) + 4 H^2 \overline{\Ric}_{00} - 6 H^2 \bar{\J} $$ and $$ (L,\overline{\Ric}) = (\lo,\overline{\Ric}) + 6 H \bar{\J} - H \overline{\Ric}_{00}. $$ \end{lem} \begin{proof} The assertions follow by direct calculation. \end{proof} By Lemma \ref{van-term} and Lemma \ref{help-2}, the last two lines of \eqref{H1} simplify to \begin{align*} |\lo|^2 \overline{\Ric}_{00} - (\lo^2,\overline{\Ric}) + H (\lo,\overline{\Ric}) + 2 \Delta (\overline{\Rho}_{00}) + 2(\bar{\Rho},\W) + 2 |\W|^2. \end{align*} Therefore, \eqref{B3-a} equals \begin{align*}\label{H1a} & - \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) - 2 H \delta (\overline{\Ric}_0) + 4 (dH, \overline{\Ric}_0) \notag \\ & + (\lo, \bar{\nabla}_0(\overline{\Ric})) + 2 (\lo, \nabla (\overline{\Ric}_0)) + 2 (\delta(\lo),\overline{\Ric}_0) - \delta ((\lo \overline{\Ric})_{0}) \notag \\ & +|\lo|^2 \overline{\Ric}_{00} - (\lo^2,\overline{\Ric}) + H (\lo,\overline{\Ric}) + 2 \Delta (\overline{\Rho}_{00}) + 2(\bar{\Rho},\W) + 2 |\W|^2. \end{align*} Hence Lemma \ref{B3-inter-1} implies \begin{lem}\label{B3-inter2} $12 \B_3$ equals the sum of \begin{equation}\label{B3-g} - \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) - 2 \delta ( H \overline{\Ric}_0) + 6 (dH,\overline{\Ric}_0) + 2 \Delta (\overline{\Rho}_{00}), \end{equation} \begin{align}\label{B3-u} & (\lo, \bar{\nabla}_0(\overline{\Ric})) - 2 \lo^{ij} \bar{\nabla}_0(\bar{R})_{0ij0} + 2 (\lo, \nabla (\overline{\Ric}_0)) + 2 (\delta(\lo),\overline{\Ric}_0) - \delta ((\lo \overline{\Ric})_{0}) \notag \\ & -3 |\lo|^2 \overline{\Ric}_{00} - (\lo^2,\overline{\Ric}) + H (\lo,\overline{\Ric}), \end{align} \begin{equation}\label{B3-gf} - 2 H (\lo,\bar{\G}) + 8 (\lo^2,\bar{\G}) + 2 |\lo|^2 \bar{\J}, \end{equation} \begin{equation}\label{Weyl} 2(\bar{\Rho},\W) + 2 |\W|^2 \end{equation} and the flat terms \begin{align}\label{B3-gc} & 6 (\lo,\Hess(H)) + \Delta (|\lo|^2) + 6 H \tr(\lo^3) + |\lo|^4 + 12 |dH|^2. \end{align} \end{lem} Note that the first term in \eqref{B3-g} contains a normal derivative of $\overline{\Ric}$. Likewise the first two terms in \eqref{B3-u} contain normal derivatives of the curvature of $g$. All other terms in \eqref{B3-g}--\eqref{B3-gc} live on $M$. The {\em mixed} terms in \eqref{B3-u} and \eqref{B3-gf} involve the curvature of $g$ and $L$. Finally, the terms in \eqref{Weyl} involve the Weyl tensor, and the terms in \eqref{B3-gc} are completely determined by $L$. \begin{example}\label{B3-Einstein} If the background metric $g$ is Einstein, i.e., if $\overline{\Ric} = \lambda g$, then \begin{align*} 12 \B_3 & = - 2 \lo^{ij} \bar{\nabla}_0(\widebar{W})_{0ij0} - 2 H (\lo,\W) + 8 (\lo^2,\W) + 2 |\W|^2\\ & + 6 (\lo,\Hess(H)) + \Delta (|\lo|^2) + 6 H \tr(\lo^3) + |\lo|^4 + 12 |dH|^2. \end{align*} \end{example} Of course, if in addition $\overline{W}=0$, then this formula reduces to Corollary \ref{B3-inter-corr}. 
\begin{proof} The assumption implies $\bar{\J} = \frac{2}{3} \lambda$, $\bar{\Rho} = \frac{1}{6} \lambda g$, $\bar{\G} = \frac{1}{3} \lambda h + \W$ and $|\bar{\G}|^2 = \frac{1}{3} \lambda^2 + |\W|^2$. Furthermore, the terms in \eqref{B3-g} and the terms in the first line of \eqref{B3-u} except the second one vanish. The remaining terms in \eqref{B3-u}--\eqref{Weyl} read $$ -3 \lambda |\lo|^2 - \lambda |\lo|^2 -2 H(\lo,\W) + \frac{8}{3} \lambda |\lo|^2 + 8 (\lo^2,\W) + \frac{4}{3} \lambda |\lo|^2 + 2 |\W|^2. $$ Simplification proves the claim. \end{proof} \subsection{Evaluation II} We further simplify Lemma \ref{B3-inter2}. The following result shows that, up to some contributions of the Weyl tensor, the first two terms in \eqref{B3-u} cancel and that the last term of \eqref{B3-u} cancels against the first term of \eqref{B3-gf}. \begin{lem}\label{HL1} Let $n=3$. Then it holds \begin{align*} (\lo, \bar{\nabla}_0(\overline{\Ric})) + 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} & = 2 \lo^{ij} \bar{\nabla}_0(\bar{R})_{0ij0}, \\ (\lo,\overline{\Ric}) + 2 (\lo, \W) & = 2 (\lo,\bar{\G}). \end{align*} \end{lem} \begin{proof} By the Kulkarni-Nomizu decomposition $R = - \Rho \owedge g + W$, we have $$ \bar{\G}_{ij} = \bar{R}_{0ij0} = \bar{\Rho}_{ij} + \bar{\Rho}_{00} (h_r)_{ij} + \overline{W}_{0ij0}. $$ Hence using $$ \bar{\nabla}_0(\bar{R})_{0ij0} = \partial_0 (\bar{R}_{0ij0}) - \bar{R}(\partial_0,\bar{\nabla}_0(\partial_i),\partial_j,\partial_0) - \bar{R}(\partial_0,\partial_i,\bar{\nabla}_0(\partial_j),\partial_0) $$ and $\bar{\nabla}_0(\partial_i) = L_i^k \partial_k$ we find \begin{align*} \bar{\nabla}_0(\bar{R})_{0ij0} - \bar{\nabla}_0(\overline{W})_{0ij0} & = \partial_0 (\bar{\Rho}_{ij}) + \partial_0 (\bar{\Rho}_{00}) h_{ij} + \bar{\Rho}_{00} h_{ij}' \\ & - \bar{\Rho}(\bar{\nabla}_0(\partial_i),\partial_j) - \bar{\Rho}(\partial_i,\bar{\nabla}_0(\partial_j)) - \bar{\Rho}_{00} h(\bar{\nabla}_0(\partial_i),\partial_j) - \bar{\Rho}_{00} h(\partial_i,\bar{\nabla}_0(\partial_j)) \\ & = \bar{\nabla}_0(\bar{\Rho})_{ij} + \partial_0 (\bar{\Rho}_{00}) h_{ij} + 2 \bar{\Rho}_{00} L_{ij} - 2 \bar{\Rho}_{00} L_{ij} \\ & = \bar{\nabla}_0(\bar{\Rho})_{ij} + \partial_0 (\bar{\Rho}_{00}) h_{ij} \end{align*} for $r=0$. Therefore, $$ 2 \lo^{ij} \bar{\nabla}_0(\bar{R})_{0ij0} = 2 \lo^{ij} \bar{\nabla}_0(\bar{\Rho})_{ij} + 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} = \lo^{ij} \bar{\nabla}_0(\overline{\Ric})_{ij} + 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0}. $$ This proves the first identity. The second identity follows from the decomposition $\bar{\G} = \bar{\Rho} + \bar{\Rho}_{00} h + \W$. \end{proof} Next, we evaluate the first term of \eqref{B3-g}. \begin{lem}\label{del-Nabla} In general dimensions, it holds \begin{equation}\label{del-nabla} \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) = \frac{1}{2} \Delta (\overline{\scal}) - n \delta (H \overline{\Ric}_0) - \delta ((L \overline{\Ric})_{0}) - \delta \delta (\overline{\Ric}). \end{equation} \end{lem} \begin{proof} The result follows from the second Bianchi identity. 
Combining the identity \begin{equation*} \bar{\nabla}_0(\overline{\Ric})(\partial_0,\partial_a) = \delta^g (\overline{\Ric})(\partial_a) - h^{ij} \bar{\nabla}_{\partial_i} (\overline{\Ric})(\partial_j,\partial_a) \end{equation*} on $M$ with $2 \delta^g (\Ric^g) =d \scal^g$, we obtain \begin{align*} \bar{\nabla}_0(\overline{\Ric})(\partial_0,\partial_a) & = \frac{1}{2} \langle d \overline{\scal},\partial_a \rangle - h^{ij} \partial_i (\overline{\Ric}(\partial_j,\partial_a)) + h^{ij} \overline{\Ric}(\bar{\nabla}_{\partial_i}(\partial_j),\partial_a) + h^{ij} \overline{\Ric}(\partial_j,\bar{\nabla}_{\partial_i}(\partial_a)) \\ & = \frac{1}{2} \langle d \overline{\scal},\partial_a \rangle - h^{ij} \partial_i (\overline{\Ric}(\partial_j,\partial_a)) \\ & + h^{ij} \overline{\Ric}(\nabla_{\partial_i}(\partial_j) - L_{ij} \partial_0,\partial_a) + h^{ij} \overline{\Ric}(\partial_j,\nabla_{\partial_i}(\partial_a) - L_{ia} \partial_0) \\ & = \frac{1}{2} \langle d \overline{\scal},\partial_a \rangle - \delta^h (\overline{\Ric})(\partial_a) - n H \overline{\Ric}(\partial_0,\partial_a) - h^{ij} L_{ia} \overline{\Ric}(\partial_j,\partial_0). \end{align*} Now we apply $\delta = \delta^h$ to this identity of $1$-forms on $M$. We obtain $$ \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) = \frac{1}{2} \Delta (\overline{\scal}) - \delta \delta (\overline{\Ric}) -n \delta (H \overline{\Ric}_0) - \delta ((L \overline{\Ric})_{0}). $$ The proof is complete. \end{proof} In order to apply Lemma \ref{del-Nabla}, we combine it with the following formula for the last term on the right-hand side of \eqref{del-nabla}. \begin{lem}\label{deldel} Let $n=3$. Then $$ \delta \delta (\overline{\Ric}) = 2 \Delta (\J) + \Delta (\bar{\J}) - \Delta (H^2) - 2 \delta \delta (H \lo) + 2 \delta \delta (\lo^2) - \frac{1}{2} \Delta (|\lo|^2) + 2 \delta \delta (\W). $$ \end{lem} \begin{proof} First, we note that $$ \delta \delta (\overline{\Ric}) = 2 \delta \delta (\bar{\Rho}) + \delta \delta (\bar{\J} h) = 2 \delta \delta (\bar{\Rho}) + \Delta (\bar{\J}). $$ Now we utilize the identity \eqref{Fial}. It follows that $$ 2 \delta \delta (\bar{\Rho}) = 2 \delta \delta (\Rho) - 2 \delta \delta (H \lo) - \Delta (H^2) + 2 \delta \delta (\lo^2) - \frac{1}{2} \Delta (|\lo|^2) + 2 \delta \delta (\W). $$ Combining these results with $\delta (\Rho) = d\J$ proves the assertion. \end{proof} Now, combining Lemma \ref{del-Nabla} and Lemma \ref{deldel} with the Gauss identity $$ \bar{\J} - \J = \bar{\Rho}_{00} + 1/4 |\lo|^2 - 3/2 H^2 $$ gives \begin{align}\label{surprise} \delta (\bar{\nabla}_0(\overline{\Ric})_{0}) & = 2 \Delta (\bar{\J} - \J) - 3 \delta (H \overline{\Ric}_0) - \delta ((L \overline{\Ric})_{0}) \notag \\ & + \Delta(H^2) + 2 \delta \delta (H \lo) - 2 \delta \delta (\lo^2) + \frac{1}{2} \Delta (|\lo|^2) - 2 \delta \delta (\W) \notag \\ & = 2 \Delta (\bar{\Rho}_{00}) + \Delta (|\lo|^2) - 2 \Delta(H^2) - 3 \delta (H \overline{\Ric}_0) - \delta ((L \overline{\Ric})_{0}) \notag \\ & + 2 \delta \delta (H \lo) - 2 \delta \delta (\lo^2) - 2 \delta \delta (\W). \end{align} This result shows that \eqref{B3-g} equals \begin{align}\label{R1} & - \Delta (|\lo|^2) - 2 \delta \delta (H \lo) + 2 \delta \delta (\lo^2) + 2 \Delta(H^2) \notag \\ & + \delta (H \overline{\Ric}_0) + \delta ((L \overline{\Ric})_{0}) + 6 (dH,\overline{\Ric}_0) + 2 \delta \delta (\W). \end{align} \begin{rem}\label{surp2} The identity \eqref{surprise} shows that $$ \Delta (|\lo|^2) - 2 \delta \delta (\lo^2) + 2 \delta \delta (H \lo) - 2 \Delta(H^2) = 0 $$ for a flat background. 
This relation also is a consequence of the difference formula \eqref{basic-div} and the identity \begin{equation}\label{dd-flat} \delta \delta (H \lo) = (\lo,\Hess(H)) + 4 |dH|^2 + 2 H \Delta(H) \end{equation} which is a consequence of Lemma \ref{deldel2} and the Codazzi-Mainardi relation $\delta (\lo) = 2dH$ for a flat background. More generally, if $g$ is Einstein, i.e., if $\overline{\Ric} = \lambda g$, then $\overline{\Ric}_0 = 0$ and the identity \eqref{surprise} implies $$ \Delta (|\lo|^2) - 2 \delta \delta (\lo^2) + 2 \delta \delta (H \lo) - 2 \Delta(H^2) - 2 \delta \delta (\W) = 0. $$ By combination with the Codazzi-Mainardi relation $\delta(\lo) = 2 dH$, Lemma \ref{diff-key-g} and Lemma \ref{deldel2}, we conclude the interesting identity $$ -2 \lo^{ij} \nabla^k \overline{W}_{kij0} + |\overline{W}_{0}|^2 - 2 \delta \delta (\W) = 0. $$ \end{rem} \begin{lem}\label{deldel2} In general dimensions, it holds $$ \delta \delta (H \lo) = (\lo,\Hess(H)) + 2 (dH,\delta(\lo)) + H \delta \delta (\lo). $$ \end{lem} \begin{proof} The identity is obvious. \end{proof} Now we combine formula \eqref{R1} with \eqref{B3-u}--\eqref{B3-gc}. Note that the term $\delta ((L \overline{\Ric})_{0})$ in \eqref{R1} sums up with the fifth term in \eqref{B3-u} to $\delta (H \overline{\Ric}_0)$ and that the term $-\Delta (|\lo|^2)$ in \eqref{R1} cancels with the term $\Delta (|\lo|^2)$ in \eqref{B3-gc}. We also use Lemma \ref{deldel2}. By Lemmas \ref{HL1}--\ref{deldel2}, the formula in Lemma \ref{B3-inter2} turns into the sum of \begin{equation}\label{F1} 2 \Delta(H^2) + 2 \delta (H \overline{\Ric}_0) \stackrel{!}{=} 2 \delta ( H \delta (\lo)) = 2 H \delta \delta (\lo) + 2 (dH,\delta(\lo)) \end{equation} (by Codazzi-Mainardi $\overline{\Ric}_0 = \delta(\lo) - 2dH$), $$ 6 (dH,\overline{\Ric}_0), $$ \begin{align}\label{F2} & - 2 \delta \delta (H \lo) + 2 \delta\delta (\lo^2) + 2 (\lo, \nabla (\overline{\Ric}_0)) + 2(\delta(\lo),\overline{\Ric}_0) -3 |\lo|^2 \overline{\Ric}_{00} - (\lo^2,\overline{\Ric}) \notag \\ & \stackrel{!}{=} -2 (\lo,\Hess(H)) - 4 (dH,\delta(\lo)) + 2(\delta(\lo),\overline{\Ric}_0) - 2H \delta \delta (\lo) + 2 \delta\delta (\lo^2) \notag \\ & + 2 (\lo, \nabla (\overline{\Ric}_0)) -3 |\lo|^2 \overline{\Ric}_{00} - (\lo^2,\overline{\Ric}) \end{align} (by Lemma \ref{deldel2}), \begin{align*} 8 (\lo^2,\bar{\G}) + 2 |\lo|^2 \bar{\J} & = 8 (\lo^2,\bar{\Rho}) + 8 |\lo|^2 \bar{\Rho}_{00} + 2 |\lo|^2 \bar{\J} + 8 (\lo^2,\W) & \mbox{(by $\bar{\G} = \bar{\Rho} + \bar{\Rho}_{00} h + \W$)} \\ & = 4 (\lo^2,\overline{\Ric}) - 2|\lo|^2 \bar{\J} + 8 |\lo|^2\bar{\Rho}_{00} + 8 (\lo^2,\W) \\ & \stackrel{!}{=} 4 (\lo^2,\overline{\Ric}) - 6 |\lo|^2 \bar{\J} + 4 |\lo|^2 \overline{\Ric}_{00} + 8 (\lo^2,\W), \end{align*} \begin{equation}\label{W-terms} 2 (\bar{\Rho},\W) + 2 |\W|^2 + 2 \delta \delta (\W) - 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} - 2H (\lo,\W) \end{equation} and \begin{align*} & 6 (\lo,\Hess(H)) + 6 H \tr(\lo^3) + |\lo|^4 + 12 |dH|^2. \end{align*} Note that the contributions $2H \delta \delta (\lo)$ in \eqref{F1} and \eqref{F2} cancel. By Codazzi-Mainardi, we find $$ (\delta(\lo),\overline{\Ric}_0) - (\delta(\lo),dH) = |\delta(\lo)|^2 - 3 (\delta(\lo),dH). $$ Hence $$ 2(dH,\delta(\lo)) - 4 (dH,\delta(\lo)) + 6 (dH,\overline{\Ric}_0) + 2 (\delta(\lo),\overline{\Ric}_0) \stackrel{!}{=} 2 |\delta(\lo)|^2 - 12 |dH|^2. 
$$ Thus, we have proved \begin{prop}\label{B3-main} $12 \B_3$ equals the sum of \begin{align}\label{B3F1} 2 \delta \delta (\lo^2) + 2 |\delta(\lo)|^2, \end{align} \begin{equation}\label{B3F2} 2 (\lo, \nabla (\overline{\Ric}_0)) +|\lo|^2 \overline{\Ric}_{00} + 3 (\lo^2,\overline{\Ric}) - 6 |\lo|^2 \bar{\J}, \end{equation} the Weyl-curvature terms \begin{equation}\label{W-terms-F} 2 (\bar{\Rho},\W) + 2 |\W|^2 + 2 \delta \delta (\W) - 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} - 2H (\lo,\W) + 8 (\lo^2,\W) \end{equation} and \begin{equation}\label{B3F3} 4 (\lo,\Hess(H)) + 6 H \tr(\lo^3) + |\lo|^4. \end{equation} \end{prop} Note that there is no Laplace term in that formula. \subsection{Proof of the main result. Equivalences}\label{equiv} Proposition \ref{B3-main} has the disadvantage that the conformal invariance of $\B_3$ is not obvious. Therefore, it is natural to reformulate the results in a way which makes the conformal invariance transparent. For this purpose, we relate Proposition \ref{B3-main} to the formula \begin{align}\label{B3-GW} 12 \B_3 & = 6 \LOP ((\lo^2)_\circ) + 2 |\lo|^4 + \star \notag \\ & = 6 \delta \delta (\lo^2) - 2 \Delta (|\lo|^2) + 6 (\lo^2,\Rho) - 2 |\lo|^2 \J + 2 |\lo|^4 + \star \end{align} in \cite[Proposition 1.1]{GGHW}, where\footnote{Here we refer to \url{arXiv:1508.01838v1}. This result differs from the version in the published paper.} \begin{equation}\label{GGHW-terms} \star \st 2 \LOP (\W) + 4 |\W|^2 + 2 |\overline{W}_{0}|^2 - 2 (\lo,B) + 14 (\lo^2,\W) - 2 \lo^{ab} \lo^{cd} \overline{W}_{cabd}. \end{equation} Here $B$ is a certain conformally invariant symmetric bilinear form of weight $-1$ which will be defined in \eqref{Bach-def}. Here we took into account that in \cite{GGHW} the signs of the components of the curvature tensor and the Weyl tensor are opposite to ours. All terms in \eqref{GGHW-terms} are conformally invariant. The difference of both formulas is \begin{align}\label{diff-g} & 2 ( \Delta (|\lo|^2) - 2 \delta \delta (\lo^2)) + 2 |\delta(\lo)|^2 \notag \\ & + 4 (\lo,\Hess(H)) + 6 H \tr(\lo^3) - |\lo|^4 \notag \\ & + 2 (\lo, \nabla (\overline{\Ric}_0)) + |\lo|^2 \overline{\Ric}_{00} + 3 (\lo^2,\overline{\Ric}) - 6 |\lo|^2 \bar{\J} - 6 (\lo^2,\Rho) + 2 |\lo|^2 \J \notag \\ & + 2 (\bar{\Rho},\W) + 2 |\W|^2 + 2 \delta \delta (\W) - 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} - 2H (\lo,\W) + 8 (\lo^2,\W) - \star. \end{align} \begin{lem}\label{van-equiv} The sum \eqref{diff-g} vanishes. \end{lem} In other words, Proposition \ref{B3-main} is equivalent to \cite[Proposition 1.1]{GGHW}. The proof of this result will also establish the equivalence to Theorem \ref{main1}. \begin{rem}\label{equiv-van} Lemma \ref{van-equiv} holds for a flat background metric. In this case $\star = 0$. In fact, the identity \eqref{Fial} implies \begin{equation}\label{JP} 6(\lo^2,\Rho) - 2 \J |\lo|^2 = 6 H \tr(\lo^3) - |\lo|^4. \end{equation} By $\delta(\lo) = 2 dH$ (Codazzi-Mainardi), the sum \eqref{diff-g} equals \begin{align*} & 2 ( \Delta (|\lo|^2) - 2 \delta \delta (\lo^2)) + 8 |dH|^2 \\ & + 4 (\lo,\Hess(H)) + 6 H \tr (\lo^3) - |\lo|^4 - 6 H \tr(\lo^3) + |\lo|^4 \\ & = 2 ( \Delta (|\lo|^2) - 2 \delta \delta (\lo^2)) + 8 |dH|^2 + 4 (\lo,\Hess(H)). \end{align*} The identity \eqref{basic-div} shows that this sum vanishes. \end{rem} A key role in the argument in Remark \ref{equiv-van} is played by the formula \eqref{basic-div} for the divergence term $\Delta (|\lo|^2) - 2 \delta \delta (\lo^2)$. Lemma \ref{diff-key-g} extends this result to general backgrounds. 
We also need the following curved analog of \eqref{JP}. \begin{lem}\label{FH} If $n=3$, then it holds $$ 6(\lo^2,\Rho) - 2 |\lo|^2 \J = 6 H \tr(\lo^3) - |\lo|^4 + 6 (\lo^2,\bar{\Rho}) - 2 |\lo|^2 \bar{\J} + 2 |\lo|^2 \bar{\Rho}_{00} - 6(\lo^2,\W). $$ \end{lem} \begin{proof} The identity \eqref{Fial} yields $$ \iota^* \bar{\Rho} - \Rho = \lo^2 - \frac{1}{4} |\lo|^2 h - H \lo - \frac{1}{2} H^2 h + \W. $$ Taking the trace yields the Gauss identity $$ \bar{\J} - \bar{\Rho}_{00} - \J = |\lo|^2 -\frac{3}{4} |\lo|^2 - \frac{3}{2} H^2 = \frac{1}{4} |\lo|^2 - \frac{3}{2} H^2. $$ These relations imply the assertion. \end{proof} Now, by Lemma \ref{FH}, \eqref{diff-g} simplifies to \begin{align}\label{diff-h} & 2 ( \Delta (|\lo|^2) - 2 \delta \delta (\lo^2)) + 2 |\delta(\lo)|^2 + 4 (\lo,\Hess(H)) \notag \\ & + 2 (\lo, \nabla (\overline{\Ric}_0)) + |\lo|^2 \overline{\Ric}_{00} + 3 (\lo^2,\overline{\Ric}) - 4 |\lo|^2 \bar{\J} -6 (\lo^2,\bar{\Rho}) - 2 |\lo|^2 \bar{\Rho}_{00} \notag \\ & + 2 (\bar{\Rho},\W) + 2 |\W|^2 + 2 \delta \delta (\W) - 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} - 2H (\lo,\W) + 14 (\lo^2,\W) - \star \end{align} But $$ |\lo|^2 \overline{\Ric}_{00} + 3 (\lo^2,\overline{\Ric}) - 4 |\lo|^2 \bar{\J} - 6 (\lo^2,\bar{\Rho}) - 2 |\lo|^2\bar{\Rho}_{00} = 0, $$ i.e., the second last line of \eqref{diff-h} reduces to $2 (\lo, \nabla (\overline{\Ric}_0))$. Therefore, \eqref{diff-g} further simplifies to \begin{align*} & 2 ( \Delta (|\lo|^2) - 2 \delta \delta (\lo^2)) + 2 |\delta(\lo)|^2 + 4 (\lo,\Hess(H)) + 2 (\lo, \nabla (\overline{\Ric}_0)) \\ & + 2 (\bar{\Rho},\W) + 2 |\W|^2 + 2 \delta \delta (\W) - 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} - 2H (\lo,\W) + 14 (\lo^2,\W) - \star. \end{align*} Now, by Lemma \ref{diff-key-g}, this sum equals \begin{align}\label{a-full} & 4 \lo^{ij} \nabla^k \overline{W}_{ikj0} + 2 |\overline{W}_{0}|^2 \notag \\ & + 2 (\bar{\Rho},\W) + 2 |\W|^2 + 2 \delta \delta (\W) - 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} - 2 H (\lo,\W) + 14 (\lo^2,\W) - \star. \end{align} Now we apply the identity $$ (\bar{\Rho},\W) = (\Rho,\W) + (\lo^2,\W) - H (\lo,\W) + |\W|^2 $$ (see \eqref{Fial}). Hence the sum \eqref{a-full} equals \begin{align*} & 2 \delta \delta (\W) + 2 (\Rho,\W) - 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} - 4 H(\lo,\W) \\ & + 16 (\lo^2,\W) + 4 |\W|^2 + 2 |\overline{W}_{0}|^2 + 4 \lo^{ij} \nabla^k \overline{W}_{ikj0} - \star. \end{align*} Therefore, Lemma \ref{van-equiv} holds true iff \begin{align}\label{star} \star & = 2 \LOP(\W) - 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} + 4 \lo^{ij} \nabla^k \overline{W}_{ikj0} \notag \\ & - 4 H(\lo,\W) + 16 (\lo^2,\W) + 4 |\W|^2 + 2 |\overline{W}_{0}|^2. \end{align} Equivalently, Lemma \ref{van-equiv} holds true iff \begin{align}\label{final} & -2 (\lo,B) + 14 (\lo^2,\W) - 2 \lo^{ij} \lo^{kl} \overline{W}_{kijl} \notag \\ & = - 2 \lo^{ij} \bar{\nabla}_0(\overline{W})_{0ij0} + 4 \lo^{ij} \nabla^k \overline{W}_{ikj0} - 4 H(\lo,\W) + 16 (\lo^2,\W) \end{align} It remains to prove \eqref{final}. As a preparation, we observe \begin{lem}\label{LW-T} Let $n=3$. Then $$ \lo^{ij} \bar{\nabla}^k \overline{W}_{ikj0} = \lo^{ij} \nabla^k \overline{W}_{ikj0} + (\lo^2,\W) - 3 H(\lo,\W) + \lo^{ij} \lo^{kl} \overline{W}_{kijl}. 
$$ \end{lem} \begin{proof} By $\bar{\nabla}_i (\partial_j) = \nabla_i(\partial_j) - L_{ij} \partial_0$ and $\bar{\nabla}_k(\partial_0) = L_k^m \partial_m$, we find \begin{align*} \bar{\nabla}^k \overline{W}_{ikj0} & = \nabla^k \overline{W}_{ikj0} - L^{kl} \overline{W}_{ikjl} + L^k_i \overline{W}_{0kj0} + 3 H \overline{W}_{i0j0} + L^k_j \overline{W}_{ik00} \\ & = \nabla^k \overline{W}_{ikj0} + L^{kl} \overline{W}_{kijl} + L^k_i \overline{W}_{0kj0} - 3 H \overline{W}_{0ij0} \\ & = \nabla^k \overline{W}_{ikj0} + \lo^{kl} \overline{W}_{kijl} - H \overline{W}_{0ij0} + \lo_i^k \overline{W}_{0kj0} + H \overline{W}_{0ij0} - 3 H \overline{W}_{0ij0} \\ & = \nabla^k \overline{W}_{ikj0} + \lo^{kl} \overline{W}_{kijl} + \lo_i^k \overline{W}_{0kj0} - 3 H \overline{W}_{0ij0}. \end{align*} The assertion follows by contraction with $\lo^{ij}$. \end{proof} \begin{lem}\label{Bach-relation} It holds \begin{equation}\label{Bach-deco} (\lo,B) = \lo^{ij} \bar{\nabla}^0 (\widebar{W})_{0ij0} + 2 H (\lo,\W) - 2 \lo^{ij} \nabla^k \overline{W}_{jki0} - (\lo^2,\W) - \lo^{ij}\lo^{kl} \overline{W}_{kijl}. \end{equation} \end{lem} \index{$B$ \quad hypersurface Bach tensor} \begin{proof} We first restate the definition of $B$ in our conventions:\footnote{We recall that our signs of the components of $\overline{W}$ are opposite.} \begin{align}\label{Bach-def} B_{ij} & = \bar{C}_{0(ij)} - H \W_{ij} + \nabla^k \overline{W}_{0(ij)k} \\ & = \bar{\nabla}^k (\overline{W})_{0(ij)k} - H \W_{ij} + \nabla^k \overline{W}_{0(ij)k}. \notag \end{align} Here \index{$C$ \quad Cotton tensor} $$ (n-3) C_{ijk} \st \nabla^l (W)_{ijkl} $$ defines the Cotton tensor $C$ on a manifold of dimension $n$. It satisfies the conformal transformation law $$ \hat{C}_{ijk} = C_{ijk} + W_{ijk \grad(\varphi)}. $$ We emphasize that, in the definition \eqref{Bach-def} of $B$, the index $k$ in the first term runs over the tangential {\em and} the normal vectors. $(ij)$ denotes symmetrization. We first verify that the symmetric tensor $B$ satisfies the conformal transformtion law $e^{\varphi} \hat{B} = B$. For this purpose, we first observe that $$ e^\varphi \hat{\bar{C}}_{\hat{0}ij} = \hat{\bar{C}}_{0ij} = \bar{C}_{0ij} + \overline{W}_{0ij\grad (\varphi)} = \bar{C}_{0ij} + \overline{W}_{0ij\grad^t (\varphi)} + \W_{ij} \partial_0(\varphi), $$ where $\hat{0} = \hat{\partial}_0 = e^{-\varphi} \partial_0$ and $\grad^t(\varphi)$ is the tangential component of the gradient. We also recall that $$ e^{-2\varphi} \hat{\widebar{W}}_{ijkl} = \widebar{W}_{ijkl}, \quad \hat{\bar{\W}}_{ij} = \bar{\W}_{ij} \quad \mbox{and} \quad e^\varphi \hat{H} = H + \partial_0(\varphi). $$ We calculate \begin{align*} e^{2\varphi} \hat{\nabla}^k \hat{\widebar{W}}_{\hat{0}ijk} & = \partial^k (e^\varphi \widebar{W}_{0ijk}) \\ & - e^{\varphi} \widebar{W}(\partial_0,\hat{\nabla}^k(\partial_i),\partial_j,\partial_k) - e^{\varphi} \widebar{W}(\partial_0,\partial_i, \hat{\nabla}^k(\partial_j),\partial_k) - e^{\varphi} \widebar{W}(\partial_0,\partial_i, \partial_j, \hat{\nabla}^k(\partial_k)). \end{align*} Now the general transformation law $$ \hat{\nabla}_i(\partial_j) = \nabla_i(\partial_j) + \partial_i(\varphi) \partial_j + \partial_j(\varphi) \partial_i - g_{ij} \grad(\varphi) $$ implies that in general dimensions ($\dim (M) = n$)\footnote{That identity corrects \cite[(2.10)]{GGHW}.} $$ e^{\varphi} \hat{\nabla}^k \hat{\widebar{W}}_{\hat{0}ijk} = \nabla^k \widebar{W}_{0ijk} + (n-4) \widebar{W}_{0ij\grad^t(\varphi)} - \widebar{W}_{0\grad^t(\varphi)ij}. 
$$ Hence for $n=3$ we find $$ e^{\varphi} \hat{\nabla}^k \hat{\widebar{W}}_{\hat{0}(ij)k} = \nabla^k \widebar{W}_{0(ij)k} - \widebar{W}_{0(ij)\grad^t(\varphi)}. $$ Therefore, $$ e^\varphi \hat{B}_{ij} = B_{ij} + \overline{W}_{0(ij)\grad^t (\varphi)} + \W_{ij} \partial_0(\varphi) - \W_{ij} \partial_0(\varphi) - \widebar{W}_{0(ij)\grad^t(\varphi)} = B_{ij}. $$ Now the definition of $B$ gives \begin{align*} (\lo,B) & \st \lo^{ij} \bar{\nabla}^k (\overline{W})_{0ijk} - H (\lo,\W) + \lo^{ij} \nabla^k \overline{W}_{0ijk} \\ & = - \lo^{ij} \bar{\nabla}^k (\overline{W})^t_{jki0} + \lo^{ij} \bar{\nabla}^0 (\overline{W})_{0ij0} - H (\lo,\W) - \lo^{ij} \nabla^k \overline{W}_{jki0}, \end{align*} where the superscript $t$ in the first sum indicates that indices are only tangential. Now Lemma \ref{LW-T} yields \begin{align*} (\lo,B) & = \lo^{ij} \bar{\nabla}^0 (\overline{W})_{0ji0} + 2 H(\lo,\W) - 2 \lo^{ij} \nabla^k \overline{W}_{jki0} - (\lo^2,\W) - \lo^{ij} \lo^{kl} \overline{W}_{kijl} . \end{align*} The proof of \eqref{Bach-deco} is complete. \end{proof} Lemma \ref{Bach-relation} shows that $$ - 2(\lo,B) = -2 \lo^{ij} \bar{\nabla}^0 (\widebar{W})_{0ij0} - 4 H (\lo,\W) +4 \lo^{ij} \nabla^k \overline{W}_{jki0} + 2 (\lo^2,\W) +2 \lo^{ij}\lo^{kl} \overline{W}_{kijl}. $$ Hence $$ -2 (\lo,B) + 14 (\lo^2,\W) - 2 \lo^{ij} \lo^{kl} \overline{W}_{kijl} = 16 (\lo^2,\W) -2 \lo^{ij} \bar{\nabla}^0 (\widebar{W})_{0ij0} -4 H (\lo,\W) + 4 \lo^{ij} \nabla^k \overline{W}_{jki0} $$ (note the cancellation!). This proves \eqref{final} and hence Lemma \ref{van-equiv}. Now in order to finish the {\bf proof of Theorem \ref{main1}}, it suffices to combine \eqref{B3-GW} with \eqref{star}. \begin{rem}\label{GGHW-wrong} The formula for $\B_3$ in the published version of \cite{GGHW} reads $$ 12 \B_3 = 4 \LOP ((\lo^2)_\circ) + 2 \LOP (\Fo) - 2 (\lo,B) + |\lo|^4 + 4 (\Fo,\JF) + 2(\Fo,\lo^2) + 2|\overline{W}_0|^2. $$ By $\Fo = (\lo^2)_\circ + \W$, this formula is equivalent to \begin{align*} 12 \B_3 & = 6 \LOP((\lo^2)_\circ) + 2 \LOP(\W) - 2 (\lo,B) + |\lo|^4 + 2|\overline{W}_0|^2 \\ & + 2 ((\lo^2)_\circ + \W,\lo^2) + 4 ((\lo^2)_\circ + \W, \lo^2 - \frac{1}{4} |\lo|^2 h + \W). \end{align*} The second line simplifies to $$ |\lo|^4 + 10 (\lo^2,\W) + 4 |\W|^2. $$ It follows that the resulting formula for $12 \B_3$ differs from \eqref{B3-GW}, \eqref{GGHW-terms}. \end{rem} Finally, we note that for a conformally flat background Theorem \ref{main1} states that $$ 6 \B_3 = 3 \LOP ((\lo^2)_\circ) + |\lo|^4 = 3 \delta \delta ((\lo^2)_\circ) + 3 ((\lo^2)_\circ,\Rho) + |\lo|^4. $$ Lemma \ref{NEW3a} implies that this formula is equivalent to \begin{equation}\label{origin} 6 \B_3 = \Delta (|\lo|^2) - |\nabla \lo|^2 + 3/2 |\delta(\lo)|^2 - 2 \J |\lo|^2 + |\lo|^4 \end{equation} (as also stated in \cite[Proposition 2.10]{GW-LNY}). \begin{comment} We quickly confirm that equivalence. In fact, the equivalence means that $$ 4 (\lo,\Delta(\lo)) + 3 |\nabla \lo|^2 - 3 \delta \delta (\lo^2) - |\nabla \lo|^2 + 3/2 |\delta(\lo)|^2 - \J |\lo|^2 - 3 (\lo,\Rho) = 0. $$ But this identity follows by combining Lemma \ref{Id-basic}, Lemma \ref{kappa-1a} and Corollary \ref{Laplace-L}. \end{comment} \section{Variational aspects}\label{var} \index{$\var$ \quad variation} \index{$\mathcal{W}_3$ \quad higher Willmore functional} Let $\iota: M^3 \hookrightarrow X^4$ be an embedding.
In this section, we prove that the conformally invariant equation $\B_3 = 0$ is the Euler-Lagrange equation of the conformally invariant functional \begin{equation}\label{W3} \mathcal{W}_3(\iota) \st \int_{\iota(M)} (\tr(\lo^3) + (\lo,\W)) dvol \end{equation} under normal variations of the embedding $\iota$. Let $u \in C^\infty(M)$ and $\partial_0$ be a unit normal field of $M$. We set $\iota_t (m) = \exp (t u(m) \partial_0)$, where $\exp$ is the exponential map. Then $\iota_0 = \iota$ and $\iota_t$ is a variation of $M$ with variation field $u \partial_0$. Let $\mathcal{W}_3(\iota_t)$ be the analogous functional for $\iota_t$ and define $$ \var (\mathcal{W}_3)[u] \st (d/dt)|_0 (\mathcal{W}_3(\iota_t)). $$ \begin{thm}\label{variation} It holds \begin{equation*} -\var(\mathcal{W}_3)[u] = 6 \int_M u \B_3 dvol. \end{equation*} \end{thm} This result reproves \cite[Proposition 1.2]{GGHW}. Our arguments are classical and differ substantially from those in the reference (see the comments after the proof). \begin{proof} We first note that the variation of $\int_M \tr (\lo^3) dvol$ has been determined in \cite[Lemma 13.9.1]{JO} for conformally flat backgrounds. The given arguments easily extend to the general case and yield \begin{align*} - \var\left(\int_M \tr (\lo^3) dvol_h\right)[u] & = 3 \int_M u \left(\delta \delta ((\lo^2)_\circ) + (\Rho,(\lo^2)_\circ) + 2 (\lo^2,\W) + \frac{1}{3} |\lo|^4\right) dvol_h \\ & = \int_M u \left( 3 \LOP ((\lo^2)_\circ) + 6 (\lo^2,\W) + |\lo|^4\right) dvol_h. \end{align*} In the second part of the proof, we determine the variation of $\int_M (\lo,\W) dvol_h$. We write the integrand as $h^{ai} h^{bj} \lo_{ab} \W_{ij}$ and apply the well-known variation formulas \cite[Theorem 3-15]{A}, \cite[Theorem 3.2]{HP} \begin{align*} \var (h)[u] & = 2 u L, \\ \var (L)[u] & = - \Hess(u) + u L^2 - u \bar{\G}, \\ 3 \var (H)[u] & = - \Delta (u) - u |L|^2 - u \overline{\Ric}_{00} \end{align*} and $$ \var(dvol_h)[u] = 3 u H dvol_h. $$ It follows that the variation is given by the integral of the sum of \begin{equation}\label{V1} - 4u (\lo^2,\W) - 4 u H (\lo, \W) \qquad \mbox{(by variation of the metric)}, \end{equation} \begin{align}\label{V2} (\var(\lo)[u],\W) & = \W^{ij} (\var(L)_{ij} - H \var(h)_{ij}) \notag \\ & = \W^{ij} (-\Hess_{ij}(u) + u (L^2)_{ij} - u \bar{\G}_{ij}) - 2 u H \W^{ij} L_{ij} \notag \\ & = - (\Hess (u), \W) + u (\lo^2,\W) - u (\bar{\G},\W), \end{align} \begin{align}\label{V3} (\lo,\var(\W)[u]) & = u \lo^{ij} \bar{\nabla}_0 (\overline{W})_{0ij0} + u \lo^{ij} (L_i^k \overline{W}_{0kj0} + L^{k}_j \overline{W}_{0ik0}) \notag \\ & = u \lo^{ij} \bar{\nabla}_0 (\overline{W})_{0ij0} + 2 u (\lo^2,\W) + 2 u H(\lo,\W) \end{align} and \begin{align}\label{V5} & -2 \lo^{ij} \overline{W}_{\grad(u) ij0} \qquad \mbox{(by variation of the normal vector)}, \notag \\ & 3 u H (\lo,\W) \qquad \mbox{(by variation of the volume form)}. \end{align} Now using partial integration we obtain \begin{align*} & \var\left(\int_M (\lo,\W) dvol_h \right)[u] \\ & = \int_M u \left[ - \delta \delta (\W) - (\lo^2,\W) + H (\lo,\W) - (\bar{\G},\W) + \lo^{ij} \bar{\nabla}_0 (\overline{W})_{0ij0} \right] dvol_h \\ & - 2 \int_M \lo^{ij} \overline{W}_{\grad(u)ij0} dvol_h. 
\end{align*} Since $$ (\bar{\G},\W) = (\bar{\Rho},\W) + (\W,\W) = (\Rho,\W) - H (\lo,\W) + (\lo^2,\W) + 2 |\W|^2 $$ by \eqref{Fial}, we get \begin{align*} & \var\left(\int_M (\lo,\W) dvol_h \right)[u] \\ & = \int_M u \left[ - \LOP(\W) + 2H (\lo,\W) - 2 (\lo^2,\W) - 2 |\W|^2 + \lo^{ij} \bar{\nabla}_0 (\overline{W})_{0ij0} \right] dvol_h \\ & - 2 \int_M (du,\lo^{ij} \overline{W}_{\cdot ij0}) dvol_h. \end{align*} By partial integration, we find \begin{align*} 2 \int_M (du,\lo^{ij} \overline{W}_{\cdot ij0}) dvol_h & = - 2 \int_M u \delta (\lo^{ij} \overline{W}_{\cdot ij0}) dvol_h = - 2 \int_M u \nabla^k (\lo^{ij} \overline{W}_{kij0}) dvol_h \\ & = - 2 \int_M u \lo^{ij} \nabla^k \overline{W}_{kij0} dvol_h + \int_M u |\overline{W}_0|^2 dvol_h \end{align*} using the trace-free Codazzi-Mainardi equation \eqref{tf-CM} (similarly as on page \pageref{total}). Summarizing these results proves the claim. \end{proof} Graham's theorem \cite[Theorem 3.1]{Graham-Yamabe} and \cite[(3.8)]{JO} imply that the variation of $\int v_3 dvol$ equals $4 \int u \B_3 dvol$. Here the singular Yamabe renormalized volume coefficient $v_3$ (as defined in \cite{Graham-Yamabe}) satisfies $$ 12 \int_M v_3 dvol = - \int_M \mathbf{Q}_3 dvol = - 8 \int_M (\tr(\lo^3) + (\lo,\W)) dvol = - 8 \mathcal{W}_3, $$ where ${\bf{Q}}_3$ is the extrinsic $Q$-curvature (see \cite[Example 13.10.2 and (13.10.7)]{JO}). In other words, the variation of $\mathcal{W}_3$ equals $-6 \int u \B_3 dvol$. This shows that Theorem \ref{variation} fits with Graham's theorem. On the other hand, \cite[Proposition 1.2]{GGHW} states that the variation of $\mathcal{W}_3$ equals $6 \int u \B_3 dvol$. The discrepancy of the sign is due to the altered definition of variations. Note that the notion of variation exploited in \cite{GGHW} leads to the result $\var (L)[u] = \Hess (u)$ (see \cite[(3.8)]{GGHW}). This formula differs from the usual formula used above.
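\begin{rem} As a simple consistency check (added here for illustration only; it is not used elsewhere), consider a round sphere in the flat background, i.e., $\iota : S^3_r \hookrightarrow \R^4$ of radius $r$, with the normal chosen so that $L = \frac{1}{r} h$. Then $$ H = \frac{1}{r}, \qquad \lo = L - H h = 0, \qquad dH = 0, \qquad \W = 0, $$ so every summand of \eqref{B3-flat-new} vanishes and Corollary \ref{B3-inter-corr} gives $\B_3 = 0$. Likewise $\mathcal{W}_3 = \int_{S^3_r} (\tr(\lo^3) + (\lo,\W)) \, dvol = 0$, so round spheres are (trivial) critical points of $\mathcal{W}_3$, in accordance with Theorem \ref{variation}. \end{rem}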
\section{Introduction} Let $P$ be a set of $n$ points in $\mathbb{R}^2$. We study a new variant of outlier detection for convex hull: Given $P$ and a positive integer $w<n$, find a subset $C \subset P$ of points as the {\em outliers}, such that $CH(P\setminus C)$ has the smallest number of vertices, and $0\le |C|\le n-w$, where $|C|$ denotes the cardinality of $C$; see Figure~\ref{fig:def}. One of the motivations is reducing the input sample size to $w$ for support vector machine (SVM) classifiers~\cite{wang2013online}. \begin{definition}[Min-Size Convex Hull with Outliers (MinCH)]\label{def:problem} Let $P=\{p_1,\ldots,p_n\}$ be a set of $n$ points. For a positive integer $w$, we are interested in computing $\underset{\substack{C\subset P,\\|C| \le n-w}}{\arg \min}\; |CH(P\setminus C)|$. Let $P^*_v$ denote $CH(P\setminus C)$. \end{definition} \begin{figure}[h] \centering \includegraphics[scale=0.8]{PDef} \caption{Problem definition: (a) A set $P$ of points with $n-w=2$; (b) $CH(P)$; (c) $CH(P \setminus C)$ with $C=\{p_i,p_j\}$; the objective function is minimizing the number of vertices of $CH(P \setminus C)$. } \label{fig:def} \end{figure} A closely related problem is detecting at most $n-w$ points as outliers in $P$, such that after removing these outliers, the convex hull of the remaining points has the smallest area/perimeter. This problem can be solved in $O(n \log n)$ time for a constant number of outliers~\cite{atanassov2009algorithms}. Since we improve the algorithm of~\cite{eppstein1992finding} for computing a minimum area $w$-gon, we also define this problem in the following. \begin{definition}[Min-area $w$-gon~\cite{eppstein1992finding}]\label{def:minarea} Let $P=\{p_1,\ldots,p_n\}$ be a set of $n$ points. For a positive integer $w<n$, we are interested in computing a $w$-gon $P^*_a$ with vertices chosen from $P$, such that $P^*_a$ has the smallest possible area among all choices. \end{definition} \begin{figure}[t] \centering \includegraphics[scale=0.8]{minarea} \caption{Problem definition on a set $P$ of 10 points: For $w=4$, the narrow rectangle is the solution to the Min-area, and the trapezoid is the solution to the Min-perimeter problem. } \label{fig:defarea} \end{figure} Min-perimeter $w$-gon would be defined analogously. See \Cref{fig:defarea} for an illustration. In~\cite{eppstein1992finding}, a dynamic programming algorithm with $O(wn^3)$ time and $O(wn^2)$ space is given for Min-area $w$-gon, which carries over to solve any problem whose objective function is a {\em monotone decomposable function (MDF)}, i.e., an objective function that can be computed incrementally and that is monotone (either increasing or decreasing): \begin{definition} [Monotone Decomposable Function~\cite{eppstein1992finding}]\label{def:decomposable} A weight function $W$ is called monotone decomposable if and only if for any polygon $P$ and any index~$2<i<m$ \begin{align*} W(P)= M (W(\langle p_1,\ldots,p_i\rangle),W(\langle p_1,p_i,p_{i+1}, \ldots, p_m\rangle),p_1,p_i) \end{align*} where $M$ can be computed in constant time, and $M$ is monotone in its first argument $W(\langle p_1,\ldots,p_i\rangle)$, which also means it is monotone in its second argument. \end{definition} So, computing the minimum/maximum perimeter $w$-gon, computing an optimal empty $w$-gon or computing an optimal polygon with $w$ as the number of vertices plus the interior points can be solved with the same technique.
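To make Definition~\ref{def:decomposable} concrete, consider the area function: cutting a polygon along the chord $p_1p_i$ splits it into two sub-polygons whose areas simply add up, so here $M(x,y,p_1,p_i)=x+y$, which is monotone in both arguments. The following small sketch (our illustration; the point set and helper names are ours and not part of the cited algorithms) verifies this decomposition numerically:
\begin{verbatim}
# Area is monotone decomposable: cutting along the chord P[0]-P[i]
# splits the polygon into two pieces whose areas add up.
def shoelace_area(poly):
    """Unsigned area of a simple polygon, given as (x, y) vertices."""
    s = 0.0
    for k in range(len(poly)):
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

P = [(0, 0), (4, 0), (5, 2), (2, 4), (0, 3)]  # convex, counter-clockwise
i = 2                                         # cut along the chord P[0]-P[2]
P1, P2 = P[:i + 1], [P[0]] + P[i:]
assert abs(shoelace_area(P) - (shoelace_area(P1) + shoelace_area(P2))) < 1e-9
\end{verbatim}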
Finding a possibly non-convex polygon of size $w$ with minimum/maximum area is known as {\em polygonalization}, and is NP-complete even in $\mathbb{R}^2$~\cite{fekete2000simple}. We note that it is not always possible to find a strictly convex $w$-gon on any point set~\cite{mitchell1995counting}. If all the points of $P$ are on $CH(P)$, i.e. for points in convex position, computing the maximum area/perimeter $k$-gon takes $O(kn+n\log n)$ time~\cite{van2020maximum,boyce1985finding,aggarwal1987geometric}, and computing the minimum perimeter $k$-gon takes $O(n \log n+k^4 n)$ time~\cite{dobkin1983finding,aggarwal1991finding}. The general idea of~\cite{eppstein1992finding} is to consider all optimal convex $(k-1)$-gons that can be extended by adding one more ear. The presented algorithm uses several preprocessing steps on the points; e.g., the clockwise ordering of all the vertices around each input point can be computed in $O(n^2)$ time~\cite{edelsbrunner1989topologically}. Another related problem is tabulating the number of convex $w$-gons for $w=3,\ldots,m$, which takes $O(mn^3)$ time~\cite{mitchell1995counting}. We refer to~\cite{parhami2006introduction} for the standard models and definitions of distributed computations in the exclusive read exclusive write parallel random-access machine (EREW PRAM). \subsubsection*{Contribution} We improve the $O(wn^3)$ time dynamic programming algorithm of~\cite{eppstein1992finding} to $O(n^3 \log w)$, which improves the running time for all monotone decomposable weight functions whose input is a convex polygon (\Cref{sec:mdwf,sec:extension}). Our technique can also improve the running time of the dynamic programming algorithms in \cite{mitchell1995counting}. We discuss an easy extension of the algorithm for the class of monotone decomposable weight functions (with convex input) to parallel settings (\Cref{sec:crew}). \Cref{table:results} gives a summary of the new and known results. \begin{table}[t] \centering \begin{tabular}{|p{2.5cm}|c|c|c|c|c|} \hline Measure &$\#$ Excluded pts. & Time & Space & Apprx. & Ref\\ \hline\hline Min $\mathcal{A}/\mathcal{P}$ & $n-w$ & $O(wn^3)$& $O(wn^2)$ & exact &\cite{eppstein1992finding} \\ Min $\mathcal{A}/\mathcal{P}$ & $O(1)$ & $O(n \log n)$& $O(n)$ & exact &\cite{atanassov2009algorithms} \\ Min $\#$ of interior points& $n-w$ & $O(wn^3)$& $O(wn^2)$ & exact &\cite{eppstein1992finding} \\ MDF & $n-w$ & $O(wn^3+G(n))$& $O(wn^2)$ & exact &\cite{eppstein1992finding} \\ MDF of convex polygons & $n-w$ & $O(n^3\log w)$& $O(n^3\log w)$ & exact & Thm. \ref{thm:dp}\\ Min $\#$ of CH vertices & $n-w$ & $O(n^3 \log w)$& $O(n^3\log w)$ & exact &Thm. \ref{thm:dp} \\ Min $\mathcal{A}/\mathcal{P}$ & $n-w$ & $O(n^3\log w)$ & $O(n^3\log w)$ & exact &Thm. \ref{thm:dp} \\ \hline \end{tabular} \caption{New and known results. $\mathcal{A}$ and $\mathcal{P}$ stand for the area and the perimeter, respectively. $G(n)$ is the time complexity of computing the decomposable function on at most $O(n^3)$ triangles.} \label{table:results} \end{table} \section{The Dynamic Programming of~\cite{eppstein1992finding}} \label{ap:eppalg} Since the original algorithm in~\cite{eppstein1992finding} is formulated for minimizing the area of a $w$-gon as the weight function, we recall it with its original description. For two points $p_i$ and $p_j$, let $H_{i,j}$ denote the half-plane to the right of the directed line through the line segment $\overrightarrow{p_ip_j}$.
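The half-plane test, which the recurrences below use repeatedly, reduces to the sign of a 2D cross product. The following sketch (our illustration; the function names and the non-strict boundary convention are ours) spells it out:
\begin{verbatim}
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_H(p_i, p_j, p_l):
    """True iff p_l lies in H_{i,j}, i.e. to the right of (or on)
    the directed line from p_i to p_j."""
    return cross(p_i, p_j, p_l) <= 0
\end{verbatim}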
\textbf{Preprocess: Angular sort around points} For any point $p_i$, we first compute the sorted order of $P\setminus\{p_i\}$ in counter-clockwise (CCW) order around $p_i$ and denote it by $\phi(p_i)$. The area of a triangle $p_ip_jp_l$ is given by $area(p_ip_jp_l)=\frac{1}{2}|\overrightarrow{p_ip_l}\times \overrightarrow{p_ip_j}|=\frac{1}{2}|\overrightarrow{p_ip_l}||\overrightarrow{p_ip_j}||\sin \widehat {p_jp_ip_l}|$. The authors make a $4$-dimensional table $A$. Formally, $A[p_i,p_j,p_l,m]$ denotes the minimum area of an (at most) $m$-gon with $p_i$ as the bottommost vertex, $p_j$ as the next vertex of $p_i$ in counter-clockwise order, and all the vertices lying on the same side of $p_jp_l$ as $p_i$. In the initialization, $A[*,*,*,2]=2$ ($*$ means any point in $P$), and they start the recursion from $A[*,*,*,3]$. In each iteration, the following cases should be considered. If $p_l \in H_{i,j}$, we cannot add a new vertex, and $A[p_i,p_j,p_l,m]= A[p_i,p_j,pred(p_l),m]$, where $pred(p_l)$ is the last processed vertex in $\phi(p_j)$. If $p_l \notin H_{i,j}$, we may add one more vertex. So we have $$\displaystyle A[p_i,p_j,p_l,m]=\min_{p_l \in P\setminus H_{i,j}} \big(A[p_i,p_l,p_j,m-1]+area(p_ip_jp_l),A[p_i,p_j,pred(p_l),m]\big),$$ where $p_l$ is the first unprocessed predecessor neighbour of $p_j$ in $\phi(p_j)$; see Figure~\ref{fig:DP}. In the first term, $p_l$ is considered as a new vertex, while in the second term, omitting $p_l$ is more optimal. The total number of vertices considered for $P^*_a$ is at most $w$, so we recurse for $m=3,\ldots,w$. A solution $P^*_a$ is obtained as the minimum of $A[*,*,*,w]$. This algorithm is outlined in~\Cref{alg:DParea} and runs in $O(wn^3)$ time. \begin{figure}[t] \centering \includegraphics[scale=0.8]{DP} \caption{For any candidate $m$-gon with a fixed $p_i,p_j$, we check whether $p_l$ can be considered as a vertex (not lying in $H_{i,j}$), and also which of $p_l$ or $p_{l-1}$ determines the optimal choice.} \label{fig:DP} \end{figure} \begin{algorithm}[h] \caption{Min-area $w$-gon~\cite{eppstein1992finding}} \label{alg:DParea} \begin{algorithmic}[1] \Require{A set $P$ of points, $w>0$} \Ensure{A Min-area $w$-gon of $P$} \For{$p_i \in P$} \State{$A[p_i,p_l,p_j,2]=2,\quad \forall p_j,p_l\in P$} \For{$m=3,\ldots,w$} \For{$p_j\in P$, $p_l\in P$: $\overrightarrow{p_ip_l}\times \overrightarrow{p_ip_j} \leq0$} \State{$v=\min_{p_l \in P\setminus H_{i,j}} A[p_i,p_l,p_j,m-1]+area(p_ip_jp_l)$}\label{line:cp} \EndFor \State{$\displaystyle A[p_i,p_j,p_l,m]=v$} \EndFor \EndFor\\ \Return{$\min_{i,j,l} A[p_i,p_j,p_l,w]$} \end{algorithmic} \end{algorithm} \section{Exact Algorithms for MinCH} \label{sec:MainDP} As $P^*_v$ can be decomposed into a triangle and a $(w-1)$-gon with the optimal number of vertices, we can find $P^*_v$ recursively. We use this idea combined with a divide-and-conquer technique to improve the running time of~\cite{eppstein1992finding}. We first prove that MinCH is an MDF. \begin{lemma} MinCH is a monotone decomposable function. \end{lemma} \begin{proof} First observe that one can cut $P^*_v$ along any chord $p_1p_i$, where $p_1,p_i \in P^*_v$, and that summing the numbers of vertices of the two induced sub-polygons and subtracting 2 (using the general position assumption in~\cite{eppstein1992finding} that no three points are collinear) yields the number of vertices of $P^*_v$.
Hence, $$ 24\,$$\end{proof} is not the claim; rather $$ M(|CH(p_1,\ldots,p_i)|,|CH(p_1,p_i,p_{i+1}, \ldots, p_w)|,p_1,p_i)= |CH(p_1,\ldots,p_i)|+|CH(p_1,p_i,p_{i+1}, \ldots, p_w)|-2=w. $$ \end{proof} \subsection{Adjusting the Dynamic Programming of~\cite{eppstein1992finding} to MinCH} \label{ap:minchrote} To adjust \Cref{alg:DParea} to minimize the number of vertices of the convex hull, the recursion becomes $A[p_i,p_j,p_l,m]= \underset {p_l \notin H_{i,j}} {\min}\big( A[p_i,p_l,p_j,m-1]+1,A[p_i,p_j,pred(p_l),m]\big),$ where $p_l$ is the first unprocessed predecessor neighbour of $p_j$ in $\phi(p_j)$. In the first term, $p_l$ is considered as a new vertex, while in the second term, omitting $p_l$ is more optimal. \begin{algorithm}[t] \caption{Improved Min-area $w$-gon } \label{alg:betterminch} \begin{algorithmic}[1] \Require{A set $P$ of points, $w>0$} \Ensure{A Min-area $w$-gon of $P$} \State{$\phi(p_i)=$the sorted list of $p_j, j\ne i$ in CCW direction around $p_i, \forall p_i\in P$} \State{$Cost[i][j][l]=\infty,~End[i][j]= Nil, \forall i\ne j$ ($Nil$: nothing in list)} \State{$Cost[i][i][l]=0,~End[i][i]= i$} \For{$p_i \in P$} \For{$w=2^t, t=0,\ldots,\log_2 w$} \For{$p_j$ in the order of $\phi(p_i),j\ne i$} \For{$p_r$ in the order of $\phi(p_j),r\ne i,j$ and $p_ip_jp_r$ is CCW} \State{$l=End[i][r]$} \If{$Cost[i][j][l]> Cost[i][j][r]+ Cost[i][r][l]$} \label{line:costdef} \State{$Cost[i][j][l]= Cost[i][j][r]+ Cost[i][r][l]$} \label{line:computeweight} \State{$End[i][j]=l$} \EndIf \EndFor \EndFor \EndFor \EndFor \\ \Return{$\min_{i,j,l} Cost[i][j][l]$} \end{algorithmic} \end{algorithm} \subsection{An $O(n^3 \log w)$ Time Algorithm} \label{sec:mdwf} As we give an improvement for the dynamic programming algorithm in~\cite{eppstein1992finding}, and the original algorithm in~\cite{eppstein1992finding} is formulated to find a Min-area $w$-gon (but it is shown that it works for the broader class MDF), we first base our explanation on finding a $w$-gon of smallest area, and then we discuss the generalizations. Our idea is that we do not recurse on $w$; instead, we design a divide-and-conquer algorithm in which, in each iteration, we merge two convex polygons of $2^{t}$ vertices each, for $t=0,\ldots,\log w$. Then we merge the constructed sub-polygons. Each sub-polygon is constructed based on the dynamic programming algorithm of~\cite{eppstein1992finding}. For ease of exposition, let $w=2^t$ for some $t>0$. We define a 3-dimensional array $Cost[i][j][r]$ that stores the optimal objective function on a convex polygon with the lowest vertex $p_i$, with $p_j$ as the next vertex in CCW direction, and where $p_r$ is the first unprocessed predecessor neighbour of $p_j$ in $\phi(p_j)$. In Line~\ref{line:computeweight} of \Cref{alg:betterminch}, $Cost[i][j][l]$ achieves the optimal objective function resulting from merging two smaller polygons stored in $Cost[i][j][r]$ and $Cost[i][r][l]$, each with half the number of vertices of $Cost[i][j][l]$. We keep the index $l$ in a second array $End[i][j]$, so that we know that the last vertex of the convex chain constructed so far for the pair $p_i,p_j$ is $p_l$. We remove the assumption of~\cite{eppstein1992finding} that all the vertices lie on the same side of the supporting line of $p_jp_l$ as $p_i$; this is because we merge two convex polygons (each with at least three vertices) instead of merging a polygon with a triangle. Keeping the convexity constraint at the vertex $p_r$ at which we perform the merge prevents the construction of infeasible solutions. The initialization rules of the tables are straightforward.
In the Min-area $w$-gon problem, we keep the area of the constructed sub-polygons in $Cost[i][j][l]$, and return $\min_{i,j,l} Cost[i][j][l]$ at the end. \subsubsection{Correctness} Observe that we consider all possible configurations for which three vertices $p_i,p_j$ and $p_l$ define an optimal convex polygon, where $p_i$ is the lowest vertex, $p_j$ is the next vertex of $p_i$ in CCW and $p_l$ is the vertex immediately before $p_i$ in CCW, and the constructed angle $\widehat{p_ip_jp_r}$ is CCW (i.e., the inner angle at $p_r$ is convex), where $p_r$ is the vertex at which we concatenate two small polygons into a polygon with $p_ip_r$ as a chord. In Algorithm~\ref{alg:betterminch}, throughout $O(\log w)$ steps, we compute the optimal $w$-gon (among all candidates) by concatenating two optimal convex polygons each of $2^t$ vertices for $t=1,\ldots,\log w$. Optimality of the concatenated sub-polygons together with the convexity constraint at the concatenating vertex $p_r$ guarantees the correctness and the optimality of the reported solution. (Note that the reported polygon has indeed $2w$ vertices, but the adjustment is straightforward.) \subsubsection{Extension to MinCH} \label{sec:minch_ex} In MinCH, $Cost[i][j][l]$ keeps the number of vertices, and we return $\min_{i,j,l} Cost[i][j][l]$. The initialization rules of the tables are straightforward. \subsubsection*{Extension to Other Monotone Decomposable Functions} \label{sec:extension} For computing a Min-perimeter $w$-gon, we keep the perimeter in $Cost[i][j][l]$, but in line \ref{line:computeweight}, we also remove the weight of the edge $p_ip_r$ multiplied by 2 from $Cost[i][j][l]$. We note that one may need to alter the condition of Line \ref{line:costdef} of Algorithm~\ref{alg:betterminch} depending on whether the criterion is minimization or maximization. We add the convexity condition to \Cref{def:decomposable} to make the extension to the other weight functions (if the weight function is convex decomposable): \begin{definition} [Convex Decomposable Function]\label{def:cdecomposable} A weight function $W$ is called convex decomposable if and only if for any convex polygon $P=\langle p_1,\ldots,p_m\rangle$ and any index $2<i<m$ \begin{align*} W(P)= M (W(\langle p_1,\ldots,p_i\rangle),W(\langle p_1,p_i,p_{i+1}, \ldots, p_m, p_1\rangle)) \end{align*} where $M$ can be computed in constant time, and $M$ is a semigroup operation in its first argument $W(\langle p_1,\ldots,p_i\rangle)$, which also means it is a semigroup operation in its second argument. \end{definition} \begin{theorem} \label{thm:dp} Let $P$ be a set of $n$ points, and $0 \le w<n$ be an integer. An optimal solution to the MinCH, Min-area $w$-gon, Min-perimeter $w$-gon and any convex decomposable function can be computed in $O(n^3\log w)$ time and $O(n^3 \log w)$ space. \end{theorem} \begin{proof} The correctness of the algorithm is already discussed. The running time follows from the fact that we do not recurse on $w$ to perform one vertex insertion per iteration; we merge two convex polygons of $2^t$ vertices for $t=0,\ldots,\log w$. As our computations are not in place, obviously $O(n^3 \log w)$ space is required to run the algorithm. \end{proof} We finally note that \begin{itemize} \item the weight function and the objective function are not required to be necessarily the same. \item another generalization is the case where the given $w$ is a bound on a continuous measure, e.g., a bound on the area or the perimeter.
Then the objective function is to find a convex hull with the smallest number of vertices so that the area or the perimeter is at most $w$. The set of candidate solutions is still the ones computed by~\Cref{alg:betterminch}. \end{itemize} \section{Extension to Distributed Environments}\label{sec:crew}\label{sec:MPC} In this section, we show that any problem with a convex decomposable weight function can be easily transformed to work in parallel settings. Indeed, as the pieces of the solution are convex, the order of concatenation is not important. In the EREW PRAM model, \Cref{alg:betterminch} can be implemented using $O(n^2)$ space and $O(n^3)$ processors, by evaluating each triple $p_ip_jp_k$, $i\ne j\ne k$, on its own processor, which takes $O(\log w)$ time. Merging the sub-problems takes $O(\log n)$ time, using parallel prefix min~\cite{parhami2006introduction}. Computation of $\phi(p_i)$, for $i=1,\ldots,n$, also takes $O(\log n)$ time in this model. So, the overall time is $O(\log w \log n)$ and the total work is $O(n^3 \log w \log n)$. The parallel version of \Cref{alg:DParea} (restated from~\cite{eppstein1992finding}, as the algorithm itself is sequential) is analysed similarly, where $w$ values are checked instead of $\log w$ values, resulting in $O(w\log n)$ time and $O(n^3 w \log n)$ work. What we discussed in this section is not quite work optimal, but any improvement on this gives an improvement to the work of a broad class of problems (belonging to the MDFs) in parallel settings. \section{Discussion} Further improvements to the running time of the problems in the class MDF, or providing a lower bound for the problems in this class, remain open. \paragraph*{Acknowledgement} The author would like to thank Sepideh Aghamolaei for all the fruitful discussions on the results of this paper. V.Keikha is supported by the Czech Science Foundation, grant number GJ19-06792Y, and with institutional support RVO:67985807. \bibliographystyle{abbrv}
\section{Completeness} \label{s:completeness} In this section we show that the focus systems are complete, that is, every valid sequent is provable in either $\ensuremath{\mathsf{Focus}}\xspace$ or $\ensuremath{\mathsf{Focus}_\infty}\xspace$. As for the soundness argument in the previous section, we rely on Theorem~\ref{t:adequacy} which states that Prover\xspace has a winning strategy in any tableau for a given valid formula, and on Theorem~\ref{t:same} which claims that every formula that is provable in \ensuremath{\mathsf{Focus}_\infty}\xspace is also provable in \ensuremath{\mathsf{Focus}}\xspace. Thus, it suffices to show that winning strategies for Prover\xspace in the tableau game can be transformed into \ensuremath{\mathsf{Focus}_\infty}\xspace-proofs. \begin{theorem} \label{t:completeness} If Prover\xspace has a winning strategy in some tableau game for a sequent $\Phi$ then $\Phi$ is provable in \ensuremath{\mathsf{Focus}_\infty}\xspace. \end{theorem} \begin{proof} Let $\mathstr{T} = (V,E,\Phi,\mathsf{Q},v_I)$ be a tableau for $\Phi$ and let $S$ be a winning strategy for Prover\xspace in $\game{\mathstr{T}}$. Because of Proposition~\ref{p:tableau exists}, Corollary~\ref{cor:invariant} and Remark~\ref{r:treestrat} we may assume that $\mathstr{T}$ is tree based, with root $v_I$, and that $S \subseteq V$ is a subtree of $\mathstr{T}$. We will construct a \ensuremath{\mathsf{Focus}_\infty}\xspace-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ for $\Phi^f$. Applications of the focus rules in $\Pi$ will be very restricted. To start with, the unfocus rule $\ensuremath{\mathsf{U}}\xspace$ will not be used at all, and the focus rule $\ensuremath{\mathsf{F}}\xspace$ will only occur in series of successive applications, with the effect of transforming an annotated sequent of the form $\Psi^{u}$ into its totally focused companion $\Psi^{f}$. It will be convenient to think of this series of applications of $\ensuremath{\mathsf{F}}\xspace$ as a \emph{single} proof rule, which we shall refer to as the total focus rule $\ensuremath{\mathsf{F}^t}\xspace$ (a concrete instance is spelled out after the list of conditions below): \begin{prooftree} \AxiomC{$\Phi^f$} \RightLabel{\ensuremath{\mathsf{F}^t}\xspace} \UnaryInfC{$\Phi^u$} \end{prooftree} We construct the pre-proof $\Pi$ of $\Phi^f$ together with a function $g : S \to T$ in such a way that the following conditions are satisfied: \begin{enumerate} \item \label{i:first} \label{i:ordpres} If $E v u$ then $P^+ g(v) g(u)$. \item \label{i:path lifting} For every $v \in S$ and every infinite branch $\beta = (v_{n})_{n\in\omega}$ in $\Pi$ with $v_0 = g(v)$ there is some $i \in \omega$ and some $u \in S$ such that $Evu$ and $g(u) = v_i$. \item \label{i:push formulas down} $\Sigma_{g(v)}$ is thin. \item \label{i:trace preserving} If $Evu$ and $(\varphi,\psi) \in \gtrail_{v,u}$ then $(\varphi^{a_\varphi}, \psi^{a_\psi}) \in \gtrail_{g(v),g(u)}$. \item \label{i:unfocus reflects mu} If $Evu$, and $s$ and $t$ are nodes on the path from $g(v)$ to $g(u)$ such that $P^+st$, $(\chi^a,\varphi^f) \in \gtrail_{g(v),s}$ for some $a \in \{f,u\}$ and $(\varphi^f,\psi^u) \in \gtrail_{s,t}$, then $\chi = \varphi$ and $\chi$ is a $\mu$-formula. \item \label{i:last} \label{i:eventually focus} If $\alpha$ is an infinite branch of $\Pi$ and \ensuremath{\mathsf{F}^t}\xspace is applicable at some node on $\alpha$, then $\ensuremath{\mathsf{F}^t}\xspace$ is applied at some later node on $\alpha$. \end{enumerate} The purpose of these conditions is that they allow us to prove later that every branch in $\Pi$ is successful.
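For concreteness, here is a small instance of \ensuremath{\mathsf{F}^t}\xspace (our illustration; it assumes, as the description of the series above suggests, that a single application of \ensuremath{\mathsf{F}}\xspace changes the annotation of one formula from $u$ to $f$). For a two-formula sequent consisting of $\varphi$ and $\psi$, the derived rule unfolds into two applications of \ensuremath{\mathsf{F}}\xspace: \begin{prooftree} \AxiomC{$\varphi^f, \psi^f$} \RightLabel{\ensuremath{\mathsf{F}}\xspace} \UnaryInfC{$\varphi^f, \psi^u$} \RightLabel{\ensuremath{\mathsf{F}}\xspace} \UnaryInfC{$\varphi^u, \psi^u$} \end{prooftree}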
We construct the pre-proof $\Pi$ of $\Phi^f$ together with a function $g : S \to T$ in such a way that the following conditions are satisfied:
\begin{enumerate}
\item \label{i:first} \label{i:ordpres} If $E v u$ then $P^+ g(v) g(u)$.
\item \label{i:path lifting} For every $v \in S$ and every infinite branch $\beta = (v_{n})_{n\in\omega}$ in $\Pi$ with $v_0 = g(v)$ there is some $i \in \omega$ and some $u \in S$ such that $Evu$ and $g(u) = v_i$.
\item \label{i:push formulas down} $\Sigma_{g(v)}$ is thin.
\item \label{i:trace preserving} If $Evu$ and $(\varphi,\psi) \in \gtrail_{v,u}$ then $(\varphi^{a_\varphi}, \psi^{a_\psi}) \in \gtrail_{g(v),g(u)}$.
\item \label{i:unfocus reflects mu} If $Evu$, and $s$ and $t$ are nodes on the path from $g(v)$ to $g(u)$ such that $P^+st$, $(\chi^a,\varphi^f) \in \gtrail_{g(v),s}$ for some $a \in \{f,u\}$ and $(\varphi^f,\psi^u) \in \gtrail_{s,t}$, then $\chi = \varphi$ and $\chi$ is a $\mu$-formula.
\item \label{i:last} \label{i:eventually focus} If $\alpha$ is an infinite branch of $\Pi$ and \ensuremath{\mathsf{F}^t}\xspace is applicable at some node on $\alpha$, then $\ensuremath{\mathsf{F}^t}\xspace$ is applied at some later node on $\alpha$.
\end{enumerate}
The purpose of these conditions is that they allow us to prove later that every branch in $\Pi$ is successful.

We construct $\Pi$ and $g$ as the limit of finite stages, where at stage $i$ we have constructed a finite pre-proof $\Pi_i$ and a partial function $g_i : S \to \Pi_i$. At every stage we make sure that $g_i$ and $\Pi_i$ satisfy the following conditions:
\begin{enumerate}[resume]
\item \label{i:open leaves in range} All open leaves of $\Pi_i$ are in the range of $g_i$.
\item \label{i:same sequent} All nodes $v \in S$ for which $g_i(v)$ is defined satisfy $\Phi_v = \uls{\Sigma}_{g_{i}(v)}$.
\end{enumerate}
In the base case we define $\Pi_0$ to consist of just one node $r$ that is labelled with the sequent $\Phi^f$. The partial function $g_0$ maps $r$ to $v_I$. Clearly, this satisfies the conditions \ref{i:open leaves in range} and \ref{i:same sequent}.

In the inductive step we consider an open leaf $m$ of $\Pi_i$ that has minimal distance from the root of $\Pi_i$. This ensures that in the limit every open leaf is eventually treated, so that $\Pi$ will not have any open leaves. By condition~\ref{i:open leaves in range} there is a $u \in S$ such that $g_i(u) = m$. Our plan is to extend the proof $\Pi_i$ at the open leaf $m$ to mirror the rule that is applied at $u$ in $\mathstr{T}$. In general this is possible because by condition~\ref{i:same sequent} the formulas in the annotated sequent at $m = g_i(u)$ are the same as the formulas at $u$. All children of $u$ that are in $S$ should then be mapped by $g_{i+1}$ to new open leaves in $\Pi_{i + 1}$. This guarantees that condition~\ref{i:open leaves in range} is satisfied at step $i+1$, and because we are going to simulate the rule in the tableau by rules in the focus system, we ensure that condition~\ref{i:same sequent} holds at these children as well. Clearly, the precise definition of $\Pi_{i+1}$ depends on the rule applied at $u$. Before going into the details we address two technical issues that feature in all the cases.

First, to ensure that condition~\ref{i:eventually focus} is satisfied by our construction, we will apply \ensuremath{\mathsf{F}^t}\xspace at $m$ whenever it is applicable. Thus, we need to check whether all formulas in the sequent of $m$ are annotated with $u$. If this is the case then we apply the total focus rule and proceed with the premise $n$ of this application of the focus rule. Otherwise we just proceed with $n = m$. Note that in either case the sequent at $n$ contains the same formulas as the sequent at $m$, and if $n \neq m$ then the trace relation relates the formulas at $n$ in an obvious way to those at $m$.

The second technical issue is that to ensure condition~\ref{i:push formulas down} we may need to apply \ensuremath{\mathsf{W}}\xspace to the new leaves of $\Pi_{i + 1}$. To see how this is done, assume that we have already extended $\Pi_i$ and obtained a new leaf $v$ which we would like to add to the range of $g_{i + 1}$. The annotated sequent at $v$, however, might contain both instances $\varphi^f$ and $\varphi^{u}$ of some formula $\varphi$, which would violate condition~\ref{i:push formulas down}. To take care of this we apply \ensuremath{\mathsf{W}}\xspace to get rid of the unfocused occurrence $\varphi^u$. In fact, we might need to apply \ensuremath{\mathsf{W}}\xspace multiple times to get rid of all unfocused duplicates of formulas. In the following we will refer to the node of the proof that is obtained by repeatedly applying \ensuremath{\mathsf{W}}\xspace in this way at an open leaf $l$ as the \emph{thin normalisation} of $l$.
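For example, if an open leaf $l$ carries the annotated sequent $\varphi^f, \varphi^u, \psi^u$, then its thin normalisation is obtained by a single application of \ensuremath{\mathsf{W}}\xspace (a schematic instance; in general several applications may be needed):
\begin{prooftree}
\AxiomC{$\varphi^f, \psi^u$}
\RightLabel{\ensuremath{\mathsf{W}}\xspace}
\UnaryInfC{$\varphi^f, \varphi^u, \psi^u$}
\end{prooftree}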
\medskip We are now ready to discuss the main part of the construction, which is based on a case distinction depending on the rule $\mathsf{Q}(u)$ that is applied at $u$.

\smallskip \textit{Case $\mathsf{Q}(u) = \ensuremath{\mathsf{Ax1}}\xspace$ or $\mathsf{Q}(u) = \ensuremath{\mathsf{Ax2}}\xspace$:} In this case we can just apply the corresponding rule at $m = g_i(u)$. We might need to apply \ensuremath{\mathsf{W}}\xspace to get rid of side formulas that were present in the tableau. There is no need to extend $g_i$.

\textit{Case $\mathsf{Q}(u) = \ensuremath{\mathsf{R}_{\lor}}\xspace$:} In this case we can just apply \ensuremath{\mathsf{R}_{\lor}}\xspace at $m$. This generates a new open leaf $l$ which corresponds to the successor node $v$ of $u$ in the tableau. We define $g_{i + 1}$ such that it maps $v$ to the thin normalisation of $l$.

\textit{Case $\mathsf{Q}(u) = \ensuremath{\mathsf{R}_{\land}}\xspace$:} In this case we also apply \ensuremath{\mathsf{R}_{\land}}\xspace in the focus system at $m$. This generates two successors which we can associate with the two children of $u$, both of which must be in $S$. Thus, $g_{i+1}$ will map the children of $u$ to the thin normalisations of the successors we have added to $m$.

\textit{Case $\mathsf{Q}(u) = \ensuremath{\mathsf{\mathsf{M}}}\xspace$:} In this case we want to apply the rule \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace in the focus system. However, the sequent $\Sigma_m$ might contain multiple box formulas, whereas \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace can only be applied to one of those. To select the proper formula $\Box \varphi^{a} \in \Sigma_m$ we use the fact that the successors of $u$ are indexed by the box formulas in $\Phi_{u}$, and that the strategy $S$ contains precisely one of these successors. That is, let $\Box \varphi^{a} \in \Sigma_m$ be such that its associated successor $v_\varphi$ of $u$ belongs to $S$. We then apply \ensuremath{\mathsf{W}}\xspace at $m$ until we have removed all formulas from the sequent that are not diamond formulas and that are distinct from $\Box \varphi$. Once this is done the sequent only contains annotated versions of the diamond formulas from $\Phi_u$ plus an annotated version of the formula $\Box \varphi$. We can then apply \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace and obtain a new node $l$, and we define $g_{i+1}(v_\varphi)$ to be the thin normalisation of $l$.

\textit{Case $\mathsf{Q}(u) = \RuFp{\mu}$ or $\mathsf{Q}(u) = \RuFp{\nu}$:} This is analogous to the case for \ensuremath{\mathsf{R}_{\lor}}\xspace. Note, however, that the application of the fixpoint rules in the focus system has an effect on the annotation.

\medskip We define the function $g : S \to T$ as the limit of the maps $g_i$. To see that $g$ is actually a total function, first observe that for every $v \in S$ and $i \in \omega$ either $v$ is already in the domain of $g_i$, in which case it is in the domain of $g$, or there is some node $u$ on the branch leading to $v$ that is mapped by $g_i$ to an open leaf of $\Pi_i$. Eventually, the proof is extended at this leaf, because in every step we treat an open leaf that is maximally close to the root. It is easy to check that in every step, when we extend the proof $\Pi_j$ at some open leaf, we also move forward on the branches of $\mathstr{T}$ that run through $v$. Iterating this reasoning shows that eventually $v$ must be added to the domain of some $g_j$.

\medskip We now show that $g$, together with $\Pi$, satisfies the conditions~\ref{i:first}--\ref{i:last}.
To start with, it is clear from the step-wise construction of $g$ and $\Pi$ that condition~\ref{i:ordpres} is satisfied.

Condition~\ref{i:path lifting} holds because all trees $\Pi_i$ are finite. Thus, on every infinite branch of $\Pi$ there are infinitely many nodes that are a leaf in some $\Pi_i$, and by condition~\ref{i:open leaves in range} each of these nodes is in the range of $g_i$ and thus of $g$.

Condition~\ref{i:push formulas down} is obviously satisfied at the root of $\Pi$. It is satisfied at all other nodes because of condition~\ref{i:same sequent} and because we make sure that we only add nodes to the domain of $g$ that are normalised, using the procedure described above.

To see that condition~\ref{i:trace preserving} is satisfied by $\Pi$ and $g$ one has to carefully inspect each case of the inductive definition of $\Pi$. This is tedious but does not give rise to any technical difficulties.

To check condition~\ref{i:unfocus reflects mu}, note that if $(\varphi^f,\psi^u) \in \gtrail_{s,t}$ then the trace from $\varphi^{f}$ to $\psi^u$ must lose its focus at some point on the path from $s$ to $t$. Since we do not use the unfocus rule in $\Pi$, the only case of the inductive construction of $\Pi$ where this is possible is the case where $\mathsf{Q}(u) = \RuFp{\mu}$. In this case the formula that loses its focus is the principal formula, which is then a $\mu$-formula and already present at the open leaf that we are extending.

For condition~\ref{i:eventually focus} first observe that if \ensuremath{\mathsf{F}^t}\xspace is applicable at some node that is an open leaf of some $\Pi_i$ then it will be applied immediately when this open leaf is taken care of. Moreover, it is not hard to see that if \ensuremath{\mathsf{F}^t}\xspace becomes applicable at some node $v$ during some stage $i$ of the construction of $\Pi$, then it will remain applicable at every node that is added above $v$ at this stage. This applies in particular to the new open leaves that get added above $v$, and so the total focus rule will be applied to each of these at a later stage of the construction.

\medskip It remains to show that every infinite branch in $\Pi$ is successful. Let $\beta = (v_{n})_{n\in\omega}$ be such a branch. We claim that
\begin{equation} \label{eq:cpl1} \text{from some moment on, every sequent on $\beta$ contains a formula in focus}, \end{equation}
and to prove \eqref{eq:cpl1} we will link $\beta$ to a match in $S$. Observe that because of condition~\ref{i:path lifting} we can `lift' $\beta$ to a branch $\alpha = (t_{n})_{n\in\omega}$ in $S$ such that there are $0 = k_0 < k_1 < k_2 < \cdots$ with $g(t_i) = v_{k_i}$ for all $i < \omega$. Because $\alpha$, as a match of the tableau game, is won by Prover\xspace, it contains a $\nu$-trail $(\varphi_{n})_{n\in\omega}$. This trail being a $\nu$-trail means that there is some $m \in \omega$ such that $\varphi_h$ is not a $\mu$-formula for any $h \geq m$. We then use condition~\ref{i:trace preserving} to obtain a trace $\psi_0^{a_0} \psi_1^{a_1} \cdots$ in $\beta$ such that $\varphi_i = \psi_{k_i}$.

Now distinguish cases. First assume that there is an application of the total focus rule at some $v_l$, with $l \geq k_m$. Then at $v_{l + 1}$ all formulas are in focus and thus in particular the annotation $a_{l+1}$ of the formula $\psi_{l+1}$ must be equal to $f$. We show that
\begin{equation} \label{eq:cpl2} a_n = f \text{ for all } n > l.
\end{equation}
Assume for contradiction that this is not the case and let $n$ be the smallest number larger than $l$ such that $a_n = u$; since $a_{l+1} = f$ we find that $n > l+1$, and by the minimality of $n$ we have $a_{n - 1} = f$. Now let $h$ be such that $v_{n-1}$ and $v_{n}$ are on the path between $g(t_h) = v_{k_h}$ and $g(t_{h+1}) = v_{k_{h+1}}$; since $k_m \leq l \leq n-1$ it follows that $h \geq m$. But then by condition~\ref{i:unfocus reflects mu} $\varphi_h$ must be a $\mu$-formula, which contradicts our observation above that $\varphi_h$ is \emph{not} a $\mu$-formula for any $h \geq m$. This proves \eqref{eq:cpl2}, which means that for every $n > l$, the formula $\psi_{n}$ is in focus at $v_{n}$. From this \eqref{eq:cpl1} is immediate.

If, on the other hand, there is \emph{no} application of the total focus rule on $v_{k_m} v_{k_m+1} \cdots$ then it follows by condition~\ref{i:eventually focus} that the total focus rule is not \emph{applicable} at any sequent $v_{l}$ with $l \geq k_{m}$. In other words, all these sequents contain a formula in focus, which proves~\eqref{eq:cpl1} indeed.
\end{proof}

\section{Conclusion \& Questions}
In this paper we saw that the idea of placing formulas in \emph{focus} can be extended from the setting of logics like \textsc{ltl} and \textsc{ctl}~\cite{lang:focu01} to that of the alternation-free modal $\mu$-calculus: we designed a very simple and natural, cut-free sequent system which is sound and complete for all validities in the language consisting of all (guarded) formulas in the alternation-free fragment $\muML^{\mathit{af}}$ of the modal $\mu$-calculus. We then used this proof system $\ensuremath{\mathsf{Focus}}\xspace$ to show that the alternation-free fragment enjoys the Craig Interpolation Theorem. Clearly, both results add credibility to the claim that $\muML^{\mathit{af}}$ is an interesting logic with good meta-logical properties.

\medskip \noindent Below we list some directions for future research.
\begin{enumerate}
\item Probably the most obvious question is whether the restriction to guarded formulas can be lifted. In fact, we believe that the focus proof system, possibly with some minor modifications in the definition of a proof, is also sound and complete for the full alternation-free fragment. To prove this conjecture, one may bring ideas from Friedmann \& Lange~\cite{frie:deci13} into our definition of tableaux and tableau games.
\item Another question is whether we may tidy up the focus proof system, in the same way that Afshari \& Leigh did with the Jungteerapanich-Stirling system~\cite{afsh:cutf17,jung:tabl10,stir:tabl14}. As a corollary of this it should be possible to obtain an annotation-free sequent system for the alternation-free fragment of the $\mu$-calculus, and to prove completeness of Kozen's (Hilbert-style) axiomatisation for $\muML^{\mathit{af}}$.
\item Moving in a somewhat different direction, we are interested in seeing to what degree the focus system can serve as a basis for sound and complete derivation systems for the alternation-free validities in classes of frames satisfying various kinds of frame conditions.
\item We think it is of interest to see which other fragments of the modal $\mu$-calculus enjoy Craig interpolation. A very recent result by L.~Zenger~\cite{zeng:proo21} shows that the fragments $\Sigma^{\mu}_{1}$ and $\Pi^{\mu}_{1}$ consisting of, respectively, the $\mu$-calculus formulas that \emph{only} contain least- or greatest fixpoint operators, each have Craig interpolation.
Clearly, a particularly interesting question would be whether our focus system can be used to shed some light on the interpolation problem for propositional dynamic logic (see the introduction for some more information) and other fragments of the alternation-free $\mu$-calculus. Looking at fragments of the modal $\mu$-calculus that are \emph{more} expressive than $\muML^{\mathit{af}}$, an obvious question is whether \emph{every} bounded level of the alternation hierarchy admits Craig interpolation.
\item Finally, the original (uniform) interpolation proof for the full $\mu$-calculus is based on a direct automata-theoretic construction~\cite{dago:logi00}. Is something like this possible here as well? That is, given two modal automata $\mathstr{A}_{\phi}$ and $\mathstr{A}_{\psi}$ corresponding to $\muML^{\mathit{af}}$-formulas $\phi$ and $\psi$, can we directly construct a modal automaton $\mathstr{B}$ which serves as an interpolant for $\mathstr{A}_{\phi}$ and $\mathstr{A}_{\psi}$ (so that we may obtain an $\muML^{\mathit{af}}$-interpolant for $\phi$ and $\psi$ by translating the automaton $\mathstr{B}$ back into $\muML^{\mathit{af}}$)? Recall that the automata corresponding to the alternation-free $\mu$-calculus are so-called \emph{weak} modal parity automata~\cite{mull:alte92,carr:powe20}.
\end{enumerate}

\section{Additional proofs}
\newcommand{\depth}[1]{|{#1}|_d}

\begin{definition}
An annotated sequent $\Sigma$ is \emph{thin} if there is no formula $\varphi \in \muML^{\mathit{af}}$ such that $\varphi^f \in \Sigma$ and $\varphi^u \in \Sigma$. A pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is \emph{thin} if for all $v \in T$ with $\varphi^f,\varphi^u \in \Sigma_v$ we have that $\mathsf{R}_v = \ensuremath{\mathsf{W}}\xspace$ and $\varphi^u \notin \Sigma_u$ for the unique $u$ with $P v u$.\footnote{It might be worth considering the weaker condition that $\varphi^u$ is not a side formula of the rule application.}
\end{definition}

\begin{definition}
A pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ of $\Gamma$ with open assumptions $\Delta_1,\dots,\Delta_n$ is \emph{non-circular} if
\begin{enumerate}
\item $\Pi$ is finite, and
\item the discharge rule is not applied in $\Pi$.
\end{enumerate}
\end{definition}

\begin{definition}
The \emph{depth} of a non-circular pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is the maximal number of applications of rules other than the weakening rule on a branch of $\Pi$.
\end{definition}

\begin{definition}
Let $\Sigma$ and $\Gamma$ be annotated sequents. We define $\morefocus{\Sigma}{\Gamma}$ to hold if
\begin{enumerate}
\item $\uls{\Sigma} = \uls{\Gamma}$, and
\item $\varphi^f \in \Sigma$ implies $\varphi^f \in \Gamma$.
\end{enumerate}
\end{definition}

\begin{definition}
A non-circular pre-proof $\Pi'$ of $\Gamma'$ is a \emph{focus simulation} of a pre-proof $\Pi$ of $\Gamma$ if
\begin{enumerate}
\item $\morefocus{\Gamma}{\Gamma'}$,
\item for every open assumption $\Delta'$ of $\Pi'$ there is an open assumption $\Delta$ of $\Pi$ such that $\morefocus{\Delta}{\Delta'}$, and
\item $\depth{\Pi'} \leq \depth{\Pi}$.
\end{enumerate}
\end{definition}

\begin{definition}
An application of a boolean or fixpoint rule in a pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is \emph{destructive} if the principal formula does not occur in the sequent at any of the premises of the application. More precisely, an application of a rule at a node $v$ is \emph{destructive} if for the principal formula $\varphi^a \in \Sigma_v$ it holds that $\varphi^a \notin \Sigma_u$ for all $u$ with $P v u$.
The proof $\Pi$ is \emph{destructive} if all applications of the boolean rules and the fixpoint rules in $\Pi$ are destructive.
\end{definition}

The main result is:
\begin{theorem}
Every \ensuremath{\mathsf{Focus}_\infty}\xspace-derivable sequent $\Sigma$ has a thin and destructive \ensuremath{\mathsf{Focus}_\infty}\xspace-proof.
\end{theorem}

To prove this result we introduce some additional notions.
\begin{definition}
A pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ of $\Gamma$ with open assumptions $\Delta_1,\dots,\Delta_n$ is \emph{local} if
\begin{enumerate}
\item $\Pi$ is non-circular (that is, it is finite and does not contain applications of the discharge rule),
\item $\Pi$ does not contain applications of the modal rule \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace, and
\item every assumption $\Delta$ of $\Pi$ only contains modal formulas.
\end{enumerate}
\end{definition}

\begin{definition}
Let $\xi^a$ be an annotated formula. The \emph{$\xi^a$-complexity} of a pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ with root $r$ is the number of nodes $v$ in $\Pi$ such that $\xi^a \in \Sigma_u$ for every node $u \in [r,v]$.
\end{definition}

\begin{proposition} \label{p:thin simulation}
Let $\Pi$ be a non-circular, thin and destructive pre-proof of $\Gamma$ and $\Gamma'$ a sequent such that $\morefocus{\Gamma}{\Gamma'}$. Then there is a thin and destructive focus simulation $\Pi'$ of $\Pi$ that proves the sequent $\Gamma'$.
\end{proposition}
\begin{proof}
We expect that this can be proved with an induction on the depth of $\Pi$, moving from the root to the leaves.
\end{proof}

\begin{proposition} \label{p:root}
Let $\Pi$ be a thin local pre-proof of $\Gamma$ such that all occurrences of boolean or fixpoint rules, except the one at the root $r$, are destructive. Then, there is a thin and destructive local pre-proof $\Pi'$ of $\Gamma$ such that $\Pi'$ is a focus simulation of $\Pi$.
\end{proposition}
\begin{proof}
The proof is a nested induction. The outer induction proves the statement of the proposition with an induction over the depth of the local pre-proof $\Pi$. In the base case we have that $\Pi$ consists of just the root node, which is either an open assumption or an axiomatic leaf. Trivially, $\Pi$ is destructive in these cases. For the inductive step assume that the claim holds for all pre-proofs of depth strictly smaller than the depth of $\Pi$. Note that we may assume without loss of generality that the rule applied at the root of $\Pi$ is either a boolean or a fixpoint rule and that this application is not destructive. In all other cases $\Pi$ is already a destructive proof, because by assumption all rule applications that are not at the root are destructive.

The proof proceeds by a case distinction depending on the rule that is applied at the root $r$ of $\Pi$. In all cases we make use of the following claim, which we prove further below using an inner induction.

\begin{claimfirst} \label{cl:commuting rules}
Assume that $\xi$ is a disjunction, a conjunction or a fixpoint formula. Let $\Lambda$ be a destructive, thin and local pre-proof of $\Gamma^\star$ with $\xi^a \in \Gamma^\star$ and $\depth{\Lambda} < \depth{\Pi}$. Then, there is a destructive, local and thin pre-proof $\Lambda'$ of $\Gamma^\star$ such that the $\xi^a$-complexity of $\Lambda'$ is $1$ and $\Lambda'$ is a focus simulation of $\Lambda$.
\end{claimfirst}

Assuming that we have already established this claim let us consider all the cases of the outer induction.
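Before going through the cases, it may help to see a toy instance of the non-destructiveness phenomenon that drives the argument: if $\xi = \xi_0 \lor \xi_1$ and the conclusion is $\xi^a, \Gamma$ with $\xi^a \in \Gamma$, then the premise of an application of \ensuremath{\mathsf{R}_{\lor}}\xspace to the displayed occurrence is $\xi_0^a, \xi_1^a, \Gamma$, which still contains $\xi^a$; hence this application is not destructive. The case analysis below shows how to push such a surviving occurrence upwards until it is weakened away or becomes the principal formula of a destructive application.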
\textit{Case $\mathsf{R}(r) = \ensuremath{\mathsf{R}_{\lor}}\xspace$:} In this case $\Sigma_r = (\xi_0 \lor \xi_1)^a, \Gamma$ and we have $\Sigma_v = \xi_0^a, \xi_1^a, \Gamma$ for the unique child $v$ of $r$. Let $\xi = \xi_0 \lor \xi_1$. We can restrict to the case where $\xi^a \in \Gamma$, because otherwise our proof is already destructive. We then apply Claim~\ref{cl:commuting rules} to the local proof $\Pi_v$ starting at $v$ to obtain a thin and destructive proof $\Lambda'$ of $\xi_0^a, \xi_1^a,\xi^a, \Gamma'$, where $\Gamma' = \Gamma \setminus \{\xi^a\}$, such that $\Lambda'$ is a focus simulation of $\Pi_v$ and the $\xi^a$-complexity of $\Lambda'$ is $1$. The latter means that at the root $w$ of $\Lambda'$ we either weaken $\xi^a$ away with an application of \ensuremath{\mathsf{W}}\xspace or we have a destructive application of \ensuremath{\mathsf{R}_{\lor}}\xspace to $\xi^a$. We consider both cases separately.

If at $w$ there is an application of weakening that removes $\xi^a$ then the annotated sequent $\Gamma^-$ at the child $u$ of $w$ satisfies $\Gamma^- \subseteq \xi_0^a, \xi_1^a, \Gamma'$. Let $\Lambda'_u$ be the proof of $\Gamma^-$ that is rooted at $u$, for which we have that $\depth{\Lambda'_u} \leq \depth{\Lambda'} \leq \depth{\Pi_v} < \depth{\Pi}$. We then let $\Pi'$ be the following proof:
\begin{center}
\begin{prooftree}
\AxiomC{$\Lambda'_u$}
\noLine
\UnaryInfC{$\Gamma^-$}
\RightLabel{$\ensuremath{\mathsf{W}}\xspace$}
\UnaryInfC{$\xi_0^a, \xi_1^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$}
\UnaryInfC{$(\xi_0 \lor \xi_1)^a, \Gamma'$}
\end{prooftree}
\end{center}
Clearly, this proof is of a depth bounded by $\depth{\Pi}$, it is destructive because $\xi^a \notin \Gamma'$, and it is a proof of the right sequent because $\xi^a, \Gamma' = \xi^a,\Gamma$. To see that this proof is thin, observe that because $\Lambda'$ is thin the weakening that is applied at $w$ removes all duplicated formulas from $\xi_0^a, \xi_1^a,\xi^a, \Gamma'$, and thus $\Gamma^-$ does not contain any such duplications. Clearly, this entails that the above weakening in $\Pi'$ removes all duplications that might exist in $\xi_0^a, \xi_1^a, \Gamma'$. \texttt{Here, we assume that the weakening rule removes all duplications at once.}

If at $w$ there is a destructive application of \ensuremath{\mathsf{R}_{\lor}}\xspace to $\xi^a$ then at the child $u$ of $w$ there is a destructive local pre-proof $\Lambda'_u$ of $\xi_0^a, \xi_1^a, \Gamma'$ of a depth that is at least $2$ smaller than the depth of $\Pi$. We can thus let $\Pi'$ be the following proof:
\begin{center}
\begin{prooftree}
\AxiomC{$\Lambda'_u$}
\noLine
\UnaryInfC{$\xi_0^a, \xi_1^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$}
\UnaryInfC{$(\xi_0 \lor \xi_1)^a, \Gamma'$}
\end{prooftree}
\end{center}
This construction preserves thinness because if the sequent $\xi_0^a,\xi_1^a, \Gamma'$ contained a formula both focused and unfocused then so would $\Sigma_v = \xi_0^a, \xi_1^a, \Gamma$, and thus the rule applied at $v$ would need to be weakening.

\textit{Case $\mathsf{R}(r) = \ensuremath{\mathsf{R}_{\land}}\xspace$:} In this case we have that $\Sigma_r = (\xi_0 \land \xi_1)^a, \Gamma$ with principal formula $\xi = \xi_0 \land \xi_1$, and there must be two children $v_0$ and $v_1$ of $r$ such that the pre-proof $\Pi_{v_i}$ rooted at $v_i$ proves the sequent $\Sigma_{v_i} = \xi_i^a, \Gamma$ for $i \in \{0,1\}$. Let $\Gamma' = \Gamma \setminus \{\xi^a\}$.
We show that for both $i \in \{0,1\}$
\begin{equation} \label{eq:proofs for conjuncts}
\mbox{there is a thin and destructive local pre-proof } \Lambda^-_i \mbox{ of } \xi_i^a, \Gamma' \mbox{ such that $\Lambda^-_i$ is a focus simulation of $\Pi_{v_i}$}.
\end{equation}
To prove \eqref{eq:proofs for conjuncts} fix $i \in \{0,1\}$. Then consider the proof $\Pi_{v_i}$ of $\xi_i^a, \Gamma$ that is rooted at the child $v_i$ inside of $\Pi$. Clearly, $\depth{\Pi_{v_i}} < \depth{\Pi}$. Moreover, we can assume that $\xi^a \in \Gamma$ because otherwise we can just let $\Lambda^-_i = \Pi_{v_i}$. Because of $\xi^a \in \Gamma$ we can apply Claim~\ref{cl:commuting rules} to $\Pi_{v_i}$ to obtain a destructive, local pre-proof $\Lambda'_i$ of $\xi_i^a, \Gamma = \xi_i^a, \xi^a, \Gamma'$ such that $\Lambda'_i$ is a focus simulation of $\Pi_{v_i}$ and the $\xi^a$-complexity of $\Lambda'_i$ is $1$. From the latter it follows that at the root $w_i$ of $\Lambda'_i$ there must be an application of a rule that removes the formula $\xi^a$. The only possibilities are that we either apply $\ensuremath{\mathsf{W}}\xspace$, weakening $\xi^a$ away, or that we have a destructive application of $\ensuremath{\mathsf{R}_{\land}}\xspace$ to $\xi^a$.

In the former case we consider the destructive local pre-proof $\Lambda''_i$ of some subset $\Gamma^- \subseteq \xi_i^a,\Gamma'$ that is rooted at the unique child of $w_i$ in $\Lambda'_i$. The depth of $\Lambda''_i$ is strictly smaller than the depth of $\Lambda'_i$ and thus we can add one application of \ensuremath{\mathsf{W}}\xspace to $\Lambda''_i$ to obtain the proof $\Lambda^-_i$ of $\xi_i^a, \Gamma'$ that still has the right depth as required by \eqref{eq:proofs for conjuncts}. Clearly, $\Lambda''_i$ will be thin whenever $\Pi$, and thus $\Pi_{v_i}$ and $\Lambda'_i$, were thin.

In the latter case, where \ensuremath{\mathsf{R}_{\land}}\xspace is applied at $w_i$ in $\Lambda'_i$ to remove $\xi^a$, we have that one of the two children of $w_i$ proves the sequent $\xi_i^a,\Gamma'$. The proof that is rooted at this child inside of $\Lambda'_i$ will be the proof $\Lambda^-_i$ that is required by \eqref{eq:proofs for conjuncts}. Clearly, it is of strictly smaller depth than $\Lambda'_i$ and thus $\Pi$, and it is thin whenever $\Pi$ is thin.

Having established \eqref{eq:proofs for conjuncts} we can now use the proofs $\Lambda^-_0$ and $\Lambda^-_1$ to construct the required destructive pre-proof $\Pi'$ by just combining the two proofs with a new destructive application of \ensuremath{\mathsf{R}_{\land}}\xspace, as follows:
\begin{center}
\begin{prooftree}
\AxiomC{$\Lambda^-_0$}
\noLine
\UnaryInfC{$\xi_0^a, \Gamma'$}
\AxiomC{$\Lambda^-_1$}
\noLine
\UnaryInfC{$\xi_1^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\land}}\xspace$}
\BinaryInfC{$(\xi_0 \land \xi_1)^a, \Gamma'$}
\end{prooftree}
\end{center}
This proof is destructive and of the right depth. To see that it is thin whenever $\Pi$ was thin, assume for a contradiction that there is some formula $\varphi$ such that $\varphi^u,\varphi^f \in (\xi_0 \land \xi_1)^a, \Gamma'$. This would entail that also $\varphi^u,\varphi^f \in (\xi_0 \land \xi_1)^a, \Gamma$, which is the sequent at the root $r$ of $\Pi$. But then, since $\Pi$ is thin, it would follow that $\mathsf{R}(r) = \ensuremath{\mathsf{W}}\xspace$, contradicting $\mathsf{R}(r) = \ensuremath{\mathsf{R}_{\land}}\xspace$.

\textit{Case $\mathsf{R}(r) \in \{\RuFp{\nu},\RuFp{\mu}\}$:} Let $\mathsf{R}(r) = \RuFp{\eta}$.
We then have that $\Sigma_r = \xi^a, \Gamma$ with principal formula $\xi = \eta x . \xi_0$ and $\Sigma_v = \xi_0[\xi / x]^{a'}, \Gamma$ for the unique child $v$ of $r$, where $a' = u$ if $\eta = \nu$ and $a' = a$ if $\eta = \mu$. We can restrict our attention to the case where $\xi^a \in \Gamma$, because otherwise our proof is already destructive. We then apply Claim~\ref{cl:commuting rules} to the local proof starting at $v$ to obtain a destructive proof $\Lambda'$ of $\xi_0[\xi / x]^{a'}, \xi^a, \Gamma'$, where $\Gamma' = \Gamma \setminus \{\xi^a\}$, such that $\depth{\Lambda'} < \depth{\Pi}$ and the $\xi^a$-complexity of $\Lambda'$ is $1$. The latter means that at the root $w$ of $\Lambda'$ we either weaken $\xi^a$ away with an application of \ensuremath{\mathsf{W}}\xspace or we have a destructive application of \RuFp{\eta} to $\xi^a$. We consider both cases separately.

If at $w$ there is an application of weakening that removes $\xi^a$ then there is a subset $\Gamma^- \subseteq \xi_0[\xi / x]^{a'}, \Gamma'$ and a destructive local pre-proof $\Lambda^-$ of $\Gamma^-$ with $\depth{\Lambda^-} < \depth{\Lambda'} < \depth{\Pi}$. We can thus let $\Pi'$ be the following proof:
\begin{center}
\begin{prooftree}
\AxiomC{$\Lambda^-$}
\noLine
\UnaryInfC{$\Gamma^-$}
\RightLabel{$\ensuremath{\mathsf{W}}\xspace$}
\UnaryInfC{$\xi_0[\xi / x]^{a'}, \Gamma'$}
\RightLabel{$\RuFp{\eta}$}
\UnaryInfC{$\xi^a, \Gamma'$}
\end{prooftree}
\end{center}
The depth of the proof is not larger than the depth of $\Pi$ and the proof is destructive because $\xi^a \notin \Gamma'$. For the preservation of thinness assume that $\Pi$, and thus, by Claim~\ref{cl:commuting rules}, also $\Lambda'$ and $\Lambda^-$, are thin. This means that if there is an occurrence of $\varphi^u,\varphi^f \in \xi_0[\xi / x]^{a'}, \xi^a, \Gamma'$ at the root of $\Lambda'$ then $\varphi^u \notin \Lambda^-$. Clearly this implies that the weakening in $\Pi'$ above eliminates any such duplicate occurrence in the annotated sequent $\xi_0[\xi / x]^{a'}, \Gamma'$.

If at $w$ there is a destructive application of \RuFp{\eta} to $\xi^a$ then rooted at the child of $w$ there is a destructive local pre-proof $\Lambda^-$ of $\xi_0[\xi / x]^{a'}, \Gamma'$ with $\depth{\Lambda^-} < \depth{\Lambda'} < \depth{\Pi}$. We then define $\Pi'$ to be the following proof:
\begin{center}
\begin{prooftree}
\AxiomC{$\Lambda^-$}
\noLine
\UnaryInfC{$\xi_0[\xi / x]^{a'}, \Gamma'$}
\RightLabel{$\RuFp{\eta}$}
\UnaryInfC{$\xi^a, \Gamma'$}
\end{prooftree}
\end{center}
Clearly, this is a destructive proof of the right sequent that has a depth bounded by the depth of $\Pi$. Moreover, if $\Pi$ and thus $\Lambda'$ are thin then this proof is thin as well. Any occurrence of $\varphi^u,\varphi^f \in \xi_0[\xi / x]^{a'}, \Gamma'$ for some $\varphi$ would entail a similar occurrence $\varphi^u,\varphi^f \in \xi_0[\xi / x]^{a'}, \Gamma$ at the root $w$ of $\Lambda'$. But this would mean that $\Lambda'$ is not thin, since weakening is not applied at $w$.

We have now treated all the cases but still need to supply a proof of the crucial claim:
\begin{proofof}{Claim~\ref{cl:commuting rules}}
This claim is proved by an inner induction on the $\xi^a$-complexity of $\Lambda$. In the base case, where the $\xi^a$-complexity of $\Lambda$ is $1$, the claim holds trivially.

In the inductive step assume that the $\xi^a$-complexity of $\Lambda$ is $n > 1$. Let $r$ be the root of $\Lambda$.
Let us first make an observation about the applicability of the inner induction hypothesis: for each child $u$ of $r$ we consider the local pre-proof $\Lambda_u$ of $\Sigma_u$ that is rooted at $u$ in $\Lambda$. Because by definition the $\xi^a$-complexity of $\Lambda_u$ is smaller than the $\xi^a$-complexity of $\Lambda$, the inner inductive hypothesis applies to $\Lambda_u$. This means that if $\xi^a \in \Sigma_u$ then there is a destructive, local pre-proof $\Lambda'_u$ of $\Sigma_u$ such that the $\xi^a$-complexity of $\Lambda'_u$ is $1$.

To construct the proof $\Lambda'$ we then make a first case distinction depending on the rule that is applied at the root $r$ of $\Lambda$:

\textit{Case if $r$ is an open leaf or $\mathsf{R}(r) \in \{\ensuremath{\mathsf{Ax1}}\xspace,\ensuremath{\mathsf{Ax2}}\xspace\}$:} These cases are impossible because $\xi^a \in \Sigma_r$ is a disjunction, a conjunction or a fixpoint formula, which is not allowed for nodes that are a local assumption in a local proof or at which an axiom is applied.

\textit{Case $\mathsf{R}(r) = \ensuremath{\mathsf{R}_{\lor}}\xspace$:} In this case $\Sigma_r = (\varphi_0 \lor \varphi_1)^b,\xi^a,\Gamma'$. Let $u$ be the unique child of $r$. We can assume that $\xi^a \in \Sigma_u$ because otherwise the $\xi^a$-complexity of $\Lambda$ is already $1$. From $\xi^a \in \Sigma_u$ it follows that $(\varphi_0 \lor \varphi_1)^b \neq \xi^a$ because the application of \ensuremath{\mathsf{R}_{\lor}}\xspace is destructive. From this it follows that $\Sigma_u = \varphi_0^b, \varphi_1^b, \xi^a, \Gamma'$ such that $(\varphi_0 \lor \varphi_1)^b \notin \Gamma'$. We can then use $\xi^a \in \Sigma_u$ to apply the inner induction hypothesis to obtain a destructive, local pre-proof $\Lambda'_u$ of $\Sigma_u$ with $\xi^a$-complexity equal to $1$ and $\depth{\Lambda'_u} \leq \depth{\Lambda_u} < \depth{\Lambda}$.

We make a further case distinction on the rule that is applied at the root $u'$ of $\Lambda'_u$:

\textit{Subcase if in $\Lambda'_u$ the node $u'$ is an open leaf or a leaf labelled with an axiom:} These subcases are impossible because $\xi^a \in \Sigma_u$ is not of the right shape.

\textit{Subcase for \ensuremath{\mathsf{R}_{\lor}}\xspace:} It must be that $\xi^a$ is the principal formula of the rule application at the root $u'$ of $\Lambda'_u$, because the $\xi^a$-complexity of $\Lambda'_u$ is $1$. For the same reason this application is destructive.
Thus, $\xi^a = (\xi_0 \lor \xi_1)^a$ and $\Lambda'_u$ can be extended to the following proof on the left:
\begin{center}
\begin{minipage}{.45\textwidth}
\begin{prooftree}
\AxiomC{$\vdots$}
\noLine
\UnaryInfC{$\varphi_0^b, \varphi_1^b, \xi_0^a, \xi_1^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$}
\UnaryInfC{$\varphi_0^b, \varphi_1^b, (\xi_0 \lor \xi_1)^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$}
\UnaryInfC{$(\varphi_0 \lor \varphi_1)^b, (\xi_0 \lor \xi_1)^a, \Gamma'$}
\end{prooftree}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{prooftree}
\AxiomC{$\vdots$}
\noLine
\UnaryInfC{$\varphi_0^b, \varphi_1^b, \xi_0^a, \xi_1^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$}
\UnaryInfC{$(\varphi_0 \lor \varphi_1)^b, \xi_0^a, \xi_1^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$}
\UnaryInfC{$(\varphi_0 \lor \varphi_1)^b, (\xi_0 \lor \xi_1)^a, \Gamma'$}
\end{prooftree}
\end{minipage}
\end{center}
Because the upper application of $\ensuremath{\mathsf{R}_{\lor}}\xspace$ to $\xi^a$ is destructive we can assume that $\xi^a \notin \Gamma'$. Let $\Lambda''$ be the proof on the right, with root $v''$ and node $u''$ just above the root. Because $\xi^a \neq (\varphi_0 \lor \varphi_1)^b$ and $\xi^a \notin \Gamma'$ it follows that the rule application at the root $v''$ of $\Lambda''$ is destructive. Moreover, $\Lambda''$ is local and $\depth{\Lambda''} \leq \depth{\Lambda} < \depth{\Pi}$. Now we would like to take $\Lambda'$ to be $\Lambda''$, but we cannot show that the upper application of \ensuremath{\mathsf{R}_{\lor}}\xspace at $u''$ is destructive. To fix this issue we consider the subproof of $\Lambda''$ that is rooted at $u''$; we know that its depth must be strictly smaller than the depth of $\Lambda''$, which in turn has a depth that is strictly smaller than the depth of $\Pi$. We can thus apply the outer inductive hypothesis to the proof rooted at $u''$ and obtain the desired proof $\Lambda'$.

To see that $\Lambda'$ is thin whenever $\Lambda$ is, assume that $\Lambda$ is thin. Then clearly the sequent $(\varphi_0 \lor \varphi_1)^b,\xi^a,\Gamma'$ is thin, because it is the sequent at the root of $\Lambda$, which is thin as the weakening rule is not applied at $r$ in $\Lambda$.
\texttt{I am not quite sure how to argue, or guarantee, that $(\varphi_0 \lor \varphi_1)^b, \xi_0^a, \xi_1^a, \Gamma'$ is thin.}

\textit{Subcase for \ensuremath{\mathsf{R}_{\land}}\xspace:}
\begin{center}
\begin{minipage}{.45\textwidth}
\begin{prooftree}
\AxiomC{$\vdots$}
\noLine
\UnaryInfC{$\varphi_0^b, \varphi_1^b, \xi_0^a, \Gamma'$}
\AxiomC{$\vdots$}
\noLine
\UnaryInfC{$\varphi_0^b, \varphi_1^b, \xi_1^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\land}}\xspace$}
\BinaryInfC{$\varphi_0^b, \varphi_1^b, (\xi_0 \land \xi_1)^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$}
\UnaryInfC{$(\varphi_0 \lor \varphi_1)^b, (\xi_0 \land \xi_1)^a, \Gamma'$}
\end{prooftree}
\end{minipage}
\begin{minipage}{.45\textwidth}
\begin{prooftree}
\AxiomC{$\vdots$}
\noLine
\UnaryInfC{$\varphi_0^b, \varphi_1^b, \xi_0^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$}
\UnaryInfC{$(\varphi_0 \lor \varphi_1)^b, \xi_0^a, \Gamma'$}
\AxiomC{$\vdots$}
\noLine
\UnaryInfC{$\varphi_0^b, \varphi_1^b, \xi_1^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$}
\UnaryInfC{$(\varphi_0 \lor \varphi_1)^b, \xi_1^a, \Gamma'$}
\RightLabel{$\ensuremath{\mathsf{R}_{\land}}\xspace$}
\BinaryInfC{$(\varphi_0 \lor \varphi_1)^b, (\xi_0 \land \xi_1)^a, \Gamma'$}
\end{prooftree}
\end{minipage}
\end{center}

\textit{Subcase for \RuFp{\mu} and \RuFp{\nu}:}

\textit{Subcase for \ensuremath{\mathsf{W}}\xspace:}

\textit{Subcase for \ensuremath{\mathsf{F}}\xspace:}

\textit{Case $\mathsf{R}(r) = \ensuremath{\mathsf{R}_{\land}}\xspace$:}

\textit{Case $\mathsf{R}(r) = \RuFp{\mu}$:}

\textit{Case $\mathsf{R}(r) = \RuFp{\nu}$:}

\textit{Case $\mathsf{R}(r) = \ensuremath{\mathsf{W}}\xspace$:}

\textit{Case $\mathsf{R}(r) = \ensuremath{\mathsf{F}}\xspace$:}
\end{proofof}
We have now treated all the cases in the proof of Claim~\ref{cl:commuting rules} and have thus proved Proposition~\ref{p:root}.
\end{proof}

\section{Infinite games} \label{sec:games}
In this brief appendix we give the basic definitions of infinite two-player games. We fix two players that we shall refer to as $\exists$ (female) and $\forall$ (male).

A {\em two-player game} is a quadruple $\mathstr{G} = (V,E,O,W)$ where $(V,E)$ is a graph, $O$ is a map $O: V \to \{ \exists, \forall \}$, and $W$ is a set of infinite paths in $(V,E)$. We denote $V_{\Pi} \mathrel{:=} O^{-1}(\Pi)$. An \emph{initialised game} is a pair consisting of a game $\mathstr{G}$ and an element $v$ of $V$; such a pair is usually denoted as $\mathstr{G}@v$. We will refer to $(V,E)$ as the \emph{board} or \emph{arena} of the game. Elements of $V$ will be called \emph{positions}, and $O(v)$ is the \emph{owner} of $v$. Given a position $v$ for player $\Pi \in \{ \exists, \forall\}$, the set $E[v]$ denotes the set of \emph{moves} that are \emph{legitimate} or \emph{admissible to} $\Pi$ at $v$. The set $W$ is called the \emph{winning condition} of the game.

A \emph{match} of an initialised game consists of the two players moving a token from one position to another, starting at the initial position, and following the edge relation $E$. Formally, a \emph{match} or \emph{play} of the game $\mathstr{G} = (V,E,O,W)$ starting at position $v_{I}$ is simply a path $\pi$ through the graph $(V,E)$ such that $\mathsf{first}(\pi) = v_{I}$. Such a match $\pi$ is \emph{full} if it is maximal as a path, that is, either finite with $E[\mathsf{last}(\pi)] = \varnothing$, or infinite.
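As a toy illustration of these notions (not used elsewhere in the text), consider the board with positions $V = \{v_0, v_1, v_2\}$, edges $E = \{(v_0,v_1), (v_0,v_2), (v_1,v_0)\}$, and owners $O(v_0) = \exists$ and $O(v_1) = O(v_2) = \forall$. Then the finite path $v_0 v_2$ is a full match, since $E[v_2] = \varnothing$, while $v_0 v_1 v_0 v_1 \cdots$ is an infinite full match; which player wins these matches is determined by the conventions explained next.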
The owner of a position is responsible for moving the token from that position to an adjacent one (that is, an $E$-successor); in case this is impossible because the node has no $E$-successors, the player \emph{gets stuck} and immediately loses the match. If neither player gets stuck, the resulting match is infinite; we declare $\exists$ to be its winner if the match, as an $E$-path, belongs to the set $W$. Full matches that are not won by $\exists$ are won by $\forall$. Given these definitions, it should be clear that it does not matter which player owns a state that has a unique successor; for this reason we often take $O$ to be a \emph{partial} map, provided $O(v)$ is defined whenever $\sz{E[v]} \neq 1$.

A position $v$ is a winning position for a player if they have a way of playing the game that guarantees they win the resulting match, no matter how their opponent plays. To formalise this, we let $\mathit{PM}_{\Pi}$ denote the collection of partial matches $\pi$ ending in a position $\mathsf{last}(\pi) \in V_{\Pi}$, and define $\mathit{PM}_{\Pi}@v$ as the set of partial matches in $\mathit{PM}_{\Pi}$ starting at position $v$.

A \emph{strategy for a player} $P$ is a function $f: \mathit{PM}_{P} \to V$; if $f(\pi) \not\in E[\mathsf{last}(\pi)]$, for some $\pi \in \mathit{PM}_{P}$, we say that $f$ prescribes an \emph{illegitimate move} in $\pi$. A match $\pi = (v_{i})_{i<\kappa}$ is \emph{guided} by a $P$-strategy $f$ if $f(v_{0}v_{1}\cdots v_{n-1}) = v_{n}$ for all $n<\kappa$ such that $v_{0}\cdots v_{n-1}\in \mathit{PM}_{P}$. A position $v$ is \emph{reachable} by a strategy $f$ if there is an $f$-guided match $\pi$ with $v = \mathsf{last}(\pi)$. A $P$-strategy $f$ is \emph{legitimate from a position $v$} if the moves that it prescribes to $f$-guided partial matches in $\mathit{PM}_{P}@v$ are always legitimate, and \emph{winning for $P$ from $v$} if in addition $P$ wins all $f$-guided full matches starting at $v$. When defining a strategy $f$ for one of the players in a board game, we can and in practice will confine ourselves to defining $f$ for partial matches that are themselves guided by $f$.

A position $v$ is a \emph{winning position} for player $P \in \{ \exists, \forall\}$ if $P$ has a winning strategy in the game $\mathstr{G}@v$; the set of these positions is denoted as $\mathit{Win}_{P}(\mathstr{G})$. The game $\mathstr{G}$ is \emph{determined} if every position is winning for either $\exists$ or $\forall$. A strategy is \emph{positional} if it only depends on the last position of a partial match, i.e., if $f(\pi) = f(\pi')$ whenever $\mathsf{last}(\pi) = \mathsf{last}(\pi')$; such a strategy can and will be presented as a map $f: V_{P} \to V$.

A \emph{priority map} on the board $V$ is a map $\Omega: V \to \omega$ with finite range. A \emph{parity game} is a board game $\mathstr{G} = (V,E,O,W_{\Omega})$ in which the winning condition $W_{\Omega}$ is given as follows. Given an infinite match $\pi$, let $\mathsf{Inf}(\pi)$ be the set of positions that occur infinitely often in $\pi$; then $W_{\Omega}$ consists of those infinite paths $\pi$ such that $\max\big(\Omega[\mathsf{Inf}(\pi)]\big)$ is even. Such a parity game is usually denoted as $\mathstr{G} = (V,E,O,\Omega)$.

The following fact is independently due to Emerson \& Jutla~\cite{emer:tree91} and Mostowski~\cite{most:game91}.

\begin{fact}[Positional Determinacy] \label{f:pdpg}
Let $\mathstr{G} = (V,E,O,\Omega)$ be a parity game. Then $\mathstr{G}$ is determined, and both players have positional winning strategies.
\end{fact} \section{Interpolation} \label{sec-itp} In this section we will show that the alternation-free fragment of the modal $\mu$-calculus enjoys the Craig interpolation property. To introduce the actual statement that we will prove, consider an implication of the form $\phi \to \psi$, with $\phi,\psi \in \muML^{\mathit{af}}$. First of all, we may without loss of generality assume that $\phi$ and $\psi$ are guarded, so that we may indeed take a proof-theoretic approach using the $\ensuremath{\mathsf{Focus}}\xspace$ system. Given our interpretation of sequents, we represent the implication $\phi \to \psi$ as the sequent $\ol{\phi},\psi$, and similarly, the implications involving the interpolant $\theta$ can be represented as, respectively, the sequents $\ol{\phi},\theta$ and $\ol{\theta},\psi$. What we will prove below is that for an arbitrary derivable sequent $\Gamma$, and an arbitrary partition $\Gamma^{L},\Gamma^{R}$ of $\Gamma$, there is an interpolant $\theta$ such that the sequents $\Gamma^{L},\theta$ and $\Gamma^{R},\ol{\theta}$ are both provable. Before we can formulate and prove our result, we need some preparation. First of all, we will assume that in our $\ensuremath{\mathsf{Focus}}\xspace$ proofs every application of the discharge rule discharges at least one assumption, i.e., every node in the proof that is labelled with the discharge rule is the companion of at least one leaf. It is easy to see that we can make this assumption without loss of generality --- we leave the details to the reader. Furthermore, it will be convenient for us to fine-tune the notion of a partition in the following way. \begin{definition} A \emph{partition} of a set $A$ is a non-empty finite tuple $(A_{1},\ldots, A_{n})$ of pairwise disjoint subsets of $A$ such that $\bigcup_{i=1}^{n} A_{i} = A$. A binary partition of $A$ may be denoted as $A^{L} \mid A^{R}$; in this setting we may refer to the members of $A^{L}$ and $A^{R}$ as being \emph{left} and \emph{right} elements of $A$, respectively. \end{definition} Finally, to formulate the condition on an interpolant, note that we may identify the \emph{vocabulary} of a sequent $\Sigma$ simply with the set $\mathit{FV}(\Sigma)$ of free variables occurring in $\Sigma$. Our interpolation result can then be stated as follows: \begin{theorem}[{\bf Interpolation}] \label{t:itp} Let $\Pi$ be a \ensuremath{\mathsf{Focus}}\xspace-proof of some sequent $\Gamma$, and let $\Gamma^{L}\mid\Gamma^{R}$ be a partition of $\Gamma$. Then there are a formula $\theta$ with $\mathit{FV}(\theta) \subseteq \mathit{FV}(\Gamma^{L}) \cap \mathit{FV}(\Gamma^{R})$, and \ensuremath{\mathsf{Focus}}\xspace-proofs $\Pi^{L}$, $\Pi^{R}$, all effectively obtainable from $\Pi, \Gamma^{L}$ and $\Gamma^{R}$, such that $\Pi^{L}$ derives the sequent $\Gamma^{L},\theta$ and $\Pi^{R}$ derives the sequent $\Gamma^{R}, \ol{\theta}$. \end{theorem} The remainder of this section contains the proof of this theorem. We first consider the definition of interpolants for the conclusion of a single proof rule, under the assumption that we already have interpolants for the premises. We then show in Proposition~\ref{p:locitp} that this definition is well-behaved. We need some additional auxiliary definitions. In this section it will be convenient to define the negation of $\theta$ in a slightly simpler manner than in section~\ref{s:prel}. 
This is possible since the bound variables of $\theta$ will be taken from the set $\mathcal{D}$ of discharge tokens, which is disjoint from the collection of variables used in the formulas featuring in $\Pi$.

\begin{definition} \label{d:altneg}
Given a formula $\phi$ such that $\mathit{BV}(\phi) \subseteq \mathcal{D}$, we define the formula $\sneg{\phi}$ as follows. For atomic $\phi$ we define
\[ \sneg{\phi} \mathrel{:=} \left\{\begin{array}{ll} x & \text{if } \phi = x \in \mathcal{D} \\ \ol{\phi} & \text{otherwise}, \end{array}\right. \]
and then we continue inductively with
\[\begin{array}{lllclll}
\sneg{\phi \land \psi} & \mathrel{:=} & \sneg{\phi} \lor \sneg{\psi} && \sneg{\phi \lor \psi} & \mathrel{:=} & \sneg{\phi} \land \sneg{\psi} \\
\sneg{\Box \phi} & \mathrel{:=} & \Diamond \sneg{\phi} && \sneg{\Diamond \phi} & \mathrel{:=} & \Box \sneg{\phi} \\
\sneg{\mu x. \phi} & \mathrel{:=} & \nu x. \sneg{\phi} && \sneg{\nu x. \phi} & \mathrel{:=} & \mu x. \sneg{\phi}
\end{array}\]
\end{definition}
It is not hard to see that $\sneg{\theta} = \ol{\theta}$ precisely if $\mathit{FV}(\theta)$ does not contain any discharge token from $\mathcal{D}$ as a free variable. For atomic formulas $\phi$ that are not of the form $x \in \mathcal{D}$ we will continue to write $\ol{\phi}$ rather than $\sneg{\phi}$.

\begin{definition}
A formula is \emph{basic} if it is either atomic, or of the form $x$, $x_{0} \land x_{1}$, $x_{0} \lor x_{1}$, $\Diamond x$ or $\Box x$, where $x$, $x_{0}$ and $x_{1}$ are discharge tokens.
\end{definition}

\begin{definition} \label{d:locitp}
Let $\mathsf{R}$ be some derivation rule, let
\begin{prooftree}
\AXC{$\Sigma_{0} \quad\ldots\quad \Sigma_{n-1}$}
\UIC{$\Sigma$}
\end{prooftree}
be an instance of $\mathsf{R}$, and let $\Sigma^{L}\mid\Sigma^{R}$ be a partition of $\Sigma$. By a case distinction as to the nature of the rule $\mathsf{R}$ we define a \emph{basic} formula $\chi(x_{0},\ldots,x_{n-1})$, together with a partition $\Sigma_{i}^{L}\mid \Sigma_{i}^{R}$ for each $\Sigma_{i}$. Here the variables $x_{0},\ldots,x_{n-1}$ correspond to the premises of the rule.
\begin{description}
\item[Case $\mathsf{R} = \ensuremath{\mathsf{Ax1}}\xspace$.] Let $\Sigma$ be of the form $\Sigma = \{ p, \atneg{p} \}$, and observe that since there are no premises, we only need to define the formula $\chi$. For this purpose we make a further case distinction as to the exact nature of the partition. If $\Sigma^{L}\mid \Sigma^{R} = p^a \mid \atneg{p}^{b}$, define $\chi \mathrel{:=} \atneg{p}$. If $\Sigma^{L}\mid \Sigma^{R} = \atneg{p}^a \mid p^{b}$, define $\chi \mathrel{:=} p$. If $\Sigma^{L}\mid \Sigma^{R} = p^a, \atneg{p}^{b} \mid \varnothing$, define $\chi \mathrel{:=} \bot$. If $\Sigma^{L}\mid \Sigma^{R} = \varnothing \mid p^a, \atneg{p}^{b}$, define $\chi \mathrel{:=} \top$.
\item[Case $\mathsf{R} = \ensuremath{\mathsf{Ax2}}\xspace$.] Here $\Sigma$ must be of the form $\Sigma = \{\top \}$, and, as in the case of the other axiom, we only need to define the formula $\chi$ since there are no premises. We make a further case distinction. If $\Sigma^{L}\mid \Sigma^{R} = \top^a \mid \varnothing$, define $\chi \mathrel{:=} \bot$. If $\Sigma^{L}\mid \Sigma^{R} = \varnothing \mid \top^a$, define $\chi \mathrel{:=} \top$.
\item[Case $\mathsf{R} = \ensuremath{\mathsf{R}_{\land}}\xspace$.] We distinguish cases, as to which side the active formula $(\phi_{0}\land \phi_{1})^{a}$ belongs to.
\begin{description}
\item[Subcase $(\phi_{0}\land \phi_{1})^{a} \in \Sigma^{L}$.]
We may then represent the partition of $\Sigma$ as $(\phi_{0}\land\phi_{1})^{a},\Sigma_{0} \mid \Sigma_{1}$. Here we define $\chi(x_{0},x_{1}) \mathrel{:=} x_{0} \lor x_{1}$, and we partition the premises of $\ensuremath{\mathsf{R}_{\land}}\xspace$ as, respectively, $\phi_{0}^{a},\Sigma_{0} \mid \Sigma_{1} \setminus \{\phi_{0}^{a}\}$ and $\phi_{1}^{a},\Sigma_{0} \mid \Sigma_{1} \setminus \{\phi_{1}^{a}\}$.
\item[Subcase $(\phi_{0}\land \phi_{1})^{a} \in \Sigma^{R}$.] We may now represent the partition of $\Sigma$ as $\Sigma_{0} \mid \Sigma_{1}, (\phi_{0}\land\phi_{1})^{a}$. Now we define $\chi(x_{0},x_{1}) \mathrel{:=} x_{0} \land x_{1}$, and we partition the premises of $\ensuremath{\mathsf{R}_{\land}}\xspace$ as, respectively, $\Sigma_{0}\setminus \{\phi_{0}^{a}\} \mid \Sigma_{1}, \phi_{0}^{a}$ and $\Sigma_{0}\setminus \{\phi_{1}^{a}\} \mid \Sigma_{1}, \phi_{1}^{a}$.
\end{description}
\item[Case $\mathsf{R} = \ensuremath{\mathsf{R}_{\lor}}\xspace$.] We only consider the case where the active formula $(\phi_{0}\lor\phi_{1})^{a}$ belongs to $\Sigma^{L}$ (the other case is symmetric). We may then represent the partition of $\Sigma$ as $(\phi_{0}\lor\phi_{1})^{a},\Sigma_{0} \mid \Sigma_{1}$. Here we define $\chi(x_{0}) \mathrel{:=} x_{0}$, and we partition the premise of $\ensuremath{\mathsf{R}_{\lor}}\xspace$ as $\phi_{0}^{a},\phi_{1}^{a},\Sigma_{0} \mid \Sigma_{1} \setminus \{\phi_{0}^{a},\phi_{1}^{a}\}$.
\item[Case $\mathsf{R} = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$.] We distinguish cases, as to whether the active formula $\Box\phi^{a}$ belongs to $\Sigma^{L}$ or to $\Sigma^{R}$.
\begin{description}
\item[Subcase $\Box\phi^{a} \in \Sigma^{L}$.] We may then represent the partition of $\Sigma$ as $\Box\phi^{a},\Diamond\Sigma_{0} \mid \Diamond\Sigma_{1}$. We define $\chi \mathrel{:=} \Diamond x_{0}$ and we partition the premise of $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ as $\phi^{a}, \Sigma_{0} \mid \Sigma_{1} \setminus \{ \phi^{a} \}$.
\item[Subcase $\Box\phi^{a} \in \Sigma^{R}$.] We may then represent the partition of $\Sigma$ as $\Sigma_{0} \mid \Sigma_{1}, \Box\phi^{a}$. Now we define $\chi \mathrel{:=} \Box x_{0}$ and we partition the premise of $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ as $\Sigma_{0} \setminus \{ \phi^{a} \} \mid \Sigma_{1}, \phi^{a}$.
\end{description}
\item[Case $\mathsf{R} = \RuFp{\mu}$.] We only consider the case where the active formula $\mu x.\phi^{a}$ belongs to $\Sigma^{L}$ (the other case is symmetric). We may then represent the partition of $\Sigma$ as $\mu x.\phi^{a},\Sigma_{0} \mid \Sigma_{1}$. Here we define $\chi(x_{0}) \mathrel{:=} x_{0}$, and we partition the premise of $\RuFp{\mu}$ as $\phi(\mu x.\phi)^{u},\Sigma_{0} \mid \Sigma_{1} \setminus \{ \phi(\mu x.\phi)^{u} \}$.
\item[Case $\mathsf{R} = \RuFp{\nu}$.] The definitions are analogous to the case of $\RuFp{\mu}$.
\item[Case $\mathsf{R} = \ensuremath{\mathsf{W}}\xspace$.] We only consider the case where the active formula $\phi^{a}$ belongs to $\Sigma^{L}$ (the other case is symmetric). We may then represent the partition of $\Sigma$ as $\phi^{a},\Sigma_{0} \mid \Sigma_{1}$. Here we define $\chi(x_{0}) \mathrel{:=} x_{0}$, and we partition the premise of $\ensuremath{\mathsf{W}}\xspace$ as $\Sigma_{0} \mid \Sigma_{1}$.
\item[Case $\mathsf{R} = \ensuremath{\mathsf{F}}\xspace$.] We only consider the case where the active formula $\phi^{u}$ belongs to $\Sigma^{L}$ (the other case is symmetric). We may then represent the partition of $\Sigma$ as $\phi^u, \Sigma_{0} \mid \Sigma_{1}$.
In this case we define $\chi(x_{0}) \mathrel{:=} x_{0}$, and we partition the premise of $\ensuremath{\mathsf{F}}\xspace$ as $\phi^f, \Sigma_0 \mid \Sigma_{1} \setminus \{\phi^f\}$.
\item[Case $\mathsf{R} = \ensuremath{\mathsf{U}}\xspace$.] This case is analogous to the case for \ensuremath{\mathsf{F}}\xspace, just swapping the annotations of $\phi$.
\item[Case $\mathsf{R} = \RuDischarge{}$.] In this case the premise and the conclusion are the same, and so we also partition the premise in the same way as the conclusion. Furthermore, we define $\chi \mathrel{:=} x_{0}$.
\end{description}
\end{definition}

\begin{proposition}[{\bf Interpolation Transfer}] \label{p:locitp}
Let
\begin{prooftree}
\AXC{$\Sigma_{0} \quad\ldots\quad \Sigma_{n-1}$}
\UIC{$\Sigma$}
\end{prooftree}
be an instance of some derivation rule $\mathsf{R} \neq \RuDischarge{}$, let $\Sigma^{L} \mid\Sigma^{R}$ be a partition of $\Sigma$, and let $\chi$ and $\Sigma_{i}^{L}\mid \Sigma_{i}^{R}$, for $i = 0,\ldots,n-1$, be as in Definition~\ref{d:locitp}. Then the following hold:
\begin{urlist}
\item \label{i:coh} $\mathit{FV}(\Sigma_{i}^{K}) \subseteq \mathit{FV}(\Sigma^{K})$ where $K \in \{ L, R \}$;
\item \label{i:itptrf} For any sequence $\theta_{0},\ldots,\theta_{n-1}$ of formulas and any $b \in \{ u,f \}$ there are derivations $\Xi^{L}$ and $\Xi^{R}$:

\begin{minipage}{.40\textwidth}
\begin{prooftree}
\AXC{$\Sigma_{0}^{L},\theta_{0}^{b} \quad \ldots \quad \Sigma_{n-1}^{L},\theta_{n-1}^{b}$}
\noLine\UIC{$\vdots$}
\noLine\UIC{$\Xi^L$}
\noLine\UIC{$\vdots$}
\UIC{$\Sigma^{L}, \chi(\theta_{0},\ldots,\theta_{n-1})^{b}$}
\end{prooftree}
\end{minipage}
\quad and \quad
\begin{minipage}{.40\textwidth}
\begin{prooftree}
\AXC{$\Sigma_{0}^{R},\sneg{\theta_{0}}^{b} \quad \ldots \quad \Sigma_{n-1}^{R},\sneg{\theta_{n-1}}^{b}$}
\noLine\UIC{$\vdots$}
\noLine\UIC{$\Xi^R$}
\noLine\UIC{$\vdots$}
\UIC{$\Sigma^{R}, \sneg{\chi(\theta_{0},\ldots,\theta_{n-1})}^{b}$}
\end{prooftree}
\end{minipage}

\noindent Provided that $\mathsf{R} \notin \{\ensuremath{\mathsf{F}}\xspace,\ensuremath{\mathsf{U}}\xspace\}$, these derivations satisfy the following conditions:
\begin{urlist}
\item[a)] $\Xi^{L}$ and $\Xi^{R}$ do not involve the rules \ensuremath{\mathsf{F}}\xspace or \ensuremath{\mathsf{U}}\xspace.
\item[b)] If, for some $i$, the assumption $\Sigma_{i}^{L},\theta_{i}^{b}$ contains a formula in focus, then so does every sequent in $\Xi^{L}$ on the path to this assumption.
\item[c)] If, for some $i$, the assumption $\Sigma_{i}^{R},\sneg{\theta_{i}}^{b}$ contains a formula in focus, then so does every sequent in $\Xi^{R}$ on the path to this assumption.
\item[d)] If $\mathsf{R} = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ then there is an application of $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ at the root of $\Xi^L$ and $\Xi^R$.
\end{urlist}
\end{urlist}
\end{proposition}
\begin{proof}
The proof of both parts proceeds via a case distinction depending on the proof rule $\mathsf{R}$, following the case distinction in Definition~\ref{d:locitp}. Part~(\ref{i:coh}) easily follows from a direct inspection. For part~(\ref{i:itptrf}) we restrict attention to some representative cases.
Below we use $\ensuremath{\mathsf{W}}\xspace^*$ as a `proof rule' in the sense that, in a proof, we draw the configuration \AXC{$\Gamma_{t}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Gamma_{s}$} \DisplayProof to indicate that either $\Gamma_{t}$ is a proper subset of $\Gamma_{s}$, in which case we are using repeated applications of the weakening rule at node $s$, or else there is only one single node $s = t$ labelled with $\Gamma_{s} = \Gamma_{t}$. \begin{description} \item[Case $\mathsf{R} = \ensuremath{\mathsf{Ax1}}\xspace$.] As an example consider the case where the partition is such that $\Sigma^L \mid \Sigma^R = p^a \mid \atneg{p}^{c}$. Then we have by definition that $\chi = \atneg{p}$ and hence we need to supply proofs for the annotated sequents $\Sigma^L,\chi^b = p^a,\atneg{p}^b$ and $\Sigma^R,\sneg{\chi}^b = \atneg{p}^{c},p^b$. Both of these can easily be proved with the axiom \ensuremath{\mathsf{Ax1}}\xspace. As a second example consider the case where the partition is such that $\Sigma^L \mid \Sigma^R = p^a, \atneg{p}^{c} \mid \varnothing$. Then we have that $\chi = \bot$ and hence need to provide proofs for the sequents $\Sigma^L,\chi^b = p^a,\atneg{p}^{c},\bot^b$ and $\Sigma^R,\sneg{\chi}^b = \top^b$. The latter is proved with \ensuremath{\mathsf{Ax2}}\xspace and for the former we use the proof: \begin{prooftree} \AXC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax1}}\xspace} \UIC{$p^a,\atneg{p}^{c}$} \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UIC{$p^a,\atneg{p}^{c},\bot^b$} \end{prooftree} \item[Case $\mathsf{R} = \ensuremath{\mathsf{R}_{\land}}\xspace$.] First assume that the active formula $(\phi_{0}\land\phi_{1})^{a}$ belongs to $\Sigma^{L}$. We may then represent the partition of $\Sigma$ as $(\phi_{0}\land\phi_{1})^{a},\Sigma_{0} \mid \Sigma_{1}$. For the claim of the proposition, the following derivations suffice: \begin{minipage}{.45\textwidth} \begin{prooftree} \AXC{$\Sigma_{0},\phi_{0}^{a},\theta_{0}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace$} \UIC{$\Sigma_{0},\phi_{0}^{a},\theta_{0}^{b},\theta_{1}^{b}$} \RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$} \UIC{$\Sigma_{0},\phi_{0}^{a},(\theta_{0}\lor\theta_{1})^{b}$} \AXC{$\Sigma_{0},\phi_{1}^{a},\theta_{1}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace$} \UIC{$\Sigma_{0},\phi_{1}^{a},\theta_{0}^{b},\theta_{1}^{b}$} \RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$} \UIC{$\Sigma_{0},\phi_{1}^{a},(\theta_{0}\lor\theta_{1})^{b}$} \RightLabel{$\ensuremath{\mathsf{R}_{\land}}\xspace$} \BIC{$\Sigma_{0},(\phi_{0}\land\phi_{1})^{a},(\theta_{0}\lor\theta_{1})^{b}$} \end{prooftree} \end{minipage} \begin{minipage}{.45\textwidth} \begin{prooftree} \AXC{$\Sigma_{1}\setminus\{\phi_{0}^{a}\},\sneg{\theta_{0}}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Sigma_{1},\sneg{\theta_{0}}^{b}$} \AXC{$\Sigma_{1}\setminus\{\phi_{1}^{a}\},\sneg{\theta_{1}}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Sigma_{1},\sneg{\theta_{1}}^{b}$} \RightLabel{$\ensuremath{\mathsf{R}_{\land}}\xspace$} \BIC{$\Sigma_{1}, (\sneg{\theta_{0}}\land\sneg{\theta_{1}})^{b}$} \end{prooftree} \end{minipage} We then consider the other possibility, where the active formula $(\phi_{0}\land\phi_{1})^{a}$ belongs to $\Sigma^{R}$. We may represent the partition of $\Sigma$ as $\Sigma_{0} \mid (\phi_{0}\land\phi_{1})^{a},\Sigma_{1}$. 
Now the following derivations suffice: \begin{prooftree} \AXC{$\Sigma_{0}\setminus\{\phi_{0}^{a}\},\theta_{0}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Sigma_{0},\theta_{0}^{b}$} \AXC{$\Sigma_{0}\setminus\{\phi_{1}^{a}\},\theta_{1}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Sigma_{0},\theta_{1}^{b}$} \RightLabel{$\ensuremath{\mathsf{R}_{\land}}\xspace$} \BIC{$\Sigma_{0}, (\theta_{0}\land\theta_{1})^{b}$} \end{prooftree} \begin{prooftree} \AXC{$\Sigma_{1},\phi_{0}^{a},\sneg{\theta_{0}}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace$} \UIC{$\Sigma_{1},\phi_{0}^{a},\sneg{\theta_{0}}^{b},\sneg{\theta_{1}}^{b}$} \RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$} \UIC{$\Sigma_{1},\phi_{0}^{a},(\sneg{\theta_{0}}\lor\sneg{\theta_{1}})^{b}$} \AXC{$\Sigma_{1},\phi_{1}^{a},\sneg{\theta_{1}}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace$} \UIC{$\Sigma_{1},\phi_{1}^{a},\sneg{\theta_{0}}^{b},\sneg{\theta_{1}}^{b}$} \RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$} \UIC{$\Sigma_{1},\phi_{1}^{a},(\sneg{\theta_{0}}\lor\sneg{\theta_{1}})^{b}$} \RightLabel{$\ensuremath{\mathsf{R}_{\land}}\xspace$} \BIC{$\Sigma_{1},(\phi_{0}\land\phi_{1})^{a},(\sneg{\theta_{0}}\lor\sneg{\theta_{1}})^{b}$} \end{prooftree} \item[Case $\mathsf{R} = \ensuremath{\mathsf{R}_{\lor}}\xspace$.] We only consider the case where the active formula $(\phi_0 \lor \phi_1)^a$ belongs to $\Sigma^{L}$ (the other case is similar). We may then represent the partition of $\Sigma$ as $(\phi_0 \lor \phi_1)^a,\Sigma_{0} \mid \Sigma_{1}$. The two derivations below then suffice to prove the proposition: \begin{prooftree} \AXC{$\phi_0^a,\phi_1^a, \Sigma_{0}, \theta_{0}^{b}$} \RightLabel{$\ensuremath{\mathsf{R}_{\lor}}\xspace$} \UIC{$(\phi_0 \lor \phi_1)^a, \Sigma_{0},\theta_{0}^{b}$} \end{prooftree} \begin{prooftree} \AXC{$\Sigma_{1}\setminus\{\phi_0^{a},\phi_1^a\},\sneg{\theta_{0}}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Sigma_{1}\setminus\{\phi_0^{a}\},\sneg{\theta_{0}}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Sigma_{1}, \sneg{\theta_{0}}^{b}$} \end{prooftree} \item[Case $\mathsf{R} = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$.] We only consider the case where the active formula $\Box\phi^{a}$ belongs to $\Sigma^{L}$ (the other case is similar). We may then represent the partition of $\Sigma$ as $\Box\phi^{a},\Diamond\Sigma_{0} \mid \Diamond\Sigma_{1}$. The two derivations below then suffice to prove the proposition: \begin{prooftree} \AXC{$\phi^a, \Sigma_{0}, \theta_{0}^{b}$} \RightLabel{$\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$} \UIC{$\Box\phi^a, \Diamond\Sigma_{0},\Diamond\theta_{0}^{b}$} \end{prooftree} \begin{prooftree} \AXC{$\Sigma_{1}\setminus\{\phi^{a}\},\sneg{\theta_{0}}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Sigma_{1}, \sneg{\theta_{0}}^{b}$} \RightLabel{$\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$} \UIC{$\Diamond\Sigma_{1}, \Box\sneg{\theta_{0}}^{b}$} \end{prooftree} \item[Case $\mathsf{R} = \RuFp{\mu}$.] We only consider the case where the principal formula $\mu x. \phi^a$ belongs to $\Sigma^{L}$ (the other case is similar). We may then represent the partition of $\Sigma$ as $\mu x. \phi^a,\Sigma_{0} \mid \Sigma_{1}$. The two derivations below then suffice to prove the proposition: \begin{prooftree} \AXC{$\phi(\mu x. \phi)^u, \Sigma_{0}, \theta_{0}^{b}$} \RightLabel{$\RuFp{\mu}$} \UIC{$\mu x. \phi^a, \Sigma_{0},\theta_{0}^{b}$} \end{prooftree} \begin{prooftree} \AXC{$\Sigma_{1}\setminus\{\phi(\mu x.
\phi)^{u}\},\sneg{\theta_{0}}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Sigma_{1}, \sneg{\theta_{0}}^{b}$} \end{prooftree} \item[Case $\mathsf{R} = \RuFp{\nu}$.] This case is analogous to the case of \RuFp{\mu}, simply keeping the annotation of the principal formula, instead of unfocusing. \item[Case $\mathsf{R} = \ensuremath{\mathsf{W}}\xspace$.] We only consider the case where the weakened formula $\phi^a$ belongs to $\Sigma^{L}$ (the other case is similar). We may then represent the partition of $\Sigma$ as $\phi^a,\Sigma_{0} \mid \Sigma_{1}$. For $\Xi^L$ we can use the derivation \begin{prooftree} \AXC{$\Sigma_{0}, \theta_{0}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace$} \UIC{$\phi^a, \Sigma_{0},\theta_{0}^{b}$} \end{prooftree} The derivation $\Xi^R$ consists of the single sequent $\Sigma_1,\sneg{\theta_0}^b$, without any rules being applied. \item[Case $\mathsf{R} = \ensuremath{\mathsf{F}}\xspace$.] Again, we only consider the case where the principal formula is on the left. We can write the partition of $\Sigma$ as $\phi^u, \Sigma_0 \mid \Sigma_1$ and use the proofs \begin{prooftree} \AXC{$\phi^f,\Sigma_{0}, \theta_{0}^{b}$} \RightLabel{$\ensuremath{\mathsf{F}}\xspace$} \UIC{$\phi^u, \Sigma_{0},\theta_{0}^{b}$} \end{prooftree} and \begin{prooftree} \AXC{$\Sigma_{1}\setminus\{\phi^f\},\sneg{\theta_{0}}^{b}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UIC{$\Sigma_{1}, \sneg{\theta_{0}}^{b}$} \end{prooftree} \item[Case $\mathsf{R} = \ensuremath{\mathsf{U}}\xspace$.] This case is analogous to the case for \ensuremath{\mathsf{F}}\xspace. \end{description} To finish the proof of Proposition~\ref{p:locitp}, we need to check that each of the proofs given above satisfies the conditions (a)--(d). Condition (a) can be verified by a direct inspection. One may also verify the conditions (b) and (c) directly, using the observation that for any node $t$ in the pre-proofs $\Xi^{L}$ and $\Xi^{R}$, if some formula occurring at a child of $t$ is annotated with $f$, then also some formula at $t$ is annotated with $f$. Lastly, for condition (d), one can check in the case for \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace that the constructed proof indeed contains an application of \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace at its root. \end{proof} To prove Theorem~\ref{t:itp} we assemble the interpolant $\theta$ by an induction on the tree that underlies the proof $\Pi$, where most cases of the inductive step are covered by Definition~\ref{d:locitp} and Proposition~\ref{p:locitp}. The main difficulty is treating the cases for discharged leaves and the discharge rule. The idea is to introduce a fresh variable as the interpolant of a discharged leaf and to then bind the variable with a fixpoint operator at the step that corresponds to the application of the discharge rule at the companion of the leaf. We need to ensure that this can be done in such a way that the interpolant stays alternation-free. The key notion that allows us to organize the introduction of fixpoint operators into the interpolant is that of a fixpoint colouring, see Definition~\ref{d:coloring} below. A fixpoint colouring specifies for every node in $\Pi$ whether the application of the discharge rule at that node should give rise to a least fixpoint $\mu$ or a greatest fixpoint $\nu$. Before we can discuss this notion we need to show that the partition $\Phi^L \mid \Phi^R$ of the root sequent of $\Pi$ can be extended in a well-behaved way to all nodes of the proof. \begin{definition} Let $\Pi = (T,P,R,\Sigma)$ be a proof.
A \emph{nodewise partition} of $\Pi$ is a pair $(\Sigma^{L},\Sigma^{R})$ of labellings such that, for every $t \in T$, the pair $\Sigma^{L}_{t}\mid \Sigma^{R}_{t}$ is a partition of $\Sigma_{t}$. Such a partition is \emph{coherent} if it agrees with the derivation rules applied in the proof, as expressed by Definition~\ref{d:locitp}. \end{definition} \begin{proposition} \label{p:itp1} Let $\Pi$ be a proof of some sequent $\Gamma$ and let $(\Gamma^{L},\Gamma^{R})$ be a partition of $\Gamma$. Then there is a unique coherent nodewise partition $(\Sigma^{L},\Sigma^{R})$ of $\Pi$ such that $\Sigma^{L}_{r} = \Gamma^{L}$ and $\Sigma^{R}_{r} = \Gamma^{R}$, where $r$ is the root of $\Pi$. \end{proposition} \begin{proof} Immediate by the definitions. \end{proof} We shall refer to the nodewise partition given in Proposition~\ref{p:itp1} as being \emph{induced} by the partition of the root sequent. \begin{definition} Let $\Pi = (T,P,R,\Sigma)$ be a proof and let $(\Sigma^{L}, \Sigma^{R})$ be a coherent nodewise partition of $\Pi$. This partition is called \emph{balanced} if $\Sigma^{L}_{l} = \Sigma^{L}_{c(l)}$ and $\Sigma^{R}_{l} = \Sigma^{R}_{c(l)}$, for every discharged leaf $l$ of $\Pi$. \end{definition} In words, a coherent nodewise partition is balanced if it splits the sequents of any discharged leaf in exactly the same manner as it splits the leaf's companion node. As a corollary of the following proposition, for every partition $(\Gamma^{L}, \Gamma^{R})$ of a provable sequent $\Gamma$ we can find a proof on which the induced partition is balanced. \begin{proposition} \label{p:itp2} Let $\Pi$ be a proof of some sequent $\Gamma$, and let $(\Gamma^{L},\Gamma^{R})$ be a partition of $\Gamma$. Then there is some finite proof $\Pi'$ of $\Gamma$ such that the nodewise partition on $\Pi'$, induced by $(\Gamma^{L},\Gamma^{R})$, is balanced. \end{proposition} \begin{proof} Let $\vec{\Pi}$ be the full unravelling of $\Pi$ into a \ensuremath{\mathsf{Focus}_\infty}\xspace-proof according to Proposition~\ref{p:fintoinf}, and extend the nodewise partition of $\Pi$ to $\vec{\Pi}$ in the obvious way. Using the same strategy as in the proof of Proposition~\ref{p:ppp} we may `cut off' $\vec{\Pi}$ to a balanced proof $\Pi'$. \end{proof} \begin{definition} \label{d:coloring} Let $\Pi = (T,P,R,\Sigma)$ be a proof of some sequent $\Gamma$, and let $(\Sigma^{L},\Sigma^{R})$ be a nodewise partition of $\Gamma$. A \emph{fixpoint colouring} for $(\Sigma^{L},\Sigma^{R})$ is a map $\eta: T \to \{ \mu, \nu, \checkmark \}$, satisfying the conditions below (where we write $T_{\mu} \mathrel{:=} \eta^{-1}(\mu)$, etc.): \begin{urlist} \item $T_{\checkmark}$ consists of those nodes that belong to \emph{no} set of the form $\itv{c(l)}{l}$; \item for every discharged leaf $l$ of $\Pi$ we have either $\itv{c(l)}{l} \subseteq T_{\mu}$ or $\itv{c(l)}{l} \subseteq T_{\nu}$; \item \label{i:in focus} if $t \in T_{\mu}$ then $\Sigma^{L}_{t}$ contains a focused formula, and if $t \in T_{\nu}$ then $\Sigma^{R}_{t}$ contains a focused formula. \end{urlist} We usually write $\eta_{t}$ rather than $\eta(t)$ and refer to $\eta_{t}$ as the \emph{fixpoint type} of $t$. Nodes in $T_{\checkmark}, T_{\mu}$ and $T_{\nu}$ will sometimes be called \emph{transparent}, \emph{magenta} and \emph{navy}, respectively. \end{definition} \begin{proposition} \label{p:itp3} Let $(\Sigma^{L},\Sigma^{R})$ be a balanced nodewise partition of some proof $\Pi$. Then there is a fixpoint colouring $\eta$ for $(\Sigma^{L}, \Sigma^{R})$.
\end{proposition} For a proof of Proposition~\ref{p:itp3}, we need the following definition and auxiliary proposition. \begin{definition} \label{d:conn} Let $u_{0}$ and $u_{1}$ be two nodes of some proof $\Pi$. We call $u_{0}$ and $u_{1}$ \emph{closely connected} if there is a non-axiomatic leaf $l$ such that $u_{0},u_{1} \in \itv{c(l)}{l}$. The relation of being \emph{connected} is the reflexive/transitive closure of that of being closely connected. \end{definition} The relation of being connected is easily seen to be an equivalence relation; note that transparent nodes are only connected to themselves. Furthermore, as we will see, the partition induced by the connectedness relation refines the one induced by the fixpoint colouring mentioned in Proposition~\ref{p:itp3}. Here is the key observation that makes this possible. \begin{proposition} \label{p:itp4} Let $(\Sigma^{L},\Sigma^{R})$ be a balanced nodewise partition of some proof $\Pi = (T,P,\mathsf{R},\Sigma)$, and let $u$ and $v$ be connected nodes of $\Pi$. Then, for $K \in \{ L, R \}$, we have \begin{equation} \label{eq:lr1} \Sigma^{K}_{u} \text{ contains a formula in focus iff } \Sigma^{K}_{v} \text{ contains a formula in focus}. \end{equation} \end{proposition} \begin{proof} Fix $K \in \{ L, R \}$. We first consider one direction of the equivalence in \eqref{eq:lr1}, for a special case. \begin{claimfirst} \label{cl:lr2} Let $u$ and $v$ be nodes in $\Pi$ such that $v$ is a discharged leaf and $u \in \itv{c(v)}{v}$. Then $u$ and $v$ satisfy \eqref{eq:lr1}. \end{claimfirst} \begin{pfclaim} Assume first that $\Sigma^K_u$ contains a formula in focus. Note that the discharge rule is never applied on the path $\itv{c(v)}{v}$. We can thus iteratively apply Proposition~\ref{p:lr1} backwards along the path $\itv{c(v)}{u}$ to find that $\Sigma^K_{c(v)}$ contains a formula in focus. But then the same applies to $\Sigma^K_v$: since $(\Sigma^L,\Sigma^R)$ is balanced we have $\Sigma^K_v = \Sigma^K_{c(v)}$. For the other direction assume that $\Sigma^K_v$ contains a formula in focus. Again with Proposition~\ref{p:lr1} applied iteratively, now backwards along the path $\itv{u}{v}$, we show that $\Sigma^K_u$ must contain a formula in focus as well. \end{pfclaim} Finally, it is immediate by Claim~\ref{cl:lr2} and the definitions that \eqref{eq:lr1} holds in case $u$ and $v$ are closely connected, and from this an easy induction shows that \eqref{eq:lr1} holds as well if $u$ and $v$ are merely connected. \end{proof} \begin{proofof}{Proposition~\ref{p:itp3}} Let $(\Sigma^{L},\Sigma^{R})$ be a balanced nodewise partition of some proof $\Pi$. First define $\eta_{u} = \checkmark$ for every node $u$ that does not lie on any path to a discharged leaf from its companion node. Then, consider any equivalence class $C$ of the connectedness relation defined in Definition~\ref{d:conn} such that $C \cap T_{\checkmark} = \varnothing$, and make a case distinction. If every node $u$ in $C$ is such that $\Sigma^{L}_{u}$ contains a formula in focus, then we map all $C$-nodes to $\mu$. If, on the other hand, some node $u$ in $C$ is such that $\Sigma^{L}_{u}$ contains \emph{no} formula in focus, we reason as follows. Since $\eta_{u} \neq \checkmark$, $u$ must lie on some path to a non-axiomatic leaf $l$ from its companion node $c(l)$. By the conditions on a successful proof, $\Sigma_{u}$ must contain \emph{some} formula in focus, and so this formula must belong to $\Sigma^{R}_{u}$.
It then follows from Proposition~\ref{p:itp4} that for \emph{every} node $u$ in $C$ the component $\Sigma^{R}_{u}$ contains a formula in focus. In this case we map all $C$-nodes to $\nu$. With this definition it is straightforward to verify that $\eta$ is a fixpoint colouring for $\Sigma^{L}\mid \Sigma^{R}$. \end{proofof} We will now see how we can read off interpolants from a balanced nodewise partition and an associated fixpoint colouring. Basically, the idea is that with every node of the proof we will associate a formula that can be seen as some kind of `preliminary' interpolant for the partition of the sequent of that node. \begin{definition} \label{d:itp} Let $(\Sigma^{L},\Sigma^{R})$ be a balanced nodewise partition of some proof $\Pi$, and let $\eta$ be some fixpoint colouring for $(\Sigma^{L},\Sigma^{R})$. By induction on the depth of nodes we will associate a formula $\ip{s}$ with every node $s$ of $\Pi$. The bound variables of these formulas, if any, will be supplied by the discharge tokens used in $\Pi$. For the definition of $\ip{s}$, inductively assume that $\ip{t}$ has already been defined for all proper descendants of $s$. We distinguish cases depending on whether $s \in \mathsf{Ran}(c)$ and on whether $s$ is a discharged leaf: \begin{description} \item[Case $s \in \mathsf{Dom}(c)$.] In this case we consider the discharge token $\ensuremath{\mathsf{x}}_{c(s)}$ associated with the companion of $s$ as a variable and define \[ \ip{s} \mathrel{:=} \ensuremath{\mathsf{x}}_{c(s)}. \] \item[Case $s \not\in \mathsf{Dom}(c)$ and $s \not\in \mathsf{Ran}(c)$.] Note that this case includes the situation where $s$ is an axiomatic leaf, which is one of the base cases of the induction. Let $\mathsf{R} = \mathsf{R}_{s}$ be the derivation rule applied at the node $s$, and assume that $s$ has successors $v_{0},\ldots,v_{n-1}$. Let $\chi_{s}(x_{0},\ldots,x_{n-1})$ be the basic formula provided by Definition~\ref{d:locitp}. Inductively we assume formulas $\ip{v_{i}}$ for all $i<n$, and so we may define \[ \ip{s} \mathrel{:=} \chi_{s}(\ip{v_{0}},\ldots, \ip{v_{n-1}}). \] \item[Case $s \in \mathsf{Ran}(c)$.] In this case the rule applied at $s$ is the discharge rule, with discharge token $\ensuremath{\mathsf{x}}_{s}$, $s$ has a unique child $s'$, and, obviously, we have $\eta_{s} \in \{ \mu, \nu \}$. We define \[ \ip{s} \mathrel{:=} \eta_{s}\ensuremath{\mathsf{x}}_{s} . \ip{s'}. \] In this case we bind the variable $\ensuremath{\mathsf{x}}_{s}$, which was introduced at the leaves discharged by $s$. \end{description} Finally we define \[ \fitp{\Pi} \mathrel{:=} \ip{r}, \] where $r$ is the root of $\Pi$. \end{definition} We will prove a number of statements about these interpolants $\ip{s}$, for which we need some auxiliary definitions. We call a node $u$ a \emph{proper connected ancestor of $s$}, notation: $P_{c}^{+}us$, if $u$ is both connected to and a proper ancestor of $s$. For a node $s$ in $\Pi$ we then define \[ \mathsf{X}(s) \mathrel{:=} \{ \ensuremath{\mathsf{x}}_{u} \mid u \in \mathsf{Ran}(c) \text{ and } P^{+}_{c} us \}. \] Intuitively, $\mathsf{X}(s)$ can be seen as the set of discharge tokens that may occur as free variables in the interpolant $\ip{s}$. Furthermore, we call a node \emph{special} if it is not connected to its parent, or if it has no parent at all (that is, it is the root of $\Pi$). Observe that in particular all nodes in $T_{\checkmark}$ are special.
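For a concrete illustration of these notions, consider the proof in Figure~\ref{fig:example} at the end of this section, which contains exactly one application of the discharge rule, at the node labelled \ref{discharge}, with discharge token $\ensuremath{\mathsf{x}}$. Every node $s$ on the path from \ref{discharge} to the discharged leaf \ref{topmost}, with the exception of the node \ref{discharge} itself, satisfies $\mathsf{X}(s) = \{ \ensuremath{\mathsf{x}} \}$; accordingly, the preliminary interpolants $\ip{s}$ computed at these nodes in Figure~\ref{fig:table} contain $\ensuremath{\mathsf{x}}$ as a free variable. The node \ref{discharge} itself is special, so that $\mathsf{X}(s) = \varnothing$ there; this is exactly the point where $\ensuremath{\mathsf{x}}$ is bound by a fixpoint operator.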
\begin{proposition} \label{p:itp5} The following hold for every node $s$ in $\Pi$: \begin{urlist} \item \label{it:clitp5-1} if $\mathsf{R}_{s} \neq \RuDischarge{}$ then $\mathsf{X}(s) = \mathsf{X}(v)$ for every $v \in P(s)$ that is connected to $s$; \item \label{it:clitp5-2} if $\mathsf{R}_{s} = \RuDischarge{}$ then $\mathsf{X}(s) = \mathsf{X}(s')\setminus\{\ensuremath{\mathsf{x}}_{s}\}$, where $s'$ is the unique child of $s$; \item \label{it:clitp5-3} if $s$ is special then $\mathsf{X}(s) = \varnothing$. \end{urlist} \end{proposition} \begin{proof} For item~\ref{it:clitp5-1}, the key observation is that if $\mathsf{R}_{s} \neq \RuDischarge{}$, and $v$ is connected to $s$, then $s$ and $v$ have exactly the same proper connected ancestors. From this it is immediate that $\mathsf{X}(s) = \mathsf{X}(v)$. In case $\mathsf{R}_{s} = \RuDischarge{}$, then $s$ is connected to its unique child $s'$ --- here we use the fact that every application of the discharge rule discharges at least one leaf, so that $s'$ actually lies on some path from $s$ to a leaf of which $s$ is the companion. But if $s$ and $s'$ are connected, then they have the same proper connected ancestors, with the obvious exception of $s$ itself. From this item~\ref{it:clitp5-2} follows directly. Item~\ref{it:clitp5-3} follows from the definition of $\mathsf{X}(s)$ and the observation that if $s$ is special then it has no proper connected ancestors. \end{proof} Our next claim is that the interpolant $\fitp{\Pi}$ is of the right syntactic shape, in that it is alternation-free and only contains free variables that occur in both $\Sigma^L_r$ and $\Sigma^R_r$, where $r$ is the root of $\Pi$. \begin{proposition} \label{p:itp6} The following hold for every node $s$ in $\Pi$: \begin{urlist} \item \label{it:clitp1-1} $\mathit{FV}(\ip{s}) \subseteq \Big(\mathit{FV}(\Sigma^{L}_{s}) \cap \mathit{FV}(\Sigma^{R}_{s})\Big)\cup\mathsf{X}(s)$; \item \label{it:clitp1-2} $\ip{s} \in \nth{\eta_{s}}{\mathsf{X}(s)}$ if $\eta_s \in \{\mu,\nu\}$; \item \label{it:clitp1-3} $\ip{s} \in \nth{\nu}{\varnothing} = \nth{\mu}{\varnothing}$ if $s$ is special. \end{urlist} \end{proposition} \begin{proof} We prove the first two items by induction on the depth of $s$ in $\Pi$, making the same case distinction as in Definition~\ref{d:itp}. \begin{description} \item[Case $s \in \mathsf{Dom}(c)$.] In this case $s$ is a discharged leaf, and we have $\ip{s} = \ensuremath{\mathsf{x}}_{c(s)}$, so that $\mathit{FV}(\ip{s}) = \{\ensuremath{\mathsf{x}}_{c(s)}\} \subseteq \mathsf{X}(s)$ because the companion $c(s)$ of $s$ must be a proper ancestor of $s$ and by definition $c(s)$ is connected to $s$. Moreover, we clearly find $\ip{s} \in \nth{\eta_{s}}{\mathsf{X}(s)}$. \item[Case $s \not\in \mathsf{Dom}(c)$ and $s \not\in \mathsf{Ran}(c)$.] Assume that $s$ has children $v_{0},\ldots,v_{n-1}$, then we have $\ip{s} = \chi_{s}(\ip{v_{0}},\ldots,\ip{v_{n-1}})$, where $\chi_{s}(x_{0},\ldots,x_{n-1})$ is the basic formula provided by Definition~\ref{d:locitp}.
For item~\ref{it:clitp1-1} we now reason as follows: \begin{align*} \mathit{FV}(\ip{s}) & = \bigcup_{i} \mathit{FV}(\ip{v_{i}}) & \text{(definition $\ip{s}$)} \\ & \subseteq \bigcup_{i} \Big( \big(\mathit{FV}(\Sigma^{L}_{v_{i}}) \cap \mathit{FV}(\Sigma^{R}_{v_{i}})\big) \cup \mathsf{X}(v_{i}) \Big) & \text{(induction hypothesis)} \\ & \subseteq \bigcup_{i} \Big( \big(\mathit{FV}(\Sigma^{L}_{v_{i}}) \cap \mathit{FV}(\Sigma^{R}_{v_{i}})\big) \cup \mathsf{X}(s) \Big) & \text{(Proposition~\ref{p:itp5}(\ref{it:clitp5-1}))} \\ & \subseteq \big(\mathit{FV}(\Sigma^{L}_{s}) \cap \mathit{FV}(\Sigma^{R}_{s})\big) \cup \mathsf{X}(s) & \text{(Proposition~\ref{p:locitp}(\ref{i:coh}))}, \end{align*} which suffices to prove item \ref{it:clitp1-1}. For item~\ref{it:clitp1-2} we first show that if $\eta_s \in \{\mu,\nu\}$ then $\ip{s} \in \nth{\eta_{s}}{\mathsf{X}(s)}$. Assume that $\eta_s \in \{\mu,\nu\}$. We claim that \begin{equation} \label{eq:thekidsarefine} \ip{v_i} \in \nth{\eta_s}{\mathsf{X}(s)} \quad \mbox{for all } i < n. \end{equation} To see that this is the case fix $i$ and distinguish cases depending on whether $v_i$ is special or not. If $v_i$ is special then we reason as follows: \begin{align*} \mathit{FV}(\ip{v_i}) & \subseteq \Big(\mathit{FV}(\Sigma^{L}_{v_{i}}) \cap \mathit{FV}(\Sigma^{R}_{v_{i}})\Big)\cup\mathsf{X}(v_{i}) & \text{(induction hypothesis)} \\ & = \mathit{FV}(\Sigma^{L}_{v_{i}}) \cap \mathit{FV}(\Sigma^{R}_{v_{i}}) & \text{(Proposition~\ref{p:itp5}(\ref{it:clitp5-3}))} \\ & \subseteq \mathit{FV}(\Sigma^{L}_{s}) \cap \mathit{FV}(\Sigma^{R}_{s}), & \text{(Proposition~\ref{p:locitp}(\ref{i:coh}))} \end{align*} so that $\mathit{FV}(\ip{v_i}) \cap \mathsf{X}(s) = \varnothing$. From this \eqref{eq:thekidsarefine} is immediate by the definitions. On the other hand, if $v_i$ is not special then by definition it is connected to $s$. It follows that $\eta_{v_i} = \eta_s \in \{\mu,\nu\}$ and thus we obtain by the inductive hypothesis that $\ip{v_i} \in \nth{\eta_s}{\mathsf{X}(v_i)}$. But since $s \notin \mathsf{Ran}(c)$ we have $\mathsf{R}_s \neq \RuDischarge{}$ and so by Proposition~\ref{p:itp5}(\ref{it:clitp5-1}) we find $\mathsf{X}(v_i) = \mathsf{X}(s)$. This finishes the proof of \eqref{eq:thekidsarefine}. To show that $\ip{s} \in \nth{\eta_s}{\mathsf{X}(s)}$ recall that $\ip{s} = \chi_s(\ip{v_0},\dots,\ip{v_{n-1}})$. Because of \eqref{eq:thekidsarefine} it suffices to check that $\nth{\eta_s}{\mathsf{X}(s)}$ is closed under the schema $\chi_s$. But since $\chi_{s}$ is a basic formula, this is immediate by the definitions. \item[Case $s \in \mathsf{Ran}(c)$.] In this case the rule applied at $s$ is the discharge rule, with discharge token $\ensuremath{\mathsf{x}}_{s}$, $s$ has a unique child $s'$, $\eta_{s} \in \{ \mu, \nu \}$ and by definition $\ip{s} = \eta_{s}\ensuremath{\mathsf{x}}_{s} . \ip{s'}$.
To prove item~\ref{it:clitp1-1} we can then reason as follows: \begin{align*} \mathit{FV}(\ip{s}) & = \mathit{FV}(\ip{s'}) \setminus \{ \ensuremath{\mathsf{x}}_{s} \} & \text{(definition $\ip{s}$)} \\ & \subseteq \Big( \big(\mathit{FV}(\Sigma^{L}_{s'}) \cap \mathit{FV}(\Sigma^{R}_{s'})\big) \cup \mathsf{X}(s') \Big) \setminus \{ \ensuremath{\mathsf{x}}_{s} \} & \text{(induction hypothesis)} \\ & \subseteq \Big( \big(\mathit{FV}(\Sigma^{L}_{s}) \cap \mathit{FV}(\Sigma^{R}_{s})\big) \cup \mathsf{X}(s') \Big) \setminus \{ \ensuremath{\mathsf{x}}_{s} \} & \text{($\Sigma^L_{s'} = \Sigma^L_s$ and $\Sigma^R_{s'} = \Sigma^R_s$)} \\ & \subseteq \big(\mathit{FV}(\Sigma^{L}_{s}) \cap \mathit{FV}(\Sigma^{R}_{s})\big) \cup \big(\mathsf{X}(s') \setminus \{ \ensuremath{\mathsf{x}}_{s} \} \big) & \text{(basic set theory)} \\ & = \big(\mathit{FV}(\Sigma^{L}_{s}) \cap \mathit{FV}(\Sigma^{R}_{s})\big) \cup \mathsf{X}(s) & \text{(Proposition~\ref{p:itp5}(\ref{it:clitp5-2}))} \end{align*} To check item~\ref{it:clitp1-2}, note that $\eta_s \in \{\mu,\nu\}$, because $s$ itself is on the path from $s$ to any of the leaves that it discharges, and that $\eta_{s'} = \eta_s$ because $s'$ is connected to $s$. By the inductive hypothesis we find that $\ip{s'} \in \nth{\eta_s}{\mathsf{X}(s')}$, so that it is clear from the definitions that $\ip{s} \in \nth{\eta_s}{\mathsf{X}(s') \setminus \{\ensuremath{\mathsf{x}}_s\}}$. It follows that $\ip{s} \in \nth{\eta_s}{\mathsf{X}(s)}$, since $\mathsf{X}(s) = \mathsf{X}(s') \setminus \{\ensuremath{\mathsf{x}}_s\}$ by Proposition~\ref{p:itp5}(\ref{it:clitp5-2}). \end{description} This finishes the proof of the first two items of the proposition. \medskip For item~\ref{it:clitp1-3}, let $s$ be special. It is then immediate from item~\ref{it:clitp1-2} and Proposition~\ref{p:itp5}(\ref{it:clitp5-3}) that $\ip{s} \in \nth{\eta_s}{\varnothing}$. The statement then follows by the observation from Proposition~\ref{p:af2}(\ref{it:af2-2}) that $\nth{\mu}{\varnothing} = \muML^{\mathit{af}} = \nth{\nu}{\varnothing}$. \end{proof} Proposition~\ref{p:itp7} is the key technical result of our proof. In its formulation we need the following. \begin{definition} Let $\Pi = (T,P,\Sigma,\mathsf{R})$ be some proof. A \emph{global annotation} for $\Pi$ is a map $a: T \to \{ u,f \}$; the dual of the global annotation $a$ is the map $\ol{a}$ given by \[ \ol{a}(t) \mathrel{:=} \left\{\begin{array}{ll} f & \text{if } a(t) = u \\ u & \text{if } a(t) = f. \end{array}\right. \] A global annotation $a$ is \emph{consistent} with a fixpoint colouring $\eta$ if it satisfies $a(t) = u$ if $\eta_t = \mu$ and $a(t) = f$ if $\eta_t = \nu$. \end{definition} Note that the conditions on an annotation $a$ to be consistent with a fixpoint colouring $\eta$ only mention the nodes in $T_{\mu}$ and $T_{\nu}$; the annotation $a(t)$ can be arbitrary for $t \in T_{\checkmark}$. \medskip For the final part of the interpolation argument we need a general observation about the result of applying a substitution to (all formulas in a) \emph{proof}. First we need some definitions. \begin{definition} Let $\Sigma$ be an annotated sequent. We define $\mathit{BV}(\Sigma) = \bigcup\{\mathit{BV}(\psi) \mid \psi^a \in \Sigma\}$, and, for any formula $\phi$ such that $\mathit{FV}(\phi) \cap \mathit{BV}(\Sigma) = \varnothing$, we set \[ \Sigma[\phi / x] \mathrel{:=} \{(\psi[\phi / x])^a \mid \psi^a \in \Sigma\}.
\] Furthermore, where $\Pi = (T,P,R,\Sigma)$ is some proof, we let $\Pi[\phi/x] \mathrel{:=} (T,P,R,\Sigma')$ denote the labelled tree which is obtained from $\Pi$ by replacing every annotated sequent $\Sigma_{t}$ with $\Sigma_{t}[\phi/x]$. \end{definition} \begin{proposition} \label{p:psubst} Let $\Pi$ be a \ensuremath{\mathsf{Focus}}\xspace-proof of a sequent $\Sigma$ with open assumptions $\{\Gamma_i \mid i \in I\}$, and let $\phi$ be a formula such that $\mathit{FV}(\phi) \cap \mathit{BV}(\Sigma) = \varnothing$. Then $\Pi[\phi/x]$ is a well-formed \ensuremath{\mathsf{Focus}}\xspace-proof of the sequent $\Sigma[\phi/x]$, with open assumptions $\{\Gamma_i [\phi/x] \mid i \in I\}$. \end{proposition} \begin{proof} (Sketch) One may show that $\mathit{BV}(\chi) \subseteq \mathit{BV}(\psi)$ for every $\chi \in \mathsf{Clos}(\psi)$, by an induction on the length of the trace from $\psi$ to $\chi$ witnessing that $\chi \in \mathsf{Clos}(\psi)$. Because every formula $\chi$ that occurs in one of the sequents of $\Pi$ belongs to the closure of $\Sigma$ it follows that $\mathit{BV}(\chi) \subseteq \mathit{BV}(\Sigma)$ and hence all the substitutions are well-defined. Moreover, one can check that all the proof rules remain valid if one performs the same substitution uniformly on all the formulas in the conclusion and the premises. It should also be clear that the global conditions on proofs are not affected by the substitution. \end{proof} \begin{proposition} \label{p:itp7} Let $(\Sigma^{L},\Sigma^{R})$ be a balanced nodewise partition of some proof $\Pi$, let $\eta$ be some fixpoint colouring for $(\Sigma^{L},\Sigma^{R})$, and let $a: T \to \{ u, f \}$ be a global annotation that is consistent with $\eta$. Then we can effectively construct \ensuremath{\mathsf{Focus}}\xspace-proofs $\Pi^{L}$ and $\Pi^{R}$ of the sequents $\Sigma^{L}_{r},(\fitp{\Pi})^{a(r)}$ and $\Sigma^{R}_{r},(\sneg{\fitp{\Pi}})^{\ol{a}(r)}$, respectively, where $r$ is the root of $\Pi$. \end{proposition} \begin{proof} For every node $s$ of $\Pi$ we will construct two proofs with open assumptions, $\Pi^{L}_{s}$ and $\Pi^{R}_{s}$, for the sequents $\Sigma^{L}_{s},\ip{s}^{a(s)}$ and $\Sigma^{R}_{s},\sneg{\ip{s}}^{\ol{a}(s)}$, respectively. We will make sure that the only open assumptions of these proofs will be associated with leaves $l$ of which the companion node $c(l)$ is a proper connected ancestor of $s$. We define $\Pi^{L}_s$ and $\Pi^{R}_s$ as labelled trees that satisfy conditions \ref{i:local condition}~and~\ref{i:leaf condition} from Definition~\ref{d:proof}. We check the other conditions in subsequent claims. The definition of $\Pi^{L}_{s}$ and $\Pi^{R}_{s}$ proceeds by induction on the depth of $s$ in the tree $\Pi$, where we make the same case distinction as in Definition~\ref{d:itp}. \begin{description} \item[Case $s \in \mathsf{Dom}(c)$.] In this case we let $\Pi^L_s$ and $\Pi^R_s$ be the leaves that are labelled with the discharge variable $\ensuremath{\mathsf{x}}_{c(s)}$ and the sequents $\Sigma^L_s,\ip{s}^{a(s)} = \Sigma^L_s, \ensuremath{\mathsf{x}}_{c(s)}^{a(s)}$ and $\Sigma^R_s,\sneg{\ip{s}}^{\ol{a}(s)} = \Sigma^R_s, \ensuremath{\mathsf{x}}_{c(s)}^{\ol{a}(s)}$, respectively. Note that here we are creating an open assumption that is labelled with a discharge token and not with $\star$. This open assumption will be discharged later when the induction reaches the node $c(s)$. \item[Case $s \not\in \mathsf{Dom}(c)$ and $s \not\in \mathsf{Ran}(c)$.]
The basic strategy in this case is to use Proposition~\ref{p:locitp} to assemble the proofs $\Pi^L_s$ and $\Pi^R_s$ from the proofs that are inductively given for the children of $s$. The details depend on the global annotation $a$. We only consider the subcases where $a(s)$ is distinct from $a(v)$ for at least one child $v$ of $s$. The case where $a(s) = a(v)$ for all $v \in P(s)$ is similar, but easier. \begin{description} \item[Subcase $a(s) = u$, but $a(v) = f$, for some $v \in P(s)$.] As a representative example of this, consider the situation where $\mathsf{R}_{s}$ is binary, and $a(s) = a(v_{0}) = u$, while $a(v_{1}) = f$, where $v_{0}$ and $v_{1}$ are the two successors of $s$. We first consider the proof $\Pi^L_{s}$. Inductively we assume labelled trees $\Pi^{L}_{v_{0}}$ and $\Pi^{L}_{v_{1}}$ for, respectively, the sequents $\Sigma^{L}_{v_{0}},\ip{v_{0}}^{u}$ and $\Sigma^{L}_{v_{1}},\ip{v_{1}}^{f}$. Combining these with the proof with assumptions $\Xi^{L}$ from Proposition~\ref{p:locitp}, we then define $\Pi^{L}_{s}$ to be the following labelled tree: \begin{prooftree} \AXC{$\Pi^{L}_{v_{0}}$} \UIC{$\Sigma^{L}_{v_{0}},\ip{v_{0}}^{u}$} \AXC{$\Pi^{L}_{v_{1}}$} \UIC{$\Sigma^{L}_{v_{1}},\ip{v_{1}}^{f}$} \RightLabel{$\ensuremath{\mathsf{F}}\xspace$} \UIC{$\Sigma^{L}_{v_{1}},\ip{v_{1}}^{u}$} \BIC{$\Xi^{L}$} \UIC{$\Sigma^{L}_{s},\chi_{s}(\ip{v_{0}},\ip{v_{1}})^{u}$} \end{prooftree} A similar construction works for $\Pi^R_{s}$: Inductively we are given proofs $\Pi^{R}_{v_{0}}$ and $\Pi^{R}_{v_{1}}$ for, respectively, the sequents $\Sigma^{R}_{v_{0}},\sneg{\ip{v_{0}}}^{f}$ and $\Sigma^{R}_{v_{1}},\sneg{\ip{v_{1}}}^{u}$. Together with the proof $\Xi^{R}$ that we obtain from Proposition~\ref{p:locitp} we can define $\Pi^R_{s}$ as follows: \begin{prooftree} \AXC{$\Pi^{R}_{v_{0}}$} \UIC{$\Sigma^{R}_{v_{0}},\sneg{\ip{v_{0}}}^{f}$} \AXC{$\Pi^{R}_{v_{1}}$} \UIC{$\Sigma^{R}_{v_{1}},\sneg{\ip{v_{1}}}^{u}$} \RightLabel{$\ensuremath{\mathsf{U}}\xspace$} \UIC{$\Sigma^{R}_{v_{1}},\sneg{\ip{v_{1}}}^{f}$} \BIC{$\Xi^{R}$} \UIC{$\Sigma^{R}_{s},\sneg{\chi_{s}(\ip{v_{0}},\ip{v_{1}})}^{f}$} \end{prooftree} \item[Subcase $a(s) = f$, but $a(v) = u$, for some $v \in P(s)$.] As in the previous subcase, we consider a representative example where $s$ has two successors, $v_{0}$ and $v_{1}$, but now $a(s) = a(v_{0}) = f$, while $a(v_{1}) = u$. Inductively we are provided with labelled trees $\Pi^{L}_{v_{0}}$ and $\Pi^{L}_{v_{1}}$ for, respectively, the sequents $\Sigma^{L}_{v_{0}},\ip{v_{0}}^{f}$ and $\Sigma^{L}_{v_{1}},\ip{v_{1}}^{u}$. Combining these with the proof with assumptions $\Xi^{L}$, which we obtain by Proposition~\ref{p:locitp}, we then define $\Pi^{L}_{s}$ to be the following labelled tree: \begin{prooftree} \AXC{$\Pi^{L}_{v_{0}}$} \UIC{$\Sigma^{L}_{v_{0}},\ip{v_{0}}^{f}$} \AXC{$\Pi^{L}_{v_{1}}$} \UIC{$\Sigma^{L}_{v_{1}},\ip{v_{1}}^{u}$} \RightLabel{$\ensuremath{\mathsf{U}}\xspace$} \UIC{$\Sigma^{L}_{v_{1}},\ip{v_{1}}^{f}$} \BIC{$\Xi^{L}$} \UIC{$\Sigma^{L}_{s},\chi_{s}(\ip{v_{0}},\ip{v_{1}})^{f}$} \end{prooftree} Again, a similar construction works for $\Pi^R_{s}$. \end{description} \item[Case $s \in \mathsf{Ran}(c)$.] In this case the rule applied at $s$ is the discharge rule; let $\ensuremath{\mathsf{x}}_{s}$, $s'$ and $\eta_{s}$ be as in the corresponding case in Definition~\ref{d:itp}. Note that by the assumption on $a$ we have that $a(s) = a(s')$ and $a(s) = a(l)$ for any discharged leaf $l$ such that $c(l) = s$. Furthermore, there are only two possibilities: either $a(s) = u$ and $\eta_{s} = \mu$, or $a(s) = f$ and $\eta_{s} = \nu$.
We cover both cases at once but first only consider the definition of $\Pi^L_{s}$. Inductively we have a proof $\Pi^L_{s'}$ of $\Sigma^L_{s'}, \ip{s'}^{a(s')}$. Note that $\Sigma^L_{s'} = \Sigma^L_s$, because the discharge rule is applied at $s$. Let $(\Pi')^{L} \mathrel{:=} \Pi^{L}_{s'}[\eta_{s}\ensuremath{\mathsf{x}}_{s}.\ip{s'}/\ensuremath{\mathsf{x}}_{s}]$; that is, $(\Pi')^{L}$ is the labelled tree $\Pi^{L}_{s'}$, with all occurrences of $\ensuremath{\mathsf{x}}_{s}$ replaced by the formula $\eta_{s}\ensuremath{\mathsf{x}}_{s}.\ip{s'}$. That this is a well-defined operation on proofs follows from Proposition~\ref{p:psubst}. However, we need to make sure that $\mathit{FV}(\eta_{s} \ensuremath{\mathsf{x}}_{s}.\ip{s'}) \cap \mathit{BV}(\Sigma^L_{s'}, \ip{s'}^{a(s')}) = \varnothing$. This follows with item~\ref{it:clitp1-1} of Proposition~\ref{p:itp6} and the observation that the variables in $\mathsf{X}(s')$ do not occur as bound variables in any of the formulas in $\Sigma^L_{s'}$ nor in $\ip{s'}$. Note that $(\Pi')^{L}$ has the open assumption $\Sigma^L_{s}, (\eta_{s}\ensuremath{\mathsf{x}}_{s}. \ip{s'})^{a(s)}$ instead of $\Sigma^L_{s}, \ensuremath{\mathsf{x}}_{s}^{a(s)}$. To obtain $\Pi^{L}_{s}$ from $(\Pi')^{L}$, add one application of the fixpoint rule for $\eta_{s}\ensuremath{\mathsf{x}}_{s}.\ip{s'}$, followed by an application of the discharge rule for the discharge token $\ensuremath{\mathsf{x}}_{s}$: \begin{prooftree} \AxiomC{$[\Sigma^{L}_{s},\big(\eta_{s}\ensuremath{\mathsf{x}}_{s}.\ip{s'}\big)^{a(s)}]^{\ensuremath{\mathsf{x}}_{s}}$} \UnaryInfC{$(\Pi')^L$} \UnaryInfC{$\Sigma^L_{s}, \big(\ip{s'}[\eta_{s} \ensuremath{\mathsf{x}}_{s}.\ip{s'}/\ensuremath{\mathsf{x}}_{s}]\big)^{a(s)}$} \RightLabel{$\mathsf{R}_{\eta_s}$} \UnaryInfC{$\Sigma^L_{s}, \big(\eta_{s} \ensuremath{\mathsf{x}}_{s}.\ip{s'}\big)^{a(s)}$} \RightLabel{\RuDischarge{\ensuremath{\mathsf{x}}_{s}}} \UnaryInfC{$\Sigma^L_s, \big(\eta_{s} \ensuremath{\mathsf{x}}_{s}.\ip{s'}\big)^{a(s)}$} \end{prooftree} The application of the rule $\mathsf{R}_{\eta_s}$ is correct because if $\eta_s = \mu$ then $a(s) = u$. Thus, the unfolded fixpoint formula in the premise of the application of $\mathsf{R}_{\eta_s}$ is still annotated with $a(s)$. If $\eta_s = \nu$ then the unfolded fixpoint stays annotated with $a(s)$ because $\mathsf{R}_\nu$ does not change the annotation of its principal formula. Also note that the proof $\Pi^L_s$ no longer contains open assumptions that are labelled with the token $\ensuremath{\mathsf{x}}_{s}$. A similar construction can be used to define $\Pi^R_{s}$. By induction there is a proof $\Pi^R_{s'}$ of $\Sigma^R_{s'}, \sneg{\ip{s'}}^{\ol{a}(s')}$. As before we use Proposition~\ref{p:psubst} to substitute all occurrences of $\ensuremath{\mathsf{x}}_{s}$ with $\ol{\eta_{s}}\ensuremath{\mathsf{x}}_{s}.\sneg{\ip{s'}}$ in the proof $\Pi^R_{s'}$ to obtain a proof $(\Pi')^{R} \mathrel{:=} \Pi^{R}_{s'}[\ol{\eta_{s}}\ensuremath{\mathsf{x}}_{s}.\sneg{\ip{s'}}/\ensuremath{\mathsf{x}}_{s}]$. Note that $(\Pi')^{R}$ has the open assumption $\Sigma^R_{s},(\ol{\eta_{s}}\ensuremath{\mathsf{x}}_{s}.\sneg{\ip{s'}})^{\ol{a}(s)}$ instead of $\Sigma^R_{s}, \ensuremath{\mathsf{x}}_{s}^{\ol{a}(s)}$.
We then construct the proof $\Pi^R_s$ as follows: \begin{prooftree} \AxiomC{$[\Sigma^{R}_{s},\big(\ol{\eta_{s}}\ensuremath{\mathsf{x}}_{s}.\sneg{\ip{s'}}\big)^{\ol{a}(s)}]^{\ensuremath{\mathsf{x}}_{s}}$} \UnaryInfC{$(\Pi')^R$} \UnaryInfC{$\Sigma^R_{s}, \big(\sneg{\ip{s'}}[\ol{\eta_{s}} \ensuremath{\mathsf{x}}_{s}.\sneg{\ip{s'}}/\ensuremath{\mathsf{x}}_{s}]\big)^{\ol{a}(s)}$} \RightLabel{$\mathsf{R}_{\ol{\eta_s}}$} \UnaryInfC{$\Sigma^R_{s}, \big(\ol{\eta_{s}} \ensuremath{\mathsf{x}}_{s}.\sneg{\ip{s'}}\big)^{\ol{a}(s)}$} \RightLabel{\RuDischarge{\ensuremath{\mathsf{x}}_{s}}} \UnaryInfC{$\Sigma^R_s, \big(\ol{\eta_{s}} \ensuremath{\mathsf{x}}_{s}.\sneg{\ip{s'}}\big)^{\ol{a}(s)}$} \end{prooftree} Note that if $\ol{\eta_s} = \mu$ then $\eta_s = \nu$, $a(s) = f$ and $\ol{a}(s) = u$. Therefore, the application of the rule $\mathsf{R}_\mu$ above has the right annotation at the unfolded fixpoint. \end{description} We now check that $\Pi^L_r$ and $\Pi^R_r$ are indeed \ensuremath{\mathsf{Focus}}\xspace-proofs of, respectively, the sequents $\Sigma^L_r,\ip{r}^{a(r)}$ and $\Sigma^R_r,\sneg{\ip{r}}^{\ol{a}(r)}$, where $r$ is the root of $\Pi$. Note that whereas we are proving statements about $\Pi^L_r$ and $\Pi^R_r$, our proof is by induction on the complexity of the original proof $\Pi$. In the formulation of the inductive hypothesis it is convenient to allow for proofs in which some open assumptions are already labelled with a discharge token instead of with $\star$. (At the end of the induction this makes no difference because $\Pi^L_r$ and $\Pi^R_r$ do not have any open assumption.) With this adaptation we will establish the claim below. Before going into the details we observe that, given the inductive definition of the proof $\Pi^{L} = \Pi^{L}_{r}$, it contains, for every node $s$ in $\Pi$, some substitution instance of $\Pi^{L}_{s}$ as a subproof. In particular, we may assume the existence of an injection $f^L$ mapping $\Pi$-nodes to $\Pi^{L}$-nodes, in such a way that $f^L(s)$ is the root of the proof tree $\Pi^{L}_{s}$, for every node $s$ of $\Pi$. A similar observation holds for the proof $\Pi^{R}$. \begin{claimfirst} \label{cl:itp2} For all nodes $s$ in $\Pi$ the following hold. \begin{urlist} \item $\Pi^{L}_{s}$ is a \ensuremath{\mathsf{Focus}}\xspace-proof for the sequent $\Sigma^{L}_{s}, \ip{s}^{a(s)}$, with assumptions $\{ \Sigma^{L}_{l},\ensuremath{\mathsf{x}}_{c(l)}^{a(l)} \mid P^{+}c(l)s \text{ and } P^{*}sl \}$ such that additionally for every node $t'$ that is on a path from the root $f^L(s)$ of $\Pi^L_s$ to one of its open assumptions the following hold: \begin{enumerate} \item \label{it:itpcl-1} the annotated sequent at $t'$ contains at least one formula that is in focus; \item \label{it:itpcl-2} the rule applied at $t'$ is not $\ensuremath{\mathsf{F}}\xspace$ or $\ensuremath{\mathsf{U}}\xspace$; \item \label{it:itpcl-3} if $t' = f^L(s')$ and \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace is applied at $s'$ then \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace is applied at $t'$.
\end{enumerate} \item $\Pi^{R}_{s}$ is a \ensuremath{\mathsf{Focus}}\xspace-proof for the sequent $\Sigma^{R}_{s}, \sneg{\ip{s}}^{\ol{a}(s)}$, with assumptions $\{ \Sigma^{R}_{l},\ensuremath{\mathsf{x}}_{c(l)}^{\ol{a}(l)} \mid P^{+}c(l)s \text{ and } P^{*}sl \}$ such that additionally for every node $t'$ that is on a path from the root of $\Pi^R_s$ to one of its open assumptions it holds that: \begin{enumerate} \item the annotated sequent at $t'$ contains at least one formula that is in focus; \item the rule applied at $t'$ is not $\ensuremath{\mathsf{F}}\xspace$ or $\ensuremath{\mathsf{U}}\xspace$; \item if $t' = f^R(s')$ and \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace is applied at $s'$ then \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace is applied at $t'$. \end{enumerate} \end{urlist} \end{claimfirst} \begin{pfclaim} As mentioned, our argument proceeds by induction on the complexity of the proof $\Pi$, or, to be somewhat more precise, by induction on the depth of $s$ in $\Pi$. Here we will use the same case distinction as the construction of $\Pi^L_s$ and $\Pi^R_s$. We focus on the proof $\Pi^{L}$, the case of $\Pi^{R}$ being similar. First we make an auxiliary observation that will be helpful for understanding our proof: \begin{equation} \label{eq:core} \text{if } s \in T_{\mu} \cup T_{\nu} \text{ then $f^L(s)$ contains a formula in focus in } \Pi^{L}_{s}. \end{equation} For a proof of this, first assume that $s \in T_{\mu}$, i.e., $\eta_{s} = \mu$. Then $\Sigma^{L}_{s}$ contains a formula in focus by item~\ref{i:in focus} of Definition~\ref{d:coloring}. On the other hand, if $s \in T_{\nu}$, then since the annotation $a$ is consistent with $\eta$, we have $a(s) = f$, so that the formula $\ip{s}^{a(s)}$ is in focus. \medskip Now we turn to the inductive proof of the claim proper. It is obvious from the construction that the root $f^L(s)$ of $\Pi^L_s$ is labelled with the annotated sequent $\Sigma^{L}_{s}, \ip{s}^{a(s)}$, and it is not hard to see that the open assumptions of this proof are indeed of the form claimed above. To show that $\Pi^L_s$ is indeed a \ensuremath{\mathsf{Focus}}\xspace-proof we need to check the conditions from Definition~\ref{d:proof}. Condition~\ref{i:local condition}, which requires the annotated sequents to match the applied proof rule at every node, can be easily verified by inspecting the nodes that are added in each step of the construction of $\Pi^L_s$. Similarly, it is clear that only leaves get labelled with discharge tokens and thus condition~\ref{i:leaf condition} is satisfied. It is also not too hard to see that all non-axiomatic leaves that are not open assumptions are discharged. This is just our (already established) claim that all open assumptions of $\Pi^L_s$ are in the set $\{ \Sigma^{L}_{l},\ensuremath{\mathsf{x}}_{c(l)}^{a(l)} \mid P^{+}c(l)s \text{ and } P^{*}sl \}$. This means that condition~\ref{i:discharge condition} is satisfied. (Note that it is here where we conveniently allow for open leaves that are labelled with a discharge token rather than with $\star$.) It is left to consider condition~\ref{i:path condition}. We have to consider any path between a leaf $l'$ and its companion $c(l')$ in $\Pi^{L}_s$. We can focus on the case where $c(l')$ is the root $f^L(s)$ of $\Pi^L_s$; in later steps of the induction the labels of the node only get changed by substitutions of formulas for the open fixpoint variables, which by Proposition~\ref{p:psubst} does not affect condition~\ref{i:path condition}.
Note then that $l' = f^L(l)$ for some leaf $l$ of $\Pi$ with $c(l) = s$ and $c(l') = f^L(s)$. The path from $s$ to $l$ in $\Pi$ satisfies condition~\ref{i:path condition} because $\Pi$ is a \ensuremath{\mathsf{Focus}}\xspace-proof. That the path from $f^L(s)$ to $l'$ satisfies condition~\ref{i:path condition} follows from the statements~\eqref{it:itpcl-1}, \eqref{it:itpcl-2} and \eqref{it:itpcl-3} that we are about to prove. \medskip To prove the parts~\eqref{it:itpcl-1}, \eqref{it:itpcl-2} and \eqref{it:itpcl-3} of the inductive statement, let $t'$ be a node on a path from the root $f^L(s)$ of $\Pi^L_s$ to one of its open assumptions. We now make our case distinction. \begin{description} \item[Case $s \in \mathsf{Dom}(c)$.] In this case $\Pi^{L}_{s}$ contains $f^L(s)$ as its single node, and so \eqref{it:itpcl-1} follows by \eqref{eq:core}, while \eqref{it:itpcl-2} and \eqref{it:itpcl-3} are obvious by construction. \item[Case $s \not\in \mathsf{Dom}(c)$ and $s \not\in \mathsf{Ran}(c)$.] Let $v_{0},\ldots,v_{n-1}$ be the children of $s$ (in $\Pi$). Then by construction $\Pi^{L}_{s}$ consists of the pre-proofs $\Pi^{L}_{v_{0}}, \ldots,\Pi^{L}_{v_{n-1}}$, linked to the root $f^L(s)$ via an instance $\Xi^{L}$ of Proposition~\ref{p:locitp}, in such a way that (i) all open leaves of $\Pi^{L}_{s}$ belong to one of the $\Pi^{L}_{v_{i}}$ where $s$ and $v_{i}$ are connected, and (ii) $\Pi^{L}_{v_{i}}$ is \emph{directly} pasted to the corresponding leaf of $\Xi^{L}$ in case $s$ and $v_{i}$ are connected (that is, no focus or unfocus rule is needed). Concerning the position of the node $t'$ in $\Pi^{L}_{s}$, it follows from (i) and (ii) that there is a child $v = v_{i}$ of $s$, which is connected to $s$ and such that $t'$ either lies (in the $\Pi^{L}_{v}$-part of $\Pi^{L}_{s}$) on the path from $f^L(v)$ to an open leaf, or on the path in $\Pi^{L}_{s}$ from $f^L(s)$ to $f^L(v)$. Since the first case is easily taken care of by the inductive hypothesis, we focus on the latter. It follows from (ii) that the full path from $f^L(s)$ to $f^L(v)$ is taken from the pre-proof $\Xi^{L}$ as provided by Proposition~\ref{p:locitp}. But then \eqref{it:itpcl-1}, \eqref{it:itpcl-2} and \eqref{it:itpcl-3} are immediate by items~\ref{i:itptrf}(a), (b) and (d) of that proposition, given the fact that by \eqref{eq:core} the node $f^L(v)$ features a formula in focus. (Note that the rule applied at $f^L(s)$ in $\Pi^{L}_{s}$ is not the focus rule since $s \in T_{\mu} \cup T_{\nu}$ and thus $\Sigma_{s}$ contains a formula in focus.) \item[Case $s \in \mathsf{Ran}(c)$.] Let $s^{+}$ be the unique successor of $s$ in $\Pi$. Then by construction $\Pi^{L}_{s}$ consists of a substitution instance of $\Pi^{L}_{s^{+}}$, connected to $f^L(s)$ via the application of the rules $\mathsf{R}_{\eta_s}$ (at the unique successor of $f^L(s)$) and \RuDischarge{\ensuremath{\mathsf{x}}_{s}} (at $f^L(s)$ itself). Clearly then there are two possible locations for the node $t'$. If $t'$ is situated in the subtree rooted at $f^L(s^{+})$, then \eqref{it:itpcl-1}, \eqref{it:itpcl-2} and \eqref{it:itpcl-3} follow from the inductive hypothesis (note that when we apply a substitution to the derivation $\Pi^{L}_{s^{+}}$ we do not change the proof rules or alter the annotations). On the other hand, the only two nodes of $\Pi^{L}_{s}$ that do not belong to the mentioned subtree are $f^L(s)$ itself and its unique child. These nodes carry the same sequent label, and so in this case \eqref{it:itpcl-1} follows from \eqref{eq:core}.
Finally, \eqref{it:itpcl-2} and \eqref{it:itpcl-3} are obvious since we already saw that the rules applied in $\Pi^{L}_{s}$ at $f^L(s)$ and its successor are $\RuDischarge{\ensuremath{\mathsf{x}}_{s}}$ and $\mathsf{R}_{\eta_s}$, respectively. \end{description} This finishes the proof of the claim. \end{pfclaim} Finally, the proof of the Proposition is immediate by these claims if we consider the case $s = r$, where $r$ denotes the root of the tree. \end{proof} We close this section with an example that illustrates the computation of the interpolant: \newcounter{nodecounter} \renewcommand{\thenodecounter}{(\alph{nodecounter})} \newcommand{\refstepcounter{nodecounter}\thenodecounter\ }{\refstepcounter{nodecounter}\thenodecounter\ } \newcommand{,}{,\,} \newcommand{\interpol}[1]{} \newcommand{\eq}[1]{} \begin{example} In this part of the appendix we discuss an example in which we compute an interpolant by induction on the complexity of a \ensuremath{\mathsf{Focus}}\xspace-proof. The example is the interpolant for the implication \begin{equation} \label{eq:implication} (\alpha(p) \rightarrow p) \rightarrow (\alpha(q) \lor q), \end{equation} where $\alpha(p)$ is the following formula: \begin{align*} \alpha(p) & = \mu x . \psi_1(p) \lor \psi_2(p) \lor \psi_3(p) \lor \varphi \lor \Diamond x \\ \psi_1(p) & = (p \land \Diamond p) \lor (\atneg{r} \land \Diamond p) \lor (\atneg{p} \land r \land \Box \atneg{p}) \\ \psi_2(p) & = p \land \atneg{r} \\ \psi_3(p) & = \Diamond \atneg{p} \land \Diamond{p} \\ \varphi & = \nu x . \Box (r \land x) \end{align*} This example is based on the example provided in \cite{stud:ckbp09}, which is in turn based on an earlier example by \cite{maks:temp91}, to show that epistemic logic with common knowledge does not have Craig interpolation. If substitutes the formula $\mu x . \Diamond(\atneg{s} \land x)$ for the propositional letter $r$ in the definition of $\alpha$ then one obtains the translations of the formulas from \cite{stud:ckbp09} to the alternation-free $\mu$-calculus. We will see that the interpolant of \eqref{eq:implication} can be expressed in the alternation-free $\mu$-calculus. 
\begin{figure} \begin{prooftree} \def\: \mid \:{\: \mid \:} \Axiom$\refstepcounter{nodecounter}\thenodecounter\ \label{topmost} [\varphi^f , \atneg{p}, \Diamond p, \atneg{r}, \Diamond \alpha(p) \: \mid \: q , \alpha(q)]^\ensuremath{\mathsf{x}} \interpol{\ensuremath{\mathsf{x}}}$ \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace, \ensuremath{\mathsf{Ax1}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{second} (r \land \varphi)^f , \atneg{p}, \Diamond p, \atneg{r}, \Diamond \alpha(p) \: \mid \: q , \alpha(q) \interpol{\bot \lor \ensuremath{\mathsf{x}} \eq{\ensuremath{\mathsf{x}}}}$ \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace, \ensuremath{\mathsf{Ax1}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{third} (r \land \varphi)^f , \atneg{p} , p \land \Diamond p , p \land \atneg{r} , \Diamond \alpha(p) \: \mid \: q , \alpha(q) \interpol{\bot \lor (\bot \lor \ensuremath{\mathsf{x}}) \eq{\ensuremath{\mathsf{x}}}}$ \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInf$(r \land \varphi)^f , \atneg{p}, \psi_1(p), \psi_2(p), \psi_3(p) , \Diamond \alpha(p) \: \mid \: q , \alpha(q) \interpol{\ensuremath{\mathsf{x}}}$ \RightLabel{\RuFp{\mu}, \ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInf$(r \land \varphi)^f , \atneg{p} , \alpha(p) \: \mid \: q , \alpha(q) \interpol{\ensuremath{\mathsf{x}}}$ \RightLabel{\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{firstbox} \Box (r \land \varphi)^f , \Diamond \atneg{p} , \Diamond \alpha(p) \: \mid \: \Diamond q , \Diamond \alpha(q) \interpol{\Diamond \ensuremath{\mathsf{x}}}$ \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInf$p , \Box (r \land \varphi)^f , \Box \atneg{p} , \Diamond \atneg{p} , \Diamond \alpha(p) \: \mid \: \atneg{q} , \Diamond q , \atneg{r} , \Diamond \alpha(q) \interpol{\Diamond \ensuremath{\mathsf{x}}}$ \RightLabel{\RuFp{\nu}} \UnaryInf$p , \varphi^f , \Box \atneg{p} , \Diamond \atneg{p} , \Diamond \alpha(p) \: \mid \: \atneg{q} , \Diamond q , \atneg{r} , \Diamond \alpha(q) \interpol{\Diamond \ensuremath{\mathsf{x}}}$ \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace, \ensuremath{\mathsf{Ax1}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{addnegr} p , (r \land \varphi)^f , \Box \atneg{p} , \Diamond \atneg{p} , \Diamond \alpha(p) \: \mid \: \atneg{q} , \Diamond q , \atneg{r} , \Diamond \alpha(q) \interpol{\atneg{r} \land \Diamond \ensuremath{\mathsf{x}}}$ \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace, \ensuremath{\mathsf{W}}\xspace, \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace, \ensuremath{\mathsf{Ax1}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{complexand} p , (r \land \varphi)^f , \Box \atneg{p} , \Diamond \atneg{p} \land \Diamond p , \Diamond \alpha(p) \: \mid \: \atneg{q} , \Diamond q , \atneg{r} , \Diamond \alpha(q) \interpol{(\atneg{r} \land \Diamond \ensuremath{\mathsf{x}}) \lor \Diamond \bot \eq{\atneg{r} \land \Diamond \ensuremath{\mathsf{x}}}}$ \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace, \ensuremath{\mathsf{Ax1}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{morenegr} p , (r \land \varphi)^f , \atneg{p} \land r \land \Box \atneg{p} , \Diamond \atneg{p} \land \Diamond p , \Diamond \alpha(p) \: \mid \: \atneg{q} , \Diamond q , \atneg{r} , \Diamond \alpha(q) \interpol{\bot \lor (\atneg{r} \lor (\atneg{r} \land \Diamond \ensuremath{\mathsf{x}})) \eq{\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}}}$ 
\RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInf$p , (r \land \varphi)^f , \psi_1(p), \psi_2(p), \psi_3(p), \Diamond \alpha(p) \: \mid \: \atneg{q} , \Diamond q , \atneg{r} , \Diamond \alpha(q) \interpol{\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}}$ \RightLabel{\RuFp{\mu}, \ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInf$p , (r \land \varphi)^f , \alpha(p) \: \mid \: \atneg{q} , \Diamond q , \atneg{r} , \Diamond \alpha(q) \interpol{\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}}$ \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace, \ensuremath{\mathsf{Ax1}}\xspace} \UnaryInf$p , (r \land \varphi)^f , \alpha(p) \: \mid \: \atneg{q} , q \land \Diamond q , q \land \atneg{r} , \Diamond \alpha(q) \interpol{\top \land (\top \land (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})) \eq{\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}}}$ \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInf$p , (r \land \varphi)^f , \alpha(p) \: \mid \: \atneg{q} , \psi_1(q), \psi_2(q), \psi_3(q), \Diamond \alpha(q) \interpol{\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}}$ \RightLabel{\RuFp{\mu}, \ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInf$p , (r \land \varphi)^f , \alpha(p) \: \mid \: \atneg{q} , \alpha(q) \interpol{\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}}$ \RightLabel{\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{secondbox} \Diamond p , \Box (r \land \varphi)^f , \Diamond \alpha(p) \: \mid \: \Diamond \atneg{q} , \Diamond \alpha(q) \interpol{\Diamond(\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInf$\atneg{p} , \Diamond p, \atneg{r}, \Box(r \land \varphi)^f , \Diamond \alpha(p) \: \mid \: q , \Box \atneg{q} , \Diamond \atneg{q} , \Diamond \alpha(q) \interpol{\Diamond(\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \RightLabel{\RuFp{\nu}} \UnaryInf$\atneg{p} , \Diamond p, \atneg{r}, \varphi^f , \Diamond \alpha(p) \: \mid \: q , \Box \atneg{q} , \Diamond \atneg{q} , \Diamond \alpha(q) \interpol{\Diamond(\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace, \ensuremath{\mathsf{W}}\xspace, \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace, \ensuremath{\mathsf{Ax1}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{morecomplexand} \atneg{p} , \Diamond p, \atneg{r}, \varphi^f , \Diamond \alpha(p) \: \mid \: q , \Box \atneg{q} , \Diamond \atneg{q} \land \Diamond q , \Diamond \alpha(q) \interpol{\Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}) \land \Box \top \eq{\Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}}$ \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace, \ensuremath{\mathsf{Ax1}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{addr} \atneg{p} , \Diamond p, \atneg{r}, \varphi^f , \Diamond \alpha(p) \: \mid \: q , \atneg{q} \land r \land \Box \atneg{q} , \Diamond \atneg{q} \land \Diamond q , \Diamond \alpha(q) \interpol{\top \land (r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})) \eq{r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}}$ \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInf$\atneg{p} , \Diamond p, \atneg{r}, \varphi^f , \Diamond \alpha(p) \: \mid \: q , \psi_1(q), \psi_2(q), \psi_3(q), \Diamond \alpha(q) \interpol{r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \RightLabel{\RuFp{\mu}, \ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInf$\atneg{p}, \Diamond p, \atneg{r}, \varphi^f , \Diamond 
\alpha(p) \: \mid \: q , \alpha(q) \interpol{r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \RightLabel{\RuDischarge{\ensuremath{\mathsf{x}}}} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{discharge} \atneg{p}, \Diamond p, \atneg{r}, \varphi^f , \Diamond \alpha(p) \: \mid \: q , \alpha(q) \interpol{\mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \RightLabel{\ensuremath{\mathsf{F}}\xspace,\ensuremath{\mathsf{U}}\xspace} \UnaryInf$\atneg{p}^f, \Diamond p, \atneg{r}, \varphi, \Diamond \alpha(p) \: \mid \: q^f , \alpha(q)^f \interpol{\mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace, \ensuremath{\mathsf{Ax1}}\xspace} \UnaryInf$\atneg{p}^f, p \land \Diamond p, p \land \atneg{r} , \Diamond \alpha(p) \: \mid \: q^f , \alpha(q)^f \interpol{\bot \lor (\bot \lor \mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})) \eq{\mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}}$ \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInf$\atneg{p}^f, \psi_1(p), \psi_2(p), \psi_3(p), \varphi, \Diamond \alpha(p) \: \mid \: q^f , \alpha(q)^f \interpol{\mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \RightLabel{\RuFp{\mu}, \ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInf$\atneg{p}^f , \alpha(p)^f \: \mid \: q^f , \alpha(q)^f \interpol{\mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \RightLabel{\ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInf$\refstepcounter{nodecounter}\thenodecounter\ \label{root} (\atneg{p} \lor \alpha(p))^f \: \mid \: (q \lor \alpha(q))^f \interpol{\mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})}$ \end{prooftree} \caption{A \ensuremath{\mathsf{Focus}}\xspace-proof of \eqref{eq:implication}} \label{fig:example} \end{figure} Figure~\ref{fig:example} contains a \ensuremath{\mathsf{Focus}}\xspace-proof of the implication \eqref{eq:implication}, from $p \land \sneg{\alpha(p)}$ to $\alpha(q) \lor q$. All the sequents in this proof are already partitioned. At many steps we apply multiple proof rules or apply the same rule multiple times. For instance at the node labelled with \ref{complexand}, moving toward the node labelled with \ref{addnegr}, we first apply the rule \ensuremath{\mathsf{R}_{\land}}\xspace to the formula $\Diamond \atneg{p} \land \Diamond p$. This splits the proof into two branches. The left branch, for the residual formula $\Diamond \atneg{p}$, leads to the node labelled with \ref{addnegr}. The right branch for the residual formula $\Diamond p$ is not written out. It continues with an application of weakening to reduce the sequent to $\Box \atneg{p}, \Diamond p \mid$. On this branch the proof continues with an application of \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace followed by \ensuremath{\mathsf{Ax1}}\xspace. We leave it to the reader to reconstruct these details for all other nodes of the proof in Figure~\ref{fig:example}.
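As a sketch of how such a suppressed branch enters the computation of the interpolant, however, let us treat the right branch at node \ref{complexand} in full. After the weakening, the branch is closed as follows: \begin{prooftree} \AXC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax1}}\xspace} \UIC{$\atneg{p}, p \: \mid \:$} \RightLabel{\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace} \UIC{$\Box \atneg{p}, \Diamond p \: \mid \:$} \end{prooftree} Since both $p$ and $\atneg{p}$ end up in the left component of the partition, the axiom contributes the interpolant $\bot$ by the corresponding case of Definition~\ref{d:locitp}, and the application of \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace, whose principal formula is on the left, prefixes a $\Diamond$. The branch therefore contributes the disjunct $\Diamond \bot$ that appears in the interpolant at node \ref{complexand} in Figure~\ref{fig:table}.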
\begin{figure} \begin{center} \begin{tabular}{rrl} node & interpolant & \phantom{$\equiv$} simplification \\ \ref{topmost} & $\ensuremath{\mathsf{x}}$ & \\ \ref{second} & $\bot \lor \ensuremath{\mathsf{x}}$ & $\equiv \ensuremath{\mathsf{x}}$ \\ \ref{third} & $\bot \lor (\bot \lor \ensuremath{\mathsf{x}})$ & $\equiv \ensuremath{\mathsf{x}}$ \\ \ref{firstbox} & $\Diamond \ensuremath{\mathsf{x}}$ & \\ \ref{addnegr} & $\atneg{r} \land \Diamond \ensuremath{\mathsf{x}}$ & \\ \ref{complexand} & $(\atneg{r} \land \Diamond \ensuremath{\mathsf{x}}) \lor \Diamond \bot$ & $\equiv \atneg{r} \land \Diamond \ensuremath{\mathsf{x}}$ \\ \ref{morenegr} & $\bot \lor (\atneg{r} \lor (\atneg{r} \land \Diamond \ensuremath{\mathsf{x}}))$ & $\equiv \atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}$ \\ \ref{secondbox} & $\Diamond(\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})$ & \\ \ref{morecomplexand} & $\Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}) \land \Box \top$ & $\equiv \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})$ \\ \ref{addr} & $\top \land (r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}}))$ & $\equiv r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})$ \\ \ref{discharge} & $\mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})$ & \\ \ref{root} & $\mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})$ & \\ \end{tabular} \end{center} \caption{Interpolant computed from the proof in Figure~\ref{fig:example}} \label{fig:table} \end{figure} Following Definitions \ref{d:locitp}~and~\ref{d:itp}, we can compute the interpolant of \eqref{eq:implication} by induction over the proof in Figure~\ref{fig:example}. The most important steps of this computation are shown in the table of Figure~\ref{fig:table}. At some nodes we rewrite the interpolant into a simpler equivalent formula, and then continue the computation with the simplified version of the interpolant. The formula $\mu \ensuremath{\mathsf{x}} . r \land \Diamond (\atneg{r} \lor \Diamond \ensuremath{\mathsf{x}})$ at the root node \ref{root} is the interpolant of $\alpha(p) \rightarrow p$ and $\alpha(q) \lor q$. \end{example} \section{Introduction} In this paper we present a circular proof system for the alternation-free fragment of the modal $\mu$-calculus and use this system to prove Craig interpolation for the alternation-free fragment. \subsection{The alternation-free $\mu$-calculus} The modal $\mu$-calculus, introduced by Kozen~\cite{koze:resu83}, is a logic for describing properties of processes that are modelled by labelled transition systems. It extends the expressive power of propositional modal logic by means of least and greatest fixpoint operators. This addition permits the expression of all bisimulation-invariant monadic second-order properties of transition systems~\cite{jani:auto95}. The $\mu$-calculus is generally regarded as a universal specification language, since it embeds most other logics that are used for this purpose, such as \textsc{ltl}, \textsc{ctl}, \textsc{ctl}$^{*}$ and \textsc{pdl}. The alternation-free $\mu$-calculus is a fragment of the $\mu$-calculus in which there is no interaction between least and greatest fixpoint operators. It can be checked that the translations of both \textsc{ctl} and \textsc{pdl} into the $\mu$-calculus yield alternation-free formulas.
Over tree structures, or when restricted to bisimulation-invariant properties, the expressive power of the alternation-free $\mu$-calculus corresponds to monadic second-order logic where the quantification is restricted to sets that are finite, or in a suitable sense well-founded \cite{niwi:fixe97,facc:char13}. For more restricted classes of structures, such as for instance infinite words, it can be shown that the alternation-free fragment already has the same expressivity as the full $\mu$-calculus \cite{kaiv:axio95,guti:muca14}. Many theoretical results on the modal $\mu$-calculus depend on the translation from formulas in the $\mu$-calculus to automata \cite{jani:auto95,wilk:alte01}. The general idea is to construct for every formula an automaton that accepts precisely the pointed structures where the formula is true. For the alternation-free fragment the codomain of this translation can be taken to consist of weak alternating automata \cite{facc:char13,guti:muca14}. These are parity automata for which the assignment of priorities to states is restricted such that all states from the same strongly connected component have the same priority. \subsection{A cyclic focus system for the alternation-free $\mu$-calculus} In the theory of the modal $\mu$-calculus, automata- and game-theoretic approaches have long been at the centre of attention. Apart from the rather straightforward tableau games by Niwi\'{n}ski \& Walukiewicz \cite{niwi:game96}, there have for a long time been few successful applications of proof-theoretic techniques. This situation has changed with a recent breakthrough by Afshari \& Leigh \cite{afsh:cutf17}, who obtain completeness of Kozen's axiomatization of the modal $\mu$-calculus using purely proof-theoretic arguments. The proof of this result can be taken to consist of a series of proof transformations: First, it starts with a successful infinite tableau in the sense of \cite{niwi:game96}. Second, one then adds a mechanism for annotating formulas that was developed by Jungteerapanich and Stirling \cite{jung:tabl10,stir:tabl14} to detect after finitely many steps when a branch of the tableau tree may develop into a successful infinite branch, thus obtaining a finite but cyclic tableau. Third, Afshari \& Leigh show how to apply a series of transformations to this finite annotated tableau to obtain a proof in a cyclic sequent system for the modal $\mu$-calculus. Fourth, and finally, this proof can be turned into a Hilbert-style proof in Kozen's axiomatization. In this paper we present an annotated cyclic proof system for the alternation-free $\mu$-calculus that corresponds roughly to the annotated tableaux of Jungteerapanich and Stirling mentioned in the second step above. But, whereas in the system for the full $\mu$-calculus these annotations are sequences of names for fixpoint variables, for the alternation-free fragment it suffices to annotate formulas with just one bit of information. We think of this bit as indicating whether a formula is, as we call it, \emph{in focus} or whether it is \emph{unfocused}. We use this terminology because our proof system for the alternation-free $\mu$-calculus is a generalization of the focus games for weaker fixpoint logics such as \textsc{ltl} and \textsc{ctl} by Lange \& Stirling \cite{lang:focu01}. These are games based on a tableau such that at every sequent of the tableau there is exactly one formula in focus. In our system we generalise this so that a proof node may feature a \emph{set} of formulas in focus.
Our system can be shown to be complete while only allowing for two kinds of manipulation of annotations. The first is the rule that unfolds least fixpoints. Whenever one unfolds a least fixpoint formula that is in focus at the current sequent, its unfolding in the sequent further away from the root needs to be unfocused. Unfolding greatest fixpoints has no influence on the annotations. The other manipulation of annotations is by a focus rule that puts previously unfocused formulas into focus. It suffices to only apply this rule if the current sequent does not contain any formula that is in focus. The rule then simply continues the proof search with the same formulas but now they are all in focus. The design of the annotation mechanisms in the tableau by Jungteerapanich \& Stirling and in the focus system from this paper is heavily influenced by ideas from automata theory. It was already observed by Niwi\'{n}ski \& Walukiewicz \cite{niwi:game96} that a tree automaton can be used that accepts precisely the trees that encode successful tableaux. This automaton is the product of a tree automaton checking for local consistency of the tableau and a deterministic automaton over infinite words that detects whether every branch in the tableau is successful. That a branch in a tableau is successful means that it carries at least one trail of formulas where the most significant fixpoint that is unravelled infinitely often is a greatest fixpoint. It is relatively straightforward to give a nondeterministic automaton that detects successful branches, but the construction needs a deterministic automaton, which is obtained using the Safra construction \cite{safr:comp88}. The crucial insight of Jungteerapanich \& Stirling \cite{jung:tabl10,stir:tabl14} is that this deterministic automaton that results from the Safra construction can be encoded inside the tableau by using annotations of formulas. The relation between detecting successful branches in a proof and the determinization of automata on infinite words can also be seen more directly. In the proof system annotations are used to detect whether a branch of the proof carries at least one trail such that the most significant fixpoint that is unfolded infinitely often on the trail is a greatest fixpoint. This is analogous to a problem that arises when one tries to use the powerset construction to construct an equivalent deterministic automaton from a given non-deterministic parity automaton operating on infinite words. The problem there is to determine whether a sequence of macrostates of the deterministic automaton carries a run of the original non-deterministic automaton that satisfies the parity condition. It is possible to view the annotated sequents of Jungteerapanich \& Stirling as a representation of the Safra trees which provide the states of a deterministic Muller automaton that one obtains when determinizing a non-deterministic parity automaton \cite[sec.~4.3.5]{jung:tabl10}. For alternation-free formulas it is significantly simpler to detect successful branches, because one can show that the fixpoints that are unravelled infinitely often on a trail of alternation-free formulas are either all least or all greatest fixpoints. One can compare the problem of finding such a trail to the problem of recognising a successful run of a non-deterministic weak stream automaton in the macrostates of a determinization of the automaton.
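To make this analogy concrete, the following Python fragment sketches the macrostate construction for non-deterministic co-B\"{u}chi word automata that we have in mind here; the sketch, including the helper names \texttt{initial} and \texttt{step}, is ours and purely illustrative. A state of the deterministic automaton is a macrostate together with a `focus' set of states; the focus shrinks along steps that leave the accepting states, and when it becomes empty a breakpoint occurs and the focus is reset.

\begin{verbatim}
# A = (Q, delta, q0, F) is a non-deterministic co-Buechi automaton over
# infinite words: a run is accepting iff it eventually stays inside F.
# delta(q, a) returns the set of successor states of q on letter a.

def initial(q0, F):
    return (frozenset({q0}), frozenset({q0}) & F)

def step(state, a, delta, F):
    macro, focus = state
    new_macro = frozenset(q2 for q in macro for q2 in delta(q, a))
    # a run stays in focus only as long as its steps land inside F
    new_focus = frozenset(q2 for q in focus for q2 in delta(q, a)) & F
    reset = not new_focus
    if reset:                       # breakpoint: refocus on all runs in F
        new_focus = new_macro & F
    return (new_macro, new_focus), reset
\end{verbatim}

A word is then accepted iff the unique run of the resulting deterministic automaton performs only finitely many resets, that is, from some point on the focus set never empties; this is the same shape as the success condition on the infinite branches of the proofs introduced below.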
In fact, the focus mechanism from the proof system that we develop in this paper can also be used to transform a non-deterministic weak automaton into an equivalent deterministic co-B\"{u}chi automaton. This relatively simple construction is a special case of Theorem~15.2.1 in \cite{demr:temp16}, which shows that every non-deterministic co-B\"{u}chi automaton can be transformed into an equivalent deterministic co-B\"{u}chi automaton. \subsection{Interpolation for the alternation-free $\mu$-calculus} We apply the proof system introduced in this paper to prove that the alternation-free $\mu$-calculus has Craig's interpolation property. This means that for any two alternation-free formulas $\varphi$ and $\psi$ such that $\varphi \rightarrow \psi$ is valid there is an interpolant $\chi$ of $\varphi$ and $\psi$ in the alternation-free $\mu$-calculus. An interpolant $\chi$ of $\varphi$ and $\psi$ is a formula which contains only propositional letters that occur in both $\varphi$ and $\psi$ such that both $\varphi \rightarrow \chi$ and $\chi \rightarrow \psi$ are valid. Basic modal logic \cite{gabb:ipol05} and the full $\mu$-calculus \cite{dago:logi00} have Craig interpolation. In fact both formalisms enjoy an even stronger property called uniform interpolation, where the interpolant $\chi$ only depends on $\varphi$ and the set of propositional letters that occur in $\psi$ (but not on the formula $\psi$ itself). Despite these strong positive results, interpolation is certainly not guaranteed to hold for fixpoint logics. For instance, even Craig interpolation fails for weak temporal logics or epistemic logics with a common knowledge modality \cite{maks:temp91,stud:ckbp09}. Moreover, one can show that uniform interpolation fails for both \textsc{pdl} and for the alternation-free $\mu$-calculus~\cite{dago:logi00}. The argument relies on the observation that uniform interpolation corresponds to the definability of bisimulation quantifiers. But, adding bisimulation quantifiers to \textsc{pdl}, or the alternation-free fragment, allows the expression of arbitrary fixpoints and thus increases the expressive power to the level of the full $\mu$-calculus. It is still somewhat unclear whether \textsc{pdl} has Craig interpolation. Various proofs have been proposed, but they have either been retracted or still await a proper verification~\cite{borz:tabl88,borz:proo20}. The uniform interpolation result for the modal $\mu$-calculus has been generalised to the wider setting of coalgebraic fixpoint logic~\cite{mart:unif15,enqv:disj19}, but the proofs known for these results are all automata-theoretic in nature. Recently, however, Afshari \& Leigh~\cite{afsh:lyndxx} pioneered the use of proof-theoretic methods in fixpoint logics, to prove, among other things, a Lyndon-style interpolation theorem for the (full) modal $\mu$-calculus. Their proof, however, does not immediately yield interpolation results for fragments of the logic; in particular, for any pair of alternation-free formulas whose implication is valid, their approach will yield an interpolant inside the full $\mu$-calculus, but not necessarily one that is itself alternation free. It is here that the simplicity of our focus-style proof system comes in. Summarising our interpolation proof for the alternation-free $\mu$-calculus, we base ourselves on Maehara's method, adapted to the setting of cyclic proofs.
Roughly, the idea underlying Maehara's method is that, given a proof $\Pi$ for an implication $\varphi \rightarrow \psi$, one defines the interpolant $\chi$ by an induction on the complexity of the proof $\Pi$. The difficulty in applying this method to cyclic proof systems is that here, some proof leaves may not be axiomatic and thus fail to have a trivial interpolant. In particular, a discharged leaf indicates an infinite continuation of the current branch. Such a leaf introduces a fixpoint variable into the interpolant, which will be bound later in the induction. The crux of our proof, then, lies in the way that we handle the additional complications that arise in correctly managing the annotations in our proof system, in order to make sure that these interpolants belong to the right fragment of the logic. \subsection{Overview} This paper is organized as follows: The preliminaries about the syntax and semantics of the $\mu$-calculus and its alternation-free fragment are covered in Section~\ref{s:prel}. In Section~\ref{sec-tab} we present our version of the tableau games by Niwi\'{n}ski \& Walukiewicz that we use later as an intermediate step in the soundness and completeness proofs for our proof system. In Section~\ref{sec-proofsystem} we introduce our focus system for the alternation-free $\mu$-calculus and we prove some basic results about the system. Sections \ref{s:soundness}~and~\ref{s:completeness} contain the proofs of soundness and completeness of the focus system. In Section~\ref{sec-itp} we show how to use the focus system to prove interpolation for the alternation-free $\mu$-calculus. \section{The alternation-free linear $\mu$-calculus} \label{sec-lin} \btbs \item Note that the alternation-free fragment of the linear $\mu$-calculus is equivalent to the full version. \etbs \section{Preliminaries} \label{s:prel} We first fix some terminology related to relations and trees and then discuss the syntax and semantics of the $\mu$-calculus and its alternation-free fragment. \subsection{Relations and trees} Given a binary relation $R \subseteq S \times S$, we let $R^{-1}$, $R^{+}$ and $R^{*}$ denote, respectively, the converse, the transitive closure and the reflexive-transitive closure of $R$. For a subset $U \subseteq S$, we write $R[U] \mathrel{:=} \{ t \in S \mid Rut \text{ for some } u \in U \}$; in the case of a singleton, we write $R[s]$ rather than $R[\{s\}]$. Elements of $R[s]$ and $R^{-1}[s]$ are called, respectively, \emph{successors} and \emph{predecessors} of $s$. An \emph{$R$-path} of length $n$ is a sequence $s_{0}s_{1}\cdots s_{n}$ (with $n \geq 0$) such that $Rs_{i}s_{i+1}$ for all $0\leq i<n$; we say that such a path \emph{leads from $s_{0}$ to $s_{n}$}. Similarly, an \emph{infinite path starting at $s$} is a sequence $(s_{n})_{n\in\omega}$ such that $s_{0} = s$ and $Rs_{i}s_{i+1}$ for all $i<\omega$. A structure $\mathstr{T} = (T,R)$, with $R$ a binary relation on $T$, is a \emph{tree} if there is a node $r$ such that for every $t \in T$ there is a unique path leading from $r$ to $t$. The node $r$, which is then characterized as the only node in $T$ without predecessors, is called the \emph{root} of $\mathstr{T}$. Every non-root node $u$ has a unique predecessor, which is called the \emph{parent} of $u$; conversely, the successors of a node $t$ are sometimes called its \emph{children}. If $R^{*}tu$ we call $u$ a \emph{descendant} of $t$ and, conversely, $t$ an \emph{ancestor} of $u$; in case $R^{+}tu$ we add the adjective `proper'.
If $s$ is an ancestor of $t$ we define the \emph{interval} $\itv{s}{t}$ as the set of nodes on the (unique) path from $s$ to $t$. A \emph{branch} of a tree is a path that starts at the root. A \emph{leaf} of a tree is a node without successors. For nodes of a tree we will generally use the letters $s,t,u,v,\ldots$; for leaves we will use $l,m, \ldots$\ . The \emph{depth} of a node $u$ in a finite tree $\mathstr{T} = (T,R)$ is the maximal length of a path leading from $u$ to a leaf of $\mathstr{T}$. The \emph{hereditarily finite} part of a tree $\mathstr{T} = (T,R)$ is the subset $\mathit{HF}(\mathstr{T}) \mathrel{:=} \{ t \in T \mid R^{*}[t] \text{ is finite} \}$. A \emph{tree with back edges} is a structure of the form $(T,R,c)$ such that $c$ is a partial function on the collection of leaves, mapping any leaf $l \in \mathsf{Dom}(c)$ to one of its proper ancestors; this node $c(l)$ will be called the \emph{companion} of $l$. \subsection{The modal $\mu$-calculus and its alternation-free fragment} In this part we review the syntax and semantics of the modal $\mu$-calculus and discuss its alternation-free fragment. \subsubsection{The modal $\mu$-calculus} \paragraph{Syntax} The \emph{formulas} in the modal $\mu$-calculus are generated by the grammar \[ \phi \;::=\; p \;\mid\; \atneg p \;\mid\; \bot \;\mid\; \top \;\mid\; (\phi\lor\phi) \;\mid\; (\phi\land\phi) \;\mid\; \Diamond\phi \;\mid\; \Box\phi \;\mid\; \mu x\, \phi \;\mid\; \nu x\, \phi, \] where $p$ and $x$ are taken from a fixed set $\mathsf{Prop}$ of propositional variables, and in formulas of the form $\mu x. \phi$ and $\nu x. \phi$ there are no occurrences of $\atneg x$ in $\phi$. We write $\mathcal{L}_{\mu}$ for the set of formulas in the modal $\mu$-calculus. Formulas of the form $\mu x . \phi$ ($\nu x . \phi$) are called \emph{$\mu$-formulas} (\emph{$\nu$-formulas}, respectively); formulas of either kind are called \emph{fixpoint formulas}. The operators $\mu$ and $\nu$ are called fixpoint operators. We use $\eta \in \{\mu,\nu\}$ to denote an arbitrary fixpoint operator and write $\ol{\eta} \mathrel{:=} \nu$ if $\eta = \mu$ and $\ol{\eta} = \mu$ if $\eta = \nu$. Formulas that are of the form $\Box \phi$ or $\Diamond \phi$ are called \emph{modal}. Formulas of the form $\phi \land \psi$ or $\phi \lor \psi$ are called \emph{boolean}. Formulas of the form $p$ or $\atneg p$ for some $p \in \mathsf{Prop}$ are called \emph{literals} and the set of all literals is denoted by $\mathsf{Lit}$; a formula is \emph{atomic} if it is either a literal or an atomic constant, that is, $\top$ or $\bot$. We use standard terminology for the binding of variables by the fixpoint operators and for substitutions. In particular we write $\mathit{FV}(\phi)$ for the set of variables that occur freely in $\phi$ and $\mathit{BV}(\phi)$ for the set of all variables that are bound by some fixpoint operator in $\phi$. We do count occurrences of $\atneg{x}$ as free occurrences of $x$. Unless specified otherwise, we assume that all formulas $\phi \in \mathcal{L}_{\mu}$ are \emph{tidy} in the sense that $\mathit{FV}(\phi) \cap \mathit{BV}(\phi) = \varnothing$. Given formulas $\phi$ and $\psi$ and a propositional variable $x$ such that there are no occurrences of $\atneg{x}$ in $\phi$, we let $\phi[\psi / x]$ denote the formula that results from substituting all free occurrences of $x$ in $\phi$ by the formula $\psi$. We only apply this substitution in situations where $\mathit{FV}(\psi) \cap \mathit{BV}(\phi) = \varnothing$.
This guarantees that no variable capture will occur. If the variable that is substituted is clear from the context, we also write $\phi(\psi)$ for $\phi[\psi/x]$. An important use of substitution is the unfolding of fixpoint formulas. Given a fixpoint formula $\xi = \eta x . \chi$, its \emph{unfolding} is the formula $\chi[\xi/x]$. Given a formula $\phi \in \mathcal{L}_{\mu}$ we define its \emph{negation} $\ol{\phi}$ as follows. First, we define the \emph{boolean dual} $\bdual{\phi}$ of $\phi$ using the following induction. \[\begin{array}{lllclll} \bdual{\bot} & \mathrel{:=} & \top & \hspace*{1cm} & \bdual{\top} & \mathrel{:=} & \bot \\ \bdual{(\atneg{p})} & \mathrel{:=} & \atneg{p} && \bdual{p} & \mathrel{:=} & p \\ \bdual{(\phi\lor\psi)} & \mathrel{:=} & \bdual{\phi} \land \bdual{\psi} && \bdual{(\phi\land\psi)}& \mathrel{:=} & \bdual{\phi} \lor \bdual{\psi} \\ \bdual{(\Diamond\phi)} & \mathrel{:=} & \Box \bdual{\phi} && \bdual{(\Box \phi)} & \mathrel{:=} & \Diamond\bdual{\phi} \\ \bdual{(\mu x.\phi)} & \mathrel{:=} & \nu x.\bdual{\phi} && \bdual{(\nu x.\phi)} & \mathrel{:=} & \mu x.\bdual{\phi} \end{array}\] Based on this definition, we define the formula $\ol{\phi}$ as the formula $\bdual{\phi}[p \leftrightharpoons \atneg{p} \mid p \in \mathit{FV}(\phi)]$ that we obtain from $\bdual{\phi}$ by replacing all occurrences of $p$ with $\atneg{p}$, and vice versa, for all free proposition letters $p$ in $\phi$. Observe that if $\phi$ is tidy then so is $\ol{\phi}$. For every formula $\phi \in \mathcal{L}_{\mu}$ we define the set $\mathsf{Clos}_0(\phi)$ as follows: \[\begin{array}{lll l lll} \mathsf{Clos}_0(p) & \mathrel{:=} & \varnothing && \mathsf{Clos}_0(\atneg{p}) & \mathrel{:=} & \varnothing \\ \mathsf{Clos}_0(\bot) & \mathrel{:=} & \varnothing && \mathsf{Clos}_0(\top) & \mathrel{:=} & \varnothing \\ \mathsf{Clos}_0(\psi_0 \land \psi_1) & \mathrel{:=} & \{ \psi_0, \psi_1 \} && \mathsf{Clos}_0(\psi_0 \lor \psi_1) & \mathrel{:=} & \{ \psi_0, \psi_1 \} \\ \mathsf{Clos}_0(\Box\psi) & \mathrel{:=} & \{ \psi \} && \mathsf{Clos}_0(\Diamond\psi) & \mathrel{:=} & \{ \psi \} \\ \mathsf{Clos}_0(\mu x. \psi) & \mathrel{:=} & \{ \psi[\mu x. \psi / x] \} && \mathsf{Clos}_0(\nu x. \psi) & \mathrel{:=} & \{ \psi[\nu x. \psi / x] \} \end{array}\] If $\psi \in \mathsf{Clos}_{0}(\phi)$ we sometimes write $\phi \to_{C} \psi$. Moreover, we define the \emph{closure} $\mathsf{Clos}(\phi) \subseteq \mathcal{L}_{\mu}$ of $\phi$ as the least set $\Sigma$ containing $\phi$ that is closed in the sense that $\mathsf{Clos}_0(\psi) \subseteq \Sigma$ for all $\psi \in \Sigma$. We define $\mathsf{Clos}(\Phi) = \bigcup_{\phi \in \Phi} \mathsf{Clos}(\phi)$ for any $\Phi \subseteq \mathcal{L}_{\mu}$. It is well known that $\mathsf{Clos}(\Phi)$ is finite iff $\Phi$ is finite. A \emph{trace} is a sequence $(\phi_{n})_{n<\kappa}$, with $\kappa \leq \omega$, of formulas such that $\phi_{n} \to_{C} \phi_{n+1}$, for all $n$ such that $n+1 < \kappa$. If $\tau = (\phi_{n})_{n<\kappa}$ is an infinite trace, then there is a unique formula $\phi$ that occurs infinitely often on $\tau$ and is a subformula of $\phi_{n}$ for cofinitely many $n$. This formula is always a fixpoint formula, and when it is of the form $\phi_{\tau} = \eta x.\psi$ we call $\tau$ an \emph{$\eta$-trace}. A proof that there exists a unique such fixpoint formula $\phi$ can be found in Proposition~6.4 of \cite{kupk:size20}, but the observation is well-known in the literature and goes back at least to \cite{emer:comp88}. A formula $\phi \in \mathcal{L}_{\mu}$ is \emph{guarded} if in every subformula $\eta x . \psi$ of $\phi$ all free occurrences of $x$ in $\psi$ are in the scope of a modality. It is well known that every formula can be transformed into an equivalent guarded formula, and it is not hard to verify that all formulas in the closure of a guarded formula are also guarded. \paragraph{Semantics} The semantics of the modal $\mu$-calculus is given in terms of \emph{Kripke models} $\mathstr{S} = (S,R,V)$, where $S$ is a set whose elements are called \emph{worlds}, \emph{points} or \emph{states}, $R \subseteq S \times S$ is a binary relation on $S$ called the \emph{accessibility relation} and $V : \mathsf{Prop} \to \mathcal{P} S$ is a function called the \emph{valuation function}. The \emph{meaning} $\mng{\phi}^\mathstr{S} \subseteq S$ of a formula $\phi \in \mathcal{L}_{\mu}$ relative to a Kripke model $\mathstr{S} = (S,R,V)$ is defined by induction on the complexity of $\phi$: \[\begin{array}{lllclll} \mng{p}^{\mathstr{S}} &\mathrel{:=}& V(p) && \mng{\atneg{p}}^{\mathstr{S}} &\mathrel{:=}& S \setminus V(p) \\ \mng{\bot}^{\mathstr{S}} &\mathrel{:=}& \varnothing && \mng{\top}^{\mathstr{S}} &\mathrel{:=}& S \\ \mng{\phi\lor\psi}^{\mathstr{S}} &\mathrel{:=}& \mng{\phi}^{\mathstr{S}} \cup \mng{\psi}^{\mathstr{S}} && \mng{\phi\land\psi}^{\mathstr{S}} &\mathrel{:=}& \mng{\phi}^{\mathstr{S}} \cap \mng{\psi}^{\mathstr{S}} \\ \mng{\Diamond\phi}^{\mathstr{S}} &\mathrel{:=}& \{ s \in S \mid R[s] \cap \mng{\phi}^{\mathstr{S}} \neq \varnothing \} && \mng{\Box\phi}^{\mathstr{S}} &\mathrel{:=}& \{ s \in S \mid R[s] \subseteq \mng{\phi}^{\mathstr{S}} \} \\ \mng{\mu x.\phi}^{\mathstr{S}} &\mathrel{:=}& \bigcap \{ U \subseteq S \mid \mng{\phi}^{\mathstr{S}[x\mapsto U]}\subseteq U \} && \mng{\nu x.\phi}^{\mathstr{S}} &\mathrel{:=}& \bigcup \{ U \subseteq S \mid \mng{\phi}^{\mathstr{S}[x\mapsto U]}\supseteq U \}. \end{array}\] Here, $\mathstr{S}[x \mapsto U]$ for some $U \subseteq S$ denotes the model $(S,R,V')$, where $V'(x) = U$ and $V'(p) = V(p)$ for all $p \in \mathsf{Prop}$ with $p \neq x$. We say that $\phi$ \emph{is true} at $s$ if $s \in \mng{\phi}^\mathstr{S}$. A formula $\phi \in \mathcal{L}_{\mu}$ is \emph{valid} if $\mng{\phi}^\mathstr{S} = S$ holds in all Kripke models $\mathstr{S} = (S,R,V)$ and two formulas $\phi,\psi \in \mathcal{L}_{\mu}$ are \emph{equivalent} if $\mng{\phi}^\mathstr{S} = \mng{\psi}^\mathstr{S}$ for all Kripke models $\mathstr{S}$. Alternatively, the semantics of the $\mu$-calculus is often given in terms of a so-called \emph{evaluation} or \emph{model checking game}. Let $\xi \in \mathcal{L}_{\mu}$ be a $\mu$-calculus formula, and let $\mathstr{S} = (S,R,V)$ be a Kripke model. The \emph{evaluation game} $\mathcal{E}(\xi,\mathstr{S})$ is the following infinite two-player game\footnote{% We assume familiarity with such games, see the appendix for some definitions. }. Its positions are pairs of the form $(\phi,s) \in \mathsf{Clos}(\xi)\times S$, and its ownership function and admissible rules are given in Table~\ref{tb:EG}. For the winning conditions of this game, consider an infinite match of the form $\Sigma = (\phi_{n},s_{n})_{n<\omega}$; then we define the winner of the match to be Eloise\xspace if the induced trace $(\phi_{n})_{n<\omega}$ is a $\nu$-trace, and Abelard\xspace if it is a $\mu$-trace. It is well-known that this game can be presented as a parity game, and as such it has positional determinacy.
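On a \emph{finite} Kripke model the semantic clauses above can be evaluated directly, computing the two fixpoint clauses by Knaster--Tarski iteration from $\varnothing$ and from $S$, respectively. The following Python fragment is a minimal sketch of this procedure; the tuple encoding of formulas is ours and purely illustrative.

\begin{verbatim}
def meaning(phi, S, R, V):
    """The set of states of the finite model (S, R, V) where phi is true.

    S: set of states; R: set of pairs (s, t); V: dict mapping proposition
    letters (and fixpoint variables) to sets of states."""
    op = phi[0]
    if op == "prop":   return V[phi[1]]
    if op == "nprop":  return S - V[phi[1]]
    if op == "bot":    return set()
    if op == "top":    return set(S)
    if op == "or":     return meaning(phi[1], S, R, V) | meaning(phi[2], S, R, V)
    if op == "and":    return meaning(phi[1], S, R, V) & meaning(phi[2], S, R, V)
    if op == "dia":
        U = meaning(phi[1], S, R, V)
        return {s for s in S if any((s, t) in R for t in U)}
    if op == "box":
        U = meaning(phi[1], S, R, V)
        return {s for s in S if all(t in U for (u, t) in R if u == s)}
    if op in ("mu", "nu"):
        x, body = phi[1], phi[2]
        U = set() if op == "mu" else set(S)   # iterate from bottom / top
        while True:
            U2 = meaning(body, S, R, {**V, x: U})
            if U2 == U:
                return U
            U = U2
\end{verbatim}

Since a bound variable may not occur negated under its binder, the map $U \mapsto \mng{\phi}^{\mathstr{S}[x \mapsto U]}$ is monotone, so on a finite model the iteration stabilises after finitely many rounds at the least (respectively, greatest) fixpoint.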
\begin{table}[htb] \begin{center} \begin{tabular}{|ll|c|l|} \hline \multicolumn{2}{|l|}{Position} & Player & Admissible moves\\ \hline $(p,s)$ & with $p\in \mathit{FV}(\xi)$ and $s \in V(p)$ & $\forall$ & $\varnothing$ \\ $(p,s)$ & with $p\in \mathit{FV}(\xi)$ and $s \notin V(p)$ & $\exists$ & $\varnothing$ \\ $(\atneg{p},s)$ & with $p\in \mathit{FV}(\xi)$ and $s \in V(p)$ & $\exists$ & $\varnothing$ \\ $(\atneg{p},s)$ & with $p\in \mathit{FV}(\xi)$ and $s \notin V(p)$ & $\forall$ & $\varnothing$ \\ \multicolumn{2}{|l|}{$(\phi \lor \psi,s)$} & $\exists$ & $\{ (\phi,s), (\psi,s) \}$ \\ \multicolumn{2}{|l|}{$(\phi \land \psi,s)$} & $\forall$ & $\{ (\phi,s), (\psi,s) \}$ \\ \multicolumn{2}{|l|}{$(\Diamond \phi,s)$} & $\exists$ & $\{ (\phi,t) \mid sRt \}$ \\ \multicolumn{2}{|l|}{$(\Box \phi,s) $} & $\forall$ & $\{ (\phi,t) \mid sRt \}$ \\ \multicolumn{2}{|l|}{$(\eta x . \phi,s)$} & - & $\{ (\phi[\eta x\, \phi/x],s) \}$ \\ \hline \end{tabular} \end{center} \caption{The evaluation game $\mathcal{E}(\xi,\mathstr{S})$} \label{tb:EG} \end{table} \subsubsection{The alternation-free fragment} As mentioned in the introduction, the alternation-free fragment of the modal $\mu$-calculus consists of relatively simple formulas, in which the interaction between least- and greatest fixpoint operators is restricted. There are various ways to formalise this intuition. Following the approach by Niwi\'nski~\cite{niwi:fixp86}, we call a formula $\xi$ alternation free if it satisfies the following: if $\xi$ has a subformula $\eta x. \phi$ then no free occurrence of $x$ in $\phi$ can be in the scope of an $\ol{\eta}$-operator. An inductive definition of this set can be given as follows. \begin{definition} \label{d:afmc} By a mutual induction we define the \emph{alternation-free $\mu$-calculus} $\muML^{\mathit{af}}$, and, for a subset $\mathsf{Q} \subseteq \mathsf{Prop}$ and $\eta \in \{ \mu, \nu \}$, its \emph{noetherian $\eta$-fragment over $\mathsf{Q}$}, $\nth{\eta}{\mathsf{Q}}$. \[\begin{array}{rlc@{\;\mid\;}c@{\;\mid\;}c@{\;\mid\;}c@{\;\mid\;}% c@{\;\mid\;}c@{\;\mid\;}c@{\;\mid\;}c@{\;\mid\;}l@{\;\mid\;}l@{\;\mid\;}c} \muML^{\mathit{af}} \ni \phi &\;::=\; & \bot & \top & p & \ol{p} & (\phi_{0} \land \phi_{1}) & (\phi_{0} \lor \phi_{1}) & \Diamond \phi & \Box \phi & \mu p. \phi^{\mu}_{p} & \nu p. \phi^{\nu}_{p} \\[2mm] \nth{\mu}{\mathsf{Q}} \ni \phi & \;::=\; & \bot & \top & q & & (\phi_{0} \land \phi_{1}) & (\phi_{0} \lor \phi_{1}) & \Diamond \phi & \Box \phi & \mu p. \phi^{\mu}_{\mathsf{Q} p} & & \psi \\[2mm] \nth{\nu}{\mathsf{Q}} \ni \phi &\;::=\;& \bot & \top & q & & (\phi_{0} \land \phi_{1}) & (\phi_{0} \lor \phi_{1}) & \Diamond \phi & \Box \phi && \nu p. \phi^{\nu}_{\mathsf{Q} p} & \psi \end{array}\] where $p \in \mathsf{Prop}$, $q \in \mathsf{Q}$, $\phi^{\eta}_{\mathsf{P}} \in \nth{\eta}{\mathsf{P}}$ for $\mathsf{P} \subseteq \mathsf{Prop}$, and $\psi \in \muML^{\mathit{af}}$ is such that $\mathit{FV}(\psi) \cap \mathsf{Q} = \varnothing$. Here and in the sequel we shall write $p$ for $\{ p \}$ and $\mathsf{Q} q$ for $\mathsf{Q} \cup \{ q \}$. \end{definition} Throughout the text we shall simply refer to elements of $\muML^{\mathit{af}}$ as \emph{formulas}. The intuition underlying this definition is that $\nth{\eta}{\mathsf{Q}}$ consists of those alternation-free formulas in which free variables from $\mathsf{Q}$ may not occur in the scope of an $\ol{\eta}$-operator. 
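Niwi\'nski's criterion above is easy to check syntactically. The following Python fragment is a sketch of ours, using the same illustrative tuple encoding of formulas as before: a formula fails the check iff some subformula $\eta x. \phi$ has a free occurrence of $x$ in the scope of an $\ol{\eta}$-operator.

\begin{verbatim}
def occurs_free(phi, x):
    """Does x occur free in phi (counting negated occurrences)?"""
    op = phi[0]
    if op in ("prop", "nprop"):
        return phi[1] == x
    if op in ("mu", "nu"):
        return phi[1] != x and occurs_free(phi[2], x)
    return any(occurs_free(s, x) for s in phi[1:] if isinstance(s, tuple))

def dual_scope_violation(phi, x, eta):
    """Is some free occurrence of x in phi under a binder dual to eta?"""
    op = phi[0]
    if op in ("prop", "nprop", "bot", "top"):
        return False
    if op in ("mu", "nu"):
        y, body = phi[1], phi[2]
        if y == x:                   # x is rebound: nothing below is free
            return False
        if op != eta and occurs_free(body, x):
            return True              # free x under the dual fixpoint operator
        return dual_scope_violation(body, x, eta)
    return any(dual_scope_violation(s, x, eta)
               for s in phi[1:] if isinstance(s, tuple))

def alternation_free(phi):
    op = phi[0]
    if op in ("mu", "nu") and dual_scope_violation(phi[2], phi[1], op):
        return False
    return all(alternation_free(s) for s in phi[1:] if isinstance(s, tuple))
\end{verbatim}

For the formula $\xi = \mu x. (\nu y. p \land \Box y) \land \Diamond x$ discussed in the example below, the check succeeds: $x$ is not free in the $\nu$-subformula, so no free occurrence of $x$ lies under a $\nu$-operator.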
The name `noetherian' refers to a semantic property that characterizes the $\nth{\mu}{\mathsf{Q}}$ formulas~\cite{font:mode18}: if a formula $\phi\in\nth{\mu}{\mathsf{Q}}$ is satisfied at the root of a tree model $\mathstr{T}$, then it is also true in a variant of $\mathstr{T}$ where we restrict the interpretation of the proposition letters in $\mathsf{Q}$ to noetherian subtrees of $\mathstr{T}$, i.e., subtrees without infinite paths. \begin{example} For some examples of alternation-free formulas, observe that $\muML^{\mathit{af}}$ contains all basic modal (i.e., fixpoint-free) formulas, as well as all $\mathcal{L}_{\mu}$-formulas that use $\mu$-operators or $\nu$-operators, but not both, and all modal and boolean combinations of such formulas. For a slightly more sophisticated example, consider the formula $\xi = \mu x. (\nu y. p \land \Box y) \land \Diamond x$. This formula does feature an alternating chain of fixpoint operators, in the sense that the $\nu$-formula $\phi = \nu y. p \land \Box y$ is a subformula of the $\mu$-formula $\xi$. However, since the variable $x$ does not occur in $\phi$, this formula does belong to $\muML^{\mathit{af}}$. To see this in terms of Definition~\ref{d:afmc}, observe that $\phi \in \nth{\mu}{x}$ since $x \not\in \mathit{FV}(\phi)$. But then the formula $(\nu y. p \land \Box y) \land \Diamond x$ also belongs to this fragment, and from this it is immediate that $\xi \in \muML^{\mathit{af}}$. \end{example} Below we gather some basic observations on $\muML^{\mathit{af}}$. First we mention some useful closure conditions, stating that $\muML^{\mathit{af}}$ is closed under taking, respectively, negations, unfoldings, subformulas and guarded equivalents. \begin{proposition} \label{p:af1} Let $\xi$ be an alternation-free formula. Then \begin{urlist} \item \label{it:af1-1} its negation $\ol{\xi}$ is alternation free; \item \label{it:af1-2} if $\xi$ is a fixpoint formula, then its unfolding is alternation free; \item \label{it:af1-3} every subformula of $\xi$ is alternation free; \item \label{it:af1-4} every formula in $\mathsf{Clos}(\xi)$ is alternation free; \item \label{it:af1-5} there is an alternation-free guarded formula $\xi'$ that is equivalent to $\xi$. \end{urlist} \end{proposition} \begin{proof} Item~\ref{it:af1-2} is immediate by Proposition~\ref{p:af3}(\ref{it:af3-3}) and Proposition~\ref{p:af2}(\ref{it:af2-2}). For item~\ref{it:af1-5} a careful inspection will reveal that the standard procedure for guarding formulas (see \cite{walu:comp00,kupfer:autobranch00,brus:guar15}) transforms alternation-free formulas into guarded alternation-free formulas. The other items can be proved by routine arguments. \end{proof} \begin{proposition} \label{p:af2} \begin{urlist} \item \label{it:af2-1} If $\mathsf{Q}$ and $\mathsf{Q}'$ are sets of proposition letters with $\mathsf{Q} \subseteq \mathsf{Q}'$, then $\nth{\eta}{\mathsf{Q}'} \subseteq \nth{\eta}{\mathsf{Q}}$. \item \label{it:af2-2} $\muML^{\mathit{af}} = \nth{\eta}{\varnothing}$. \end{urlist} \end{proposition} \begin{proof} Item~\ref{it:af2-1} can be proved by a straightforward induction on the complexity of formulas in $\nth{\eta}{\mathsf{Q}'}$; we leave the details to the reader. A similar induction shows that $\nth{\eta}{\mathsf{Q}} \subseteq \muML^{\mathit{af}}$, for any set $\mathsf{Q}$ of variables; clearly this takes care of the inclusion $\supseteq$ in item~\ref{it:af2-2}.
This leaves the statement that $\muML^{\mathit{af}} \subseteq \nth{\eta}{\varnothing}$, which we prove by induction on the complexity of $\muML^{\mathit{af}}$-formulas. We confine our attention here to the case where $\phi \in \muML^{\mathit{af}}$ is a fixpoint formula, say, $\phi = \lambda p. \phi'$. But then it is obvious that $\mathit{FV}(\phi) \cap \{ p \} = \varnothing$, so that $\phi \in \nth{\eta}{p}$ by definition of the latter set. It follows that $\phi \in \nth{\eta}{\varnothing}$ by item~\ref{it:af2-1}. \end{proof} The following proposition states some useful closure conditions on sets of the form $\nth{\eta}{\mathsf{Q}}$. \begin{proposition} \label{p:af3} Let $\chi$ and $\xi$ be formulas in $\muML^{\mathit{af}}$, let $x,y$ be variables, and let $\mathsf{Q}$ be a set of variables. Then the following hold: \begin{urlist} \item \label{it:af3-1} if $\xi \in \nth{\eta}{\mathsf{Q}}$ and $y \not\in \mathit{FV}(\xi)$, then $\xi \in \nth{\eta}{\mathsf{Q} y}$; \item \label{it:af3-2} if $\chi \in \nth{\eta}{\mathsf{Q} x}$, $\xi \in \nth{\eta}{\mathsf{Q}}$ and $\xi$ is free for $x$ in $\chi$, then $\chi[\xi/x] \in \nth{\eta}{\mathsf{Q}}$; \item \label{it:af3-3} if $\eta x\, \chi \in \nth{\eta}{\mathsf{Q}}$ then $\chi[\eta x\, \chi/x] \in \nth{\eta}{\mathsf{Q}}$. \end{urlist} \end{proposition} \begin{proof} We prove item~\ref{it:af3-1} of the proposition by a straightforward induction on the complexity of $\xi$. We only cover the case of the induction step where $\xi$ is of the form $\xi = \lambda z. \xi'$. Here we distinguish cases. If $\mathit{FV}(\xi) \cap \mathsf{Q} = \varnothing$ then we find $\mathit{FV}(\xi) \cap (\mathsf{Q} \cup \{ y \}) = \varnothing$ since $y \not\in \mathit{FV}(\xi)$ by assumption. Here it is immediate by the definition of $\nth{\eta}{\mathsf{Q} y}$ that $\xi$ belongs to it. If, on the other hand, we have $\mathit{FV}(\xi) \cap \mathsf{Q} \neq \varnothing$, then we can only have $\xi \in \nth{\eta}{\mathsf{Q}}$ if $\lambda = \eta$. We now make a further case distinction: if $y = z$ then we have $\xi' \in \nth{\eta}{\mathsf{Q} y}$ so that also $\xi \in \nth{\eta}{\mathsf{Q} y}$. If $y$ and $z$ are distinct variables, then it must be the case that $\xi' \in \nth{\eta}{\mathsf{Q} z}$; since we clearly have $y \not\in \mathit{FV}(\xi')$ as well, the inductive hypothesis yields that $\xi' \in \nth{\eta}{\mathsf{Q} yz}$. But then we immediately find $\xi \in \nth{\eta}{\mathsf{Q} y}$ by definition of the latter set. \smallskip For the proof of item~\ref{it:af3-2} we proceed by induction on the complexity of $\chi$. Again, we only cover the inductive case where $\chi$ is a fixpoint formula, say, $\chi = \lambda y. \chi'$. We make a case distinction. First assume that $x \not\in \mathit{FV}(\chi)$; then we find $\chi[\xi/x] = \chi$, so that $\chi[\xi/x] \in \nth{\eta}{\mathsf{Q} x}$ by assumption. It then follows that $\chi[\xi/x] \in \nth{\eta}{\mathsf{Q}}$ by Proposition~\ref{p:af2}(\ref{it:af2-1}). Assume, then, that $x \in \mathit{FV}(\chi)$; since $\chi \in \nth{\eta}{\mathsf{Q} x}$ this can only be the case if $\lambda = \eta$, and, again by definition of $\nth{\eta}{\mathsf{Q} x}$, we find $\chi' \in \nth{\eta}{\mathsf{Q} xy}$. Furthermore, as $\xi$ is free for $x$ in $\chi$, the variable $y$ cannot be free in $\xi$, so that it follows by item~\ref{it:af3-1} and the assumption that $\xi \in \nth{\eta}{\mathsf{Q}}$ that $\xi \in \nth{\eta}{\mathsf{Q} y}$.
We may now use the inductive hypothesis on $\chi'$ and $\xi$, to find that $\chi'[\xi/x] \in \nth{\eta}{\mathsf{Q} y}$; and from this we conclude that $\chi[\xi/x] \in \nth{\eta}{\mathsf{Q}}$ by definition of $\nth{\eta}{\mathsf{Q}}$. \smallskip Finally, item~\ref{it:af3-3} is immediate by item~\ref{it:af3-2}. \end{proof} The next observation can be used to simplify the formulation of the winning conditions of the evaluation game for alternation-free formulas somewhat. It is a direct consequence of results in~\cite{kupk:size20}, so we confine ourselves to a proof sketch. \begin{proposition} \label{p:af4} For any infinite trace $\tau = (\phi_{n})_{n<\omega}$ of $\muML^{\mathit{af}}$-formulas the following are equivalent: \begin{urlist} \item \label{it:af4-1} $\tau$ is an $\eta$-trace; \item \label{it:af4-2} $\phi_{n}$ is an $\eta$-formula, for infinitely many $n$; \item \label{it:af4-3} $\phi_{n}$ is an $\ol{\eta}$-formula, for at most finitely many $n$. \end{urlist} \end{proposition} \begin{proof}[Proof (sketch)] Let $\xi = \eta z. \xi'$ be the characteristic fixpoint formula of $\tau$, i.e., $\xi$ is the unique formula that occurs infinitely often on $\tau$ and that is a subformula of almost all formulas on $\tau$. Clearly it suffices to prove that almost every fixpoint formula on $\tau$ is an $\eta$-formula as well. To show why this is the case, it will be convenient to introduce the following notation. We write $\psi \rclat{\rho} \phi$ if there is a sequence $(\chi_{i})_{0\leq i \leq n}$ such that $\psi = \chi_{0}$, $\phi = \chi_{n}$, $\chi_{i} \to_{C} \chi_{i+1}$ for all $i<n$, and every $\chi_{i}$ is of the form $\chi_{i}'[\rho/x]$ for some formula $\chi_{i}'$ and some $x \in \mathit{FV}(\chi_{i}')$. Then it readily follows from the definitions that $\xi \rclat{\xi} \phi_{n}$ for almost every formula $\phi_{n}$ on $\tau$. The key observation in the proof is now that if $\xi$ is alternation-free, and $\phi$ is a fixpoint formula such that $\xi \rclat{\xi} \phi$, then $\phi$ is an $\eta$-formula. To be more precise we first show that \begin{equation} \label{eq:ih} \mbox{for all } \phi \mbox{ with } \xi \rclat{\xi} \phi \mbox{ there is some } \phi^\circ \in \nth{\eta}{z} \mbox{ such that } z \in \mathit{FV}(\phi^\circ) \mbox{ and } \phi = \phi^\circ [\xi/z]. \end{equation} We prove this claim by induction on the length of the path $\xi \rclat{\xi} \phi$. In the base case we have $\phi = \xi$ and we let $\phi^\circ = z$. In the inductive step there is some $\chi$ such that $\xi \rclat{\xi} \chi \rcla{\xi} \phi$. By the inductive hypothesis there is some $\chi^\circ \in \nth{\eta}{z}$ such that $z \in \mathit{FV}(\chi^\circ)$ and $\chi = \chi^\circ [\xi/z]$. We distinguish cases depending on the main connective of $\chi$. Omitting the boolean and modal cases we focus on the case where $\chi$ is a fixpoint formula, and we further distinguish cases depending on whether $\chi = \xi$ or not. If $\chi = \xi$ then $\phi = \xi'[\xi/z]$. Because $\xi$ is alternation free we know that $\xi' \in \nth{\eta}{z}$. We can thus let $\phi^{\circ} \mathrel{:=} \xi'$. If $\chi = \lambda y . \chi'$ but $\chi \neq \xi$ then we have $\phi = \chi'[\chi/y]$. From the inductive hypothesis we get that $\chi = \chi^\circ[\xi / z]$ for some $\chi^\circ \in \nth{\eta}{z}$ with $z \in \mathit{FV}(\chi^\circ)$. Because $\chi \neq \xi$ it follows from $\chi = \lambda y . \chi'$ and $\chi = \chi^\circ[\xi / z]$ that $\chi^\circ = \lambda y . \rho$ for some $\rho$ with $\chi' = \rho[\xi / z]$.
Hence, $\phi = \rho[\xi / z][\chi/y]$. Because $z \notin \mathit{FV}(\chi)$ and $y \notin \mathit{FV}(\xi)$ (because $\mathit{BV}(\chi) \cap \mathit{FV}(\xi) = \varnothing$) we may commute these substitutions (cf.~Proposition~3.11 in \cite{kupk:size20}). Hence $\phi = \rho[\chi/y][\xi / z]$, and we may set $\phi^\circ \mathrel{:=} \rho[\chi/y]$. Because $\chi^\circ = \lambda y . \rho$ and $z \in \mathit{FV}(\chi^\circ)$ it follows that $z \neq y$ and that $z \in \mathit{FV}(\rho)$. Thus also $z \in \mathit{FV}(\rho[\chi/y])$. Lastly, it follows from $\chi^\circ = \lambda y . \rho$, $\chi^\circ \in \nth{\eta}{z}$ and $z \in \mathit{FV}(\rho)$ that $\rho \in \nth{\eta}{z}$. It is not hard to see that $\nth{\eta}{z}$ is closed under substitution with the alternation-free formula $\chi$ (where $z \notin \mathit{FV}(\chi)$), and thus $\rho[\chi/y] \in \nth{\eta}{z}$. This finishes the proof of \eqref{eq:ih}. The claim about fixpoint formulas $\phi$ such that $\xi \rclat{\xi} \phi$ can be derived from \eqref{eq:ih} as follows. Assume that $\phi$ is of the form $\phi = \lambda y . \rho$. If $\lambda y . \rho = \phi^\circ[\xi/z]$ with $z \in \mathit{FV}(\phi^\circ)$ and $\phi \neq \xi$, then it must be the case that $\phi^\circ = \lambda y . \rho^\circ$, and because $\phi^\circ \in \nth{\eta}{z}$ and $z \in \mathit{FV}(\rho^\circ)$ this is only possible if $\lambda = \eta$. That is, $\phi$ is an $\eta$-formula as required. \end{proof} \section{The focus system} \label{sec-proofsystem} In this section we introduce our annotated proof systems for the alternation-free $\mu$-calculus. We consider two versions of the system, which we call \ensuremath{\mathsf{Focus}}\xspace and \ensuremath{\mathsf{Focus}_\infty}\xspace, respectively. \ensuremath{\mathsf{Focus}_\infty}\xspace is a proof system that allows proofs to be based on infinite, but finitely branching trees. The focus mechanism that is implemented by the annotations of formulas helps to ensure that all the infinite branches in a \ensuremath{\mathsf{Focus}_\infty}\xspace proof are of the right shape. The proof system \ensuremath{\mathsf{Focus}}\xspace can be seen as a finite variant of \ensuremath{\mathsf{Focus}_\infty}\xspace. The proof trees in this system are finite, but the system is circular in that it contains a discharge rule that allows one to discharge a leaf of the tree if the sequent at the leaf occurs again closer to the root of the tree. As we will see, the two systems are equivalent in the sense that we may transform proofs in either variant into proofs of the other kind. \subsection{Basic notions} In the first part of this section we provide the definition of the proof systems \ensuremath{\mathsf{Focus}}\xspace and \ensuremath{\mathsf{Focus}_\infty}\xspace. A \emph{sequent} is a finite set of formulas. When writing sequents we often leave out the braces, meaning that we write for instance $\phi_1,\dots,\phi_i$ for the sequent $\{\phi_1,\dots,\phi_i\}$. If $\Phi$ is a sequent, we also use the notation $\phi_1,\dots,\phi_i, \Phi$ for the sequent $\{\phi_1,\dots,\phi_i\} \cup \Phi$. Given a sequent $\Phi$ we write $\Diamond \Phi$ for the sequent $\Diamond \Phi \mathrel{:=} \{\Diamond \phi \mid \phi \in \Phi\}$. Intuitively, sequents are to be read \emph{disjunctively}. An \emph{annotated formula} is a pair $(\phi,a) \in \muML^{\mathit{af}} \times \{ f,u \}$; we usually write $\phi^{a}$ instead of $(\phi,a)$ and call $a$ the \emph{annotation} of $\phi$.
We define a linear order $\sqsubseteq$ on the set $\{f,u\}$ of annotations by putting $u \sqsubset f$, and given $a \in \{ f,u \}$ we let $\ol{a}$ be its alternative, i.e., we define $\ol{u} \mathrel{:=} f$ and $\ol{f} \mathrel{:=} u$. A formula that is annotated with $f$ is called \emph{in focus}, and one annotated with $u$ is \emph{out of focus}. We use $a,b,c,\ldots$ as symbols to range over the set $\{f,u\}$. A finite set of annotated formulas is called an \emph{annotated sequent}. We shall use the letters $\Sigma, \Gamma, \Delta, \ldots$ for annotated sequents, and $\Phi,\Psi$ for sequents. In practice we will often be sloppy and refer to annotated sequents as sequents. Given a sequent $\Phi$, we define $\Phi^a$ to be the annotated sequent $\Phi^a \mathrel{:=} \{\phi^a \mid \phi \in \Phi \}$. Conversely, given an annotated sequent $\Sigma$, we define $\uls{\Sigma}$ as its underlying plain sequent; that is, $\uls{\Sigma}$ consists of the formulas $\phi$ such that $\phi^{a} \in \Sigma$, for some annotation $a$. The proof rules of our focus proof systems $\ensuremath{\mathsf{Focus}}\xspace$ and $\ensuremath{\mathsf{Focus}_\infty}\xspace$ are given in Figure~\ref{f:proof rules}. We use standard terminology when talking about proof rules. Every (application of a) rule has one \emph{conclusion} and a finite (possibly zero) number of \emph{premises}. \emph{Axioms} are rules without premises. The \emph{principal} formula of a rule application is the formula in the conclusion to which the rule is applied. As non-obvious cases we have that all formulas are principal in the conclusion of the rule \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace and that the rule \RuDischarge{\ensuremath{\mathsf{x}}} has no principal formula. In all cases other than for the rule \ensuremath{\mathsf{W}}\xspace the principal formula develops into one or more \emph{residual} formulas in each of the premises. Principal and residual formulas are also called \emph{active}. \begin{figure}[tbh] \begin{minipage}{\textwidth} \begin{minipage}{0.16\textwidth} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax1}}\xspace} \UnaryInfC{$p^a, \atneg{p}^b$} \end{prooftree} \end{minipage} \begin{minipage}{0.12\textwidth} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax2}}\xspace} \UnaryInfC{$\top^a$} \end{prooftree} \end{minipage} \begin{minipage}{0.21\textwidth} \begin{prooftree} \AxiomC{$\phi^a,\psi^a,\Sigma$} \RightLabel{\ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInfC{$(\phi \lor \psi)^a,\Sigma$} \end{prooftree} \end{minipage} \begin{minipage}{0.28\textwidth} \begin{prooftree} \AxiomC{$\phi^a, \Sigma$} \AxiomC{$\psi^a,\Sigma$} \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace} \BinaryInfC{$(\phi \land \psi)^a,\Sigma$} \end{prooftree} \end{minipage} \begin{minipage}{0.20\textwidth} \begin{prooftree} \AxiomC{$\phi^a,\Sigma$} \RightLabel{\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace} \UnaryInfC{$\Box \phi^a, \Diamond \Sigma$} \end{prooftree} \end{minipage} \end{minipage} \bigskip \begin{minipage}{\textwidth} \begin{minipage}{0.24\textwidth} \begin{prooftree} \AxiomC{$\phi[\mu x . \phi / x]^u, \Sigma$} \RightLabel{\RuFp{\mu}} \UnaryInfC{$\mu x . \phi^a, \Sigma$} \end{prooftree} \end{minipage} \begin{minipage}{0.24\textwidth} \begin{prooftree} \AxiomC{$\phi[\nu x . \phi / x]^a, \Sigma$} \RightLabel{\RuFp{\nu}} \UnaryInfC{$\nu x . 
\phi^a, \Sigma$} \end{prooftree} \end{minipage} \begin{minipage}{0.16\textwidth} \begin{prooftree} \AxiomC{$\Sigma$} \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInfC{$\phi^a, \Sigma$} \end{prooftree} \end{minipage} \begin{minipage}{0.16\textwidth} \begin{prooftree} \AxiomC{$\phi^f,\Sigma$} \RightLabel{\ensuremath{\mathsf{F}}\xspace} \UnaryInfC{$\phi^u,\Sigma$} \end{prooftree} \end{minipage} \begin{minipage}{0.16\textwidth} \begin{prooftree} \AxiomC{$\phi^u,\Sigma$} \RightLabel{\ensuremath{\mathsf{U}}\xspace} \UnaryInfC{$\phi^f,\Sigma$} \end{prooftree} \end{minipage} \end{minipage} \begin{prooftree} \AxiomC{$[\Sigma]^\ensuremath{\mathsf{x}}$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$\Sigma$} \RightLabel{\RuDischarge{\ensuremath{\mathsf{x}}}} \UnaryInfC{$\Sigma$} \end{prooftree} \caption{Proof rules of the focus system} \label{f:proof rules} \end{figure} Here are some more specific comments about the individual proof rules. The boolean rules ($\ensuremath{\mathsf{R}_{\land}}\xspace$ and $\ensuremath{\mathsf{R}_{\lor}}\xspace$) are fairly standard; observe that the annotation of the active formula is simply inherited by the residual formulas. The fixpoint rules (\RuFp{\mu} and \RuFp{\nu}) simply unfold the fixpoint formulas; note, however, the difference between \RuFp{\mu} and \RuFp{\nu} when it comes to the annotations: in \RuFp{\nu} the annotation of the active $\nu$-formula remains the same under unfolding, while in \RuFp{\mu}, the active $\mu$-formula \emph{loses focus} when it gets unfolded. The box rule \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace is the standard modal rule in one-sided sequent systems; each formula in the premise carries the same annotation as the corresponding formula in the conclusion. The rule \ensuremath{\mathsf{W}}\xspace is a standard \emph{weakening rule}. Next to \RuFp{\mu}, the \emph{focus rules} \ensuremath{\mathsf{F}}\xspace and \ensuremath{\mathsf{U}}\xspace are the only rules that change the annotations of formulas. Finally, the \emph{discharge rule} \RuDischarge{} is a special proof rule that allows us to discharge an assumption if it repeats a sequent that occurs further down in the proof. Every application \RuDischarge{\ensuremath{\mathsf{x}}} of this rule is marked by a so-called \emph{discharge token} $\ensuremath{\mathsf{x}}$ that is taken from some fixed infinite set $\mathcal{D} = \{\ensuremath{\mathsf{x}},\ensuremath{\mathsf{y}},\ensuremath{\mathsf{z}},\dots\}$. In Figure~\ref{f:proof rules} this is suggested by the notation $[\Sigma]^\ensuremath{\mathsf{x}}$. The precise conditions under which \RuDischarge{\ensuremath{\mathsf{x}}} can be employed are explained in Definition~\ref{d:proof} below.
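Reading the rules bottom-up, as in proof search, only \RuFp{\mu}, \ensuremath{\mathsf{F}}\xspace and \ensuremath{\mathsf{U}}\xspace ever change annotations. The following Python fragment summarises this behaviour; it is a sketch of ours in which formulas are opaque keys and an annotated sequent is a dictionary mapping each formula to \texttt{True} (for $f$) or \texttt{False} (for $u$), thereby ignoring the corner case where the same formula occurs with both annotations.

\begin{verbatim}
# Bottom-up reading: from a conclusion we compute (the relevant part of)
# the premise.

def unfold_mu(seq, mu_phi, unfolding):
    """RuFp(mu): the unfolding of a mu-formula is always annotated u."""
    premise = dict(seq)
    premise.pop(mu_phi)
    premise[unfolding] = False       # a focused mu-formula loses focus
    return premise

def unfold_nu(seq, nu_phi, unfolding):
    """RuFp(nu): the unfolding inherits the annotation unchanged."""
    premise = dict(seq)
    a = premise.pop(nu_phi)
    premise[unfolding] = a
    return premise

def rule_F(seq, phi):
    """F, read upwards: an unfocused formula is put into focus."""
    assert seq[phi] is False
    premise = dict(seq)
    premise[phi] = True
    return premise

def rule_U(seq, phi):
    """U, read upwards: a focused formula goes out of focus."""
    assert seq[phi] is True
    premise = dict(seq)
    premise[phi] = False
    return premise
\end{verbatim}

All remaining rules pass annotations from the conclusion to the premises unchanged.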
\begin{definition} \label{d:proof} A \emph{pre-proof} $\Pi = (T,P,\Sigma,\mathsf{R})$ is a quadruple such that $(T,P)$ is a, possibly infinite, tree with nodes $T$ and parent relation $P$; $\Sigma$ is a function that maps every node $u \in T$ to a non-empty annotated sequent $\Sigma_u$; and \[ \mathsf{R}:\; T \;\to\; \big\{ \ensuremath{\mathsf{Ax1}}\xspace,\ensuremath{\mathsf{Ax2}}\xspace,\ensuremath{\mathsf{R}_{\lor}}\xspace,\ensuremath{\mathsf{R}_{\land}}\xspace,\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace,\RuFp{\mu},\RuFp{\nu},\ensuremath{\mathsf{W}}\xspace,\ensuremath{\mathsf{F}}\xspace,\ensuremath{\mathsf{U}}\xspace \big\} \cup \big\{\RuDischarge{\ensuremath{\mathsf{x}}} \mid \ensuremath{\mathsf{x}} \in \mathcal{D} \big\} \cup \mathcal{D} \cup \{ \star \}, \] is a map that assigns to every node $u$ of $T$ its \emph{label} $\mathsf{R}(u)$, which is either (i) the name of a proof rule, (ii) a discharge token or (iii) the symbol $\star$. To qualify as a pre-proof, such a quadruple is required to satisfy the following conditions: \begin{enumerate} \item \label{i:local condition} If a node is labelled with the name of a proof rule then it has as many children as the proof rule has premises, and the annotated sequents at the node and its children match the specification of the proof rules in Figure~\ref{f:proof rules}. \item \label{i:leaf condition} If a node is labelled with a discharge token or with $\star$ then it is a leaf. We call such nodes \emph{non-axiomatic leaves} as opposed to the \emph{axiomatic leaves} that are labelled with one of the axioms, \ensuremath{\mathsf{Ax1}}\xspace or \ensuremath{\mathsf{Ax2}}\xspace. \item \label{i:discharge condition} For every leaf $l$ that is labelled with a discharge token $\ensuremath{\mathsf{x}} \in \mathcal{D}$ there is exactly one node $u$ in $\Pi$ that is labelled with \RuDischarge{\ensuremath{\mathsf{x}}}. This node $u$, as well as its (unique) child, is a proper ancestor of $l$ and satisfies $\Sigma_u = \Sigma_l$. In this situation we call $l$ a \emph{discharged leaf}, and $u$ its \emph{companion}; we write $c$ for the function that maps a discharged leaf $l$ to its companion $c(l)$. \item \label{i:path condition} \label{i:pc} If $l$ is a discharged leaf with companion $c(l)$ then the path from $c(l)$ to $l$ contains (\ref{i:pc}a) no application of the focus rules, (\ref{i:pc}b) at least one application of \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace, while (\ref{i:pc}c) every node on this path features a formula in focus. \end{enumerate} Non-axiomatic leaves that are not discharged are called \emph{open}; the sequent at an open leaf is an \emph{open assumption} of the pre-proof. We call a pre-proof a \emph{proof in \ensuremath{\mathsf{Focus}}\xspace} if it is finite and does not have any open assumptions. An infinite branch $\beta = (v_{n})_{n\in\omega}$ is \emph{successful} if there are infinitely many applications of \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace on $\beta$ and there is some $i$ such that for all $j \geq i$ the annotated sequent at $v_j$ contains at least one formula that is in focus and none of the focus rules \ensuremath{\mathsf{F}}\xspace and \ensuremath{\mathsf{U}}\xspace is applied at $v_j$. A pre-proof is a \emph{\ensuremath{\mathsf{Focus}_\infty}\xspace-proof} if it does not have any non-axiomatic leaves and all its infinite branches are successful.
An unannotated sequent $\Phi$ is \emph{derivable} in $\ensuremath{\mathsf{Focus}_\infty}\xspace$ (in $\ensuremath{\mathsf{Focus}}\xspace$) if there is a \ensuremath{\mathsf{Focus}_\infty}\xspace proof (a $\ensuremath{\mathsf{Focus}}\xspace$ proof, respectively) such that $\Phi^f$ is the annotated sequent at the root of the proof. \end{definition} For future reference we make some first observations about (pre-)proofs in this system. \begin{proposition} \label{p:proof in closure} Let $\Phi$ be the set of formulas that occur in the annotated sequent $\Sigma_r$ at the root of some pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$. Then all formulas that occur annotated in $\Sigma_t$ for any $t \in T$ are in $\mathsf{Clos}(\Phi)$. \end{proposition} \begin{proof} This is an easy induction on the depth of $t$ in the tree $(T,P)$. It amounts to checking that if the formulas in the conclusion of any of the rules from Figure~\ref{f:proof rules} are in $\mathsf{Clos}(\Phi)$ then so are the formulas at any of the premises. \end{proof} \begin{proposition} \label{p:lr1} Let $u$ and $v$ be two nodes in a proof $\Pi = (T,P,\Sigma,\mathsf{R})$ such that $Puv$ and $\mathsf{R}_{u} \neq \ensuremath{\mathsf{F}}\xspace$. Then the following holds: \begin{equation} \label{eq:lr2} \text{if } \Sigma_{v} \text{ contains a formula in focus, then so does } \Sigma_{u}. \end{equation} \end{proposition} This claim is proved by a straightforward inspection, making a case distinction as to the proof rule $\mathsf{R}_{u}$. \subsection{Circular and infinite proofs} We first show that \ensuremath{\mathsf{Focus}_\infty}\xspace and \ensuremath{\mathsf{Focus}}\xspace are the infinitary and circular versions of the same proof system, and derive the same annotated sequents. \begin{theorem} \label{t:same} An annotated sequent is provable in $\ensuremath{\mathsf{Focus}}\xspace$ iff it is provable in \ensuremath{\mathsf{Focus}_\infty}\xspace. \end{theorem} The two directions of this theorem are proved in Propositions \ref{p:fintoinf}~and~\ref{p:ppp}. \begin{proposition} \label{p:fintoinf} If an annotated sequent $\Gamma$ is provable in $\ensuremath{\mathsf{Focus}}\xspace$ then it is provable in \ensuremath{\mathsf{Focus}_\infty}\xspace. \end{proposition} \begin{proof} Let $\Pi = (T,P,\Sigma,\mathsf{R})$ be a proof of $\Gamma$ in \ensuremath{\mathsf{Focus}}\xspace. We define a proof $\Pi' = (T',P',\Sigma',\mathsf{R}')$ of $\Gamma$ in \ensuremath{\mathsf{Focus}_\infty}\xspace. Basically, the idea is to unravel the proof $\Pi$ at discharged leaves; the result of this, however, would contain some redundant nodes, corresponding to the discharged leaves in $\Pi$ and their companions. In our construction we will take care to remove these nodes from the paths that provide the nodes of the unravelled proof. Going into the technicalities, we first define the relation $L$ on $T$ such that $L u v$ holds iff either $P u v$ or $u$ is a discharged leaf and $v = c(u)$. Let $A$ be the set of all finite $L$-paths $\pi$ that start at the root $r$ of $(T,P)$. Formally, $\pi = v_0,\cdots,v_n$ is in $A$ iff $v_0 = r$ and $Lv_iv_{i+1}$ for all $i \in \{0,\dots,n-1\}$. For any path $\pi = v_0,\cdots,v_n \in A$ define $\mathsf{last}(\pi) = v_n$. Consider the set $S \mathrel{:=} \mathcal{D} \cup \{\RuDischarge{\ensuremath{\mathsf{x}}} \mid \ensuremath{\mathsf{x}} \in \mathcal{D}\}$; these are the labels of the nodes that we need to remove in $\Pi'$.
We then define $T' = \{\pi \in A \mid \mathsf{R}(\mathsf{last}(\pi)) \notin S\}$ and set $P' \pi \rho$ for $\pi,\rho \in T'$ iff $\rho = \pi \cdot u_1 \cdots u_n$ with $n \geq 1$ and $\mathsf{R}(u_i) \in S$ for all $i \in \{1,\dots,n-1\}$. Moreover, we set $\Sigma'_\pi = \Sigma_{\mathsf{last}(\pi)}$ and $\mathsf{R}'(\pi) = \mathsf{R}(\mathsf{last}(\pi))$. Note that for every node $v \in T$ we can define a unique $L$-path $\pi^v = t^v_0 \cdots t^v_n$ with $t^v_0 = v$, $\mathsf{R}(t^v_n) \notin S$ and $\mathsf{R}(t^v_i) \in S$ for all $i \in \{0,\dots,n-1\}$. This path is unique because every node $w$ with $\mathsf{R}(w) \in S$ has a unique $L$-successor, and there cannot be an infinite $L$-path through $S$. (To see this, assume for a contradiction that there were such an infinite $L$-path $(t^{v}_{n})_{n\in\omega}$ through $S$. Because $T$ is finite it would follow that from some moment on the path visits only nodes that it visits infinitely often. Hence, there must then be some discharged leaf such that the infinite path visits all the nodes between this leaf and its companion. But then by condition~\ref{i:pc} of Definition~\ref{d:proof} the path passes a node $w$ with $\mathsf{R}(w) = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace \notin S$.) Finally, observe that by the definition of the rules in $S$ we have $\Sigma_v = \Sigma_{t^v_n}$ for every such path $\pi^v$. It is not hard to see that $\Pi'$ is a pre-proof, and that it does not use the discharge rule. It thus remains to verify that all infinite branches are successful. Let $\beta = (\pi_{n})_{n\in\omega}$ be such a branch; by construction we may associate with $\beta$ a unique $L$-path $\alpha = (v_{n})_{n\in\omega}$ such that the sequence $(\mathsf{last}(\pi_n))_{n\in\omega}$ corresponds to the subsequence we obtain from $\alpha$ by removing all nodes with labels in $S$. Because $T$ is finite, from some point on $\alpha$ only passes nodes that are situated on the path from the companion of some discharged leaf to that leaf. By condition~\ref{i:pc} from Definition~\ref{d:proof} it then follows that $\beta$ must be successful. \end{proof} The converse direction of Theorem~\ref{t:same} requires some preparations. \begin{definition} A node $u$ in a pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is called a \emph{successful repeat} if it has a proper ancestor $t$ such that $\Sigma_{t} = \Sigma_{u}$, $\mathsf{R}(t) \neq \RuDischarge{}$, and the path $[t,u]$ in $\Pi$ satisfies condition~\ref{i:pc} of Definition~\ref{d:proof}. Any node $t$ with this property is called a \emph{witness} to the successful-repeat status of $u$. \end{definition} The following is then obvious. \begin{proposition} \label{p:successful repeat} Every successful branch $\beta = \beta_0 \beta_1 \cdots$ in a \ensuremath{\mathsf{Focus}_\infty}\xspace-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ contains a successful repeat. \end{proposition} \begin{proposition} \label{p:ppp} If an annotated sequent $\Gamma$ is provable in \ensuremath{\mathsf{Focus}_\infty}\xspace then it is provable in \ensuremath{\mathsf{Focus}}\xspace. \end{proposition} \begin{proof} Assume that $\Pi = (T,P,\Sigma,\mathsf{R})$ is a proof for the annotated sequent $\Gamma$ in \ensuremath{\mathsf{Focus}_\infty}\xspace. If $\Pi$ is finite we are done, so assume otherwise; then by K\"{o}nig's Lemma the set $B^{\infty}$ of infinite branches of $\Pi$ is nonempty.
Because of Proposition~\ref{p:successful repeat} we may define for every infinite branch $\tau \in B^{\infty}$ the number $\mathsf{l}(\tau) \in \omega$ as the least number $n \in \omega$ such that $\tau(n)$ is a successful repeat. This means that $\tau(\mathsf{l}(\tau))$ is the first successful repeat on $\tau$. Our first claim is the following: \begin{equation} \label{eq:itp01} \text{there is no pair } \sigma,\tau \text{ of infinite branches such that } \sigma(\mathsf{l}(\sigma)) \text{ is a proper ancestor of } \tau(\mathsf{l}(\tau)). \end{equation} To see this, suppose for a contradiction that $\sigma(\mathsf{l}(\sigma))$ is a proper ancestor of $\tau(\mathsf{l}(\tau))$; then $\sigma(\mathsf{l}(\sigma))$ actually lies on the branch $\tau$. But this would mean that $\sigma(\mathsf{l}(\sigma))$ is a successful repeat on $\tau$, contradicting the fact that $\tau(\mathsf{l}(\tau))$ is the \emph{first} successful repeat on $\tau$. Our second claim is that \begin{equation} \label{eq:itp02} \text{ the set } Y \mathrel{:=} \{ t \in T \mid t \text{ has a descendant } \tau(\mathsf{l}(\tau)), \text{ for some } \tau \in B^{\infty} \} \text{ is finite}. \end{equation} For a proof of \eqref{eq:itp02}, assume for a contradiction that $Y$ is infinite. Observe that $Y$ is in fact (the carrier of) a subtree of $(T,P)$, and as such a finitely branching tree. It thus follows by K\"onig's Lemma that $Y$ has an infinite branch $\sigma$, which is then clearly also an infinite branch of $\Pi$. Consider the node $s \mathrel{:=} \sigma(\mathsf{l}(\sigma))$. Since $\sigma$ is infinite, it passes through some proper descendant $t$ of $s$. This node $t$, lying on $\sigma$, then belongs to the set $Y$, so that by definition it has a descendant of the form $\tau(\mathsf{l}(\tau))$ for some $\tau \in B^{\infty}$. But then $\sigma(\mathsf{l}(\sigma))$ is a proper ancestor of $\tau(\mathsf{l}(\tau))$, which contradicts our earlier claim \eqref{eq:itp01}. It follows that the set $Y$ is finite indeed. Note that it obviously follows from \eqref{eq:itp02} that the set \[ \wh{Y} \mathrel{:=} \{ \tau(\mathsf{l}(\tau)) \mid \tau \in B^{\infty} \} \] is finite as well. Recall that every element $l \in \wh{Y}$ is a successful repeat; we may thus define a map $c: \wh{Y} \to T$ by setting $c(l)$ to be the \emph{first} ancestor $t$ of $l$ witnessing that $l$ is a successful repeat. Finally, let $\mathsf{Ran}(c)$ denote the range of $c$. \medskip We are almost ready for the definition of the finite tree $(T',P')$ that will support the proof $\Pi'$ of $\Gamma$; the only thing left to take care of is the well-founded part of $\Pi'$. For this we first define $Z$ to consist of those successors of nodes in $Y$ that generate a finite subtree; then it is easy to show that the collection $P^{*}[Z]$ of descendants of nodes in $Z$ is finite. With the above definitions we have all the material in hand to define a \ensuremath{\mathsf{Focus}}\xspace-proof $\Pi' = (T',P',\Sigma',\mathsf{R}')$ of $\Gamma$. The basic idea is that $\Pi'$ will be based on the set $Y \cup P^{*}[Z]$, with the nodes in $\wh{Y}$ providing the discharged leaves of $\Pi'$. Note, however, that for a correct presentation of the discharge rule, every companion node $u$ of such a leaf in $\wh{Y}$ needs to be provided with a successor $u^{+}$ that is labelled with the same annotated sequent as the companion node and the leaf.
First of all we set \[ T' \mathrel{:=} Y \cup P^{*}[Z] \cup \{ u^{+} \mid u \in \mathsf{Ran}(c) \} \] and \begin{align*} P' \mathrel{:=} & \quad \{ (u,v) \in P \mid u \in T' \setminus \mathsf{Ran}(c) \text{ and } v \in T' \} \\ & \cup \{ (u,u^{+}) \mid u \in \mathsf{Ran}(c) \} \\ & \cup \{ (u^{+},v) \mid u \in \mathsf{Ran}(c), (u,v) \in P \}. \end{align*} The point of adding the nodes $u^+$ is to make space for applications of the rule \RuDischarge{\ensuremath{\mathsf{x}}} at companion nodes. Furthermore, we put \[ \Sigma'(u) \mathrel{:=} \left\{ \begin{array}{ll} \Sigma(u) & \text{if } u \in Y \cup P^{*}[Z] \\ \Sigma(t) & \text{if } u = t^{+} \text{ for some } t \in \mathsf{Ran}(c). \end{array} \right. \] Finally, for the definition of the rule labelling $\mathsf{R}'$, we introduce a set $A \mathrel{:=} \{ \ensuremath{\mathsf{x}}_{u} \mid u \in \mathsf{Ran}(c) \}$ of discharge tokens, and we define \[ \mathsf{R}'(u) \mathrel{:=} \left\{ \begin{array}{lll} \mathsf{R}(u) & \text{if } & u \in (Y \cup P^{*}[Z]) \setminus (\wh{Y} \cup \mathsf{Ran}(c)) \\ \ensuremath{\mathsf{x}}_{c(l)} & \text{if } & u = l \in \wh{Y} \\ \RuDischarge{\ensuremath{\mathsf{x}}_{u}} & \text{if } & u \in \mathsf{Ran}(c) \\ \mathsf{R}(t) & \text{if } & u = t^{+} \text{ for some } t \in \mathsf{Ran}(c). \end{array} \right. \] It is straightforward to verify that with this definition, $\Pi'$ is indeed a \ensuremath{\mathsf{Focus}}\xspace-proof of the sequent $\Gamma$. \end{proof} \subsection{Thin and progressive proofs} When we prove the soundness of our proof system it will be convenient to work with (infinite) proofs that are in a certain normal form. The idea here is that we restrict (as much as possible) attention to sequents that are \emph{thin} in the sense that they do not feature formulas that are both in and out of focus, and to proofs that are \emph{progressive} in the sense that when (from the perspective of proof search) we move from the conclusion of a boolean or fixpoint rule to its premise(s), we drop the principal formula. Theorem~\ref{t:tpp} below states that we can make these assumptions without loss of generality. \begin{definition} An annotated sequent $\Sigma$ is \emph{thin} if there is no formula $\phi \in \muML^{\mathit{af}}$ such that $\phi^f \in \Sigma$ and $\phi^u \in \Sigma$. Given an annotated sequent $\Sigma$, we define its \emph{thinning} \[ \thin{\Sigma} \mathrel{:=} \{ \phi^{f} \mid \phi^{f} \in \Sigma \} \cup \{ \phi^{u} \mid \phi^{u} \in \Sigma, \phi^{f} \not\in \Sigma \}. \] A pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is \emph{thin} if for all $v \in T$ with $\phi^f,\phi^u \in \Sigma_v$ we have that $\mathsf{R}_v = \ensuremath{\mathsf{W}}\xspace$ and $\phi^u \notin \Sigma_u$ for the unique $u$ with $P v u$. \end{definition} Note that one may obtain the thinning $\thin{\Sigma}$ from an annotated sequent $\Sigma$ by removing the \emph{unfocused} versions of the formulas with a double occurrence in $\Sigma$. The definition of a thin proof implies that whenever a thin proof contains a sequent that is not thin, then this sequent is followed by applications of the weakening rule until all the duplicate formulas are weakened away. For example, if the sequent $\Sigma_v = p^u,p^f,q^u,q^f,r$ occurs in a thin proof then at $v$ and all of its immediate successors there need to be applications of weakening until only one annotated version of $p$ and one annotated version of $q$ remain.
This might look for instance as follows: \begin{center} \begin{prooftree} \AxiomC{$\vdots$} \noLine \UnaryInfC{$p^f,q^u,r$} \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInfC{$p^f,q^u,q^f,r$} \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInfC{$p^u,p^f,q^u,q^f,r$} \noLine \UnaryInfC{$\vdots$} \end{prooftree} \end{center} \begin{definition} An application of a boolean or fixpoint rule at a node $u$ in a pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is \emph{progressive} if for the principal formula $\phi^a \in \Sigma_u$ it holds that $\phi^a \notin \Sigma_v$ for all $v$ with $Puv$.\footnote{% Note that since we assume guardedness, the principal formula is different from its residuals. } The proof $\Pi$ is \emph{progressive} if all applications of the boolean rules and the fixpoint rules in $\Pi$ are progressive. \end{definition} The main result of this subsection is the following. \begin{theorem} \label{t:tpp} Every \ensuremath{\mathsf{Focus}_\infty}\xspace-derivable sequent $\Phi$ has a thin and progressive \ensuremath{\mathsf{Focus}_\infty}\xspace-proof. \end{theorem} For the proof of Theorem~\ref{t:tpp} we need some preparations. Recall that we defined the linear order $\sqsubseteq$ on annotations such that $u \sqsubset f$. \begin{definition} \label{d:mf} Let $\Sigma$ and $\Gamma$ be annotated sequents. We define $\morefocus{\Gamma}{\Sigma}$ to hold if for all $\phi^a \in \Gamma$ there is a $b \sqsupseteq a$ such that $\phi^b \in \Sigma$. \end{definition} \begin{definition} \label{d:backcl} Let $\Sigma$ be a set of annotated formulas. We define $Q_0(\Sigma)$ as the set of all annotated formulas $\phi^a$ such that either \begin{enumerate} \item $\phi^b \in \Sigma$ for some $b \sqsupseteq a$; \item $\phi = \phi_0 \lor \phi_1$, and $\phi_0^a \in \Sigma$ and $\phi_1^a \in \Sigma$; \item $\phi = \phi_0 \land \phi_1$, and $\phi_0^a \in \Sigma$ or $\phi_1^a \in \Sigma$; \item $\phi = \mu x . \phi_0$ and $\phi_0(\phi)^u \in \Sigma$; or \item $\phi = \nu x . \phi_0$ and $\phi_0(\phi)^a \in \Sigma$. \end{enumerate} The map $Q_0$ clearly being a monotone operator on sets of annotated formulas, we define the \emph{backwards closure} of $\Sigma$ as the least fixpoint $Q(\Sigma)$ of the operator $\Gamma \mapsto \Sigma \cup Q_0(\Gamma)$. \end{definition} In words, $Q(\Sigma)$ is the least set of annotated formulas such that $\Sigma \subseteq Q(\Sigma)$ and $Q_0(Q(\Sigma)) \subseteq Q(\Sigma)$. The following proposition collects some basic properties of $Q$; recall that we abbreviate $\allfocus{\Sigma} = \allfocus{\uls{\Sigma}}$, that is, $\allfocus{\Sigma}$ consists of the annotated formulas $\phi^{f}$ such that $\phi^a \in \Sigma$ for some $a$. \begin{proposition} \label{p:progressive facts}\label{p:pf} The map $Q$ is a closure operator on the collection of sets of annotated formulas. Furthermore, the following hold for any pair of annotated sequents $\Gamma,\Sigma$. \begin{enumerate} \item If $\morefocus{\Gamma}{\Sigma}$ then $\Gamma \subseteq Q(\Sigma)$. \item \label{i:pf:2} If $\Gamma \subseteq Q(\Sigma)$ and $\Gamma$ contains only atomic or modal formulas, then $\morefocus{\Gamma}{\Sigma}$. \item \label{i:proof rules} If $\Gamma$ is the conclusion and $\Sigma$ is one of the premises of an application of one of the rules \ensuremath{\mathsf{R}_{\lor}}\xspace, \ensuremath{\mathsf{R}_{\land}}\xspace, \RuFp{\mu}, or \RuFp{\nu}, then $\Gamma \subseteq Q(\Sigma)$. \item \label{i:thinning} $\{\phi^u,\phi^f\} \cup \Sigma \subseteq Q(\{\phi^f\} \cup \Sigma)$.
\item \label{i:more focus} If $\phi^a \in Q(\Sigma)$ for some $a$ then $\phi^u, \phi^f \in Q(\allfocus{\Sigma})$. \end{enumerate} \end{proposition} \begin{proof} These statements are straightforward consequences of Definitions~\ref{d:mf} and~\ref{d:backcl}. For instance, in order to establish part \eqref{i:more focus} it suffices to prove the following: \begin{equation} \label{eq:974} \phi^{a} \in Q_{0}(\Sigma) \text{ only if } \phi^{f} \in Q(\Sigma^{f}). \end{equation} To see this, take an arbitrary annotated formula $\phi^{a} \in Q_{0}(\Sigma)$ and make a case distinction as to the reason why $\phi^{a} \in Q_{0}(\Sigma)$. (1) If $\phi^b \in \Sigma$ for some $b \sqsupseteq a$, then $\phi^{f} \in \Sigma^{f} \subseteq Q(\Sigma^{f})$. (2) If $\phi = \phi_0 \lor \phi_1$, and $\phi_0^a, \phi_1^a \in \Sigma$, then $\phi_0^f, \phi_1^f \in \Sigma^{f}$, so that $\phi^{f} \in Q_{0}(\Sigma^{f}) \subseteq Q(\Sigma^{f})$. (3) If $\phi = \phi_0 \land \phi_1$, and $\phi_i^a \in \Sigma$ for some $i \in \{0,1\}$, then $\phi_i^f \in \Sigma^{f}$, so that $\phi^{f} \in Q_{0}(\Sigma^{f}) \subseteq Q(\Sigma^{f})$. (4) If $\phi = \mu x . \phi_0$ and $\phi_0(\phi)^u \in \Sigma$, then $\phi_0(\phi)^f \in \Sigma^{f}$, hence $\phi_0(\phi)^u \in Q(\Sigma^{f})$, and so $\phi^{f} \in Q(Q(\Sigma^{f})) \subseteq Q(\Sigma^{f})$. Finally, (5) if $\phi = \nu x . \phi_0$ and $\phi_0(\phi)^a \in \Sigma$, then $\phi_0(\phi)^f \in \Sigma^{f}$, so that $\phi^{f} \in Q_{0}(\Sigma^{f}) \subseteq Q(\Sigma^{f})$ indeed. \end{proof} \begin{definition} A pre-proof $\Pi'$ of $\Gamma'$ is a \emph{simulation} of a pre-proof $\Pi$ of $\Gamma$ if $\Gamma \subseteq Q(\Gamma')$, and for every open assumption $\Delta'$ of $\Pi'$ there is an open assumption $\Delta$ of $\Pi$ such that $\Delta \subseteq Q(\Delta')$. \end{definition} In the proof below we will frequently use the following proposition, the proof of which is straightforward. \begin{proposition} \label{p:thinning} Let $\Gamma$ and $\Delta$ be two sequents such that $\Gamma \subseteq Q(\Delta)$. Then $\thin{\Delta}$ is thin and satisfies $\Gamma \subseteq Q(\thin{\Delta})$, and there is a thin, progressive proof $\Pi$ of $\Delta$, which has $\thin{\Delta}$ as its only open assumption and uses only the weakening rule. \end{proposition} \begin{proof} It is clear that $\thin{\Delta}$ is thin and that we may write $\Delta = \{\phi_1^u, \dots,\phi_n^u\} \cup \thin{\Delta}$, where $\phi_{1},\ldots,\phi_{n}$ are the formulas that occur both focused and unfocused in $\Delta$. We then let $\Pi$ be the proof that weakens the formulas $\phi_1^u,\dots, \phi_n^u$ one by one. By item~\ref{i:thinning} of Proposition~\ref{p:progressive facts} it follows that $\Delta \subseteq Q(\thin{\Delta})$. Thus, $\Gamma \subseteq Q(\Delta)$ implies $\Gamma \subseteq Q(\thin{\Delta})$ because $Q$ is a closure operator. \end{proof} The key technical observation in the proof of Theorem~\ref{t:tpp} is Proposition~\ref{p:ps} below. \begin{definition} \label{p:baspr} A pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is \emph{basic} if $T$ consists of the root $r$ and its successors, $\mathsf{R}_{r} \neq \RuDischarge{}$ and $\mathsf{R}_{u} = \star$ for every successor $u$ of $r$. \end{definition} A basic pre-proof is thus a pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ of $\Sigma_{r}$ (where $r$ is the root of $\Pi$) with open assumptions $\{ \Sigma_{u} \mid u \neq r \}$. \begin{proposition} \label{p:progressive simulation} \label{p:ps} Let $\Pi$ be a basic pre-proof of $\Gamma$ with root $r$ and let $\Gamma'$ be a sequent such that $\Gamma \subseteq Q(\Gamma')$.
Then there is a thin and progressive simulation $\Pi'$ of $\Pi$ that proves the sequent $\Gamma'$. Moreover, if $\mathsf{R}_{r} \neq \ensuremath{\mathsf{F}}\xspace, \ensuremath{\mathsf{U}}\xspace$ then $\Pi'$ does not use $\ensuremath{\mathsf{F}}\xspace$ or $\ensuremath{\mathsf{U}}\xspace$, and if $\mathsf{R}_{r} = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ then $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ is also the rule applied at the root of $\Pi'$. \end{proposition} Before we prove this proposition, we first show how our main theorem follows from it. \begin{proofof}{Theorem~\ref{t:tpp}} Let $\Pi = (T,P,\Sigma,\mathsf{R})$ be a \ensuremath{\mathsf{Focus}_\infty}\xspace-proof of the sequent $\Phi$; then by definition we have $\Sigma_{r} = \Phi^{f}$, where $r$ is the root of $\Pi$. Obviously we have $\Sigma_{r} \subseteq Q(\Sigma_{r})$. We will transform $\Pi$ into a thin and progressive proof of $\Phi$ as follows. On the basis of Proposition~\ref{p:ps} it is straightforward to define a map $\Xi$ which assigns a thin sequent $\Xi_{t}$ to each node $t \in T$, in such a way that $\Xi_{r} \mathrel{:=} \Sigma_{r}$, and for every $t \in T$ we find $\Sigma_{t} \subseteq Q(\Xi_{t})$, while we also have a thin and progressive pre-proof $\Pi_{t}$ of the sequent $\Xi_{t}$ from the assumptions $\{ \Xi_{u} \mid Ptu \}$. In addition we know that if $\mathsf{R}_{t} \neq \ensuremath{\mathsf{F}}\xspace, \ensuremath{\mathsf{U}}\xspace$, then the derivation $\Pi_{t}$ does not involve the focus rules, and that if $\mathsf{R}_{t} = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ then $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ is also the rule applied at the root of $\Pi_{t}$. We obtain a thin and progressive proof $\Pi'$ from this by simply adding all these thin and progressive derivations $\Pi_{t}$ to the `skeleton structure' $(T,P,\Xi)$, in the obvious way. It is easy to show that $\Pi'$ is a pre-proof, and the additional conditions on the focus rules and $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ guarantee that every infinite branch of $\Pi'$ witnesses infinitely many applications of $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$, but only finitely many applications of the focus rules. To prove the remaining condition on focused formulas, consider an infinite branch $\alpha = (v_{n})_{n\in\omega}$ of $\Pi'$. It is easy to see that by construction we may associate an infinite branch $\beta = (t_{n})_{n\in\omega}$ of $\Pi$ with $\alpha$, together with a map $f: \omega \to \omega$ such that $\Sigma_{t_{n}} \subseteq Q(\Xi_{v_{f(n)}})$. This path $\beta$ is successful since $\Pi$ is a proof, and so there is a $k \in \omega$ such that for all $n \geq k$ the sequent $\Sigma_{t_{n}}$ contains a formula in focus, and $\mathsf{R}(t_{n}) \notin \{\ensuremath{\mathsf{F}}\xspace,\ensuremath{\mathsf{U}}\xspace\}$. But by Proposition~\ref{p:pf}(\ref{i:pf:2}) for any $n\geq k$ such that $\mathsf{R}(t_{n}) = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$, the sequent $\Xi_{v_{f(n)}}$ must contain a focused formula as well. Since $\alpha$ features infinitely many applications of $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$, this implies the existence of infinitely many nodes $v_{m}$ on $\alpha$ such that $\Xi_{v_{m}}$ contains a focused formula. And since the focus rules are applied only finitely often on $\alpha$, by Proposition~\ref{p:lr1} it follows from this that $\alpha$ actually contains cofinitely many such nodes, as required.
Furthermore it is obvious that, being constructed by glueing together thin and progressive proofs, $\Pi'$ has these properties as well. Finally, since $\Xi_{r} = \Sigma_{r} = \Phi^{f}$, we have indeed obtained a proof for the plain sequent $\Phi$. \end{proofof} \begin{proofof}{Proposition~\ref{p:ps}} By definition of a basic proof, $\Pi = (T,P,\Sigma,\mathsf{R})$ consists of nothing more than a single application of the rule $\mathsf{R} \mathrel{:=} \mathsf{R}_{r}$ to the annotated sequent $\Gamma = \Sigma_{r}$, where $r$ is the root of $\Pi$. Because of Proposition~\ref{p:thinning} we can assume without loss of generality that $\Gamma'$ is thin. We then make a case distinction depending on the rule $\mathsf{R}$. Recall that we use $\ensuremath{\mathsf{W}}\xspace^*$ to denote a finite (possibly zero) number of successive applications of weakening. \begin{description} \item[\it Case for \ensuremath{\mathsf{Ax1}}\xspace:] In this case $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax1}}\xspace} \UnaryInfC{$p^a, \atneg{p}^b$} \end{prooftree} \end{center} The assumption is that $\{p^a,\atneg{p}^b\} \subseteq Q(\Gamma')$. By item~\ref{i:pf:2} in Proposition~\ref{p:progressive facts} it follows that $p^a,\atneg{p}^b \in \Gamma'$. We can thus define $\Pi'$ to be the proof \begin{center} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax1}}\xspace} \UnaryInfC{$p^a, \atneg{p}^b$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UnaryInfC{$\Gamma'$} \end{prooftree} \end{center} \item[\it Case for \ensuremath{\mathsf{Ax2}}\xspace:] In this case $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax2}}\xspace} \UnaryInfC{$\top^a$} \end{prooftree} \end{center} From the assumption that $\top^a \in Q(\Gamma')$ it follows with item~\ref{i:pf:2} of Proposition~\ref{p:progressive facts} that $\top^a \in \Gamma'$. We define $\Pi'$ to be the proof \begin{center} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax2}}\xspace} \UnaryInfC{$\top^a$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UnaryInfC{$\Gamma'$} \end{prooftree} \end{center} \item[\it Case for \ensuremath{\mathsf{R}_{\lor}}\xspace:] In this case $\Gamma = (\phi_0 \lor \phi_1)^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi_0^a,\phi_1^a,\Sigma$} \RightLabel{\ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInfC{$(\phi_0 \lor \phi_1)^a,\Sigma$} \end{prooftree} \end{center} Let $\phi \mathrel{:=} \phi_0 \lor \phi_1$. Because $\Gamma \subseteq Q(\Gamma')$ it follows that $\phi^a \in Q(\Gamma')$. By definition of $Q$ there are two cases for why this might hold: either $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$, or $\phi_0^a \in Q(\Gamma')$ and $\phi_1^a \in Q(\Gamma')$. In the latter case, where $\phi_0^a \in Q(\Gamma')$ and $\phi_1^a \in Q(\Gamma')$, we can let $\Pi'$ consist of just the sequent $\Gamma'$. This proof is thin and progressive, and it clearly follows that $\phi_0^a,\phi_1^a,\Sigma \subseteq Q(\Gamma')$ because $\Sigma \subseteq \Gamma \subseteq Q(\Gamma')$. In the former case, where $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$, consider the proof \begin{center} \begin{prooftree} \AxiomC{$\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\}$} \RightLabel{\ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInfC{$(\phi_0 \lor \phi_1)^b,\Gamma' \setminus \{\phi^b\}$} \end{prooftree} \end{center} We let $\Pi'$ be this proof.
Clearly, this is a proof of $\Gamma' = (\phi_0 \lor \phi_1)^b,\Gamma' \setminus \{\phi^b\}$ and it is progressive. Moreover, we have from the definition of $Q$ that $\phi_0^a,\phi_1^a \subseteq Q(\phi_0^b, \phi_1^b)$, as $b \sqsupseteq a$. By item~\ref{i:proof rules} of Proposition~\ref{p:progressive facts} it holds that $\Gamma' \subseteq Q(\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\})$. By assumption we have that $\Gamma \subseteq Q(\Gamma')$ and hence $\Sigma \subseteq \Gamma \subseteq Q(\Gamma') \subseteq Q(\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\})$. Putting all of these together it follows that \[ \phi_0^a,\phi_1^a,\Sigma \subseteq Q(\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\}). \] It remains to be seen that $\Pi'$ can be made thin. For the sequent $\Gamma'$ at the root of $\Pi'$ we have already established that it is thin. It might be, however, that the open assumption $\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\}$ is not thin. If this is the case we can simply apply Proposition~\ref{p:thinning} and obtain the required proof. \item[\it Case for \ensuremath{\mathsf{R}_{\land}}\xspace:] In this case $\Gamma = (\phi_0 \land \phi_1)^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi_0^a,\Sigma$} \AxiomC{$\phi_1^a,\Sigma$} \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace} \BinaryInfC{$(\phi_0 \land \phi_1)^a,\Sigma$} \end{prooftree} \end{center} Let $\phi \mathrel{:=} \phi_0 \land \phi_1$. Because $\Gamma \subseteq Q(\Gamma')$ it follows that $\phi^a \in Q(\Gamma')$. By the definition of $Q$ we may split into two cases: either $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$, or $\phi_i^a \in Q(\Gamma')$ for some $i \in \{0,1\}$. In the subcase where $\phi_i^a \in Q(\Gamma')$ for some $i \in \{0,1\}$ we let $\Pi'$ just be the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show that there is some open assumption $\Delta_i$ of $\Pi$ such that $\Delta_i \subseteq Q(\Gamma')$. Let this be the assumption $\phi_i^a, \Sigma$. We already know that $\phi_i^a \in Q(\Gamma')$, so it only remains to be seen that $\Sigma \subseteq Q(\Gamma')$. But this follows because $\Sigma \subseteq \Gamma$ and $\Gamma \subseteq Q(\Gamma')$. In the other subcase we have that $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$. We let $\Pi'$ be the proof \begin{center} \begin{prooftree} \AxiomC{$\phi_0^b,\Gamma' \setminus \{\phi^b\}$} \AxiomC{$\phi_1^b,\Gamma' \setminus \{\phi^b\}$} \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace} \BinaryInfC{$(\phi_0 \land \phi_1)^b,\Gamma' \setminus \{\phi^b\}$} \end{prooftree} \end{center} By definition this proof is progressive and it is a proof of $\Gamma' = (\phi_0 \land \phi_1)^b,\Gamma' \setminus \{\phi^b\}$. We then show that for each open assumption $\phi_i^b, \Gamma' \setminus \{\phi^b\}$ of $\Pi'$, where $i \in \{0,1\}$, there is the open assumption $\phi_i^a,\Sigma$ of $\Pi$ such that \begin{equation*} \phi_i^a,\Sigma \subseteq Q(\phi_i^b, \Gamma' \setminus \{\phi^b\}). \end{equation*} Because $a \sqsubseteq b$ it is clear that $\phi_i^a \in Q(\{\phi_i^b\})$. So we only need $\Sigma \subseteq Q(\phi_i^b, \Gamma' \setminus \{\phi^b\})$. But this follows from $\Sigma \subseteq \Gamma \subseteq Q(\Gamma')$ and the fact that $\Gamma' \subseteq Q(\phi_i^b,\Gamma' \setminus \{\phi^b\})$, which is item~\ref{i:proof rules} in Proposition~\ref{p:progressive facts}. Finally, as before, we use Proposition~\ref{p:thinning} to deal with non-thin open assumptions of $\Pi'$, if any.
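As a concrete illustration of this second subcase (with $p$, $q$ and $r$ arbitrary proposition letters): suppose $\Gamma = (p \land q)^{u}, r^{u}$ and $\Gamma' = (p \land q)^{f}, r^{u}$, so that $\Gamma \subseteq Q(\Gamma')$ holds via $b = f \sqsupseteq u$. The simulation $\Pi'$ then applies \ensuremath{\mathsf{R}_{\land}}\xspace to $(p \land q)^{f}$, producing the open assumptions $p^{f}, r^{u}$ and $q^{f}, r^{u}$; these match the open assumptions $p^{u}, r^{u}$ and $q^{u}, r^{u}$ of $\Pi$, since for instance $p^{u}, r^{u} \subseteq Q(p^{f}, r^{u})$ by the first clause of Definition~\ref{d:backcl}.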
\item[\it Case for \RuFp{\mu}:] In this case $\Gamma = (\mu x . \phi_0(x))^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi_0(\phi)^u,\Sigma$} \RightLabel{\RuFp{\mu}} \UnaryInfC{$(\mu x . \phi_0(x))^a,\Sigma$} \end{prooftree} \end{center} Here we write $\phi = \mu x . \phi_0(x)$. Because $\Gamma \subseteq Q(\Gamma')$ it follows that $\phi^a \in Q(\Gamma')$. By definition of $Q$ this gives us the cases that either $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$, or $\phi_0(\phi)^u \in Q(\Gamma')$. In the subcase where $\phi_0(\phi)^u \in Q(\Gamma')$ we let $\Pi'$ just be the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show $\phi_0(\phi)^u,\Sigma \subseteq Q(\Gamma')$. Because we are in the subcase for $\phi_0(\phi)^u \in Q(\Gamma')$ it suffices to show that $\Sigma \subseteq Q(\Gamma')$. But this follows because $\Sigma \subseteq \Gamma$ and $\Gamma \subseteq Q(\Gamma')$. In the other subcase we have that $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$. We let $\Pi'$ be the proof \begin{center} \begin{prooftree} \AxiomC{$\phi_0(\phi)^u,\Gamma' \setminus \{\phi^b\}$} \RightLabel{\RuFp{\mu}} \UnaryInfC{$(\mu x . \phi_0(x))^b,\Gamma' \setminus \{\phi^b\}$} \end{prooftree} \end{center} Clearly, this proof is progressive and it is a proof of $\Gamma' = (\mu x . \phi_0(x))^b,\Gamma' \setminus \{\phi^b\}$. We can also show that \[ \phi_0(\phi)^u,\Sigma \subseteq Q(\phi_0(\phi)^u, \Gamma' \setminus \{\phi^b\}). \] For this it clearly suffices to show that $\Sigma \subseteq Q(\phi_0(\phi)^u, \Gamma' \setminus \{\phi^b\})$. This follows from $\Sigma \subseteq \Gamma \subseteq Q(\Gamma')$ and the fact that $\Gamma' \subseteq Q(\phi_0(\phi)^u, \Gamma' \setminus \{\phi^b\})$, which comes from item~\ref{i:proof rules} in Proposition~\ref{p:progressive facts}. Finally, as before, we use Proposition~\ref{p:thinning} to deal with non-thin open assumptions of $\Pi'$, if any. \item[\it Case for \RuFp{\nu}:] In this case $\Gamma = (\nu x . \phi_0(x))^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi_0(\phi)^a,\Sigma$} \RightLabel{\RuFp{\nu}} \UnaryInfC{$(\nu x . \phi_0(x))^a,\Sigma$} \end{prooftree} \end{center} Here, we write $\phi = \nu x . \phi_0(x)$. Because $\Gamma \subseteq Q(\Gamma')$ it follows that $\phi^a \in Q(\Gamma')$. By the definition of $Q$ this gives us the cases that either $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$, or $\phi_0(\phi)^a \in Q(\Gamma')$. In the subcase where $\phi_0(\phi)^a \in Q(\Gamma')$ we let $\Pi'$ just be the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show $\phi_0(\phi)^a,\Sigma \subseteq Q(\Gamma')$. Because we are in the subcase for $\phi_0(\phi)^a \in Q(\Gamma')$ it suffices to show that $\Sigma \subseteq Q(\Gamma')$. But this follows because $\Sigma \subseteq \Gamma$ and $\Gamma \subseteq Q(\Gamma')$. In the other subcase we have that $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$. We let $\Pi'$ be the proof \begin{center} \begin{prooftree} \AxiomC{$\phi_0(\phi)^b,\Gamma' \setminus \{\phi^b\}$} \RightLabel{\RuFp{\nu}} \UnaryInfC{$(\nu x . \phi_0(x))^b,\Gamma' \setminus \{\phi^b\}$} \end{prooftree} \end{center} Clearly, this proof is progressive and it is a proof of $\Gamma' = (\nu x . \phi_0(x))^b,\Gamma' \setminus \{\phi^b\}$. We can also show that \[ \phi_0(\phi)^a,\Sigma \subseteq Q(\phi_0(\phi)^b, \Gamma' \setminus \{\phi^b\}). \] Because $a \sqsubseteq b$ it is clear that $\phi_0(\phi)^a \in Q(\{\phi_0(\phi)^b\})$.
So it clearly suffices to show that $\Sigma \subseteq Q(\phi_0(\phi)^b, \Gamma' \setminus \{\phi^b\})$. This follows from $\Sigma \subseteq \Gamma \subseteq Q(\Gamma')$ and the fact that $\Gamma' \subseteq Q(\phi_0(\phi)^b, \Gamma' \setminus \{\phi^b\})$, which comes from item~\ref{i:proof rules} in Proposition~\ref{p:progressive facts}. Any remaining non-thin open assumptions are dealt with using Proposition~\ref{p:thinning}. \item[\it Case for \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace:] In this case $\Gamma$ must be of the form $\Gamma = \Box\phi^{a},\Diamond\Sigma$, and $\Pi$ is the derivation \begin{center} \begin{prooftree} \AxiomC{$\phi^{a},\Sigma$} \RightLabel{\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace} \UnaryInfC{$\Box\phi^a,\Diamond\Sigma$} \end{prooftree} \end{center} Because $\Gamma \subseteq Q(\Gamma')$ it follows from Proposition~\ref{p:pf}\eqref{i:pf:2} that $\morefocus{\Gamma}{\Gamma'}$. But then $\Gamma'$ must contain a subset of the form $\Box\phi^{b},\Diamond\Sigma'$, with $a \sqsubseteq b$ and $\morefocus{\Sigma}{\Sigma'}$. Consider the following derivation $\Pi'$: \begin{center} \begin{prooftree} \AxiomC{$\phi^{b},\Sigma'$} \RightLabel{\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace} \UnaryInfC{$\Box\phi^b,\Diamond\Sigma'$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UnaryInfC{$\Gamma'$} \end{prooftree} \end{center} It is easy to see that we have $\morefocus{\Delta}{\Delta'}$, where $\Delta \mathrel{:=} \phi^{a},\Sigma$ and $\Delta' \mathrel{:=} \phi^{b},\Sigma'$ are the assumptions of the pre-proofs $\Pi$ and $\Pi'$, respectively. Furthermore, the proof $\Pi'$ is obviously progressive, and if not thin already, it can be made so by applying Proposition~\ref{p:thinning}. \item[\it Case for \ensuremath{\mathsf{W}}\xspace:] In this case $\Gamma = \phi^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\Sigma$} \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInfC{$\phi^a,\Sigma$} \end{prooftree} \end{center} We can let $\Pi'$ consist of just the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show that $\Sigma \subseteq Q(\Gamma')$. Clearly $\Sigma \subseteq \Gamma$, and $\Gamma \subseteq Q(\Gamma')$ holds by assumption. \item[\it Case for \ensuremath{\mathsf{F}}\xspace:] In this case $\Gamma = \phi^u,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi^f,\Sigma$} \RightLabel{\ensuremath{\mathsf{F}}\xspace} \UnaryInfC{$\phi^u,\Sigma$} \end{prooftree} \end{center} We let $\Pi'$ be the proof \begin{center} \begin{prooftree} \AxiomC{$\allfocus{(\Gamma')}$} \RightLabel{$\ensuremath{\mathsf{F}}\xspace^*$} \UnaryInfC{$\Gamma'$} \end{prooftree} \end{center} Here, $\allfocus{(\Gamma')} = \{\phi^f \mid \phi^a \in \Gamma' \mbox{ for some } a \in \{u,f\}\}$, as in Proposition~\ref{p:progressive facts}, and $\ensuremath{\mathsf{F}}\xspace^*$ denotes as many applications of the focus rule as we need to put every formula in $\Gamma'$ in focus. This proof $\Pi'$ is trivially progressive, and it is thin because $\Gamma'$ is thin and changing the annotations of some formulas in $\Gamma'$ in this way still yields a thin sequent. From item~\ref{i:more focus} of Proposition~\ref{p:progressive facts} it is clear that $\phi^f,\Sigma \subseteq Q(\allfocus{(\Gamma')})$ is implied by $\phi^u,\Sigma \subseteq Q(\Gamma')$.
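For instance, if $\Gamma' = p^{u}, (\Box\psi)^{f}$, then $\allfocus{(\Gamma')} = p^{f}, (\Box\psi)^{f}$, and $\ensuremath{\mathsf{F}}\xspace^{*}$ amounts to a single application of \ensuremath{\mathsf{F}}\xspace, putting $p$ in focus.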
\item[\it Case for \ensuremath{\mathsf{U}}\xspace:] In this case $\Gamma = \phi^f,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi^u,\Sigma$} \RightLabel{\ensuremath{\mathsf{U}}\xspace} \UnaryInfC{$\phi^f,\Sigma$} \end{prooftree} \end{center} We can let $\Pi'$ consist of just the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show that $\phi^u,\Sigma \subseteq Q(\Gamma')$. By the definition of $Q$ we have that $\phi^u \in Q(\phi^f)$. Thus $\phi^u, \Sigma \subseteq Q(\phi^f,\Sigma)$. Moreover, we have by assumption that $\phi^f,\Sigma = \Gamma \subseteq Q(\Gamma')$. Putting this together, and using that $Q$ is a closure operator, we get $\phi^u,\Sigma \subseteq Q(\Gamma')$. \end{description} Since we have covered all the cases in the above case distinction, this proves the main part of the proposition. The additional statements about the focus rules and the rule $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ can easily be verified from the definition of $\Pi'$ given above. \end{proofof} \section{Soundness} \label{s:soundness} In this section we show that our proof systems are sound, meaning that any provable formula is valid. Because of the adequacy of the tableau game that was established in Theorem~\ref{t:adequacy} it suffices to show that for every provable formula Prover\xspace has a winning strategy in some tableau for this formula. Moreover, we only need to consider proofs in \ensuremath{\mathsf{Focus}_\infty}\xspace because by Theorem~\ref{t:same} every formula that is provable in \ensuremath{\mathsf{Focus}}\xspace is also provable in \ensuremath{\mathsf{Focus}_\infty}\xspace. \begin{theorem} \label{t:soundness} Let $\Phi$ be some sequent. If $\Phi$ is provable in \ensuremath{\mathsf{Focus}_\infty}\xspace then there is some tableau $\mathstr{T}$ for $\Phi$ such that Prover\xspace has a winning strategy in $\game{\mathstr{T}}$. \end{theorem} We will prove the soundness theorem by transforming a thin and progressive $\ensuremath{\mathsf{Focus}_\infty}\xspace$-proof of $\Phi$ into a winning strategy for Prover\xspace in the tableau game associated with some tableau for $\Phi$. To tighten the connection between proofs and tableaux, we first consider the notion of an (annotated) trail in the setting of $\ensuremath{\mathsf{Focus}_\infty}\xspace$-proofs. \begin{definition} \label{d:trails} Let $\Pi = (T,P,\Sigma,\mathsf{R})$ be a thin and progressive proof in \ensuremath{\mathsf{Focus}_\infty}\xspace. For all nodes $u,v \in T$ such that $P u v$ we define the \emph{active trail relation} $\atrail_{u,v} \subseteq \Sigma_u \times \Sigma_v$ and the \emph{passive trail relation} $\ptrail_{u,v} \subseteq \Sigma_u \times \Sigma_v$ by a case distinction depending on the rule that is applied at $u$. Here we use the notation $\Delta_{S} \mathrel{:=} \{(s,s) \mid s \in S\}$, for any set $S$. \textit{Case $\mathsf{R}(u) = \ensuremath{\mathsf{R}_{\lor}}\xspace$:} Then $\Sigma_u = \{(\phi \lor \psi)^a\} \uplus \Gamma$ and $\Sigma_v = \{\phi^a,\psi^a\} \cup \Gamma$, for some annotated sequent $\Gamma$. We define $\atrail_{u,v} \mathrel{:=} \{((\phi \lor \psi)^a,\phi^a),((\phi \lor \psi)^a, \psi^a)\}$ and $\ptrail_{u,v} \mathrel{:=} \Delta_\Gamma$. \textit{Case $\mathsf{R}(u) = \ensuremath{\mathsf{R}_{\land}}\xspace$:} In this case $\Sigma_u = \{(\phi_{0} \land \phi_{1})^{a}\} \uplus \Gamma$ and $\Sigma_v = \{\phi_{i}^a\} \cup \Gamma$ for some $i \in \{0,1\}$ and some annotated sequent $\Gamma$.
We set $\atrail_{u,v} \mathrel{:=} \{((\phi_{0} \land \phi_{1})^a,\phi_{i}^a)\}$ and $\ptrail_{u,v} \mathrel{:=} \Delta_\Gamma$. \textit{Case $\mathsf{R}(u) = \RuFp{\mu}$:} Then $\Sigma_u = \{\mu x . \phi^a\} \uplus \Gamma$ and $\Sigma_v = \{\phi[\mu x . \phi / x]^u\} \cup \Gamma$ for some sequent $\Gamma$. We define $\atrail_{u,v} \mathrel{:=} \{(\mu x . \phi^a,\phi[\mu x . \phi / x]^u)\}$ and $\ptrail_{u,v} \mathrel{:=} \Delta_\Gamma$. \textit{Case $\mathsf{R}(u) = \RuFp{\nu}$:} Then $\Sigma_u = \{\nu x . \phi^a\} \uplus \Gamma$ and $\Sigma_v = \{\phi[\nu x . \phi / x]^a\} \cup \Gamma$ for some sequent $\Gamma$. We define $\atrail_{u,v} \mathrel{:=} \{(\nu x . \phi^a,\phi[\nu x . \phi / x]^a)\}$ and $\ptrail_{u,v} \mathrel{:=} \Delta_\Gamma$. \textit{Case $\mathsf{R}(u) = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$:} Then $\Sigma_u = \{\Box \phi^a\} \cup \Diamond \Gamma$ and $\Sigma_v = \{\phi^a\} \cup \Gamma$ for some annotated sequent $\Gamma$. We define $\atrail_{u,v} = \{(\Box \phi^a,\phi^a)\} \cup \{(\Diamond \psi^b,\psi^b) \mid \psi^b \in \Gamma\}$ and $\ptrail_{u,v} = \varnothing$. \textit{Case $\mathsf{R}(u) = \ensuremath{\mathsf{W}}\xspace$:} In this case $\Sigma_u = \Sigma_v \uplus \{ \phi^{a} \}$ and we set $\atrail_{u,v} \mathrel{:=} \varnothing$ and $\ptrail_{u,v} \mathrel{:=}\Delta_{\Sigma_v}$. \textit{Case $\mathsf{R}(u) = \ensuremath{\mathsf{F}}\xspace$:} Then $\Sigma_u = \{\phi^u\} \cup \Gamma$ and $\Sigma_v = \{\phi^f\} \cup \Gamma$ for some annotated sequent $\Gamma$. We define $\atrail_{u,v} = \varnothing$ and $\ptrail_{u,v} = \{(\phi^u,\phi^f)\} \cup \Delta_{\Gamma}$. \textit{Case $\mathsf{R}(u) = \ensuremath{\mathsf{U}}\xspace$:} Then $\Sigma_u = \{\phi^f\} \cup \Gamma$ and $\Sigma_v = \{\phi^u\} \cup \Gamma$ for some annotated sequent $\Gamma$. We define $\atrail_{u,v} = \varnothing$ and $\ptrail_{u,v} = \{(\phi^f,\phi^u)\} \cup \Delta_{\Gamma}$. We also define the \emph{general trail relation} $\gtrail_{u,v} \mathrel{:=} \atrail_{u,v} \cup \ptrail_{u,v}$ for all nodes $u$ and $v$ with $Puv$. \end{definition} Note that in the case distinction of Definition~\ref{d:trails}, it is not possible that $u$ is an axiomatic leaf since it has a successor, and it is not possible that $\mathsf{R}(u) \in \mathcal{D} \cup \{\RuDischarge{\ensuremath{\mathsf{x}}} \mid \ensuremath{\mathsf{x}} \in \mathcal{D}\}$ since $\Pi$ is a proof in \ensuremath{\mathsf{Focus}_\infty}\xspace. We extend the trail relation $\gtrail_{u,v}$ to any two nodes such that $u$ is an ancestor of $v$ in the underlying proof tree. \begin{definition} Let $u,v$ be nodes of a proof tree $\Pi = (T,P,\Sigma,\mathsf{R})$ such that $P^{*}uv$. The relation $\gtrail_{u,v}$ is defined inductively such that $\gtrail_{u,u} \mathrel{:=} \Delta_{\Sigma_{u}}$, and if $Puw$ and $P^{*}wv$ then $\gtrail_{u,v} \mathrel{:=} \gtrail_{u,w} \mathop{;} \gtrail_{w,v}$, where $\mathop{;}$ denotes relational composition. \end{definition} As in the case of tableaux, we will be specifically interested in infinite trails. \begin{definition} An \emph{(annotated) trail} on an infinite path $\alpha = (v_{n})_{n\in\omega}$ in a \ensuremath{\mathsf{Focus}_\infty}\xspace-proof $\Pi$ is an infinite sequence $\tau = (\phi_n^{a_n})_{n\in\omega}$ of annotated formulas such that $(\phi_i^{a_i}, \phi_{i + 1}^{a_{i+1}}) \in \gtrail_{v_i,v_{i+1}}$ for all $i \in \omega$. The tightening of such an annotated trail is defined exactly as in the case of plain trails.
An infinite trail $\tau$ is an \emph{$\eta$-trail}, for $\eta \in \{ \mu,\nu \}$, if its tightening $\rdc{\tau}$ is an $\eta$-trace. \end{definition} The central observation about the focus mechanism is that it forces every infinite branch in a thin and progressive \ensuremath{\mathsf{Focus}_\infty}\xspace-proof to carry a $\nu$-trail. \begin{proposition} \label{p:nu-trail} Every infinite branch in a thin and progressive \ensuremath{\mathsf{Focus}_\infty}\xspace-proof carries a $\nu$-trail. \end{proposition} \begin{proof} Consider an infinite branch $\alpha = (v_{n})_{n\in\omega}$ in some $\ensuremath{\mathsf{Focus}_\infty}\xspace$-proof $\Pi = (T,P,\Sigma,\mathsf{R})$. Then $\alpha$ is successful, since $\Pi$ is a \ensuremath{\mathsf{Focus}_\infty}\xspace-proof; we may thus fix a $k$ such that for every $j \geq k$, the sequent $\Sigma_{v_{j}}$ contains a formula in focus, and $\mathsf{R}(v_{j})$ is not a focus rule. We claim that \begin{equation} \label{e:predecessor} \text{for every $j \geq k$ and $\psi^f \in \Sigma_{v_{j+1}}$ there is some $\chi^f \in \Sigma_{v_{j}}$ such that $(\chi^{f},\psi^{f}) \in \gtrail_{v_{j},v_{j+1}}$}. \end{equation} To see this, let $j \geq k$ and $\psi^f \in \Sigma_{v_{j+1}}$. It is obvious that there is some annotated formula $\chi^a \in \Sigma_{v_{j}}$ with $(\chi^{a},\psi^{f}) \in \gtrail_{v_{j},v_{j+1}}$. The key observation is now that in fact $a = f$, and this holds because the only way that we could have $(\chi^{u},\psi^{f}) \in \gtrail_{v_{j},v_{j+1}}$ is if we applied the focus rule at $v_{j}$, which would contradict our assumption on the nodes $v_{j}$ for $j \geq k$. Now consider the graph $(V,E)$ where \[ V \mathrel{:=} \{ (j,\phi) \mid k \leq j < \omega \text{ and } \phi^{f} \in \Sigma_{v_{j}} \}, \] and \[ E \mathrel{:=} \big\{ \big( (j,\phi),(j+1,\psi) \big) \mid (\phi^{f},\psi^{f}) \in \gtrail_{v_{j},v_{j+1}} \big\}. \] This graph is directed, acyclic, infinite and finitely branching. Furthermore, it follows by \eqref{e:predecessor} that every node $(j,\phi)$ is reachable in $(V,E)$ from some node $(k,\psi)$. Then by a (variation of) K\"{o}nig's Lemma there is an infinite path $(n,\phi_{n})_{k \leq n < \omega}$ in this graph. The induced sequence $\tau \mathrel{:=} (\phi_{n}^f)_{k \leq n < \omega}$ is a trail on the tail $(v_{n})_{k \leq n < \omega}$ of $\alpha$ because consecutive formulas are related by the trail relation. By guardedness, $\tau$ must be either a $\mu$-trail or a $\nu$-trail. But $\tau$ cannot feature infinitely many $\mu$-formulas, since it is not possible to unravel a $\mu$-formula $\phi_{j}^{f}$ and end up with a formula of the form $\phi_{j + 1}^f$, simply because the rule \RuFp{\mu} attaches the label $u$ to the unravelling of $\phi_{j}$. This means that $\tau$ cannot be a $\mu$-trail, and hence it must be a $\nu$-trail. \end{proof} \begin{proofof}{Theorem~\ref{t:soundness}} Let $\Pi = (T,P,\Sigma,\mathsf{R})$ be a \ensuremath{\mathsf{Focus}_\infty}\xspace-proof for $\Phi^f$. By Theorem~\ref{t:tpp} we may assume without loss of generality that $\Pi$ is thin and progressive. We are going to construct a tableau $\mathstr{T} = (V,E,\Phi,\mathsf{Q},v_I)$ and a winning strategy for Prover\xspace in $\game{\mathstr{T}}$. Our construction will be such that $(V,E)$ is a potentially infinite tree, of which the winning strategy $S \subseteq V$ for Prover\xspace is a subtree, as in Remark~\ref{r:treestrat}. The construction of $\mathstr{T}$ and $S$ proceeds via an induction that starts from the root and in every step adds children to one of the nodes in the subtree $S$ that is not yet an axiom.
Nodes of $\mathstr{T}$ that are not in $S$ are always immediately completely extended using Proposition~\ref{p:tableau exists}. Thus, they do not have to be treated in the inductive construction. The construction of $S$ is guided by the structure of $\Pi$. In addition to the tableau $\mathstr{T}$ we will construct a function $g : S \to T$ mapping those nodes of $\mathstr{T}$ that belong to the strategy $S$ to nodes of $\Pi$. This function will satisfy the following three conditions, which will allow us to lift the $\nu$-trails from $\Pi$ to $S$: \begin{enumerate} \item \label{i:order preserving} If $Euv$ then $P^* g(u) g(v)$. \item \label{i:tra} The sequent $\Sigma_{g(u)}$ is thin, and $\uls{\Sigma}_{g(u)} \subseteq \Phi_{u}$. \item \label{i:trb} If $Euv$ and $(\psi^b,\phi^a) \in \gtrail^{\Pi}_{g(u),g(v)}$ then $(\psi,\phi) \in \gtrail^{\mathstr{T}}_{u,v}$. \end{enumerate} We now describe the iterative construction of the approximating objects $\mathstr{T}_i$, $S_i$ and $g_i$ for all $i \in \omega$, which in the limit will yield $\mathstr{T}$, $S$ and $g$. Each $\mathstr{T}_i$ will be a \emph{pre-tableau}, that is, an object as defined in Definition~\ref{d:tableau}, except that we do not require the rule labelling to be defined for every leaf of the tree. Leaves without labels will be called \emph{undetermined}, and the basic idea underlying the construction is that each step will take care of one undetermined leaf. We will make sure that in each step $i$ of the construction, the entities $\mathstr{T}_i$, $S_i$ and $g_i$ satisfy the conditions \ref{i:order preserving}, \ref{i:tra} and \ref{i:trb}, and moreover ensure that all undetermined leaves of $\mathstr{T}_i$ belong to $S_i$. It is easy to see that then also $S$ and $g$ satisfy these conditions. \medskip In the base case we let $\mathstr{T}_0$ consist of just the root $v_I$, labelled with the sequent $\Phi$. We let $g_0(v_I)$ be the root of the proof $\Pi$. The strategy $S_0$ just contains the node $v_I$. \medskip In the inductive step we assume that we have already constructed a pre-tableau $\mathstr{T}_i$, a subtree $S_i$ corresponding to Prover\xspace's strategy and a function $g_i : S_i \to T$ satisfying the above conditions \ref{i:order preserving} -- \ref{i:trb}. To extend these objects further we fix an undetermined leaf $l$ of $S_i$. We may choose $l$ such that its distance to the root of $\mathstr{T}_i$ is minimal among all the undetermined leaves of $\mathstr{T}_i$. This will guarantee that every undetermined leaf gets treated eventually and thus ensure that the trees $S$ and $\mathstr{T}$ in the limit do not contain any undetermined leaves. We distinguish cases depending on the rule that is applied in $\Pi$ at $g_i(l)$. \smallskip \textit{Case $\mathsf{R}(g_i(l)) = \ensuremath{\mathsf{Ax1}}\xspace$ or $\mathsf{R}(g_i(l)) = \ensuremath{\mathsf{Ax2}}\xspace$:} In this case we may simply label the node $l$ with the corresponding axiom; apart from this, we do not change $\mathstr{T}_{i}$, $S_{i}$ or $g_{i}$. Note that $l$ will remain an (axiomatic) leaf of the tableau $\mathstr{T}$. \smallskip \textit{Case $\mathsf{R}(g_i(l)) = \ensuremath{\mathsf{R}_{\lor}}\xspace$:} If the rule applied at $g_i(l)$ is \ensuremath{\mathsf{R}_{\lor}}\xspace with principal formula, say, $(\phi\lor\psi)^{a}$, then this application of $\ensuremath{\mathsf{R}_{\lor}}\xspace$ is followed by a (possibly empty) series of applications of weakening until a descendant $t$ of $g_{i}(l)$ is reached that is labelled with a thin sequent.
By condition~\ref{i:tra} the formula $\phi \lor \psi$ occurs at $l$, as it occurs in $g_i(l)$, so that we may label $l$ with the disjunction rule as well. We extend $\mathstr{T}_i$, $S_i$ and $g_i$ accordingly, meaning that $\mathstr{T}_{i + 1}$ is $\mathstr{T}_i$ extended with one node $v$ that is labelled with the premise of the application of the disjunction rule, $S_{i + 1}$ is $S_i$ extended to contain $v$, and $g_{i + 1}$ is just like $g_i$ but additionally maps $v$ to $t$. It is easy to check that with these definitions, the conditions \ref{i:order preserving} -- \ref{i:trb} are satisfied. For condition~\ref{i:tra} we need the fact that the formula $(\phi\lor\psi)^{\ol{a}}$ does not occur as a side formula in $\Sigma_{g_{i}(l)}$ since the latter sequent is thin, so that, as $\Pi$ is also progressive, the formula $\phi\lor\psi$ does not appear in the premise of the rule at all, and hence not in $\Sigma_{t}$ either. \smallskip \textit{Case $\mathsf{R}(g_i(l)) = \ensuremath{\mathsf{R}_{\land}}\xspace$:} In the case where \ensuremath{\mathsf{R}_{\land}}\xspace is applied at $g_i(l)$ with principal formula $(\phi \land \psi)^a$ it follows that $g_i(l)$ has a child $s_\phi$ for $\phi^a$ and a child $s_\psi$ for $\psi^a$, and that these nodes have thin descendants $t_{\phi}$ and $t_{\psi}$, respectively, each of which is reached by a possibly empty series of weakenings. By condition~\ref{i:tra} it follows that $\phi \land \psi \in \Phi_l$. We can then apply the conjunction rule at $l$ to the formula $\phi \land \psi$ and obtain two new premises $v_\phi$ and $v_\psi$, one for each of the conjuncts. $\mathstr{T}_{i + 1}$ is defined to extend $\mathstr{T}_i$ with these two additional children. We let $S_{i + 1}$ include both nodes $v_\phi$ and $v_\psi$, as the conjunction rule belongs to Refuter\xspace in the tableau game. Moreover, $g_{i + 1}$ is the same as $g_i$ on the domain of $g_i$, while it maps $v_\phi$ to $t_\phi$ and $v_\psi$ to $t_\psi$. It is easy to check that the conditions \ref{i:order preserving} -- \ref{i:trb} are satisfied, where for condition~\ref{i:tra} we use the thinness and progressivity of $\Pi$ as in the case for $\ensuremath{\mathsf{R}_{\lor}}\xspace$. \smallskip \textit{Case $\mathsf{R}(g_i(l)) = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$:} We want to match this application of \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace in $\Pi$ with an application of the rule \ensuremath{\mathsf{\mathsf{M}}}\xspace in the tableau system. To make this work, however, two difficulties need to be addressed. Let $s$ be the successor of $g_{i}(l)$ in $\Pi$, and note that, as before, the application of $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ is followed by a possibly empty series of weakenings until a descendant $t$ of $s$ is reached that is labelled with a thin sequent. The first issue is that to apply the rule \ensuremath{\mathsf{\mathsf{M}}}\xspace in the tableau system, every formula in the consequent must be either atomic or modal, whereas the sequent $\Phi_{l}$ may contain boolean or fixpoint formulas. The second difficulty is that the rule \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace in the focus proof system has only one premise, whereas the tableau rule \ensuremath{\mathsf{\mathsf{M}}}\xspace has one premise for each box formula in the conclusion.
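Both difficulties can be seen in a small, hypothetical configuration: suppose that $\Sigma_{g_i(l)} = (\Box\phi)^{f}, (\Diamond\psi)^{u}$ while $\Phi_{l} = p \lor q, \Box\phi, \Box\chi, \Diamond\psi$. Then \ensuremath{\mathsf{\mathsf{M}}}\xspace is not applicable at $l$ as long as the disjunction $p \lor q$ has not been broken down, and once it becomes applicable it will have one premise for $\Box\phi$ and one for $\Box\chi$, whereas the application of \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace in $\Pi$ accounts only for $\Box\phi$.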
To address the first difficulty we step by step apply the Boolean rules (\ensuremath{\mathsf{R}_{\lor}}\xspace and \ensuremath{\mathsf{R}_{\land}}\xspace) to break down all the Boolean formulas in $\Phi_l$, and the fixpoint rules (\RuFp{\mu} and \RuFp{\nu}) to unfold all fixpoint formulas. Because the rule \ensuremath{\mathsf{R}_{\land}}\xspace is branching, this process generates a subtree $\mathstr{T}_{l}$ at $l$ such that all leaves of $\mathstr{T}_{l}$ contain literals and modal formulas only. Moreover, any modal formula from $\Phi_l$ is still present in $\Phi_m$, for any such leaf $m$, because modal formulas are not affected by the application of Boolean or fixpoint rules. We add all nodes of $\mathstr{T}_{l}$ to the strategy $S_{i+1}$, and we define $g_{i + 1}(u) \mathrel{:=} g_i(l)$ for any $u$ in this subtree. To see that this does not violate condition~\ref{i:tra} or \ref{i:trb}, note that all formulas in $\Sigma_{g_i(l)}$ are modal and so, as we saw, remain present throughout the subtree. Note that $\mathstr{T}_{l}$ may contain leaves $m$ such that $\Phi_{m}$ does not meet the side condition (\dag) of the modal rule $\ensuremath{\mathsf{\mathsf{M}}}\xspace$; this means, however, that $\Phi_{m}$ is axiomatic, so that we may label such a leaf $m$ with either $\ensuremath{\mathsf{Ax1}}\xspace$ or $\ensuremath{\mathsf{Ax2}}\xspace$. We then want to expand any remaining leaf in $\mathstr{T}_{l}$ by applying the modal rule \ensuremath{\mathsf{\mathsf{M}}}\xspace. To see how this is done, fix such a leaf $m$. Applying the modal rule of the tableau system at $m$ generates a new child $n_\chi$ for every box formula $\Box \chi \in \Phi_m$. At this point we have to solve our second difficulty mentioned above, which is to select one child $n_\chi$ to add into $S_{i + 1}$ and finish the construction of the tableau for all other children. To select the appropriate child of $m$, consider the unique box formula $\Box \phi$ such that $\Box \phi^a \in \Sigma_{g_i(l)}$ for some $a \in \{f,u\}$ --- such a formula exists because \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace is applied at $g_i(l)$. By condition~\ref{i:tra} we then have $\Box \phi \in \Phi_l$ and from this it follows, as we saw already, that $\Box \phi \in \Phi_m$. We select the child $n_\phi$ of $m$ to be added to $S_{i + 1}$ and set $g_{i+1}(n_\phi) = t$, where $t$ is as defined above. It is not hard to see that this definition satisfies the conditions~\ref{i:tra} and~\ref{i:trb}, because all diamond formulas in $\Sigma_{g_i(l)}$ are also in $\Phi_l$ and thus still present in $\Phi_m$. We still need to deal with the other children of $m$, since these are still undetermined but not in $S_{i + 1}$, something we do not allow in our iterative construction. To solve this issue we simply use Proposition~\ref{p:tableau exists} to obtain a new tree-shaped tableau $\mathstr{T}_k$ for any such child $k$ of $m$ with $k \neq n_\phi$. For the definition of $\mathstr{T}_{i + 1}$ we append $\mathstr{T}_k$ above the child $k$. Hence, the only undetermined leaf that is left above $m$ in $\mathstr{T}_{i+1}$ is the node $n_\phi$, which belongs to $S_{i + 1}$. \smallskip \textit{Case $\mathsf{R}(g_i(l)) = \RuFp{\mu}$ or $\mathsf{R}(g_i(l)) = \RuFp{\nu}$:} The case for the fixpoint rules is similar to the case for $\ensuremath{\mathsf{R}_{\lor}}\xspace$: we just apply the corresponding fixpoint rule on the tableau side.
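For instance, if \RuFp{\nu} is applied at $g_i(l)$ with principal formula $(\nu x . \phi_0(x))^{a}$, then condition~\ref{i:tra} gives $\nu x . \phi_0(x) \in \Phi_l$, so we may apply \RuFp{\nu} at $l$ as well, and map the resulting tableau node to the thin descendant reached from the premise of $g_i(l)$ by weakenings, exactly as in the case for \ensuremath{\mathsf{R}_{\lor}}\xspace.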
\smallskip \textit{Case $\mathsf{R}(g_i(l)) = \ensuremath{\mathsf{W}}\xspace$:} Note that in this case the sequent $\Sigma_{t}$, associated with the successor node $t$ of $g_i(l)$, being the premise of an application of the weakening rule, is a (proper) subset of the conclusion sequent $\Sigma_{g_i(l)}$. In this case we simply define $\mathstr{T}_{i+1} \mathrel{:=} \mathstr{T}_i$ and $S_{i+1} \mathrel{:=} S_i$, but we modify $g_i$ so that $g_{i+1} : S_{i+1} \to T$ satisfies $g_{i+1}(l) = t$ and $g_{i+1}(k) = g_i(k)$ for all $k \neq l$. This clearly satisfies condition~\ref{i:order preserving}. To see that it satisfies the other two conditions we use the facts that $\Sigma_t \subseteq \Sigma_{g_i(l)}$, and that the trail relation for the weakening rule is trivial. However, after applying this step we still have that $l$ is an undetermined leaf of $\mathstr{T}_{i + 1}$. Thus the construction does not really make progress in this step, and one might worry that not all undetermined leaves get treated eventually. We address this matter further below. \smallskip \textit{Case $\mathsf{R}(g_i(l)) = \ensuremath{\mathsf{F}}\xspace$ or $\mathsf{R}(g_i(l)) = \ensuremath{\mathsf{U}}\xspace$:} The case for the focus rules \ensuremath{\mathsf{F}}\xspace and \ensuremath{\mathsf{U}}\xspace is analogous to the previous case for the weakening rule \ensuremath{\mathsf{W}}\xspace. The fact that the annotations of formulas change has no bearing on the conditions. \smallskip We now address the problem that in the cases for \ensuremath{\mathsf{W}}\xspace and the focus rules, we do not extend $\mathstr{T}_i$ at its undetermined leaf $l$. Thus, without further argument it would seem possible that the construction loops through these cases without ever making progress at the undetermined leaf $l$. To see that this cannot happen, note first that in each of these cases we are moving on in the proof $\Pi$, in the sense that $g_{i + 1}(l) \neq g_i(l)$ and $(g_i(l),g_{i + 1}(l)) \in P$. Thus, if we never made progress at $l$, we would have to follow an infinite path in $\Pi$ of which every node is labelled with \ensuremath{\mathsf{W}}\xspace, \ensuremath{\mathsf{F}}\xspace or \ensuremath{\mathsf{U}}\xspace. This is impossible, however: such a path would feature no applications of \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace at all, whereas every infinite branch in $\Pi$ is successful and hence features infinitely many applications of \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace. \medskip It remains to be seen that $S$ is a winning strategy for Prover\xspace. It is clear that Prover\xspace wins all finite matches that are played according to $S$ because by construction all leaves in $S$ are axioms. To show that all infinite matches are winning, consider an infinite path $\beta = (v_{n})_{n\in\omega}$ in $S$. We need to show that $\beta$ contains a $\nu$-trail. Using condition~\ref{i:order preserving} it follows that there is an infinite path $\alpha = (t_{n})_{n\in\omega}$ in $\Pi$ such that for every $i \in \omega$ we have that $g(v_i) = t_{k_i}$ for some $k_i \in \omega$, and, moreover, $k_i \leq k_j$ if $i \leq j$. By Proposition~\ref{p:nu-trail} the infinite path $\alpha$ contains a $\nu$-trail $\tau = \phi_0^{a_0} \phi_1^{a_1} \cdots$. With condition~\ref{i:trb} it follows that $\tau' \mathrel{:=} \phi_{k_0} \phi_{k_1} \phi_{k_2} \cdots$ is a trail on $\beta$. By Proposition~\ref{p:af4}, $\tau$ contains only finitely many $\mu$-formulas; from this it is immediate that $\tau'$ also features at most finitely many $\mu$-formulas. Thus, using Proposition~\ref{p:af4} a second time, we find that $\tau'$ is a $\nu$-trail, as required.
\end{proofof} \section{Tableaux and tableau games} \label{s:nw tableaux} \label{sec-tab} In this section we define a tableau game for the alternation-free $\mu$-calculus that is an adaptation of the tableau game by Niwi\'{n}ski and Walukiewicz \cite{niwi:game96}. We also show that the tableau game is adequate with respect to the semantics in Kripke frames, meaning that Prover\xspace has a winning strategy in the tableau game for some tableau of some formula iff the formula is valid. The soundness and completeness proofs for the focus system of this paper rely on this result. There we will exploit the fact that proofs in the focus system closely correspond to winning strategies for one of the two players in the tableau game. \subsection{Tableaux} We first introduce tableaux, which are the graphs over which the tableau game is played. The nodes of a tableau for some formula $\varphi$ are labelled with sequents (as defined in the previous section) consisting of formulas taken from the closure of $\varphi$. Our tableaux are defined from the perspective that sequents are read disjunctively. We show below that Prover\xspace has a winning strategy in the tableau for some sequent iff the disjunction of its formulas is valid. This is different from the satisfiability tableaux in \cite{niwi:game96}, where sequents are read conjunctively. The tableau system is based on the rules in Figure~\ref{f:tableaux rules}. We use the same terminology here as we did for rules in the focus system. The tableau rules \ensuremath{\mathsf{Ax1}}\xspace, \ensuremath{\mathsf{Ax2}}\xspace, \ensuremath{\mathsf{R}_{\lor}}\xspace, \ensuremath{\mathsf{R}_{\land}}\xspace, \RuFp{\mu} and \RuFp{\nu} are direct counterparts of the focus proof rules with the same name, the only difference being that the tableau rules are simpler since they do not involve the annotations. \begin{figure}[thb] \begin{minipage}{\textwidth} \begin{minipage}{0.20\textwidth} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax1}}\xspace} \UnaryInfC{$p, \atneg{p}, \Phi$} \end{prooftree} \end{minipage} \begin{minipage}{0.20\textwidth} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax2}}\xspace} \UnaryInfC{$\top, \Phi$} \end{prooftree} \end{minipage} \begin{minipage}{0.24\textwidth} \begin{prooftree} \AxiomC{$\varphi,\psi,\Phi$} \RightLabel{\ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInfC{$\varphi \lor \psi,\Phi$} \end{prooftree} \end{minipage} \begin{minipage}{0.30\textwidth} \begin{prooftree} \AxiomC{$\varphi, \Phi$} \AxiomC{$\psi,\Phi$} \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace} \BinaryInfC{$\varphi \land \psi,\Phi$} \end{prooftree} \end{minipage} \end{minipage} \bigskip \begin{minipage}{\textwidth} \begin{minipage}{0.45\textwidth} \begin{prooftree} \AxiomC{$\varphi_1,\Phi$} \AxiomC{$\dots$} \AxiomC{$\varphi_n,\Phi$} \LeftLabel{(\dag)} \RightLabel{\ensuremath{\mathsf{\mathsf{M}}}\xspace} \TrinaryInfC{$\Psi,\Box \varphi_1, \dots, \Box \varphi_n, \Diamond \Phi$} \end{prooftree} \end{minipage} \begin{minipage}{0.22\textwidth} \begin{prooftree} \AxiomC{$\varphi[\mu x . \varphi / x], \Phi$} \RightLabel{\RuFp{\mu}} \UnaryInfC{$\mu x . \varphi, \Phi$} \end{prooftree} \end{minipage} \begin{minipage}{0.22\textwidth} \begin{prooftree} \AxiomC{$\varphi[\nu x . \varphi / x], \Phi$} \RightLabel{\RuFp{\nu}} \UnaryInfC{$\nu x .
\varphi, \Phi$} \end{prooftree} \end{minipage} \end{minipage} \caption{Rules of the tableau system} \label{f:tableaux rules} \end{figure} The \emph{modal rule} \ensuremath{\mathsf{\mathsf{M}}}\xspace can be seen as a game-theoretic version of the box rule \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace from the focus system, differing from it in two ways. First of all, the number of premises of \ensuremath{\mathsf{\mathsf{M}}}\xspace is not fixed, but depends on the number of box formulas in the conclusion; as a special case, if the conclusion contains no box formula at all, then the rule has an empty set of premises, similar to an axiom. Second, the rule \ensuremath{\mathsf{\mathsf{M}}}\xspace does allow side formulas in the conclusion that are not modal; note, however, that \ensuremath{\mathsf{\mathsf{M}}}\xspace has as its \emph{side condition} (\dag) that this set $\Psi$ contains atomic formulas only, and that it is \emph{locally falsifiable}, i.e., $\Psi$ does not contain $\top$ and there is no proposition letter $p$ such that both $p$ and $\atneg{p}$ belong to $\Psi$. This side condition guarantees that \ensuremath{\mathsf{\mathsf{M}}}\xspace is only applicable if no other tableau rule is. \begin{definition} \label{d:tableau} A \emph{tableau} is a quintuple $\mathstr{T} = (V,E,\Phi,\mathsf{Q},v_I)$, where $V$ is a set of \emph{nodes}, $E$ is a binary relation on $V$, $v_I$ is the \emph{initial node} or \emph{root} of the tableau, and both $\Phi$ and $\mathsf{Q}$ are labelling functions. Here $\Phi$ maps every node $v$ to a non-empty sequent $\Phi_v$, and \[ \mathsf{Q}: V \to \{\ensuremath{\mathsf{Ax1}}\xspace,\ensuremath{\mathsf{Ax2}}\xspace,\ensuremath{\mathsf{R}_{\lor}}\xspace,\ensuremath{\mathsf{R}_{\land}}\xspace,\ensuremath{\mathsf{\mathsf{M}}}\xspace,\RuFp{\mu},\RuFp{\nu}\} \] associates a proof rule $\mathsf{Q}_{v}$ with each node $v$ in $V$. Tableaux are required to satisfy the following \emph{coherence} conditions: \begin{enumerate}[resume] \item \label{i:tr1} If a node is labelled with the name of a proof rule then it has as many successors as the proof rule has premises, and the sequents at the node and its successors match the specification of the proof rules in Figure~\ref{f:tableaux rules}. \item \label{i:tr2} A node can only be labelled with the modal rule $\ensuremath{\mathsf{\mathsf{M}}}\xspace$ if its side condition (\dag) is met. \item \label{i:tr3} In any application of the rules $\ensuremath{\mathsf{R}_{\lor}}\xspace, \ensuremath{\mathsf{R}_{\land}}\xspace, \RuFp{\mu}$ and $\RuFp{\nu}$, the principal formula is not an element of the context $\Phi$. \end{enumerate} A tableau is a \emph{tableau for a sequent} $\Phi$ if $\Phi$ is the sequent at the root of the tableau. \end{definition} Observe that it follows from condition~\ref{i:tr2} in Definition~\ref{d:tableau} that if a node $u$ is labelled with \ensuremath{\mathsf{\mathsf{M}}}\xspace, then no other rule is applicable. \begin{proposition} \label{p:tableau exists} There is a tree-based tableau for every sequent $\Phi$. \end{proposition} \begin{proof} This can be proved by a straightforward step-wise procedure in which we construct the tree underlying $\mathstr{T}$ by repeatedly extending it at non-axiomatic leaves, using any of the proof rules that are applicable at such a leaf. This generates a possibly infinite tree that is a tableau, because at every sequent at least one rule is applicable.
Note that \ensuremath{\mathsf{\mathsf{M}}}\xspace can be applied to sequents without box formulas, in which case it has no premises and thus creates a leaf of the tableau. \end{proof} A crucial aspect of tableaux for the $\mu$-calculus is that one has to keep track of the development of individual formulas along infinite paths in the tableau. For this purpose we define the notion of a trail in a path of the tableau. \begin{definition} \label{d:tableaux trails} Let $\mathstr{T} = (V,E,\Phi,\mathsf{Q},v_I)$ be a tableau. For all nodes $u,v \in V$ such that $E u v$ we define the \emph{active trail relation} $\atrail_{u,v} \subseteq \Phi_u \times \Phi_v$ and the \emph{passive trail relation} $\ptrail_{u,v} \subseteq \Phi_u \times \Phi_v$, both of which relate formulas in the sequents at $u$ and $v$. The idea is that $\atrail$ connects the active formulas in the premise and conclusion, whereas $\ptrail$ connects the side formulas. Both relations are defined via a case distinction depending on the rule that is applied at $u$: \textit{Case $\mathsf{Q}_{u} = \ensuremath{\mathsf{R}_{\lor}}\xspace$:} Then $\Phi_u = \{\varphi \lor \psi\} \cup \Psi$ and $\Phi_v = \{\varphi,\psi\} \cup \Psi$ for some sequent $\Psi$. We define $\atrail_{u,v} = \{(\varphi \lor \psi,\varphi),(\varphi \lor \psi,\psi)\}$ and $\ptrail_{u,v} = \Delta_\Psi$, where $\Delta_\Psi = \{(\varphi,\varphi) \mid \varphi \in \Psi\}$. \textit{Case $\mathsf{Q}_{u} = \ensuremath{\mathsf{R}_{\land}}\xspace$:} In this case $\Phi_u = \{\varphi \land \psi\} \cup \Psi$ and $\Phi_v = \{\chi\} \cup \Psi$ for some sequent $\Psi$ and formula $\chi$ such that $\chi = \varphi$ if $v$ corresponds to the left premise of \ensuremath{\mathsf{R}_{\land}}\xspace and $\chi = \psi$ if $v$ corresponds to the right premise. In both cases we set $\atrail_{u,v} = \{(\varphi \land \psi,\chi)\}$ and $\ptrail_{u,v} = \Delta_\Psi$. \textit{Case $\mathsf{Q}_{u} = \ensuremath{\mathsf{\mathsf{M}}}\xspace$:} Then $\Phi_u = \Psi \cup \{\Box \varphi_1,\dots,\Box \varphi_n\} \cup \Diamond \Phi$ and $\Phi_v = \{\phi_{v}\} \cup \Phi$ for some sequent $\Phi$ and locally falsifiable set of literals $\Psi \subseteq \mathsf{Lit}$, where $\Box \phi_{v}$ is the box formula in $\Phi_u$ associated with the premise $v$. We can thus define $\atrail_{u,v} = \{(\Box \phi_{v},\phi_{v})\} \cup \{(\Diamond \phi, \phi) \mid \phi \in \Phi\}$ and $\ptrail_{u,v} = \varnothing$. \textit{Case $\mathsf{Q}_{u} = \RuFp{\mu}$:} Then $\Phi_u = \{\mu x . \varphi\} \cup \Psi$ and $\Phi_v = \{\varphi[\mu x . \varphi / x]\} \cup \Psi$ for some sequent $\Psi$. We define $\atrail_{u,v} = \{(\mu x . \varphi,\varphi[\mu x . \varphi / x])\}$ and $\ptrail_{u,v} = \Delta_\Psi$. \textit{Case $\mathsf{Q}_{u} = \RuFp{\nu}$:} Then $\Phi_u = \{\nu x . \varphi\} \cup \Psi$ and $\Phi_v = \{\varphi[\nu x . \varphi / x]\} \cup \Psi$ for some sequent $\Psi$. We define $\atrail_{u,v} = \{(\nu x . \varphi,\varphi[\nu x . \varphi / x])\}$ and $\ptrail_{u,v} = \Delta_\Psi$. Note that it is not possible that $\mathsf{Q}_{u} = \ensuremath{\mathsf{Ax1}}\xspace$ or $\mathsf{Q}_{u} = \ensuremath{\mathsf{Ax2}}\xspace$, because $u$ is assumed to have a successor. Finally, for all nodes $u$ and $v$ with $Euv$, the \emph{general trail relation} $\gtrail_{u,v}$ is defined as $\gtrail_{u,v} \mathrel{:=} \atrail_{u,v} \cup \ptrail_{u,v}$. \end{definition} Note that for any two nodes $u,v$ with $Euv$ and $(\phi,\psi) \in \gtrail_{u,v}$, we have either $(\phi,\psi) \in \atrail_{u,v}$ and $\psi \in \mathsf{Clos}_{0}(\phi)$, or else $(\phi,\psi) \in \ptrail_{u,v}$ and $\phi = \psi$.
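For example, if $\mathsf{Q}_{u} = \ensuremath{\mathsf{R}_{\lor}}\xspace$ is applied at a node $u$ with $\Phi_u = \{p \lor q, \Box r\}$ and successor $v$, so that $\Phi_v = \{p, q, \Box r\}$, then $\atrail_{u,v} = \{(p \lor q, p), (p \lor q, q)\}$ and $\ptrail_{u,v} = \{(\Box r, \Box r)\}$. If instead $\mathsf{Q}_{u'} = \ensuremath{\mathsf{\mathsf{M}}}\xspace$ is applied at a node $u'$ with $\Phi_{u'} = \{\Box p, \Box q, \Diamond r\}$, then the premise $v'$ associated with $\Box p$ carries $\Phi_{v'} = \{p, r\}$, and we obtain $\atrail_{u',v'} = \{(\Box p, p), (\Diamond r, r)\}$ and $\ptrail_{u',v'} = \varnothing$.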
\begin{definition} Let $\mathstr{T} = (V,E,\Phi,\mathsf{Q},v_I)$ be a tableau. A \emph{path} in $\mathstr{T}$ is simply a path in the underlying graph $(V,E)$ of $\mathstr{T}$, that is, a sequence $\pi = (v_{n})_{n<\kappa}$, for some ordinal $\kappa$ with $0 < \kappa \leq \omega$, such that $Ev_{i}v_{i+1}$ for every $i$ such that $i+1 < \kappa$. A \emph{trail} on such a path $\pi$ is a sequence $(\phi_{n})_{n<\kappa}$ of formulas such that $(\phi_{i},\phi_{i+1}) \in \gtrail_{v_i,v_{i+1}}$, whenever $i+1 < \kappa$. \end{definition} \begin{remark} Although our tableaux are very much inspired by the ones introduced by Niwi\'{n}ski and Walukiewicz~\cite{niwi:game96}, there are some notable differences in the actual definitions. In particular, the fixpoint rules in our tableaux simply unfold fixpoint formulas; that is, we omit the mechanism of definition lists. Some minor differences are that we always decompose formulas until we reach literals, and that our tableaux are not necessarily tree-based. \end{remark} It is easy to see that, because of guardedness, we have the following. \begin{proposition} \label{p:prog} Let $\pi$ be an infinite path in a tableau $\mathstr{T}$, and let $(\phi_{n})_{n<\omega}$ be a trail on $\pi$. Then \begin{urlist} \item $\pi$ witnesses infinitely many applications of the rule \ensuremath{\mathsf{\mathsf{M}}}\xspace; \item there are infinitely many $i$ such that $(\phi_{i},\phi_{i+1}) \in \atrail_{v_i,v_{i+1}}$. \end{urlist} \end{proposition} Before we move on to the definition of tableau games, we need to have a closer look at trails. Recall that for any two nodes $u,v \in V$, the trail relation $\gtrail_{u,v}$ is the union of an active and a passive trail relation, and that the passive relation is always a subset of the diagonal relation on formulas. As a consequence, we may \emph{tighten} any trail $(\phi_{n})_{n<\kappa}$ on a path $\pi = (v_{n})_{n<\kappa}$ simply by omitting all $\phi_{i+1}$ from the sequence for which $(\phi_{i},\phi_{i+1})$ belongs to the passive trail relation $\ptrail_{v_{i},v_{i+1}}$. \begin{definition} Let $\tau = (\phi_{n})_{n<\kappa}$ be a trail on the path $\pi = (v_{n})_{n<\kappa}$ in some tableau $\mathstr{T}$. Then the \emph{tightened} trail $\rdc{\tau}$ is obtained from $\tau$ by omitting all $\phi_{i+1}$ from $\tau$ for which $(\phi_{i},\phi_{i+1})$ belongs to the passive trail relation $\ptrail_{v_{i},v_{i+1}}$. \end{definition} It is not difficult to see that tightened trails are \emph{traces}, and that it follows from Proposition~\ref{p:prog} that the tightening of an infinite trail is infinite. \begin{definition} Let $\tau = (\phi_{n})_{n<\omega}$ be an infinite trail on the path $\pi = (v_{n})_{n<\omega}$ in some tableau $\mathstr{T}$. Then we call $\tau$ a \emph{$\nu$-trail} if its tightening $\rdc{\tau}$ is a $\nu$-trace. \end{definition} \subsection{Tableau games} We are now ready to introduce the \emph{tableau game} $\game{\mathstr{T}}$ that we associate with a tableau $\mathstr{T}$. We will first give the formal definition of this game, and then provide an intuitive explanation; Appendix~\ref{sec:games} contains more information on infinite games. We shall refer to the two players of tableau games as \emph{Prover\xspace} (female) and \emph{Refuter\xspace} (male). \begin{definition} \label{d:tg} Given a tableau $\mathstr{T} = (V,E,\Phi,\mathsf{Q},v_I)$, the \emph{tableau game} $\game{\mathstr{T}}$ is the (initialised) board game $\game{\mathstr{T}} = (V,E,O,\mathcal{M}_{\nu},v_{I})$ defined as follows.
$O$ is a partial map that assigns a player to some positions in $V$; the player $O(v)$ will then be called the \emph{owner} of the position $v$. More specifically, Refuter\xspace owns all positions that are labelled with one of the axioms, \ensuremath{\mathsf{Ax1}}\xspace or \ensuremath{\mathsf{Ax2}}\xspace, or with the rule \ensuremath{\mathsf{R}_{\land}}\xspace; Prover\xspace owns all positions labelled with \ensuremath{\mathsf{\mathsf{M}}}\xspace; $O$ is undefined on all other positions. In this context $v_{I}$ will be called the \emph{initial} or \emph{starting} position of the game. The set $\mathcal{M}_{\nu}$ is the \emph{winning condition} of the game (for Prover\xspace); it is defined as the set of infinite paths through the graph that carry a $\nu$-trail. \end{definition} A \emph{match} of the game consists of the two players moving a token from one position to another, starting at the initial position, and following the edge relation $E$. The owner of a position is responsible for moving the token from that position to an adjacent one (that is, an $E$-successor); in case this is impossible because the node has no $E$-successors, the player \emph{gets stuck} and immediately loses the match. For instance, Refuter\xspace loses as soon as the token reaches an axiomatic leaf labelled \ensuremath{\mathsf{Ax1}}\xspace or \ensuremath{\mathsf{Ax2}}\xspace; similarly, Prover\xspace loses at any modal node without successors. If the token reaches a position that is not owned by a player, that is, a node of $\mathstr{T}$ that is labelled with the proof rule \ensuremath{\mathsf{R}_{\lor}}\xspace, \RuFp{\mu} or \RuFp{\nu}, the token automatically moves to the unique successor of the position. If neither player gets stuck, the resulting match is infinite; we declare Prover\xspace to be its winner if the match, as an $E$-path, belongs to the set $\mathcal{M}_{\nu}$, that is, if it carries a $\nu$-trail. Finally, we say that a position $v$ is a \emph{winning} position for a player $P$ if $P$ has a way of playing the game that guarantees they win the resulting match, no matter how $P$'s opponent plays. For a formalisation of these concepts we refer to the Appendix. \begin{remark} \label{r:treestrat} If $\mathstr{T}$ is \emph{tree-based} the notion of a strategy can be simplified. The point is that in this case finite matches can always be identified with their last position, since any node in a tree corresponds to a unique path from the root to that node. It follows that any strategy in such a game is \emph{positional} (that is, the move suggested to the player only depends on the current position). Moreover, we may identify a strategy for either player with a \emph{subtree} $S$ of $\mathstr{T}$ that contains the root of $\mathstr{T}$ and, for any node $s$ in $S$, (1) it contains exactly one successor of $s$ in case the player owns the position $s$, while (2) it contains all successors of $s$ in case the player's opponent owns the position $s$. \end{remark} The observations below are basically due to Niwi\'nski \& Walukiewicz~\cite{niwi:game96}. \begin{theorem}[Determinacy] Let $\mathstr{T}$ be a tableau for a sequent $\Phi$. Then at any position of the tableau game for $\mathstr{T}$ precisely one of the players has a winning strategy. \end{theorem} \begin{proof} The key observation underlying this theorem is that tableau games are \emph{regular}.
That is, using the labelling maps $\mathsf{Q}$ and $\Phi$ of a tableau $\mathstr{T}$, we can find a finite set $C$, a colouring map $\gamma: V \to C$, and an $\omega$-regular subset $L \subseteq C^{*}$ such that $\mathcal{M}_{\nu} = \{ (v_{n})_{n\in\omega} \in \mathsf{InfPath}(\mathstr{T}) \mid (\gamma(v_{n}))_{n\in\omega} \in L \}$. The determinacy of $\game{\mathstr{T}}$ then follows from the classic result of B\"uchi \& Landweber~\cite{buch:solv69} on the determinacy of regular games. We skip further details of the proof, since it is rather similar to the analogous proof in~\cite{niwi:game96}. \end{proof} For the Adequacy Theorem below we do provide a proof, since our proof is somewhat different from the one by Niwi\'nski and Walukiewicz. \begin{theorem}[Adequacy] \label{t:adequacy} Let $\mathstr{T}$ be a tableau for a sequent $\Phi$. Then Refuter\xspace (Prover\xspace, respectively) has a winning strategy in $\game{\mathstr{T}}$ iff the formula $\bigvee\Phi$ is refutable (valid, respectively). \end{theorem} \begin{proof} Fix a sequent $\Phi$ and a tableau $\mathstr{T} = (V,E,\Phi,\mathsf{Q},v_I)$ for $\Phi$. We will prove the following statement: \begin{equation} \label{eq:admain} \text{Refuter\xspace has a winning strategy in } \game{\mathstr{T}} \text{ iff } \Phi \text{ is refutable}. \end{equation} The theorem follows from this by the determinacy of $\game{\mathstr{T}}$. For the left to right implication of \eqref{eq:admain}, it will be convenient to assume that $\mathstr{T}$ is \emph{tree-based}. This is without loss of generality: if the graph underlying $\mathstr{T}$ does not have the shape of a tree, we may simply continue with its unravelling. Let $f$ be a winning strategy for Refuter\xspace in the game $\game{\mathstr{T}}$; recall that we may think of $f$ as a subtree $\mathstr{T}_{f}$ of $\mathstr{T}$. We will first define the pointed model in which the sequent $\Phi$ can be refuted. We define a \emph{state} to be a maximal path in $\mathstr{T}_{f}$ which does not contain any modal node, with the possible exception of its final node $\mathsf{last}(\pi)$. Note that by maximality, the first node of a state is either the root of $\mathstr{T}$ or else a successor of a modal node. Given a state $\pi = v_{0}\cdots v_{k}$ and a formula $\phi$, we say that $\phi$ \emph{occurs at} $\pi$ if $\phi \in \bigcup_{i} \Phi_{v_{i}}$. We let $S_{f}$ denote the collection of all states, and define an accessibility relation $R_{f}$ on this set by putting $R_{f}\pi\rho$ iff the first node of $\rho$ is an $E$-successor of the last node of $\pi$. Note that this can only happen if $\mathsf{last}(\pi)$ is modal. Finally, we define the valuation $V_{f}$ by putting $V_{f}(p) \mathrel{:=} \{ \pi \mid p \not\in \Phi_{\mathsf{last}(\pi)} \}$, and we set $\mathstr{S}_{f} \mathrel{:=} (S_{f},R_{f},V_{f})$. In the sequel we will need the following observation; we leave its proof as an exercise. \begin{claimfirst} \label{cl:fair} Let $\phi \in \Phi_{v_{j}}$ be a non-atomic formula, where $v_{j}$ is some node on a finite path $\pi = (v_{i})_{i<k}$. If $\pi$ is a state, then $\phi$ is active at some node $v_{m}$ on $\pi$, with $j \leq m < k$. \end{claimfirst} Now let $\pi_{0}$ be any state such that $\mathsf{first}(\pi_{0})$ is the root of $\mathstr{T}$.
We will prove that the pointed model $\mathstr{S}_{f},\pi_{0}$ refutes $\Phi$ by showing that \begin{equation} \label{eq:adkey} \text{for every }\phi \in \Phi, \text{ the position } (\phi,\pi_{0}) \text{ is winning for $\forall$ in } \mathcal{E}(\textstyle{\bigvee}\Phi,\mathstr{S}_{f}). \end{equation} To prove this, we will provide $\forall$ with a winning strategy in the evaluation game $\mathcal{E}(\bigvee\Phi,\mathstr{S}_{f})@(\phi,\pi_{0})$, for each $\phi\in\Phi$. Fix such a $\phi$, and abbreviate $\mathcal{E} \mathrel{:=} \mathcal{E}(\bigvee\Phi,\mathstr{S}_{f})@(\phi,\pi_{0})$. The key idea is that, while playing $\mathcal{E}$, $\forall$ maintains a private match of the tableau game $\game{\mathstr{T}}$, which is guided by Refuter\xspace's winning strategy $f$ and such that the current match of $\mathcal{E}$ corresponds to a trail on this $\game{\mathstr{T}}$-match. For some more detail on this link between the two games, let $\Sigma = (\phi_{0},\pi_{0})(\phi_{1},\pi_{1})\cdots (\phi_{n},\pi_{n})$ be a partial match of $\mathcal{E}$. We will say that a $\game{\mathstr{T}}$-match $\pi$ is \emph{linked to} $\Sigma$ if the following holds. First, let $i_{1},\ldots,i_{k}$ be such that $0 < i_{1} < \cdots < i_{k} \leq n$ and $\phi_{i_{1}-1},\ldots,\phi_{i_{k}-1}$ is the sequence of all \emph{modal} formulas among $\phi_{0},\ldots,\phi_{n-1}$. Then we require that $\pi$ is the concatenation $\pi = \pi_{0}\circ\cdots \circ \pi_{i_{k}-1}\circ \rho$, where each $\pi_{i}$ is a state and $\rho \sqsubseteq \pi_{n}$, and that the sequence $\phi_{0} \cdots\phi_{n}$ is the tightening of some trail on $\pi$. Clearly then the matches that just consist of the initial positions of $\mathcal{E}$ and $\game{\mathstr{T}}$, respectively, are linked. Our proof of \eqref{eq:adkey} is based on the fact that $\forall$ has a strategy that keeps such a link throughout the play of $\mathcal{E}$. As the crucial observation underlying this strategy, the following claim states that $\forall$ can always maintain the link for one more round of the evaluation game. \begin{claim} \label{cl:tbcp1} Let $\Sigma = (\phi_{0},\pi_{0})(\phi_{1},\pi_{1})\cdots (\phi_{n},\pi_{n})$ be some $\mathcal{E}$-match and let $\pi$ be an $f$-guided $\game{\mathstr{T}}$-match that is linked to $\Sigma$. Then the following hold. \begin{urlist} \item If $(\phi_{n},\pi_{n})$ is a position for $\forall$ in $\mathcal{E}$, then he has a move $(\phi_{n+1},\pi_{n+1})$ such that some $f$-guided extension $\pi'$ of $\pi$ is linked to $\Sigma\cdot (\phi_{n+1},\pi_{n+1})$. \item If $(\phi_{n},\pi_{n})$ is not a position for $\forall$ in $\mathcal{E}$, then for any move $(\phi_{n+1},\pi_{n+1})$ there is some $f$-guided extension $\pi'$ of $\pi$ that is linked to $\Sigma\cdot (\phi_{n+1},\pi_{n+1})$. \end{urlist} \end{claim} \begin{pfclaim} Let $\Sigma$ and $\pi$ be as in the formulation of the claim. Then $\pi = \pi_{0}\circ\cdots \circ \pi_{i_{k}-1}\circ \rho$, where $\rho \sqsubseteq \pi_{n}$ and $i_{1},\ldots,i_{k}$ are such that $0 < i_{1} < \cdots < i_{k} \leq n$ and $\phi_{i_{1}-1},\ldots,\phi_{i_{k}-1}$ is the sequence of all \emph{modal} formulas among $\phi_{0},\ldots,\phi_{n-1}$. Furthermore, $(\phi_{i})_{i\leq n} = \rdc{\tau}$ for some trail $\tau$ on $\pi$. Write $\rho = v_{i_{k}}\cdots v_{l}$; then $\rho = \pi_{n}$ iff $v_{l}$ is modal. We prove the claim by a case distinction on the nature of $\phi_{n}$.
Note that $\phi_{n} \in \Phi_{v_{l}}$, and that by Claim~\ref{cl:fair} there is a node $v_{i}$ on the path $\pi_{n}$ such that $i_{k} \leq i$ and $\phi_{n}$ is active at $v_{i}$. \begin{description} \item[Case $\phi_{n} = \psi_{0} \land \psi_{1}$] for some formulas $\psi_{0}, \psi_{1}$. The position $(\phi_{n},\pi_{n})$ in $\mathcal{E}$ then belongs to $\forall$. As $\psi_{0} \land \psi_{1}$ is the active formula at the node $v_{i}$ in $\mathstr{T}$, this means that $\mathsf{Q}_{v_{i}} = \ensuremath{\mathsf{R}_{\land}}\xspace$, so that $v_{i}$, as a position of $\game{\mathstr{T}}$, belongs to Refuter\xspace. This means that in $\mathcal{E}$, $\forall$ may pick the formula $\psi_{j}$ which is associated with the successor $v_{i+1}$ of $v_{i}$ on $\pi_{n}$. Note that, since $\pi_{n}$ is part of the $f$-guided match $\pi$, this successor is the one that is picked by Refuter\xspace in $\game{\mathstr{T}}$ at the position $v_{i}$ in the match $\pi$. We define $\Sigma' \mathrel{:=} \Sigma \cdot (\psi_{j},\pi_{n})$, $\pi' \mathrel{:=} \pi \cdot v_{l+1}\cdots v_{i}v_{i+1}$, and $\tau' \mathrel{:=} \tau \cdot \phi_{n} \cdots \phi_{n} \cdot \psi_{j}$. It is then immediate by the definitions that $\pi' = \pi_{0}\circ\cdots \circ \pi_{i_{k}-1}\circ \rho'$, where $\rho' \mathrel{:=} \rho \cdot v_{l+1}\cdots v_{i}\cdot v_{i+1}$; observe that since $v_{i+1}$ lies on the path $\pi_{n}$, we still have $\rho' \sqsubseteq \pi_{n}$. Furthermore, it is obvious that $\tau'$ extends $\tau$ via a number of passive trail steps, i.e., where $\phi_{n}$ is not active, until $\phi_{n}$ is the active formula at $v_{i}$; from this it easily follows that $\rdc{\tau'} = \rdc{\tau} \cdot \psi_{j} = \phi_{0}\cdots\phi_{n}\cdot\psi_{j}$. Furthermore, since the successor $v_{i+1}$ of $v_{i}$ lies on the path $\pi_{n}$, it was picked by Refuter\xspace's winning strategy in $\game{\mathstr{T}}$ at the position $v_{i}$ in the match $\pi$; this means that the match $\pi'$ is still $f$-guided. \item[Case $\phi_{n} = \psi_{0} \lor \psi_{1}$] for some formulas $\psi_{0}, \psi_{1}$. The position $(\phi_{n},\pi_{n})$ in $\mathcal{E}$ then belongs to $\exists$, so suppose that she continues the match $\Sigma$ by picking the formula $\psi_{j}$. In this case we have $\mathsf{Q}_{v_{i}} = \ensuremath{\mathsf{R}_{\lor}}\xspace$, so that $v_{i}$ has a unique successor $v_{i+1}$, which features both $\psi_{0}$ and $\psi_{1}$ in its label set. This means that if we define $\Sigma' \mathrel{:=} \Sigma \cdot (\psi_{j},\pi_{n})$, $\pi' \mathrel{:=} \pi \cdot v_{l+1}\cdots v_{i}v_{i+1}$ and $\tau' \mathrel{:=} \tau \cdot \phi_{n} \cdots \phi_{n} \cdot \psi_{j}$, it is not hard to see that $\Sigma'$ and $\pi'$ are linked, with $\tau'$ the witnessing trail on $\pi'$. \item[Case $\phi_{n} = \eta x. \psi$] for some binder $\eta$, variable $x$ and formula $\psi$. The match $\Sigma$ is then continued with the automatic move $(\psi[\eta x\, \psi/x],\pi_{n})$. This case is in fact very similar to the one where $\phi_{n}$ is a disjunction, so we omit the details. \item[Case $\phi_{n} = \Box\psi$] for some formula $\psi$. Then the position $(\phi_{n},\pi_{n})$ belongs to $\forall$: he has to come up with an $R_{f}$-successor of the state $\pi_{n}$. Since $\Box\psi$ is active at $v_{i}$, this node must be modal, in the sense that $\mathsf{Q}_{v_{i}} = \ensuremath{\mathsf{\mathsf{M}}}\xspace$. By the definition of a state this can only be the case if $v_{i}$ is the last node on the path/state $\pi_{n}$; recall that in this case we have $\rho = \pi_{n}$.
Let $u \in E[v_{i}]$ be the successor of $v_{i}$ associated with $\psi$, and let $\pi_{n+1}$ be any state with $\mathsf{first}(\pi_{n+1}) = u$. It follows by definition of $R_{f}$ that $\pi_{n+1}$ is a successor of $\pi_{n}$ in the model $\mathstr{S}_{f}$. This $\pi_{n+1}$ will then be $\forall$'s (legitimate) pick in $\mathcal{E}$ at the position $(\Box\psi,\pi_{n})$. Define $\Sigma' \mathrel{:=} \Sigma \cdot (\psi,\pi_{n+1})$, $\pi' \mathrel{:=} \pi \cdot v_{l+1} \cdots v_{i} u$ and $\tau' \mathrel{:=} \tau \cdot \phi_{n} \cdots \phi_{n} \cdot \psi$. Then we find that $\pi' = \pi_{0}\circ\cdots \circ \pi_{i_{k}-1}\circ \rho \circ \rho'$, where $\rho'$ is the one-position path $u$. Clearly then $\rho' \sqsubseteq \pi_{n+1}$. Furthermore, it is easy to verify that $\rdc{\tau'} = \rdc{\tau} \cdot \psi = \phi_{0}\cdots\phi_{n}\psi$. This means that $\Sigma'$ and $\pi'$ are linked, as required. \item[Case $\phi_{n} = \Diamond\psi$] for some formula $\psi$. As in the previous case this means that $v_{i}$ is a modal node, and $v_{i} = \mathsf{last}(\pi_{n})$. However, the position $(\phi_{n},\pi_{n})$ now belongs to $\exists$; suppose that she picks an $R_{f}$-successor $\pi_{n+1}$ of $\pi_{n}$. Let $u \mathrel{:=} \mathsf{first}(\pi_{n+1})$; then it follows from the definition of $R_{f}$ that $u$ is an $E$-successor of $v_{i}$. As such, $u$ is a legitimate move for Prover\xspace in the tableau game. It then follows, exactly as in the previous case, that $\pi' \mathrel{:=} \pi \cdot v_{l+1} \cdots v_{i} u$ is linked to $\Sigma' \mathrel{:=} \Sigma \cdot (\psi,\pi_{n+1})$. \end{description} This finishes the proof of the claim. \end{pfclaim} On the basis of Claim~\ref{cl:tbcp1}, we may assume that $\forall$ indeed uses a strategy $f'$ that keeps a link between the $\mathcal{E}$-match and his privately played $f$-guided $\game{\mathstr{T}}$-match. We claim that $f'$ is actually a winning strategy for him. To prove this, consider a \emph{full} $f'$-guided match $\Sigma$; we claim that $\forall$ must be the winner of $\Sigma$. This is easy to see if $\Sigma$ is finite, since it follows by the first item of the Claim that, playing $f'$, $\forall$ will never get stuck. This leaves the case where $\Sigma$ is infinite. Let $\Sigma = (\phi_{n},s_{n})_{n<\omega}$; it easily follows from Claim~\ref{cl:tbcp1} that there must be an infinite $f$-guided $\game{\mathstr{T}}$-match $\pi$, such that the sequence $(\phi_{n})_{n<\omega}$ is the tightening of some trail on $\pi$. Since $\pi$ is guided by Refuter\xspace's winning strategy $f$, this means that all of its infinite trails are $\mu$-trails; but then obviously $(\phi_{n})_{n<\omega}$ is a $\mu$-trace, meaning that $\forall$ is indeed the winner of $\Sigma$. \medskip The implication from right to left in \eqref{eq:admain} is proved along similar lines, so we permit ourselves to be a bit more sketchy. Assume that $\Phi$ is refuted in some pointed model $(\mathstr{S},s)$. Then by the adequacy of the game semantics for the modal $\mu$-calculus, $\forall$ has a winning strategy $f$ in the evaluation game $\mathcal{E}(\bigvee\Phi,\mathstr{S})$ initialised at position $(\bigvee\Phi,s)$. Without loss of generality we may assume $f$ to be \emph{positional}, i.e., such that it only depends on the current position of the match.
The idea of the proof is now simple: while playing $\game{\mathstr{T}}$, Refuter\xspace will make sure that, where $\pi = v_{0}\cdots v_{k}$ is the current match, every formula in $\Phi_{v_{k}}$ is the endpoint of some trail, and every trail $\tau$ on $\pi$ is such that its tightened trace $\rdc{\tau}$ is the projection of an $f$-guided match of $\mathcal{E}(\bigvee\Phi,\mathstr{S})$ initialised at position $(\phi,s)$ for some $\phi \in \Phi$. To show that Refuter\xspace can maintain this condition for the full duration of the match, it suffices to prove that he can keep it during one single round. For this proof we make a case distinction as to the rule applied at the last node $v_{k}$ of the partial $\game{\mathstr{T}}$-match $\pi = v_{0}\cdots v_{k}$. The proof details are fairly routine, so we confine ourselves to one case, leaving the other cases as an exercise. Assume, then, that $v_{k}$ is a conjunctive node, that is, $\mathsf{Q}_{v_{k}} = \ensuremath{\mathsf{R}_{\land}}\xspace$. This node belongs to Refuter\xspace, so as his move he has to pick an $E$-successor of $v_{k}$. The active formula at $v_{k}$ is some conjunction, say, $\psi_{0}\land\psi_{1} \in \Phi_{v_{k}}$. By the inductive assumption there is some trail $\tau = \phi_{0}\cdots \phi_{k}$ on $\pi$ such that $\phi_{k} = \psi_{0}\land\psi_{1}$, and there is some $f$-guided $\mathcal{E}$-match of which $\rdc{\tau}$ is the projection, i.e., it is of the form $\Sigma = (\phi_{0},s_{0})\cdots(\phi_{k},s_{k})$. Now observe that in $\mathcal{E}$, the last position of this match, viz., $(\phi_{k}, s_{k}) = (\psi_{0}\land \psi_{1},s_{k})$, belongs to $\forall$. Assume that his winning strategy $f$ tells him to pick the formula $\psi_{j}$ at this position; then, in the tableau game at the position $v_{k}$, Refuter\xspace will pick the $E$-successor $u_{j}$ of $v_{k}$ that is associated with the conjunct $\psi_{j}$. That is, he extends the match $\pi$ to $\pi' \mathrel{:=} \pi\cdot u_{j}$. To see that Refuter\xspace has maintained the invariant, consider an arbitrary trail on $\pi'$; clearly such a trail is of the form $\sigma' = \sigma \cdot \psi$, for some trail $\sigma$ on $\pi$, and some formula $\psi \in \Phi_{u_{j}}$. It is not hard to see that either $\mathsf{last}(\sigma) = \psi_{0}\land\psi_{1}$ and $\psi=\psi_{j}$, or else $\mathsf{last}(\sigma) = \psi$. In the first case $\rdc{\sigma'}$ is the projection of the $f$-guided match $(\phi_{0},s_{0})\cdots (\phi_{k},s_{k})\cdot (\psi_{j},s_{k})$; in the second case we find that $\rdc{\sigma'} = \rdc{\sigma}$, so that for the associated $f$-guided $\mathcal{E}$-match we can take any such match that we inductively know to exist for $\sigma$. \end{proof} \begin{corollary} \label{cor:invariant} Let $\mathstr{T}$ and $\mathstr{T}'$ be two tableaux for the same sequent. Then Prover\xspace has a winning strategy in $\game{\mathstr{T}}$ iff she has a winning strategy in $\game{\mathstr{T}'}$. \end{corollary} \subsection{Thin and progressive proofs} When we prove the soundness of our proof system it will be convenient to work with (infinite) proofs that are in a certain normal form. The idea here is that we restrict (as much as possible) attention to sequents that are \emph{thin}, in the sense that they do not feature formulas that are both in and out of focus, and to proofs that are \emph{progressive}, in the sense that when (from the perspective of proof search) we move from the conclusion of a boolean or fixpoint rule to its premise(s), we may drop the principal formula.
Theorem~\ref{t:tpp} below states that we can make these assumptions without loss of generality. \begin{definition} An annotated sequent $\Sigma$ is \emph{thin} if there is no formula $\phi \in \muML^{\mathit{af}}$ such that $\phi^f \in \Sigma$ and $\phi^u \in \Sigma$. Given an annotated sequent $\Sigma$, we define its \emph{thinning} \[ \thin{\Sigma} \mathrel{:=} \{ \phi^{f} \mid \phi^{f} \in \Sigma \} \cup \{ \phi^{u} \mid \phi^{u} \in \Sigma, \phi^{f} \not\in \Sigma \}. \] A pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is \emph{thin} if for all $v \in T$ and formulas $\phi$ with $\phi^f,\phi^u \in \Sigma_v$ we have that $\mathsf{R}_v = \ensuremath{\mathsf{W}}\xspace$ and $\phi^u \notin \Sigma_u$ for the unique $u$ with $P v u$. \end{definition} Note that one may obtain the thinning $\thin{\Sigma}$ from an annotated sequent $\Sigma$ by removing the \emph{unfocused} versions of the formulas with a double occurrence in $\Sigma$. The definition of a thin proof implies that whenever a thin proof contains a sequent that is not thin, then this sequent is followed by applications of the weakening rule until all the duplicate formulas are weakened away. For example, if the sequent $\Sigma_v = p^u,p^f,q^u,q^f,r$ occurs in a thin proof, then at $v$ and all of its immediate successors there need to be applications of weakening until only one annotated version of $p$ and one annotated version of $q$ are left. This might, for instance, look as follows: \begin{center} \begin{prooftree} \AxiomC{$\vdots$} \noLine \UnaryInfC{$p^f,q^u,r$} \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInfC{$p^f,q^u,q^f,r$} \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInfC{$p^u,p^f,q^u,q^f,r$} \noLine \UnaryInfC{$\vdots$} \end{prooftree} \end{center} \begin{definition} An application of a boolean or fixpoint rule at a node $u$ in a pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is \emph{progressive} if for the principal formula $\phi^a \in \Sigma_u$ it holds that $\phi^a \notin \Sigma_v$ for all $v$ with $Puv$.\footnote{% Note that since we assume guardedness, the principal formula is different from its residuals. } The proof $\Pi$ is \emph{progressive} if all applications of the boolean rules and the fixpoint rules in $\Pi$ are progressive. \end{definition} Our main result is the following. \begin{theorem} \label{t:tpp} Every \ensuremath{\mathsf{Focus}_\infty}\xspace-derivable sequent $\Phi$ has a thin and progressive \ensuremath{\mathsf{Focus}_\infty}\xspace-proof. \end{theorem} For the proof of Theorem~\ref{t:tpp} we need some preparations. Recall that we defined the linear order $\sqsubseteq$ on annotations such that $u \sqsubset f$. \begin{definition} \label{d:mf} Let $\Sigma$ and $\Gamma$ be annotated sequents. We define $\morefocus{\Gamma}{\Sigma}$ to hold if for all $\phi^a \in \Gamma$ there is a $b \sqsupseteq a$ such that $\phi^b \in \Sigma$. \end{definition} \begin{definition} \label{d:backcl} Let $\Sigma$ be a set of annotated formulas. We define $Q_0(\Sigma)$ as the set of all annotated formulas $\phi^a$ such that either \begin{enumerate} \item $\phi^b \in \Sigma$ for some $b \sqsupseteq a$; \item $\phi = \phi_0 \lor \phi_1$, and $\phi_0^a \in \Sigma$ and $\phi_1^a \in \Sigma$; \item $\phi = \phi_0 \land \phi_1$, and $\phi_0^a \in \Sigma$ or $\phi_1^a \in \Sigma$; \item $\phi = \mu x . \phi_0$ and $\phi_0(\phi)^u \in \Sigma$; or \item $\phi = \nu x . \phi_0$ and $\phi_0(\phi)^a \in \Sigma$.
\end{enumerate} The map $Q_0$ clearly being a monotone operator on sets of annotated formulas, we define the \emph{backwards closure} of $\Sigma$ as the least fixpoint $Q(\Sigma)$ of the operator $\Gamma \mapsto \Sigma \cup Q_0(\Gamma)$. \end{definition} In words, $Q(\Sigma)$ is the least set of annotated formulas such that $\Sigma \subseteq Q(\Sigma)$ and $Q_0(Q(\Sigma)) \subseteq Q(\Sigma)$. (A concrete illustration of $Q$ is given at the end of this subsection.) The following proposition collects some basic properties of $Q$; recall that we abbreviate $\allfocus{\Sigma} = \allfocus{\uls{\Sigma}}$, that is, $\allfocus{\Sigma}$ consists of the annotated formulas $\phi^{f}$ such that $\phi^a \in \Sigma$ for some $a$. \begin{proposition} \label{p:progressive facts}\label{p:pf} The map $Q$ is a closure operator over sets of annotated formulas. Furthermore, the following hold for any pair of annotated sequents $\Gamma,\Sigma$. \begin{enumerate} \item If $\morefocus{\Gamma}{\Sigma}$ then $\Gamma \subseteq Q(\Sigma)$. \item \label{i:pf:2} If $\Gamma \subseteq Q(\Sigma)$ and $\Gamma$ contains only atomic or modal formulas, then $\morefocus{\Gamma}{\Sigma}$. \item \label{i:proof rules} If $\Gamma$ is the conclusion and $\Sigma$ is one of the premises of an application of one of the rules \ensuremath{\mathsf{R}_{\lor}}\xspace, \ensuremath{\mathsf{R}_{\land}}\xspace, \RuFp{\mu}, or \RuFp{\nu}, then $\Gamma \subseteq Q(\Sigma)$. \item \label{i:thinning} $\{\phi^u,\phi^f\} \cup \Sigma \subseteq Q(\{\phi^f\} \cup \Sigma)$. \item \label{i:more focus} If $\phi^a \in Q(\Sigma)$ for some $a$ then $\phi^u, \phi^f \in Q(\allfocus{\Sigma})$. \end{enumerate} \end{proposition} \begin{proof} These statements are straightforward consequences of Definitions~\ref{d:mf} and~\ref{d:backcl}. For instance, in order to establish part \eqref{i:more focus} it suffices to prove the following: \begin{equation} \label{eq:974} \phi^{a} \in Q_{0}(\Sigma) \text{ only if } \phi^{f} \in Q(\Sigma^{f}). \end{equation} To see this, take an arbitrary annotated formula $\phi^{a} \in Q_{0}(\Sigma)$ and make a case distinction as to the reason why $\phi^{a} \in Q_{0}(\Sigma)$. (1) If $\phi^b \in \Sigma$ for some $b \sqsupseteq a$, then $\phi^{f} \in \Sigma^{f} \subseteq Q(\Sigma^{f})$. (2) If $\phi = \phi_0 \lor \phi_1$, and $\phi_0^a, \phi_1^a \in \Sigma$, then $\phi_0^f, \phi_1^f \in \Sigma^{f}$, so that $\phi^{f} \in Q_{0}(\Sigma^{f}) \subseteq Q(\Sigma^{f})$. (3) If $\phi = \phi_0 \land \phi_1$, and $\phi_i^a \in \Sigma$ for some $i \in \{0,1\}$, then $\phi_i^f \in \Sigma^{f}$, so that $\phi^{f}\in Q_{0}(\Sigma^{f}) \subseteq Q(\Sigma^{f})$. (4) If $\phi = \mu x . \phi_0$ and $\phi_0(\phi)^u \in \Sigma$, then $\phi_0(\phi)^f \in \Sigma^{f}$, hence also $\phi_0(\phi)^u \in Q(\Sigma^{f})$, and so $\phi^{f} \in Q_{0}(Q(\Sigma^{f})) \subseteq Q(\Sigma^{f})$. Finally, (5) if $\phi = \nu x . \phi_0$ and $\phi_0(\phi)^a \in \Sigma$, then $\phi_0(\phi)^f \in \Sigma^{f}$, so that $\phi^{f} \in Q_{0}(\Sigma^{f}) \subseteq Q(\Sigma^{f})$ indeed. \end{proof} \begin{definition} A local pre-proof $\Pi'$ of $\Gamma'$ is a \emph{simulation} of a local pre-proof $\Pi$ of $\Gamma$ if $\Gamma \subseteq Q(\Gamma')$, and for every open assumption $\Delta'$ of $\Pi'$ there is an open assumption $\Delta$ of $\Pi$ such that $\Delta \subseteq Q(\Delta')$. \end{definition} In the proof below we will frequently use the following proposition, the proof of which is straightforward. \begin{proposition} \label{p:thinning} Let $\Gamma$ and $\Delta$ be two sequents such that $\Gamma \subseteq Q(\Delta)$.
Then $\thin{\Delta}$ is thin and satisfies $\Gamma \subseteq Q(\thin{\Delta})$, and there is a thin, progressive proof $\Pi$ of $\Delta$, which has $\thin{\Delta}$ as its only open assumption and uses only the weakening rule. \end{proposition} \begin{proof} It is clear that $\thin{\Delta}$ is thin and that we may write $\Delta = \{\phi_1^u, \dots,\phi_n^u\} \cup \thin{\Delta}$, where $\phi_{1},\ldots,\phi_{n}$ are the formulas that occur both focused and unfocused in $\Delta$. We then let $\Pi$ be the proof that weakens the formulas $\phi_1^u,\dots, \phi_n^u$ one by one. By item~\ref{i:thinning} of Proposition~\ref{p:progressive facts} it follows that $\Delta \subseteq Q(\thin{\Delta})$. Thus, $\Gamma \subseteq Q(\Delta)$ implies $\Gamma \subseteq Q(\thin{\Delta})$, because $Q$ is a closure operator. \end{proof} The key technical observation in the proof of Theorem~\ref{t:tpp} is Proposition~\ref{p:ps} below. \begin{definition} \label{p:baspr} Recall that a pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ is \emph{basic} if $T$ consists of the root $r$ and its successors, $\mathsf{R}_{r} \neq \RuDischarge{}$ and $\mathsf{R}_{u} = \star$ for every successor $u$ of $r$. \end{definition} A basic derivation is thus a pre-proof $\Pi = (T,P,\Sigma,\mathsf{R})$ of $\Sigma_{r}$ (where $r$ is the root of $\Pi$) with open assumptions $\{ \Sigma_{u} \mid u \neq r \}$. \begin{proposition} \label{p:progressive simulation} \label{p:ps} Let $\Pi$ be a basic pre-proof of $\Gamma$ and let $\Gamma'$ be a sequent such that $\Gamma \subseteq Q(\Gamma')$. Then there is a thin and progressive simulation $\Pi'$ of $\Pi$ that proves the sequent $\Gamma'$. Moreover, if $\mathsf{R}_{r} \neq \ensuremath{\mathsf{F}}\xspace, \ensuremath{\mathsf{U}}\xspace$ then $\Pi'$ does not use $\ensuremath{\mathsf{F}}\xspace$ or $\ensuremath{\mathsf{U}}\xspace$, and if $\mathsf{R}_{r} = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ then $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ is also the rule applied at the root of $\Pi'$. \end{proposition} Before we prove this proposition, we first show how our main theorem follows from it. \begin{proofof}{Theorem~\ref{t:tpp}} Let $\Pi = (T,P,\Sigma,\mathsf{R})$ be a \ensuremath{\mathsf{Focus}_\infty}\xspace-proof of the sequent $\Phi$; then by definition we have $\Sigma_{r} = \Phi^{f}$, where $r$ is the root of $\Pi$. Obviously we have $\Sigma_{r} \subseteq Q(\Sigma_{r})$. We will transform $\Pi$ into a thin and progressive proof of $\Phi$ as follows. On the basis of Proposition~\ref{p:ps} it is straightforward to define a map $\Xi$ which assigns a thin sequent $\Xi_{t}$ to each node $t \in T$, in such a way that $\Xi_{r} \mathrel{:=} \Sigma_{r}$, and for every $t \in T$ we find $\Sigma_{t} \subseteq Q(\Xi_{t})$, while we also have a thin and progressive pre-proof $\Pi_{t}$ of the sequent $\Xi_{t}$ from the assumptions $\{ \Xi_{u} \mid Ptu \}$. In addition we know that if $\mathsf{R}_{t} \neq \ensuremath{\mathsf{F}}\xspace, \ensuremath{\mathsf{U}}\xspace$, then the derivation $\Pi_{t}$ does not involve the focus rules, and that if $\mathsf{R}_{t} = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ then $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ is also the rule applied at the root of $\Pi_{t}$. We obtain a thin and progressive proof $\Pi'$ from this by simply adding all these thin and progressive derivations $\Pi_{t}$ to the `skeleton structure' $(T,P,\Xi)$, in the obvious way.
It is easy to show that $\Pi'$ is a pre-proof, and the additional conditions on the focus rules and $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ guarantee that every infinite branch of $\Pi'$ witnesses infinitely many applications of $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$, but only finitely many applications of the focus rules. To prove the remaining condition on focused formulas, consider an infinite branch $\alpha = (v_{n})_{n\in\omega}$ of $\Pi'$. It is easy to see that by construction we may associate an infinite branch $\beta = (t_{n})_{n\in\omega}$ of $\Pi$ with $\alpha$, together with a map $f: \omega \to \omega$ such that $\Sigma_{t_{n}} \subseteq Q(\Xi_{v_{f(n)}})$. This path $\beta$ is successful since $\Pi$ is a proof, and so there is a $k \in \omega$ such that for all $n \geq k$ the sequent $\Sigma_{t_{n}}$ contains a formula in focus, and $\mathsf{R}(t_{n}) \neq \ensuremath{\mathsf{F}}\xspace$. But by Proposition~\ref{p:pf}(\ref{i:pf:2}), for any $n\geq k$ such that $\mathsf{R}(t_{n}) = \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$, the sequent $\Xi_{v_{f(n)}}$ must contain a focused formula as well. Since $\alpha$ features infinitely many applications of $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$, this implies the existence of infinitely many nodes $v_{m}$ on $\alpha$ such that $\Xi_{v_{m}}$ contains a focused formula. And since the focus rule is applied only finitely often on $\alpha$, by Proposition~\ref{p:lr1} it follows from this that $\alpha$ actually contains cofinitely many such nodes, as required. Furthermore, it is obvious that, being constructed by glueing together thin and progressive proofs, $\Pi'$ has these properties as well. Finally, since $\Xi_{r} = \Sigma_{r} = \Phi^{f}$, we have indeed obtained a proof for the plain sequent $\Phi$. \end{proofof} \begin{proofof}{Proposition~\ref{p:ps}} By definition of a basic proof, $\Pi = (T,P,\Sigma,\mathsf{R})$ consists of nothing more than a single application of the rule $\mathsf{R} \mathrel{:=} \mathsf{R}_{r}$ to the annotated sequent $\Gamma = \Sigma_{r}$, where $r$ is the root of $\Pi$. Because of Proposition~\ref{p:thinning} we can assume without loss of generality that $\Gamma'$ is thin. We then make a case distinction depending on the rule $\mathsf{R}$. Recall that we use $\ensuremath{\mathsf{W}}\xspace^*$ to denote a finite (potentially zero) number of successive applications of weakening. \begin{description} \item[\it Case for \ensuremath{\mathsf{Ax1}}\xspace:] In this case $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax1}}\xspace} \UnaryInfC{$p^a, \atneg{p}^b$} \end{prooftree} \end{center} The assumption is that $\{p^a,\atneg{p}^b\} \subseteq Q(\Gamma')$. By item~\ref{i:pf:2} in Proposition~\ref{p:progressive facts} it follows that $p^{a'},\atneg{p}^{b'} \in \Gamma'$ for some $a' \sqsupseteq a$ and $b' \sqsupseteq b$. We can thus define $\Pi'$ to be the proof \begin{center} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax1}}\xspace} \UnaryInfC{$p^{a'}, \atneg{p}^{b'}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UnaryInfC{$\Gamma'$} \end{prooftree} \end{center} \item[\it Case for \ensuremath{\mathsf{Ax2}}\xspace:] In this case $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax2}}\xspace} \UnaryInfC{$\top^a$} \end{prooftree} \end{center} From the assumption that $\top^a \in Q(\Gamma')$ it follows with item~\ref{i:pf:2} of Proposition~\ref{p:progressive facts} that $\top^{a'} \in \Gamma'$ for some $a' \sqsupseteq a$.
We define $\Pi'$ to be the proof \begin{center} \begin{prooftree} \AxiomC{\phantom{X}} \RightLabel{\ensuremath{\mathsf{Ax2}}\xspace} \UnaryInfC{$\top^{a'}$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UnaryInfC{$\Gamma'$} \end{prooftree} \end{center} \item[\it Case for \ensuremath{\mathsf{R}_{\lor}}\xspace:] In this case $\Gamma = (\phi_0 \lor \phi_1)^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi_0^a,\phi_1^a,\Sigma$} \RightLabel{\ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInfC{$(\phi_0 \lor \phi_1)^a,\Sigma$} \end{prooftree} \end{center} Let $\phi \mathrel{:=} \phi_0 \lor \phi_1$. Because $\Gamma \subseteq Q(\Gamma')$ it follows that $\phi^a \in Q(\Gamma')$. By definition of $Q$ there are two cases for why this might hold: either $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$, or $\phi_0^a \in Q(\Gamma')$ and $\phi_1^a \in Q(\Gamma')$. In the latter case, where $\phi_0^a \in Q(\Gamma')$ and $\phi_1^a \in Q(\Gamma')$, we can let $\Pi'$ consist of just the sequent $\Gamma'$. This proof is thin and progressive, and it clearly follows that $\phi_0^a,\phi_1^a,\Sigma \subseteq Q(\Gamma')$, because $\Sigma \subseteq \Gamma \subseteq Q(\Gamma')$. In the former case, where $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$, consider the proof \begin{center} \begin{prooftree} \AxiomC{$\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\}$} \RightLabel{\ensuremath{\mathsf{R}_{\lor}}\xspace} \UnaryInfC{$(\phi_0 \lor \phi_1)^b,\Gamma' \setminus \{\phi^b\}$} \end{prooftree} \end{center} We let $\Pi'$ be this proof. Clearly, this is a proof of $\Gamma' = (\phi_0 \lor \phi_1)^b,\Gamma' \setminus \{\phi^b\}$ and it is progressive. Moreover, we have from the definition of $Q$ that $\phi_0^a,\phi_1^a \subseteq Q(\phi_0^b, \phi_1^b)$, as $b \sqsupseteq a$. By item~\ref{i:proof rules} of Proposition~\ref{p:progressive facts} it holds that $\Gamma' \subseteq Q(\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\})$. By assumption we have that $\Gamma \subseteq Q(\Gamma')$ and hence $\Sigma \subseteq \Gamma \subseteq Q(\Gamma') \subseteq Q(\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\})$. Putting all of these together it follows that \[ \phi_0^a,\phi_1^a,\Sigma \subseteq Q(\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\}). \] It remains to be seen that $\Pi'$ can be made thin. For the sequent $\Gamma'$ at the root of $\Pi'$ we have already established that it is thin. It might be, however, that the open assumption $\phi_0^b,\phi_1^b,\Gamma' \setminus \{\phi^b\}$ is not thin. If this is the case we can simply apply Proposition~\ref{p:thinning} and obtain the required proof. \item[\it Case for \ensuremath{\mathsf{R}_{\land}}\xspace:] In this case $\Gamma = (\phi_0 \land \phi_1)^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi_0^a,\Sigma$} \AxiomC{$\phi_1^a,\Sigma$} \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace} \BinaryInfC{$(\phi_0 \land \phi_1)^a,\Sigma$} \end{prooftree} \end{center} Let $\phi \mathrel{:=} \phi_0 \land \phi_1$. Because $\Gamma \subseteq Q(\Gamma')$ it follows that $\phi^a \in Q(\Gamma')$. By the definition of $Q$ we may split into two cases: either $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$, or $\phi_i^a \in Q(\Gamma')$ for some $i \in \{0,1\}$. In the subcase where $\phi_i^a \in Q(\Gamma')$ for some $i \in \{0,1\}$ we let $\Pi'$ just be the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show that there is some open assumption $\Delta_i$ of $\Pi$ such that $\Delta_i \subseteq Q(\Gamma')$.
Let this be the assumption $\phi_i^a, \Sigma$. We already know that $\phi_i^a \in Q(\Gamma')$, so it only remains to be seen that $\Sigma \subseteq Q(\Gamma')$. But this follows because $\Sigma \subseteq \Gamma$ and $\Gamma \subseteq Q(\Gamma')$. In the other subcase we have that $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$. We let $\Pi'$ be the proof \begin{center} \begin{prooftree} \AxiomC{$\phi_0^b,\Gamma' \setminus \{\phi^b\}$} \AxiomC{$\phi_1^b,\Gamma' \setminus \{\phi^b\}$} \RightLabel{\ensuremath{\mathsf{R}_{\land}}\xspace} \BinaryInfC{$(\phi_0 \land \phi_1)^b,\Gamma' \setminus \{\phi^b\}$} \end{prooftree} \end{center} By definition this proof is progressive and it is a proof of $\Gamma' = (\phi_0 \land \phi_1)^b,\Gamma' \setminus \{\phi^b\}$. We then show that for each open assumption $\phi_i^b, \Gamma' \setminus \{\phi^b\}$ of $\Pi'$, where $i \in \{0,1\}$, there is the open assumption $\phi_i^a,\Sigma$ of $\Pi$ such that \begin{equation*} \phi_i^a,\Sigma \subseteq Q(\phi_i^b, \Gamma' \setminus \{\phi^b\}). \end{equation*} Because $a \sqsubseteq b$ it is clear that $\phi_i^a \in Q(\{\phi_i^b\})$. So we only need $\Sigma \subseteq Q(\phi_i^b, \Gamma' \setminus \{\phi^b\})$. But this follows from $\Sigma \subseteq \Gamma \subseteq Q(\Gamma')$ and the fact that $\Gamma' \subseteq Q(\phi_i^b,\Gamma' \setminus \{\phi^b\})$, which is item~\ref{i:proof rules} in Proposition~\ref{p:progressive facts}. Finally, as before, we use Proposition~\ref{p:thinning} to deal with non-thin open assumptions of $\Pi'$, if any. \item[\it Case for \RuFp{\mu}:] In this case $\Gamma = (\mu x . \phi_0(x))^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi_0(\phi)^u,\Sigma$} \RightLabel{\RuFp{\mu}} \UnaryInfC{$(\mu x . \phi_0(x))^a,\Sigma$} \end{prooftree} \end{center} Here we write $\phi = \mu x . \phi_0(x)$. Because $\Gamma \subseteq Q(\Gamma')$ it follows that $\phi^a \in Q(\Gamma')$. By definition of $Q$ this gives us the cases that either $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$ or $\phi_0(\phi)^u \in Q(\Gamma')$. In the subcase where $\phi_0(\phi)^u \in Q(\Gamma')$ we let $\Pi'$ just be the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show $\phi_0(\phi)^u,\Sigma \subseteq Q(\Gamma')$. Because we are in the subcase for $\phi_0(\phi)^u \in Q(\Gamma')$ it suffices to show that $\Sigma \subseteq Q(\Gamma')$. But this follows because $\Sigma \subseteq \Gamma$ and $\Gamma \subseteq Q(\Gamma')$. In the other subcase we have that $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$. We let $\Pi'$ be the proof \begin{center} \begin{prooftree} \AxiomC{$\phi_0(\phi)^u,\Gamma' \setminus \{\phi^b\}$} \RightLabel{\RuFp{\mu}} \UnaryInfC{$(\mu x . \phi_0(x))^b,\Gamma' \setminus \{\phi^b\}$} \end{prooftree} \end{center} Clearly, this proof is progressive and it is a proof of $\Gamma' = (\mu x . \phi_0(x))^b,\Gamma' \setminus \{\phi^b\}$. We can also show that \[ \phi_0(\phi)^u,\Sigma \subseteq Q(\phi_0(\phi)^u, \Gamma' \setminus \{\phi^b\}). \] For this it clearly suffices to show that $\Sigma \subseteq Q(\phi_0(\phi)^u, \Gamma' \setminus \{\phi^b\})$. This follows from $\Sigma \subseteq \Gamma \subseteq Q(\Gamma')$ and the fact that $\Gamma' \subseteq Q(\phi_0(\phi)^u, \Gamma' \setminus \{\phi^b\})$, which comes from item~\ref{i:proof rules} in Proposition~\ref{p:progressive facts}. Finally, as before, we use Proposition~\ref{p:thinning} to deal with non-thin open assumptions of $\Pi'$, if any.
\item[\it Case for \RuFp{\nu}:] In this case $\Gamma = (\nu x . \phi_0(x))^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi_0(\phi)^a,\Sigma$} \RightLabel{\RuFp{\nu}} \UnaryInfC{$(\nu x . \phi_0(x))^a,\Sigma$} \end{prooftree} \end{center} Here, we write $\phi = \nu x . \phi_0(x)$. Because $\Gamma \subseteq Q(\Gamma')$ it follows that $\phi^a \in Q(\Gamma')$. By the definition of $Q$ this gives us the cases that either $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$ or $\phi_0(\phi)^a \in Q(\Gamma')$. In the subcase where $\phi_0(\phi)^a \in Q(\Gamma')$ we let $\Pi'$ just be the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show $\phi_0(\phi)^a,\Sigma \subseteq Q(\Gamma')$. Because we are in the subcase for $\phi_0(\phi)^a \in Q(\Gamma')$ it suffices to show that $\Sigma \subseteq Q(\Gamma')$. But this follows because $\Sigma \subseteq \Gamma$ and $\Gamma \subseteq Q(\Gamma')$. In the other subcase we have that $\phi^b \in \Gamma'$ for some $b \sqsupseteq a$. We let $\Pi'$ be the proof \begin{center} \begin{prooftree} \AxiomC{$\phi_0(\phi)^b,\Gamma' \setminus \{\phi^b\}$} \RightLabel{\RuFp{\nu}} \UnaryInfC{$(\nu x . \phi_0(x))^b,\Gamma' \setminus \{\phi^b\}$} \end{prooftree} \end{center} Clearly, this proof is progressive and it is a proof of $\Gamma' = (\nu x . \phi_0(x))^b,\Gamma' \setminus \{\phi^b\}$. We can also show that \[ \phi_0(\phi)^a,\Sigma \subseteq Q(\phi_0(\phi)^b, \Gamma' \setminus \{\phi^b\}). \] Because $a \sqsubseteq b$ it is clear that $\phi_0(\phi)^a \in Q(\{\phi_0(\phi)^b\})$. So it clearly suffices to show that $\Sigma \subseteq Q(\phi_0(\phi)^b, \Gamma' \setminus \{\phi^b\})$. This follows from $\Sigma \subseteq \Gamma \subseteq Q(\Gamma')$ and the fact that $\Gamma' \subseteq Q(\phi_0(\phi)^b, \Gamma' \setminus \{\phi^b\})$, which comes from item~\ref{i:proof rules} in Proposition~\ref{p:progressive facts}. Any remaining non-thin open assumptions are dealt with using Proposition~\ref{p:thinning}. \item[\it Case for \ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace:] In this case $\Gamma$ must be of the form $\Gamma = \Box\phi^{a},\Diamond\Sigma$, and $\Pi$ is the derivation \begin{center} \begin{prooftree} \AxiomC{$\phi^{a},\Sigma$} \RightLabel{\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace} \UnaryInfC{$\Box\phi^a,\Diamond\Sigma$} \end{prooftree} \end{center} Because $\Gamma \subseteq Q(\Gamma')$ it follows from Proposition~\ref{p:pf}\eqref{i:pf:2} that $\morefocus{\Gamma}{\Gamma'}$. But then $\Gamma'$ must contain a subset of the form $\Box\phi^{b},\Diamond\Sigma'$, with $a \sqsubseteq b$ and $\morefocus{\Sigma}{\Sigma'}$. Consider the following derivation $\Pi'$: \begin{center} \begin{prooftree} \AxiomC{$\phi^{b},\Sigma'$} \RightLabel{\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace} \UnaryInfC{$\Box\phi^b,\Diamond\Sigma'$} \RightLabel{$\ensuremath{\mathsf{W}}\xspace^*$} \UnaryInfC{$\Gamma'$} \end{prooftree} \end{center} It is easy to see that we have $\morefocus{\Delta}{\Delta'}$, where $\Delta \mathrel{:=} \phi^{a},\Sigma$ and $\Delta' \mathrel{:=} \phi^{b},\Sigma'$ are the assumptions of the pre-proofs $\Pi$ and $\Pi'$, respectively. Furthermore, the proof $\Pi'$ is obviously progressive, and if not thin already, can be made so by applying Proposition~\ref{p:thinning}.
\item[\it Case for \ensuremath{\mathsf{W}}\xspace:] In this case $\Gamma = \phi^a,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\Sigma$} \RightLabel{\ensuremath{\mathsf{W}}\xspace} \UnaryInfC{$\phi^a,\Sigma$} \end{prooftree} \end{center} We can let $\Pi'$ consist of just the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show that $\Sigma \subseteq Q(\Gamma')$. Clearly $\Sigma \subseteq \Gamma$, and $\Gamma \subseteq Q(\Gamma')$ holds by assumption. \item[\it Case for \ensuremath{\mathsf{F}}\xspace:] In this case $\Gamma = \phi^u,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi^f,\Sigma$} \RightLabel{\ensuremath{\mathsf{F}}\xspace} \UnaryInfC{$\phi^u,\Sigma$} \end{prooftree} \end{center} We let $\Pi'$ be the proof \begin{center} \begin{prooftree} \AxiomC{$\allfocus{(\Gamma')}$} \RightLabel{$\ensuremath{\mathsf{F}}\xspace^*$} \UnaryInfC{$\Gamma'$} \end{prooftree} \end{center} Here, $\allfocus{(\Gamma')} = \{\phi^f \mid \phi^a \in \Gamma' \mbox{ for some } a \in \{u,f\}\}$, as in Proposition~\ref{p:progressive facts}, and $\ensuremath{\mathsf{F}}\xspace^*$ stands for as many applications of the focus rule as we need to put every formula in $\Gamma'$ in focus. This proof $\Pi'$ is trivially progressive and it is thin because $\Gamma'$ is thin, and hence changing the annotations of some formulas in $\Gamma'$ in this way still yields a thin sequent. From item~\ref{i:more focus} of Proposition~\ref{p:progressive facts} it is clear that $\phi^f,\Sigma \subseteq Q(\allfocus{(\Gamma')})$ is implied by $\phi^u,\Sigma \subseteq Q(\Gamma')$. \item[\it Case for \ensuremath{\mathsf{U}}\xspace:] In this case $\Gamma = \phi^f,\Sigma$ and $\Pi$ is of the form \begin{center} \begin{prooftree} \AxiomC{$\phi^u,\Sigma$} \RightLabel{\ensuremath{\mathsf{U}}\xspace} \UnaryInfC{$\phi^f,\Sigma$} \end{prooftree} \end{center} We can let $\Pi'$ consist of just the sequent $\Gamma'$. This sequent is thin and the proof is trivially progressive. We need to show that $\phi^u,\Sigma \subseteq Q(\Gamma')$. By the definition of $Q$ we have that $\phi^u \in Q(\phi^f)$. Thus $\phi^u, \Sigma \subseteq Q(\phi^f,\Sigma)$. Moreover, we have by assumption that $\phi^f,\Sigma = \Gamma \subseteq Q(\Gamma')$. Putting this together, and using that $Q$ is a closure operator, we get $\phi^u,\Sigma \subseteq Q(\Gamma')$. \end{description} Since we have covered all the cases in the above case distinction, this proves the main part of the proposition. The additional statements about the focus rules and the rule $\ensuremath{\mathsf{\mathsf{R}_{\Box}}}\xspace$ can easily be verified from the definition of $\Pi'$ given above. \end{proofof}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Draft} We are interested in studying the following Gaussian channel \[Y_t = \begin{cases} U_t + W_t \quad 0\leq t< t^\star\\ \theta_0 + W_t \quad t\geq t^\star \end{cases}\] where $W_t \sim \mathcal{N}(\mu, \sigma^2)$. We assume that $\theta_0$ is a constant and that $U_t$ is sampled from an unknown stationary distribution $P$. We are interested in finding such a distribution, under the constraint that $\mathbb{E}[U_t^2]\leq \lambda^2$. Observe that the pdf of $Y_t$ for $t\geq t^\star$ is simply $\mathcal{N}(\theta_0+\mu, \sigma^2)$. Let $Y_1$ be sampled from the distribution of $\theta_0+W_t$, whilst $Y_0$ is sampled from $U_t+W_t$. We are interested in the following optimization problem \begin{equation} \begin{aligned} \min_{P} \quad &\mathrm{KL}(Y_1|Y_0)& \\ \textrm{s.t.} \quad & \mathbb{E}_{P}[U_t^2]\leq \lambda^2 \end{aligned} \end{equation} Observe that \begin{align*} \mathrm{KL}(Y_1|Y_0) &= \int_{\mathbb{R}}f_1(y)\ln(f_1(y)/f_0(y)) dy\\ &= -H(Y_1) -\int_{\mathbb{R}}f_1(y)\ln(f_0(y)) dy \end{align*} Suppose that $U_t$ is a constant term $u$. It means that $f_0(y) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-(y-u-\mu)^2/2\sigma^2}$. It follows that \[\mathrm{KL}(Y_1|Y_0) = \frac{(\theta_0-u)^2}{2\sigma^2},\] which, under the constraint $u^2\leq \lambda^2$, is minimized by $u^\star = \mathrm{sign}(\theta_0)\min(|\theta_0|,\lambda)$. \subsection{Full-information case} \label{sec:full_information_draft} \textbf{Definitions and introduction. } In the full-information case the eavesdropper is able to measure both the state and action $(x_t,a_t)$ at time $t$, from which it follows that minimizing information leakage is potentially much harder for the agent. It turns out that the agent is able to affect the quantity $I$ only for some MDPs, whilst for others, such as linear dynamical systems with additive changes, privacy depends solely on the properties of the change and the stochastic properties of the MDP. Without loss of generality consider the case $M_0\to M_1$. For a sequence of observations $\{X_1,A_1,X_2,\dots, X_t\}$ (measured by the eavesdropper) we can define the following log-likelihood ratio \begin{equation} Z_{i} = \ln \frac{P_1(X_i|X_{i-1}, A_{i-1})}{P_0(X_i|X_{i-1}, A_{i-1})},\quad i\in \{2,\dots, t\}. \end{equation} Due to ergodicity it follows that $n^{-1}\sum_{t=\nu}^{\nu+n} Z_t$ converges to $I_1(\pi_1)$ under $\mathbb{P}_\nu$, where $\mathbb{P}_\nu$ indicates the probability measure induced by $\pi_1$ in $M_1$, and the quantity $I_1(\pi_1)$ is \begin{equation}\label{eq:I_full_information_draft} I_{1}(\pi_1) = \int_{X\times A} \mathbb{E}_{P_1}[Z_2|X_{1}=x, A_1=a] \textrm{d}\pi_1(a|x)\textrm{d}\mu_1^{\pi_1}(x). \end{equation} Similarly, for the case $M_1\to M_0$ one can define $I_{0}(\pi_0)$. An important observation, which we will use in the following, is that $I_1$ in the full-information case is affected only by $\pi_1$, and not by $\pi_0$ (the converse also holds true). This is a consequence of the assumption that the adversary is able to observe the sequence of actions taken by the agent. Based on these quantities, one can define an indicator of hardness that takes into account both cases $M_0\to M_1$ and $M_1\to M_0$, and that can be thought of as the symmetrised KL-divergence between the two models. \begin{definition} Let $M=\{M_0, M_1\}$ and $\pi=\{\pi_0,\pi_1\}$. We indicate the overall privacy level for $(M,\pi)$ in the full-information case by \begin{equation} \mathcal{I}_F(M,\pi) = I_{1}(\pi_1) +I_{0}(\pi_0). \end{equation} \end{definition} In other words, the lower $\mathcal{I}_F(M,\pi)$, the more difficult the detection problem is for both the cases $M_0\to M_1$ and $M_1\to M_0$.
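As a brief aside, the constant-input computation at the beginning of this section admits a direct numerical check. The following is a minimal sketch (the parameter values are hypothetical, chosen only for illustration); it uses the closed-form KL-divergence between equal-variance Gaussians derived above.
\begin{verbatim}
import numpy as np

def kl_gauss_same_var(m1, m0, sigma):
    # KL( N(m1, s^2) || N(m0, s^2) ) = (m1 - m0)^2 / (2 s^2)
    return (m1 - m0) ** 2 / (2 * sigma ** 2)

theta0, mu, sigma, lam = 1.5, 0.0, 1.0, 1.0   # hypothetical values

# Constant input U_t = u: Y_1 ~ N(theta0 + mu, s^2), Y_0 ~ N(u + mu, s^2).
# Minimizing (theta0 - u)^2 / (2 s^2) subject to u^2 <= lam^2 clips u at lam:
u_star = np.clip(theta0, -lam, lam)
print(u_star, kl_gauss_same_var(theta0 + mu, u_star + mu, sigma))  # 1.0 0.125
\end{verbatim}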
Because of the properties of the KL-divergence, one can immediately observe that $\mathcal{I}_F(M,\pi)$ is non-negative, and that a sufficient condition for $\mathcal{I}_F(M,\pi)$ to be $0$ is that the KL-divergence between $\xi_1^\pi$ and $\xi_0^\pi$ is $0$. Furthermore, $\mathcal{I}_F(M,\pi)$ is convex in the state-action distributions $(\xi_0,\xi_1)$, where $\xi_i(x,a) = \pi_i(a|x)\mu_i^{\pi_i}(x)$. The fact that $I_i$ depends solely on $\pi_i$ allows us to find the maximum level of privacy by optimizing separately in $\pi_1$ and $\pi_0$, as shown in the following lemma. \begin{lemma}\label{lemma:lower_bound_privay} For simplicity, let $X,A$ be finite sets. Given $M$, the maximum level of attainable privacy in the full-information case is \begin{equation} \underline{\mathcal{I}}_F(M) \coloneqq \underbrace{\inf_{\pi_1} I_1(\pi_1)}_{i_1}+ \underbrace{\inf_{\pi_0} I_0(\pi_0)}_{i_0} \end{equation} where $i_1$ (similarly $i_0$) is the solution of the following convex program \begin{equation}\label{eq:min_c1_full_information_draft} \begin{aligned} \min_{\xi\in \Delta(X\times A)} \quad & \mathbb{E}_{(x,a) \sim \xi}\left[I\left(P_1(\cdot|x,a), P_0(\cdot|x,a)\right)\right]\\ \textrm{s.t.} \quad & \sum_{a} \xi_{*,a}^\top P_1(a) =\sum_{a} \xi_{*,a}^\top,\\ \end{aligned} \end{equation} with $P_1(a)$ being a $|X|\times |X|$ matrix containing the transition probabilities for a specific action $a$ in MDP $M_1$. \end{lemma} \begin{proof} The maximum level of privacy is attained when $\mathcal{I}_F(M,\pi)$ is minimal; therefore we study $\inf_\pi \mathcal{I}_F(M,\pi) =\inf_{\pi_1,\pi_0} \left(I_1(\pi_1) + I_0(\pi_0)\right)$. Since $I_1(\pi_1)$ is non-negative, and does not depend on $\pi_0$ (similar arguments hold for $I_0$), we have $\inf_{\pi_1,\pi_0} \left(I_1(\pi_1) + I_0(\pi_0)\right)= \inf_{\pi_1} I_1(\pi_1)+\inf_{\pi_0} I_0(\pi_0)$. Now, without loss of generality, consider just $i_1$. Observe that $I_1(\pi_1)$ in the discrete case can be rewritten as \begin{align*} I_1(\pi_1) &= \sum_{x,a} \mu_1^{\pi_1}(x) \pi_1(a|x) \mathbb{E}_{X_2\sim P_1}[Z_2|X_{1}=x, A_1=a],\\ &=\sum_{x,a} \mu_1^{\pi_1}(x) \pi_1(a|x)\sum_{y} P_1(y|x,a) \ln \frac{P_1(y|x,a)}{P_0(y|x,a)} \\ &=\sum_{x,a} \xi_1(x,a)\sum_{y} P_1(y|x,a) \ln \frac{P_1(y|x,a)}{P_0(y|x,a)} \end{align*} where $\xi_1(x,a)\coloneqq\pi_1(a|x)\mu_1^{\pi_1}(x)$. To conclude the proof, notice that $\xi_1$ must satisfy the constraint that $\mu_1^{\pi_1}$ is a stationary distribution. Therefore minimizing $I_1(\pi_1)$ is equivalent to minimizing the last equation over $\xi_1\in \Delta(X\times A)$, subject to $\sum_{a} \xi_{*,a}^\top P_1(a) =\sum_{a} \xi_{*,a}^\top$. \end{proof} \textbf{Privacy in exponential systems. }Unfortunately $\underline{\mathcal{I}}_F(M)$ depends on the type of MDP being controlled, and the constants $i_1,i_0$ cannot be computed without further assumptions on the model. In the case of exponential models where the change is not state/action dependent we have the following corollary. \begin{corollary}\label{corollary:constant_privacy} Consider Lemma~\ref{lemma:lower_bound_privay}. Suppose $P_1(\cdot|x,a)$ and $P_0(\cdot|x,a)$ are exponential-family distributions parametrized by two parameters $\theta_{1},\theta_{0}\in \mathbb{R}^k$, such that $P_{i}(y|x,a) = P_{\theta_i}(y|x,a)$ and \[P_{\theta_i}(y|x,a)= h(y)\exp\left(\langle \theta_i, T_{x,a}(y)\rangle - A(\theta_{i}) \right)\] where $T_{x,a}(y)$ is a sufficient statistic, $\theta_i$ is a natural parameter that does not depend on $(x,a)$, $A(\theta_i)$ is the log partition function and $h(y)$ is a non-negative function.
It follows that $i_1 = (A(\theta_{0}) - A(\theta_{1})) + \langle\theta_{1} - \theta_{0}, \nabla A(\theta_1)\rangle = D_A(\theta_0,\theta_1)$ (similarly $i_0 = D_A(\theta_1,\theta_0)$), where $D_A(z,w)$ denotes the Bregman divergence associated with $A$ for points $z,w\in \mathbb{R}^k$. Therefore the maximum level of attainable privacy is \begin{equation} \underline{\mathcal{I}}_F(M) = D_A(\theta_1,\theta_0)+D_A(\theta_0,\theta_1). \end{equation} \end{corollary} \begin{proof} Consider the constant $i_1$. The log-likelihood ratio is simply \begin{align*} \ln \frac{P_1(y |x,a)}{P_0(y |x,a)} = \langle \theta_1-\theta_0, T_{x,a}(y)\rangle + A(\theta_{0}) - A(\theta_{1}). \end{align*} Since $\mathbb{E}_{y\sim P_i(\cdot|x,a)}[T_{x,a}(y)] = \nabla_\theta A(\theta_i)$, due to linearity it follows that \begin{align*} \mathbb{E}_{y\sim P_1(\cdot|x,a)}\left[\ln \frac{P_1(y |x,a)}{P_0(y |x,a)}\right] &= \langle \theta_1-\theta_0, \nabla_{\theta} A(\theta_1)\rangle \\&\quad+ A(\theta_{0}) - A(\theta_{1}). \end{align*} The last expression does not depend on $(x,a)$, therefore we must conclude that \begin{align*} i_1 &= \langle \theta_1-\theta_0, \nabla_{\theta} A(\theta_1)\rangle + A(\theta_{0}) - A(\theta_{1}) =D_A(\theta_0,\theta_1)\\ i_0 &= \langle \theta_0-\theta_1, \nabla_{\theta} A(\theta_0)\rangle + A(\theta_{1}) - A(\theta_{0})=D_A(\theta_1,\theta_0) \end{align*} and thus $i_1+i_0 = \langle \theta_1-\theta_0, \nabla_{\theta} A(\theta_1) - \nabla_{\theta} A(\theta_0)\rangle$. \end{proof} The previous corollary sheds light on the interesting case where the change is not affected by the current state/action pairs. For this particular case privacy depends solely on the difference of the natural parameters, scaled by the sensitivity of the log-partition function to changes in the natural parameters. This result holds, for example, for linear dynamical systems affected by an additive change, whilst it does not hold for multiplicative changes. \textbf{Privacy-utility trade-off.} Now that we have performed an initial analysis of the problem, we are ready to tackle the utility-privacy trade-off problem. Computing a policy that only minimizes information leakage is interesting per se, but in most cases of limited use, since the agent is also interested in maximizing the collected reward. To ease notation, let $T_1(x,a,y) = P_1(y|x,a) \ln \frac{P_1(y|x,a)}{P_0(y|x,a)}$ and $T_0(x,a,y)=P_0(y|x,a) \ln \frac{P_0(y|x,a)}{P_1(y|x,a)}$. The problem of maximizing reward collection whilst minimizing information leakage (in the full-information case) for MDP $M_i$ can be cast as a convex optimization problem with a constraint on the privacy level, as follows \begin{equation} \max_{\pi_i: I_i(\pi_i)\leq \alpha_i } \lim_{N\to \infty} \mathbb{E}_{M_i}^{\pi_i}\left[\frac{1}{N}\sum_{t=1}^N r_i(x_t,a_t)\right], \end{equation} for some $\alpha_i \in \mathbb{R}^+$, $i=0,1$. Using Lemma~\ref{lemma:lower_bound_privay}, observe that the policy optimization problem is equivalent to the following \cite{puterman2014markov} \begin{equation} \begin{aligned} \max_{\xi\in \Delta(X\times A)} \quad & \sum_{x,a} \xi_{x,a}r_i(x,a)\\ \textrm{s.t.} \quad & \sum_{a} \xi_{*,a}^\top P_i(a) =\sum_{a} \xi_{*,a}^\top,\\ & \sum_{x,a} \xi_{x,a}\sum_{y} T_i(x,a,y) \leq \alpha_i \end{aligned} \end{equation} where the policy $\pi_i$ can be obtained by $\pi_i(a|x) = \dfrac{\xi_{x,a}}{\|\xi_{x,*}\|_1}$.
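This constrained program can be solved with off-the-shelf convex solvers. The following is a minimal sketch using \texttt{cvxpy} on a small randomly generated instance (the MDP, the reward, and the privacy budget \texttt{alpha} are all hypothetical); here \texttt{T[x,u]} stores $\sum_y T_i(x,u,y)$, i.e., the KL-divergence between $P_1(\cdot|x,u)$ and $P_0(\cdot|x,u)$.
\begin{verbatim}
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
nX, nU = 3, 2                      # small hypothetical MDP

def random_kernel():               # P[u, x, y] = P(y | x, u)
    P = rng.random((nU, nX, nX))
    return P / P.sum(axis=2, keepdims=True)

P1, P0 = random_kernel(), random_kernel()
r = rng.random((nX, nU))           # reward r_i(x, u)
alpha = 0.05                       # privacy budget

# T[x, u] = KL( P1(.|x,u) || P0(.|x,u) ) = sum_y T_i(x, u, y)
T = np.array([[np.sum(P1[u, x] * np.log(P1[u, x] / P0[u, x]))
               for u in range(nU)] for x in range(nX)])

xi = cp.Variable((nX, nU), nonneg=True)
flow = sum(xi[:, u] @ P1[u] for u in range(nU))   # occupancy after one step
prob = cp.Problem(
    cp.Maximize(cp.sum(cp.multiply(xi, r))),
    [cp.sum(xi) == 1,
     flow == cp.sum(xi, axis=1),                  # stationarity constraint
     cp.sum(cp.multiply(xi, T)) <= alpha])        # privacy constraint
prob.solve()
pi = xi.value / xi.value.sum(axis=1, keepdims=True)   # recover the policy
\end{verbatim}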
It follows that for a suitable Lagrange multiplier $\lambda \geq 0$ we can move the privacy constraint into the main objective, so that the problem is equivalent to solving an MDP $(X,A,P_i, r_i^\lambda)$ where the reward is now given by \[r_i^\lambda(x,a) = r_i(x,a)-\lambda\sum_{y}T_i(x,a,y)\] and the policy can be computed by solving the following convex program \begin{equation} \max_{\xi\in \Delta(X\times A)} \sum_{x,a} \xi_{x,a}r_i^\lambda(x,a) \quad \textrm{s.t.}\quad \sum_{a} \xi_{*,a}^\top P_i(a) =\sum_{a} \xi_{*,a}^\top, \end{equation} \subsubsection{Example: case of additive changes in linear dynamical systems} Here we investigate an additive change in an $n$-dimensional process $(x_t)_t$ described by the following state-space model \[x_{t+1} = Ax_t + Ba_t + F\theta \mathds{1}_{\{t\geq \nu\}} + w_t\] where $(a_t)_t$ is the control signal, and $(w_t)_t$ is a white noise sequence with $0$ mean and covariance $Q$. The parameter $\theta \in \mathbb{R}^m$ models the exogenous input, which is unobservable by the eavesdropper. One can immediately observe that the model is a particular case of Corollary \ref{corollary:constant_privacy}, therefore the level of privacy is constant and depends solely on $F,\theta$ and $Q$. To see this, notice that the conditional densities for the two cases are: \begin{align*} P_1: x'|(x,a) &\sim \mathcal{N}(Ax+Ba+F\theta, Q)\\ P_0 :x'|(x,a) &\sim \mathcal{N}(Ax+Ba, Q) \end{align*} from which it follows that \begin{align*} \int_{\mathbb{R}^n}P_1(x'|x,a)\ln \frac{P_1(x'|x,a)}{P_0(x'|x,a)} \textrm{d}x'&=\frac{1}{2}\theta^\top F^\top Q^{-1} F\theta, \end{align*} therefore the privacy level depends solely on the signal-to-noise ratio (SNR) $\theta^\top F^\top Q^{-1} F\theta$; privacy is maximal when the SNR vanishes, i.e., when $F\theta \in \ker(Q^{-1})$. \section*{Appendix} In this section we shall see that the function \[q(\alpha, \beta) = \sum_{x,a} \alpha_{x,a} \log \frac{\alpha_{x,a}/ \sum_{a'}\alpha_{x,a'}}{\beta_{x,a}/ \sum_{a'}\beta_{x,a'}},\quad \alpha,\beta \in \Delta(X\times U) \] is not necessarily convex. One can prove that $q$ is equal to \[q(\alpha, \beta) = \underbrace{\sum_{x,a} \alpha_{x,a} \log \frac{\alpha_{x,a}}{\beta_{x,a}}}_{f(\alpha,\beta)} - \underbrace{\sum_{x} \|\alpha_{x,*}\|_1 \log \frac{\|\alpha_{x,*}\|_1}{\|\beta_{x,*}\|_1}}_{g(\alpha,\beta)}. \] For a convex set $\mathcal{X}$ a function $h: \mathcal{X} \to \mathbb{R}$ is convex if the following condition holds $\forall\lambda \in [0,1]$ and $x,y\in \mathcal{X}$: \[D_h(x,y,\lambda) = \lambda h(x) + (1-\lambda) h(y) - h(\lambda x + (1-\lambda)y)\geq 0.\] Let $\mathcal{X}= \Delta(X\times U)\times \Delta(X\times U)$ and $x=(\alpha, \beta), y=(\alpha', \beta')$. Then, $D_q(x,y,\lambda)\geq 0$ is equivalent to $D_f(x,y,\lambda)- D_g(x,y,\lambda)\geq 0$. Although $f$ and $g$ are individually convex, their difference need not be, so it is not necessarily true that $D_f(x,y,\lambda)- D_g(x,y,\lambda)\geq 0$. For example, consider $|X|=|U|=2$. Then, the following values \[\alpha= \begin{bmatrix} .5704 & .0206\\ .1980 & .2110\\ \end{bmatrix}, \beta= \begin{bmatrix} .1312 & .1403\\ .3757 & .3529\\ \end{bmatrix} \] \[ \alpha'= \begin{bmatrix} .2891 & .0753\\ .5033 & .1322\\ \end{bmatrix}, \beta'= \begin{bmatrix} .1031 & .3591\\ .3672 & .1706\\ \end{bmatrix} \] yield $D_q(x,y,\lambda)\leq 0$ for all $\lambda \in [0,1]$ (Fig. \ref{fig:q_example_1}). \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{Figures/appendix/non_convexity.pdf} \caption{Example where $D_q$ is lower than $0$ for all $\lambda\in[0,1]$.} \label{fig:q_example_1} \end{figure} \\Another example is shown in Fig.
\ref{fig:q_example_2}, where $(\alpha,\beta)$ are \[\alpha= \begin{bmatrix} .2110 & .3764\\ .3246 & .0881\\ \end{bmatrix}, \beta= \begin{bmatrix} .4428 & .3469\\ .0297 & .1805\\ \end{bmatrix} \] and $(\alpha',\beta')$ \[ \alpha'= \begin{bmatrix} .1935 & .3282\\ .4342 & .0441\\ \end{bmatrix}, \beta'= \begin{bmatrix} .3474 & .2314\\ .0416 & .3796\\ \end{bmatrix}. \] \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{Figures/appendix/non_convexity2.pdf} \caption{Example where $D_q$ can be either positive or negative.} \label{fig:q_example_2} \end{figure} \iffalse \subsection{Full-information case} For simplicity, we assume the control laws linearly depend on $x_t$. To ensure absolute continuity of the control laws before and after the change we add a noise term to the control law. The control laws are $u_t = Kx_t+ \beta_t^0$ and $u_t=Kx_t+\beta_t^1$, where $\beta_t^i$ is i.i.d. white Gaussian noise distributed according to $\mathcal{N}(\alpha_i, R)$, where $R$ is a diagonal PSD matrix. The problem is to choose appropriate values of $\alpha_0,\alpha_1$. Assuming that $A+BK$ ensures stability of the system, then the system converges to a stationary distribution before and after the change. Specifically, the two distributions are $\mu_0 \sim \mathcal{N}(B\alpha_0, \Sigma)$ and $\mu_1\sim \mathcal{N}((I-A-BK)^{-1}(B\alpha_1+F\theta), \Sigma)$, where $\Sigma$ satisfies the Riccati equations \begin{align*} \Sigma &= (A+BK)\Sigma (A+BK)^\top + Q+R \end{align*} Therefore the value of the policy before and after the change is $V_0(\pi_0) = -\mathbb{E}_{x\sim\mu_0}[x^\top x]$ and $V_1(\pi_1) = -\mathbb{E}_{x\sim\mu_1}[x^\top x]$. For a normal random variable $y\sim \mathcal{N}(m,V)$ it holds that $\mathbb{E}[y^\top Qy] = m^\top Qm+\mathrm{Tr}(QV)$. Therefore we have \[V_0 = -\alpha_0^\top B^\top B\alpha_0-\mathrm{Tr}(\Sigma)\] and \[ V_1=-(B\alpha_1+F\theta)^\top L^{-\top}L^{-1}(B\alpha_1+F\theta)-\mathrm{Tr}(\Sigma) \] where $L=(I-A-BK)$. The information value on the other hand is \[I_F = \frac{\theta^\top F^\top Q^{-1} F\theta + (\alpha_1-\alpha_0)^\top R^{-1}(\alpha_1-\alpha_0)}{2} \] Let $c_\theta \coloneqq \theta^\top F^\top Q^{-1} F\theta$. Maximizing the utility privacy trade off is equivalent to minimizing the quantity $-\rho V_1 -(1-\rho) V_0 +\lambda I_F$, that is \begin{align*} &=\rho\alpha_0^\top B^\top B\alpha_0 +\mathrm{Tr}(\Sigma)\\ &\quad + (1-\rho)(B\alpha_1+F\theta)^\top L^{-\top}L^{-1}(B\alpha_1+F\theta)\\ &\quad +\lambda \frac{c_\theta+ (\alpha_1-\alpha_0)^\top R^{-1}(\alpha_1-\alpha_0)}{2} \end{align*} For $\rho=1$ one can immediately deduce that we get two identical control laws. Moreover, the minimizer is non-unique, given by $\alpha_0=\alpha_1$ with $\alpha_0\in \ker(B)$. Similarly, for $\rho=0$, one can immediately deduce that $\alpha_0$ needs to be equal to $\alpha_1$, and $\alpha_1$ is simply \[ \alpha_1=-(B^\top EB)^{-1}B^\top EF\theta \] where $E=L^{-\top}L^{-1}$. While for $\rho\in (0,1)$ one has... Now it's just matter of optimizing with respect to $\alpha_0, \alpha_1$. Taking the derivative with respect to $\alpha_0,\alpha_1$ yields \begin{small} \begin{align*} \nabla_{\alpha_{0}} V&=\rho 2B^\top B\alpha_0 -\lambda R^{-1}(\alpha_1-\alpha_0)\\ \nabla_{\alpha_{1}} V&=2(1-\rho)B^\top L^{-\top}L^{-1}(B\alpha_1+F\theta)-\lambda R^{-1}(\alpha_1-\alpha_0) \end{align*}\end{small} TBD... 
F $= -(\lambda R^{-1}+2\rho B^\top B)^{-1} \lambda R^{-1}\alpha_1$ with respect to $\alpha_1$ \[ \nabla_{\alpha_{1}} V=-2(1-\rho)B^\top L^{-\top}L^{-1}(B\alpha_1+F\theta)-\lambda R^{-1}(\alpha_1-\alpha_0)\] \[\nabla_{\alpha_{1}} V=0\Rightarrow \alpha_1 =(2(1-\rho)B^\top L^{-\top}L^{-1}B+\lambda R^{-1})^{-1}(\lambda R^{-1}\alpha_0-2(1-\rho)B^\top L^{-\top}L^{-1}F\theta) \] Let $E=L^{-\top}L^{-1}$. \[ \alpha_1 =(2(1-\rho)B^\top EB+\lambda R^{-1})^{-1}(\lambda R^{-1}\alpha_0-2(1-\rho)B^\top EF\theta) \] thus \[ -2(1-\rho)B^\top E(B\alpha_1+F\theta)-(\lambda R^{-1}+\lambda R^{-1}(\lambda R^{-1}+2\rho B^\top B)^{-1} \lambda R^{-1})\alpha_1 \] Observe that \[\lambda R^{-1}(\lambda R^{-1}+2\rho B^\top B)^{-1} \lambda R^{-1}= \lambda R^{-1} -(R/\lambda + (2\rho B^\top B)^{-1} )^{-1}\] thus \[ -2(1-\rho)B^\top E(B\alpha_1+F\theta)-(2\lambda R^{-1}-(R/\lambda + (2\rho B^\top B)^{-1} )^{-1})\alpha_1=0 \] thus \[ -2(1-\rho)B^\top EF\theta=(2(1-\rho)B^\top EB+2\lambda R^{-1}-(R/\lambda + (2\rho B^\top B)^{-1} )^{-1})\alpha_1 \] For $\rho=0$ we have the result $\alpha_0=\alpha_1$ and \[ \alpha_1 =\underbrace{(2B^\top EB+\lambda R^{-1})^{-1}}_{G^{-1}}(\lambda R^{-1}\alpha_1-2B^\top EF\theta) \] thus \[ \alpha_1 = -2(I-\lambda (GR)^{-1})^{-1} G^{-1}B^\top EF\theta \] \fi \iffalse \subsection{Limited-information case} In the case of limited-information one can exploit the stochastic noise present in the system and just use two deterministic control laws. For simplicity we'll focus on the case $u_t=Kx_t+\alpha_0$ for $t<\nu$ and $u_t=Kx_t+\alpha_1$ for $t\geq \nu$. For this case there are multiple solutions that minimize $I_L$, given by $\alpha_1-\alpha_0 = - (B^\top B)^{-1}B^\top F\theta$. To see this observe that the information value is \[ I_L = \frac{1}{2}\left(F\theta + B(\alpha_1-\alpha_0)\right)^\top Q^{-1} \left(F\theta + B(\alpha_1-\alpha_0)\right) \] We can proceed to analyse the utility-privacy trade-off. The two stationary distributions are the same as in the full-information case, with the only exception that $\Sigma$ satisfies $\Sigma=(A+BK)\Sigma (A+BK)^\top +Q$. Therefore the quantity $\rho V_1 +(1-\rho) V_0$ does not change compared to the full information case (apart from $\Sigma$). Taking the derivative with respect to $\alpha_0,\alpha_1$ yields \begin{small} \begin{align*} \nabla_{\alpha_{0}} V&=\rho 2B^\top B\alpha_0-\lambda B^\top Q^{-1} \left(F\theta + B(\alpha_1-\alpha_0)\right)\\ \nabla_{\alpha_{1}} V&=2(1-\rho)B^\top L^{-\top}L^{-1}(B\alpha_1+F\theta)+\lambda B^\top Q^{-1} \left(F\theta + B(\alpha_1-\alpha_0)\right) \end{align*}\end{small} One can easily conclude that for $\rho=1$ the solution is given by $\alpha_0=0$ and $\alpha_1= -(B^\top B)^{-1}B^\top F\theta$. For $\rho=0$ instead we have \[\alpha_1= -(B^\top E B)^{-1}B^\top E F\theta\] and \[\alpha_0 = (B^\top B)^{-1}B^\top(I -B(B^\top E B)^{-1}B^\top E )F\theta\] For a general $\rho$ all solutions for $\alpha_0$ will be of the type \[\alpha_0 = B(\rho, \lambda)^{-1}\lambda B^\top Q^{-1}(F\theta + B\alpha_1)\] and \[ \alpha_1 = (B^\top P(\lambda, \rho)^\top P(\lambda, \rho)B)^{-1} B^\top P(\lambda, \rho)^\top P(\lambda, \rho) F\theta \] Need to clean this section below.. \[\alpha_0 = (B^\top B)^{-1}B^\top (F\theta+B\alpha_1)\] Let $\tilde B = (B^\top B)^{-1}B^\top$. 
Then \[ \rho 2B^\top B\alpha_0 + 2(1-\rho)B^\top L^{-\top}L^{-1}(B\alpha_1+F\theta)=0 \] \[ B\alpha_0 = \lambda B^\top Q^{-1}(I-B(B^\top B)^{-1}B^\top) F\theta\] $(F\theta + B(\alpha_1-\alpha_0))=0 $ and $B\alpha_1+F\theta=0$ Therefore \[ B^\top( 2\rho I +\lambda Q^{-1} )B\alpha_0=\lambda B^\top Q^{-1}F\theta + \lambda B^\top Q^{-1} B\alpha_1 \] Let $B(\rho, \lambda) = B^\top( 2\rho I +\lambda Q^{-1} )B$. Then \[\alpha_0 = B(\rho, \lambda)^{-1}\lambda B^\top Q^{-1}(F\theta + B\alpha_1)\] whilst \begin{small} \[ B^\top(2(1-\rho)E+\lambda Q^{-1})F\theta -\lambda B^\top Q^{-1}B\alpha_0 =-\lambda B^\top Q^{-1}B\alpha_1-2(1-\rho)B^\top EB\alpha_1 \] \[ B^\top(2(1-\rho)E+\lambda Q^{-1})F\theta -\lambda B^\top Q^{-1}B\alpha_0 =-B^\top (\lambda Q^{-1}+2(1-\rho) E)B\alpha_1 \] Let $C(\rho, \lambda)=B^\top (\lambda Q^{-1}+2(1-\rho) E)$ \[ C(\rho, \lambda)F\theta -B(0,\lambda)\alpha_0 =-C(\rho, \lambda)B\alpha_1 \] then \[ (C(\rho, \lambda)-B(0,\lambda)B(\rho, \lambda)^{-1}\lambda B^\top Q^{-1}) F\theta =(B(0,\lambda)B(\rho, \lambda)^{-1}\lambda B^\top Q^{-1}-C(\rho, \lambda))B\alpha_1 \] Let $P(\lambda, \rho) = C(\rho, \lambda)-B(0,\lambda)B(\rho, \lambda)^{-1}\lambda B^\top Q^{-1}$ \[P(\lambda, \rho) F\theta = -P(\lambda, \rho)B\alpha_1 \] thus \[ \alpha_1 = (B^\top P(\lambda, \rho)^\top P(\lambda, \rho)B)^{-1} B^\top P(\lambda, \rho)^\top P(\lambda, \rho) F\theta \] it follows that no matter $P$ we have $\alpha_1= -(B^\top B)^{-1}BF\theta$ and $\alpha_0 = B(\rho, \lambda)^{-1}\lambda B^\top Q^{-1}(I-(B^\top B)^{-1}B)F\theta$ \end{small} \fi \section{Conclusions} \noindent In this work, we analyzed the problem of minimizing information leakage of abrupt changes in Markov Decision Processes. By computing policies that minimize the statistical difference between the system before and after the change, one can reduce the loss of privacy resulting from this leakage of information. Future work will focus on removing the assumption that the agent perfectly knows when the change occurs, and on how Reinforcement Learning can be applied to compute policies that minimize information leakage. \section{Full-information scenario} \label{sec:full_information} In the full-information case, the eavesdropper can measure both the state and action $(X_t,U_t)$ at time $t$. We first analyze the privacy level $I_F(\pi_0,\pi_1)$, and then investigate the utility-privacy trade-off in this case. \subsection{Privacy level} In the full-information case, $I_F(\pi_0,\pi_1)$ can be decomposed into the sum of the average KL-divergence of the two models and the KL-divergence of the two policies: \begin{theorem}\label{theorem:privacy_full_info} (i) If for all $x\in \supp(\mu_1^{\pi_1})$, $\pi_1(x) \ll \pi_0(x)$, then the sequence of observations $\{Y_t\}_{t\ge1}$ (made by the eavesdropper), with $Y_t=(X_t,U_t)$, satisfies Assumption 1, and we have: \begin{align}\label{eq:I_full_information} I_F(\pi_0,\pi_1) = &\ \mathbb{E}_{x\sim \mu_1^{\pi_1}, u\sim \pi_1(x)}\left[D(P_1(x,u),P_0(x,u))\right]\nonumber \\ & \ \ \ \ + \mathbb{E}_{x\sim \mu_1^{\pi_1}}\left[D(\pi_1(x), \pi_0(x))\right]. \end{align} (ii) If $\exists x\in \supp(\mu_1^{\pi_1}): \pi_1(x) \centernot{\ll} \pi_0(x)$ then $I_F=\infty$. \end{theorem} \begin{proof} If $\pi_1(x) \ll \pi_0(x)$ does not hold for some $x\in \supp(\mu_1^{\pi_1})$, then Assumption 1 does not hold, and (ii) follows by definition.
To prove (i), using the Markov property, one easily gets an expression of the conditional densities $f_0$ and $f_1$, and deduces the expression of $Z_i$ in (\ref{eq:llr}): for all $i\ge 2$, \[ Z_{i} = \ln \frac{f_1(Y_i|Y_{i-1})}{f_0(Y_i|Y_{i-1})}=\ln \frac{\pi_1(U_i|X_i)P_1(X_i|X_{i-1}, U_{i-1})}{\pi_0(U_i|X_i)P_0(X_i|X_{i-1}, U_{i-1})}. \] By ergodicity, it follows that $n^{-1}\sum_{t=\nu}^{\nu+n} Z_t$ converges to \[ \resizebox{\hsize}{!}{% $I_F(\pi_0,\pi_1)=\sum_{(x,u)\in {\cal X}\times {\cal U}} \mathbb{E}[Z_2|X_{1}=x, U_1=u] \pi_1(u|x)\mu_1^{\pi_1}(x)$}. \] Furthermore, $\mathbb{E}[Z_2|X_{1}=x, U_1=u]$ is \begin{align*} &\sum_{(y,a)\in {\cal X}\times {\cal U}} \ln \frac{\pi_1(a|y)P_1(y|x,u)}{\pi_0(a|y)P_0(y|x,u)}\pi_1(a|y)P_1(y|x,u),\\ =&\underbrace{\sum_y\ln \frac{P_1(y|x,u)}{P_0(y|x,u)}P_1(y|x,u)}_{=D(P_1(x,u),P_0(x,u))} \\ &\qquad + \sum_{y} P_1(y|x,u) \underbrace{\sum_a\ln \frac{\pi_1(a|y)}{\pi_0(a|y)}\pi_1(a|y)}_{=D(\pi_1(y), \pi_0(y))}. \end{align*} Observe now that $\sum_{x,u,y} D(\pi_1(y), \pi_0(y))P_1(y|x,u)\pi_1(u|x) \mu_1^{\pi_1}(x)$ is equal to \begin{align*} &\sum_{x,y} D(\pi_1(y), \pi_0(y))P_1^{\pi_1}(y|x)\mu_1^{\pi_1}(x)\\ =& \sum_{y} D(\pi_1(y), \pi_0(y)) \underbrace{\sum_{x}P_1^{\pi_1}(y|x)\mu_1^{\pi_1}(x)}_{=\mu_1^{\pi_1}(y)}. \end{align*} Then, the result follows from the fact that for a stationary distribution $\mu_1^{\pi_1}$ it holds that $\sum_{x}P_1^{\pi_1}(y|x)\mu_1^{\pi_1}(x)=\mu_1^{\pi_1}(y)$. \end{proof} Theorem \ref{theorem:privacy_full_info}, as well as the other theorems and propositions in this paper, can be established for general state-action spaces, and is therefore quite general. A first important consequence of Theorem \ref{theorem:privacy_full_info} is that when $\pi_0$ and $\pi_1$ are different deterministic policies, the absolute continuity condition is not met and the level of privacy is 0 (since the actions reveal the change point to the eavesdropper). Hence, there is a price to pay to get a non-zero level of privacy. Theorem \ref{theorem:privacy_full_info} also allows us to compute the policies maximizing the level of privacy (or equivalently minimizing $I_F(\pi_0,\pi_1)$): \begin{proposition}\label{proposition:lower_bound_privay} The best level of privacy in the full-information case is given by $\underline{I}_F = \inf_{\pi_0,\pi_1} I_F(\pi_0,\pi_1)$, which can be computed by solving the following linear program \begin{equation} \begin{aligned}\label{eq:min_c1_full_information} \min_{\xi \in\Delta({\cal X}\times {\cal U})} \quad &\sum_{x,u} \xi_{x,u} D(P_1(x,u),P_0(x,u))\\ \textrm{s.t.} \quad &\sum_{u} \xi_{*,u}^\top P_1(u) =\sum_{u} \xi_{*,u}^\top \end{aligned} \end{equation} where $P_1(u)$ is a $|{\cal X}|\times |{\cal X}|$ matrix containing the transition probabilities for action $u$ in MDP $M_1$. The policies achieving $\underline{I}_F$ are given by $\pi_1(u|x) = \xi_{x,u}/\|\xi_{x,*}\|_1$ and $\pi_0=\pi_1$. \end{proposition} \begin{proof} Observe that for any $\pi_1$ the infimum of $I_F(\pi_0,\pi_1)$ over $\pi_0$ is attained at $\pi_0=\pi_1$. Therefore the problem reduces to minimizing $\mathbb{E}_{u\sim \pi_1(x), x\sim \mu_1^{\pi_1}}\left[D(P_1(x,u),P_0(x,u))\right]$ over $\pi_1$. Let $\xi \in \Delta({\cal X}\times {\cal U})$ be a distribution over the states and the actions.
We can equivalently rewrite $\mathbb{E}_{u\sim \pi_1(x), x\sim \mu_1^{\pi_1}}\left[D(P_1(x,u),P_0(x,u))\right]$ through a change of variables $\xi_{x,u} = \pi_1(u|x)\mu_1^{\pi_1}(x)$, subject to the affine constraint $\sum_{u} \xi_{*,u}^\top P_1(u) =\sum_{u} \xi_{*,u}^\top$ that guarantees stationarity of the distribution. The result follows from this rewriting. \end{proof} \noindent Alternatively, it is possible to compute the best level of privacy $\underline{I}_F$ by solving an MDP $({\cal X},{\cal U},P_1,r)$ with reward function $r(x,u)=-D(P_1(x,u), P_0(x,u))$. \subsection{Privacy-utility trade-off} Next we investigate the utility-privacy trade-off by studying the solution of the optimization problem (\ref{eq:optpb}) for different values of $\lambda$. We denote the objective function by: \begin{equation} V_F(\rho,\lambda,\pi_0,\pi_1)= V(\rho,\pi_0,\pi_1) - \lambda I_F(\pi_0,\pi_1). \end{equation} Note that we may be interested in optimizing just $\pi_1$, the policy after the change, i.e., solve $\sup_{\pi_1} V_{M_1}(\pi_1) -\lambda I_F(\pi_0,\pi_1)$ for some fixed $\pi_0$ (where $\pi_0$ may be the optimal policy in $M_0$ for example). This problem corresponds to $\rho=1$ in (\ref{eq:optpb}), and hence is just a special case in our analysis. In the following theorem, we show that solving the problem is equivalent to minimizing a difference of convex functions under convex constraints, and is hence a concave minimization problem. \begin{theorem} \label{theorem:utility_privacy_problem_full_information} The solution to $\sup_{\pi_0,\pi_1} V_F(\rho,\lambda,\pi_0,\pi_1)$ is obtained by solving: \begin{equation}\label{eq:utility_privacy_problem_full_information} \begin{aligned} \min_{(\gamma,\xi^0,\xi^1)\in \Omega} \quad & \gamma- \lambda\sum_{x} \|\xi_{x,*}^1\|_1\ln\frac{\|\xi_{x,*}^1\|_1}{\|\xi_{x,*}^0\|_1} \\ \textrm{s.t.} \quad & \sum_u (\xi_{*,u}^i)^\top P_i(u) =\sum_{u} (\xi_{*,u}^i)^\top \quad i=0,1\\ & \sum_{x,u}\lambda f(x,u,\xi^0,\xi^1) - q(x,u,\rho,\xi^0, \xi^1)\leq \gamma \end{aligned} \end{equation} where $\Omega =\mathbb{R}\times \Delta({\cal X}\times {\cal U})\times \Delta({\cal X}\times {\cal U})$, and \begin{small} \begin{align*} f(x,u,\xi^0,\xi^1)&=\xi_{x,u}^1D(P_1(x,u),P_0(x,u))+\xi_{x,u}^1\ln \frac{\xi_{x,u}^1}{\xi_{x,u}^0},\\ q(x,u,\rho,\xi^0, \xi^1)&=\rho\xi_{x,u}^1 r_1(x,u) +(1-\rho)\xi_{x,u}^0 r_0(x,u), \end{align*} \end{small} and by choosing $\pi_i(u|x) = \xi_{x,u}^i/\|\xi_{x,*}^i\|_1$ for $i=0,1$. \end{theorem} \begin{proof} Observe that the problem is equivalent to $ \min_{\pi_0,\pi_1} -\rho V_{M_1}(\pi_1)-(1-\rho)V_{M_0}(\pi_0) + \lambda I_F(\pi_0,\pi_1) $. Through a change of variable $\xi_{x,a}^i=\pi_i(a|x)\mu_i^{\pi_i}(x)$, as in Proposition \ref{proposition:lower_bound_privay}, the problem becomes \begin{equation*} \resizebox{\hsize}{!}{% $ \begin{aligned} \min_{\xi^0,\xi^1} \quad & \sum_{x,a} -\rho \xi_{x,a}^1r_1(x,a) - (1-\rho) \xi_{x,a}^0r_0(x,a) +\lambda I_F(\pi_0,\pi_1)\\ \textrm{s.t.} \quad & \sum_{a} (\xi_{*,a}^i)^\top P_i(a) =\sum_{a} (\xi_{*,a}^i)^\top \quad i=0,1. \end{aligned}$ } \end{equation*} Note now that $\mathbb{E}_{x\sim \mu_1^{\pi_1}}\left[D(\pi_1(x), \pi_0(x))\right]$ in $I_F$ is equivalent to $\sum_{x,u}\xi_{x,u}^1 \left[\ln \frac{\xi_{x,u}^1}{\xi_{x,u}^0} - \ln\frac{\|\xi_{x,*}^1\|_1}{\|\xi_{x,*}^0\|_1} \right],$ that is the difference of two convex functions. Consequently, the original objective is a difference of convex functions. Define $f$ and $q$ as in the statement of the theorem.
The problem can be rewritten as a concave program with a convex constraint by introducing an additional parameter $\gamma\in \mathbb{R}$, with constraint $\sum_{x,u}\lambda f(x,u,\xi^0,\xi^1) - q(x,u,\rho,\xi^0, \xi^1)\leq \gamma$. \end{proof} Problem (\ref{eq:utility_privacy_problem_full_information}) can be solved using methods from DC programming (Difference of Convex functions). Note, however, that there are specific instances of (\ref{eq:utility_privacy_problem_full_information}) that could be convex. This happens when $D(\pi_1(x),\pi_0(x))$ is constant for all $x$, or if we impose the additional constraint $\pi_0=\pi_1$. The latter constraint appears if $\rho=1$, in which case the problem is equivalent to solving an MDP $({\cal X}, {\cal U},P_1, r_1^\lambda)$ with modified reward $r_1^\lambda(x,u)= r_1(x,u)-\lambda D(P_1(x,u), P_0(x,u))$. We have a few additional remarks to make regarding Theorem \ref{theorem:utility_privacy_problem_full_information}. The term $- \sum_{x} \|\xi_{x,*}^1\|_1\ln\frac{\|\xi_{x,*}^1\|_1}{\|\xi_{x,*}^0\|_1}$ can be interpreted as the negative KL-divergence between the two stationary distributions $-D(\mu_1^{\pi_1}, \mu_0^{\pi_0})$. This term causes the problem to be concave. Solutions of (\ref{eq:utility_privacy_problem_full_information}) favor distributions $\xi^0,\xi^1$ that are close to each other in the KL-divergence sense. As a consequence, in case $r_0=r_1$, the solutions of (\ref{eq:utility_privacy_problem_full_information}) will hardly depend on $\rho$. To see this, let $\delta_{x,u} \coloneqq \xi^{1}_{x,u}-\xi^{0}_{x,u}$ and notice that the following equality holds: $\rho V_{M_1}(\pi_1) + (1-\rho)V_{M_0}(\pi_0)= \sum_{x,u} r_0(x,u) (\rho \delta_{x,u} + \xi_{x,u}^0)$. Moreover, by Pinsker's inequality the total variation distance is upper bounded in terms of the KL-divergence. It follows that a small KL-divergence between $\xi^1$ and $\xi^0$ implies a small value of $\delta_{x,u}$ in the absolute sense for all $(x,u)$, and thus a small dependence on $\rho$. \section{Introduction}\label{sec:introduction} Being able to detect changes in stochastic systems has several applications: it enables industrial quality control, fault detection, segmentation of signals, monitoring in biomedicine, and more. The topic of change detection has been widely studied for nearly a century \cite{shewhart1931economic,veeravalli2014quickest,lai1998information,lai2010sequential,lorden1971procedures,moustakides1986optimal,page1954continuous,pollak1985optimal,shiryaev1963optimum,tartakovsky2014sequential}, and has recently sparked an interest in exploring the problem through the lens of differential privacy \cite{cummings2018differentially}. Differential privacy \cite{dwork2014algorithmic} has emerged as a technique for enabling data analysis while preventing information leakage. In the context of linear dynamical systems, privacy has been studied for problems such as private filtering \cite{le2013differentially} and private parameter estimation \cite{wang2017differential}. Similarly, private change-point detection algorithms have also been developed \cite{cummings2018differentially}; their goal is to detect distributional changes at an unknown change-point in a sequence of data while satisfying a certain level of privacy. In contrast to previous work on privacy, we study the scenario where an eavesdropper tries to detect a change in a controlled stochastic system $\mathcal{S}$.
Eavesdropping, which is a leakage of information, leads to a loss of privacy. This privacy loss, in turn, may reveal private information regarding the system. For example, it may expose the action that a person performed on the system, or, in buildings, may reveal when a person enters or leaves an apartment. Furthermore, eavesdropping is more likely to happen if the system has many sensors, which is usually the case in modern cyber-physical systems. The impact of such an attack could be significantly reduced, if not nullified, if encryption were used. Nevertheless, encryption may not always be the best option due to increased processing time. Therefore, it is of paramount importance to be able to minimize information leakage while at the same time satisfying some performance requirements of the system. Our analysis draws inspiration from \cite{alisic2020ensuring}, where the authors analyze the privacy properties of an autonomous linear system undergoing step changes. In contrast to their work, which considers offline change detection in linear systems, we study the online case for generic Markov processes. \textit{Contributions: } the objectives of this work are twofold: (1) to properly define the problem of privacy in online change-detection problems for Markov processes; (2) to provide ways to derive privacy bounds and to show how to compute policies that attain a higher privacy level. We conclude by providing: (A) a library to solve the optimization problems presented here and (B) an example for a linear dynamical system (more examples can be found in the library). \textit{Organization of the paper: } \cref{sec:preliminaries} introduces Quickest Change Detection, Markov Decision Processes and our proposed definition of privacy. In \cref{sec:body}, we introduce the model; in \cref{sec:full_information} we analyze the case where the eavesdropper can measure both state and action, and in \cref{sec:limited_information} we analyze the case where only the state is measured. We conclude with examples and numerical results in \cref{sec:simulations}. \section{Limited-information case}\label{sec:limited_information} We now analyze the limited-information case, where the eavesdropper has access to the states $\{X_t\}_{t\ge 1}$ only. \subsection{Privacy level} As in the full-information case (Theorem \ref{theorem:privacy_full_info}), we can characterize $I_L$. Unfortunately, it is not possible to obtain a separation of the KL-divergences between the models and the policies as in the full-information case. \begin{theorem}\label{theorem:privacy_limited_info} (i) If for all $x \in \supp(\mu_1^{\pi_1})$, $P_1^{\pi_1}(x)\ll P_0^{\pi_0}(x)$, then the sequence of observations $\{Y_t\}_{t\ge 1}$, with $Y_t=X_t$, satisfies Assumption 1, and we have: \begin{equation}\label{eq:I_limited_information} I_L(\pi_0,\pi_1) = \mathbb{E}_{x\sim \mu_1^{\pi_1}}\left[D\left(P_1^{\pi_1}(x), P_0^{\pi_0}(x)\right)\right]. \end{equation} Furthermore, $I_L(\pi_0,\pi_1) \leq I_F(\pi_0,\pi_1)$ and \begin{equation} \label{eq:lower_bound_IL} \mathbb{E}_{x\sim \mu_1^{\pi_1}}\left[\sup_{x'}d\left(P_1^{\pi_1}(x'|x), P_0^{\pi_0}(x'|x)\right)\right] \leq I_L(\pi_0,\pi_1). \end{equation} (ii) If $\exists x\in \supp(\mu_1^{\pi_1}): P_1^{\pi_1}(x)\centernot{\ll} P_0^{\pi_0}(x)$ then $I_L=\infty$. \end{theorem} \begin{proof} \ifdefined\shortpaper The proof of $I_L= \mathbb{E}_{x\sim \mu_1^{\pi_1}}\left[D\left(P_1^{\pi_1}(x), P_0^{\pi_0}(x)\right)\right]$ is omitted for simplicity.
The former inequality $I_L(\pi_0,\pi_1)\leq I_F(\pi_0,\pi_1)$ follows from an application of the log-sum inequality. The latter is a consequence of the fundamental data processing inequality \cite{garivier2019explore}, where one has $D\left(P_1^{\pi_1}(x), P_0^{\pi_0}(x)\right) \geq d\left(\mathbb{E}_{P_1^{\pi_1}(x)}[Z], \mathbb{E}_{P_0^{\pi_0}(x)}[Z]\right)$ for a measurable random variable $Z$. Choosing $Z$ as the event of transitioning from $x$ to $x'$ and optimizing over $x'$ concludes the proof. \else We prove (i) and the bounds on $I_L(\pi_0,\pi_1)$. For $Y_t= X_t$, (\ref{eq:llr}) becomes $Z_i = \ln \frac{P_1^{\pi_1}(X_i|X_{i-1})}{P_0^{\pi_0}(X_i|X_{i-1})}$ for $i\in \{2,\dots, t\}$. The average $n^{-1}\sum_{t=\nu}^{\nu+n}Z_t$ converges to \begin{align*} I_L(\pi_0,\pi_1)&=\sum_{x\in {\cal X}} \mu_1^{\pi_1}(x) \sum_{y\in {\cal X}}P_1^{\pi_1}(y|x) \ln \frac{P_1^{\pi_1}(y|x)}{P_0^{\pi_0}(y|x)}, \end{align*} where the inner term is just the KL-divergence between $P_1^{\pi_1}(\cdot|x)$ and $P_0^{\pi_0}(\cdot|x)$, thus $I_L(\pi_0,\pi_1) = \mathbb{E}_{x\sim \mu_1^{\pi_1}}\left[D\left(P_1^{\pi_1}(x), P_0^{\pi_0}(x)\right)\right].$ To prove the former inequality, apply the log-sum inequality to the inner term in $I_L$: \begin{align*}\sum_{y\in {\cal X}}P_1^{\pi_1}(y|x) \ln \frac{P_1^{\pi_1}(y|x)}{P_0^{\pi_0}(y|x)}&\leq\\ \sum_{y,u} P_1(y|x,u)&\pi_1(u|x)\ln \frac{P_1(y|x,u)\pi_1(u|x)}{P_0(y|x,u)\pi_0(u|x)}. \end{align*} Compare now the new expression with the one in Theorem \ref{theorem:privacy_full_info} to see that it is equal to $I_F(\pi_0,\pi_1)$. The last inequality is a consequence of the fundamental data processing inequality \cite{garivier2019explore}, where one has $D\left(P_1^{\pi_1}(x), P_0^{\pi_0}(x)\right) \geq d\left(\mathbb{E}_{P_1^{\pi_1}(x)}[Z], \mathbb{E}_{P_0^{\pi_0}(x)}[Z]\right)$ for a measurable random variable $Z$. Choosing $Z$ as the event of transitioning from $x$ to $x'$ and optimizing over $x'$ concludes the proof. \fi \end{proof} \noindent Note that since we assume that for all $(x,u)$, $P_1(x,u)\ll P_0(x,u)$, the condition to get a finite $I_L(\pi_0,\pi_1)$ holds if $\pi_1\ll \pi_0$ (but this is not a necessary condition). In addition, as expected, the limited-information case yields a higher privacy level than the full-information scenario. Further observe that the lower bound in (\ref{eq:lower_bound_IL}) is tighter than $\min_x D\left(P_1^{\pi_1}(x), P_0^{\pi_0}(x)\right)$, and can be used to upper bound the privacy level $I_L^{-1}$. However, computing policies that attain the best level of achievable privacy is more challenging than in the full-information case. The fact that it is not possible to separate the policies and the models in Theorem \ref{theorem:privacy_limited_info} as we did in Theorem \ref{theorem:privacy_full_info} implies that we cannot use the trick of optimizing only over $\pi_1$ to find the best level of privacy. As a consequence, it turns out that finding the best level of achievable privacy becomes a concave problem, in general.
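Before turning to the optimization of the policies, the characterizations above are easy to evaluate numerically. The following numpy sketch computes $I_L(\pi_0,\pi_1)$ from (\ref{eq:I_limited_information}) and $I_F(\pi_0,\pi_1)$ from Theorem \ref{theorem:privacy_full_info} on a small randomly generated instance (all numerical values are hypothetical), and checks the inequality $I_L \leq I_F$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
nX, nU = 3, 2                      # small hypothetical instance

def random_kernel():               # P[u, x, y] = P(y | x, u)
    P = rng.random((nU, nX, nX)); return P / P.sum(axis=2, keepdims=True)

def random_policy():
    p = rng.random((nX, nU)); return p / p.sum(axis=1, keepdims=True)

P1, P0 = random_kernel(), random_kernel()
pi1, pi0 = random_policy(), random_policy()

# Policy-induced chains: K_i(x, y) = sum_u pi_i(u|x) P_i(y|x, u)
K1 = np.einsum('xu,uxy->xy', pi1, P1)
K0 = np.einsum('xu,uxy->xy', pi0, P0)

# Stationary distribution of K1 (left eigenvector for eigenvalue 1)
w, V = np.linalg.eig(K1.T)
mu1 = np.real(V[:, np.argmin(np.abs(w - 1))]); mu1 /= mu1.sum()

I_L = sum(mu1[x] * np.sum(K1[x] * np.log(K1[x] / K0[x])) for x in range(nX))

# Full-information level (model term + policy term), for comparison
I_F = sum(mu1[x] * pi1[x, u] * (np.sum(P1[u, x] * np.log(P1[u, x] / P0[u, x]))
                                + np.log(pi1[x, u] / pi0[x, u]))
          for x in range(nX) for u in range(nU))
print(I_L, I_F, I_L <= I_F + 1e-12)   # log-sum inequality: True
\end{verbatim}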
\begin{proposition}\label{proposition:lower_bound_privay_limited_information} The best level of privacy in the limited-information case is given by $\underline{I}_L = \inf_{\pi_0,\pi_1} I_L(\pi_0,\pi_1)$, which can be computed by solving the following concave program \begin{equation}\label{eq:12} \begin{aligned} &\min_{(\gamma,\alpha,\xi^1)\in\Omega'} \quad \gamma -\sum_{x}\|\xi_{x,*}^1\|_1\ln \|\xi_{x,*}^1\|_1\\ &\textrm{s.t.} \quad \sum_{u} (\xi_{*,u}^1)^\top P_1(u) =\sum_{u} (\xi_{*,u}^1)^\top,\\ & \qquad \sum_{x,y} \left(\sum_{u} P_1(y|x,u)\xi_{x,u}^1\right) \ln \frac{\sum_{u'} P_1(y|x,u')\xi_{x,u'}^1}{\sum_{u'} P_0(y|x,u')\alpha_{x,u'}}\leq \gamma, \end{aligned} \end{equation} where $\Omega'=\mathbb{R}\times \Delta({\cal U})^{{\cal X}}\times \Delta({\cal X}\times {\cal U})$. \end{proposition} \begin{proof} Similarly to the full-information case we perform a change of variable so that the problem becomes a minimization over state-action distributions: \begin{align*} I_L(\pi_0,\pi_1)&=\sum_{x\in {\cal X}} \mu_1^{\pi_1}(x) \sum_{y\in {\cal X}}P_1^{\pi_1}(y|x) \ln \frac{P_1^{\pi_1}(y|x)}{P_0^{\pi_0}(y|x)}. \end{align*} Let $\xi_{x,u}^1 = \pi_1(u|x)\mu_{1}^{\pi_1}(x)$, and denote the policy $\pi_0$ by $\alpha \in \Delta({\cal U})^{{\cal X}}$. Thus $I_L(\pi_0,\pi_1)$ is equivalent to \begin{small} \begin{align*} &=\sum_{x,y} \left(\sum_{u} P_1(y|x,u)\xi_{x,u}^1\right) \ln \frac{\sum_{u'} P_1(y|x,u')\xi_{x,u'}^1}{\|\xi_{x,*}^1\|_1\sum_{u'} P_0(y|x,u')\alpha_{x,u'}}\\ &=\underbrace{-\sum_{x,y} \left(\sum_{u} P_1(y|x,u)\xi_{x,u}^1\right) \ln \|\xi_{x,*}^1\|_1}_{(a)}\\ &\qquad+ \underbrace{\sum_{x,y} \left(\sum_{u} P_1(y|x,u)\xi_{x,u}^1\right) \ln \frac{\sum_{u'} P_1(y|x,u')\xi_{x,u'}^1}{\sum_{u'} P_0(y|x,u')\alpha_{x,u'}}}_{(b)}. \end{align*}\end{small} Note that (a) is equal to $- \sum_{x}\|\xi_{x,*}^1\|_1\ln \|\xi_{x,*}^1\|_1$. One can conclude that the expression is a difference of convex functions. Consequently, it is possible to use the same approach as in Theorem \ref{theorem:utility_privacy_problem_full_information}\ifdefined\shortpaper{} to get the result.\else: we can rewrite the problem as (\ref{eq:12}) by introducing an additional variable $\gamma \in \mathbb{R}$.\fi \end{proof} \noindent As already mentioned, (\ref{eq:12}) may be hard to solve, but there are still some instances where it corresponds to a convex program. This is the case if $D(P_1^{\pi_1}(x), P_0^{\pi_0}(x))$ does not depend on $x$. Alternatively, consider the inequality $I_L(\pi_0, \pi_1) \leq \max_x D\left(P_1^{\pi_1}(x), P_0^{\pi_0}(x)\right)$. Minimizing the right-hand side over $(\pi_0,\pi_1)$ is a convex problem, and can be used as an approximation to $\inf_{\pi_0,\pi_1} I_L(\pi_0,\pi_1)$. As a final remark, note that contrary to Proposition \ref{proposition:lower_bound_privay}, it is not necessarily true that at the infimum of $I_L(\pi_0,\pi_1)$ the two policies coincide. \subsection{Privacy-utility trade-off} \noindent We end this section by providing a way to compute policies that maximize utility and privacy in the limited-information case. The concave program to be solved is, for the most part, similar to the one solved in Theorem \ref{theorem:utility_privacy_problem_full_information}, with the only difference being the privacy term that appears in the constraint. \begin{theorem} \label{theorem:utility_privacy_problem_limited_information} Let $\rho \in[0,1], \lambda\geq 0$ and define $ V_L(\rho,\lambda,\pi_0,\pi_1)= V(\rho,\pi_0,\pi_1) - \lambda I_L(\pi_0,\pi_1)$.
The solution to $\sup_{\pi_0,\pi_1} V_L(\rho,\lambda,\pi_0,\pi_1)$ is obtained by solving \begin{equation}\label{eq:utility_privacy_problem_limited_information} \resizebox{\hsize}{!}{% $ \begin{aligned} \min_{(\gamma,\xi^0,\xi^1)\in \Omega} \quad & \gamma- \lambda\sum_{x}\|\xi_{x,*}^1\|_1\ln \frac{\|\xi_{x,*}^1\|_1}{\|\xi_{x,*}^0\|_1}\\ \textrm{s.t.} \quad & \sum_{u} (\xi_{*,u}^i)^\top P_i(u) =\sum_{u} (\xi_{*,u}^i)^\top \quad i=0,1\\ & \sum_{x}\left(\lambda f(x,\xi_0,\xi_1) -\sum_{u}q(x,u,\rho,\xi^0,\xi^1) \right)\leq \gamma \end{aligned}$} \end{equation} where $\Omega $ and $q$ are as in Theorem \ref{theorem:utility_privacy_problem_full_information} and \[ \resizebox{\hsize}{!}{% $ f(x,\xi_0,\xi_1)=\sum_y \left(\sum_{u} P_1(y|x,u)\xi_{x,u}^1\right) \ln \frac{\sum_{u'} P_1(y|x,u')\xi_{x,u'}^1}{\sum_{u'} P_0(y|x,u')\xi_{x,u'}^0},$} \] and by choosing $\pi_i(u|x) = \xi_{x,u}^i/\|\xi_{x,*}^i\|_1$ for $i=0,1$. \end{theorem} \begin{proof} The proof is along the same lines as that of Theorem \ref{theorem:utility_privacy_problem_full_information} by making use of the decomposition of $I_L$ shown in Proposition \ref{proposition:lower_bound_privay_limited_information}. \end{proof} \iffalse Similarly one can define $I_0(\pi_0,\pi_1)$. Compare now eq. \ref{eq:I_limited_information} with eq. \ref{eq:I_full_information}: it is now evident that in the limited-information case the agent can affect $I_1$ through $\pi_0$ (and vice-versa for $I_0$). The eavesdropper not being able to observe the action $a_t$ allows the agent to take an advantage, that she can use to improve the privacy level. Despite being potentially easier for the agent to confuse an eavesdropper, the problem of actually choosing policies $\pi_0,\pi_1$ becomes harder since they will affect each other's privacy. Because of this reason, it is apparently not clear, and perhaps not possible, to define the level of privacy as in the previous case, that was the sum of $I_1$ and $I_0$ (thanks to the fact that $I_1$ was independent of $\pi_0$, and $I_0$ of $\pi_1$). This motivates us to introduce a different, but similar, hardness metric. Let us first define the minmax case scenario: \begin{definition} Given $M=\{M_0,M_1\}$ we define the minmax-case privacy level in the limited-information case to be \begin{equation} \underline{\mathcal{I}}_L(M) =\inf_{\pi_0,\pi_1} \max_{i=0,1} I_i(\pi_0,\pi_1) \end{equation} and the best-$\rho$ level of privacy, with $\rho\in[0,1]$, as \begin{equation} \underline{\mathcal{I}}_L(M,\rho) = \inf_{\pi_0,\pi_1} \rho I_0(\pi_0,\pi_1) + (1-\rho) I_1(\pi_0,\pi_1). \end{equation} \end{definition} Such a definition was not introduced in the previous section because of the independence of $I_1$ with $\pi_0$, and vice-versa. To be robust against the worst case, one should choose a pair of policies that attain $\underline{\mathcal{I}}_L(M)$ in the worst-case scenario. The definition of $\underline{\mathcal{I}}_L(M,\rho)$ follows from the fact that the following holds \[\underline{\mathcal{I}}_L(M) = \inf_{\pi_0,\pi_1}\max_{\rho \in \Delta( \{0,1\})} \sum_{i\in \{0,1\}}\rho_i I_i(\pi_0,\pi_1)\] and therefore, alternatively, one can consider a prior distribution on the possible model switches, which leads to the definition of $\underline{\mathcal{I}}_L(M,\rho)$ and the following definition of privacy level for $(M,\pi,\rho)$: \begin{definition} Let $\rho\in[0,1]$. 
We quantify the privacy level for $(M,\pi,\rho)$ in the limited-information case by \begin{equation} \mathcal{I}_L(M,\pi,\rho) = \rho I_1(\pi_0,\pi_1) + (1-\rho)I_0(\pi_0,\pi_1) \end{equation} \end{definition} Compared to the previous case, it is not immediately clear what are the properties of $\mathcal{I}_L(M,\pi,\rho$. We have the following lemma: \begin{lemma} Given $(M,\pi)$ the quantity $\mathcal{I}_L(M,\pi,\rho)$ is non-negative, and the maximum level of privacy is given for \[\] \begin{enumerate} \item is it convex? seems to be the sum of aconvex + concave function...this can be seen as minimizing a concave function on a convex set (DCCP) \item how to compute it? \end{enumerate} \end{lemma} \begin{proof} The fact that $\mathcal{P}(M,\pi)$ is non-negative follows directly from $I_1$ and $I_0$ being non-negative. To see this observe that $I_1(\pi_1,\pi_0)$ can be rewritten as \begin{align*} I_1(\pi_1,\pi_0) &= \sum_{x} \mu_1^{\pi_1}(x)\sum_y P_1^{\pi_1}(y|x) \ln \frac{P_1^{\pi_1}(y|x)}{P_0^{\pi_0}(y|x)} \end{align*} that is the KL-divergence between $P_1^{\pi_1}(\cdot|x)$ and $P_1^{\pi_1}(\cdot|x)$: \[ \sum_y P_1^{\pi_1}(y|x) \ln \frac{P_1^{\pi_1}(y|x)}{P_0^{\pi_0}(y|x)} = I\left(P_1^{\pi_1}(\cdot|x), P_0^{\pi_0}(\cdot|x)\right) \] from which follows that $I_1\geq 0$ since the KL-divergence is non negative and \[I_1(\pi_1,\pi_0) = \mathbb{E}_{x\sim \mu_1^{\pi_1}}\left[I\left(P_1^{\pi_1}(\cdot|x), P_0^{\pi_0}(\cdot|x)\right)\right].\] Perform now the change of variable $\xi_{x,a}^i = \pi_i(a|x)\mu_i^{\pi_i}(x)$ (therefore $\pi_i(a|x) = \xi_{x,a}^i/\|\xi_{x,*}^i\|_1$). We can rewrite \begin{align*} I_1(\pi) &= \sum_{x,a,y}\xi_{x,a}^1 P_1(y|x,a) \ln \frac{P_1^{\pi_1}(y|x)}{P_0^{\pi_0}(y|x)}\\ &= \sum_{x,a,y}\xi_{x,a}^1 P_1(y|x,a)\ln \frac{\|\xi^0_{x,*}\|_1\sum_{a'} P_1(y|x,a')\xi^1_{x,a'}}{\|\xi^1_{x,*}\|_1\sum_{a'} P_0(y|x,a')\xi^0_{x,a'}}\\ &=\sum_{x,a,y}\xi_{x,a}^1 P_1(y|x,a)\Bigg[\ln \frac{\sum_{a'} P_1(y|x,a')\xi^1_{x,a'}}{\sum_{a'} P_0(y|x,a')\xi^0_{x,a'}} \\ &\qquad +\ln \frac{\|\xi^0_{x,*}\|_1}{\|\xi^1_{x,*}\|_1} \Bigg] \end{align*} Observe that \[\sum_{x,a,y}\xi_{x,a}^1 P_1(y|x,a)\ln \frac{\|\xi^0_{x,*}\|_1}{\|\xi^1_{x,*}\|_1} = - \sum_{x}\mu_1^{\pi_1}(x)\ln \frac{\mu_1^{\pi_1}(x)}{\mu_0^{\pi_0}(x)} \] that is equal to $-D\left(\mu_1^{\pi_1}|\mu_0^{\pi_0} \right)$. This shows that it's the sum of a convex with a concave function. Let us try another approach. Consider \begin{align*} I_1(\pi_1,\pi_0) &= \sum_{x} \mu_1^{\pi_1}(x)\sum_y P_1^{\pi_1}(y|x) \ln \frac{P_1^{\pi_1}(y|x)}{P_0^{\pi_0}(y|x)} \end{align*} that is also equivalent to \[I_1(\pi_1,\pi_0) = \min_{\mu \in \Delta(X)} \sum_{x} \mu_x\sum_y P_1^{\pi_1}(y|x) \ln \frac{P_1^{\pi_1}(y|x)}{P_0^{\pi_0}(y|x)}\] subject to $\mu^\top P_1^{\pi_1} = \mu^\top$. 
We can then write \begin{align*} I_1(\pi) &= \min_{\mu \in \Delta(X)} \sum_{x} \mu_x\sum_y P_1^{\pi_1}(y|x) \ln \frac{P_1^{\pi_1}(y|x)}{P_0^{\pi_0}(y|x)}\\ &= \min_{\mu \in \Delta(X)}\sum_{x,a,y}\mu_x \pi_1(a|x) P_1(y|x,a)\ln \frac{\sum_{a'} P_1(y|x,a')\pi_1(a'|x)}{\sum_{a'} P_0(y|x,a')\pi_0(a'|x)}\\ &\leq \min_{\mu \in \Delta(X)}\sum_{x,a,y}\mu_x \pi_1(a|x) P_1(y|x,a)\ln \frac{ P_1(y|x,a)\pi_1(a|x)}{ P_0(y|x,a)\pi_0(a|x)}\\ &= \min_{\mu \in \Delta(X)}\sum_{x,a,y}\mu_x \pi_1(a|x) P_1(y|x,a)\ln \frac{ P_1(y|x,a)}{ P_0(y|x,a)}\\ &\qquad +\sum_{x,a}\mu_x \pi_1(a|x)\ln \frac{\pi_1(a|x)}{\pi_0(a|x)} \end{align*} \end{proof} \fi \iffalse In reality this lower bound can not, in most cases, be attained since we often wish to control these MDPs as to maximize, over time, some collected reward. This creates a \textit{utility-privacy} trade-off, that is formalized by the following optimization problem \begin{lemma}\label{lemma:utility_privacy_program} Given $M=\{M_1,M_2\}$, under the assumption that $P_1$ and $P_0$ are absolutely continuous with respect to each other, the lowest level of attainable privacy is \[\inf_\pi \mathcal{P}(M,\pi)\geq c_1 + c_0\] where $c_1$ (similarly $c_0$) is the solution of the following convex program \begin{equation} \begin{aligned} \min_{\xi\in [0,1]^{X\times A}} \quad & \sum_{x,a} \xi_{x,a}\left[\sum_{x'}P_1(x'|x,a) \ln \frac{P_1(x'|x,a)}{P_0(x'|x,a)} \right]\\ \textrm{s.t.} \quad & \sum_{a} \xi_{*,a}^\top P_1(a) =\sum_{a} \xi_{*,a}^\top,\\ &\|\xi\|_1=1, \\ \end{aligned} \end{equation} with $P_1(a)$ being a $|X|\times |X|$ matrix containing the transition probabilities for a specific a \end{lemma} \fi \iffalse WORK IN PROGRESS with $\Delta K = K_0-K_1$ Moreover, one can easily see that choosing $u_t=K_1x_t$ leads to $\xi_1$ being normally distributed, with distribution $\mathcal{N}(\mu, \Sigma)$ where $\mu$ and $\Sigma$ satisfy \begin{align*} \mu&= (I-A-BK_1)F\theta\\ \Sigma&=(A+BK_1)\Sigma(A+BK_1)^\top +Q \end{align*} Therefore the lowest level of achievable privacy is given by \begin{equation} \begin{aligned} \min_{K_1} \quad & \int_{\mathbb{R}^n}\\ \textrm{s.t.} \quad & \sum_{a} \xi_{*,a}^\top P_1(a) =\sum_{a} \xi_{*,a}^\top,\\ &\|\xi\|_1=1, \\ \end{aligned} \end{equation} \fi \section*{Acknowledgements} \noindent \small{This work was supported by the Swedish Foundation for Strategic Research through the CLAS project (grant RIT17-0046).} \bibliographystyle{IEEEtran} \section{Preliminaries and Problem Formulation}\label{sec:preliminaries} In this section we give a brief description of (1) Minimax Quickest Change Detection, (2) the framework of Markov Decision Processes and (3) the problem formulation. \subsection{Minimax Quickest Change Detection (QCD)} Consider an agent willing to detect an abrupt change in a stochastic system. To this aim, the agent has access to a non-i.i.d. sequence of observations $\{Y_t\}_{t\geq 1}$. The change occurs at the unknown time $\nu$, and we let $\mathbb{P}_\nu$ denote the probability measure under which the system dynamics are generated if the change point is $\nu$. We also denote by $\mathbb{P}_\infty$ the probability measure in absence of a change point. Under $\mathbb{P}_\nu$, the conditional density function of $Y_t$ given $(Y_1,\dots, Y_{t-1})$ is $f_0(\cdot|Y_1,\dots, Y_{t-1})$ for $t<\nu$, and $f_1(\cdot|Y_1,\dots, Y_{t-1})$ for $t\geq\nu$. 
The agent needs to detect the change point in an online manner: her decision takes the form of a stopping time $T$ with respect to the filtration $\{ {\cal F}_t\} _{t\ge 1}$ where ${\cal F}_t=\sigma(Y_1,\ldots Y_t)$. In the absence of any prior information about $\nu$, a common approach, due to Lorden and Pollak \cite{lorden1971procedures,pollak1985optimal}, is to aim at devising a stopping rule $T$ minimizing the worst case expected delay\footnote{The essential supremum of a real-valued r.v. $X$ is defined up to an event with zero probability: $\esssup X =\inf\{a\in \mathbb{R} : \mathbb{P}_\nu[X\ge a]=0\}$.} \begin{equation} \overline{\mathbb{E}}_1(T):= \sup_{\nu\geq 1}\esssup \mathbb{E}_\nu[(T-\nu)^+| \mathcal F_{\nu-1}], \end{equation} over all possible rules satisfying the constraint $\mathbb{E}_\infty[T]\geq \bar T$ on the expected duration to false alarm (we impose this constraint since we work in a non-Bayesian setting, where it is not possible to impose a constraint on the false alarm rate \cite{lai1998information}). Additionally, for non-i.i.d. observations it is common to make an assumption on the convergence of the average log-likelihood ratio (see \cite{lai1998information} or \cite{tartakovsky2014sequential}), which permits us to find a lower bound on the expected delay. \begin{assumption}\label{assump:lai_assumption} Define the log-likelihood ratio (LLR) as \begin{equation}\label{eq:llr} Z_i \coloneqq \ln \frac{f_1(Y_i|Y_1,\dots, Y_{i-1})}{f_0(Y_i|Y_1,\dots, Y_{i-1})}. \end{equation} Assume that $n^{-1}\sum_{t=\nu}^{\nu+n}Z_t$ converges a.s. under $\mathbb{P}_\nu$ to some constant $I$, and that, for all $\delta >0$, \begin{small} \begin{equation} \lim_{n\to\infty}\sup_{\nu\geq 1} \esssup \mathbb{P}_\nu \left(\max_{t\leq n} \sum_{i=\nu}^{t+\nu}Z_i\geq I(1+\delta)n\Big| \mathcal F_{\nu-1}\right) = 0. \end{equation} \end{small} \end{assumption} Assumption \ref{assump:lai_assumption} involves conditioning on $\{Y_t\}_{t=1}^{\nu-1}$, and depends on $I$, which can be interpreted as the average amount of information per observation sample for discriminating between the two models $f_1$ and $f_0$ (note that Assumption \ref{assump:lai_assumption} is quite general, and holds, for example, for stable linear dynamical systems). Under the above assumption, Lai \cite{lai1998information} established an asymptotic (as $\bar T\to\infty$) lower bound on the worst case expected delay of any stopping rule in $D({\bar T})$ (the set of rules satisfying $\mathbb{E}_\infty[T]\geq \bar T$): \begin{equation}\label{eq:lower_bound_change_detection_nonidd} \liminf_{\bar T\to\infty } \inf_{T \in D(\bar T)} \frac{\overline{\mathbb{E}}_1(T)}{\ln \bar T} \geq I^{-1}. \end{equation} This lower bound also provides an interpretation of $I$: $I$ plays the same role in change detection theory as the Cram\'er-Rao lower bound in estimation theory \cite{tartakovsky2014sequential}, hence it quantifies the detection difficulty. It is also proved that the lower bound is achieved by the CUSUM algorithm with stopping time $T=\inf \left\{t: \max_{1\leq k\leq t} \sum_{i=k}^t Z_i \geq c \right\}$, provided that $c$ is chosen so that $\mathbb{E}_\infty[T]= \bar T$. These results can also be extended to unknown models through the generalized likelihood ratio test \cite{lai2010sequential}. \subsection{Privacy and hardness of change detection inference} The asymptotic lower bound in (\ref{eq:lower_bound_change_detection_nonidd}) provides a notion of privacy in online change-detection problems.
\subsection{Privacy and hardness of change detection inference}
The asymptotic lower bound in \eqref{eq:lower_bound_change_detection_nonidd} provides a notion of privacy in online change-detection problems. In order to maintain privacy, we would like the statistical differences before and after the abrupt change to be as small as possible. From the perspective of differential privacy \cite{dwork2008differential}, we are interested in bounding the following quantity $\sup_{\tau_N}\ln \frac{\mathbb{P}_\nu(\tau_N)}{\mathbb{P}_\infty(\tau_N)}$, where $\tau_N= (Y_1,\dots, Y_N)$ is a trajectory of size $N$. \begin{remark}In contrast to the classical definition of differential privacy, we are not interested in minimizing the statistical difference between two trajectories $(\tau,\tau')$, but the difference in any trajectory before and after the abrupt change. \end{remark} However, uniformly bounding $\frac{\mathbb{P}_\nu(\tau_N)}{\mathbb{P}_\infty(\tau_N)}$ may be detrimental: it is sensitive to outliers and, in practice, results in unsatisfactory utility \cite{wang2016average}. Instead, a more natural approach is to bound $ \mathbb{E}_{\tau_N\sim {\cal D}}\left[\ln \frac{\mathbb{P}_\nu(\tau_N)}{\mathbb{P}_\infty(\tau_N)}\right]$ over some distribution ${\cal D}$. This quantity, also known as on-average KL-privacy \cite{wang2016average}, is a distribution-specific quantity, and allows us to study the problem for a specific distribution ${\cal D}$. In this work it is natural to choose ${\cal D} = \mathbb{P}_\nu$: if Assumption \ref{assump:lai_assumption} is satisfied, and we let $N\to\infty$, we obtain that the on-average KL-privacy coincides with the quantity $I$ in \cref{eq:lower_bound_change_detection_nonidd}. This result is not surprising: $I$ dictates how difficult the detection problem is. As $I$ decreases, the time needed to discriminate between the two models increases, and thus it becomes harder to detect whether an abrupt change has happened. Therefore, the quantity $I$ lends itself well to defining the privacy of an abrupt change. \begin{definition}[Privacy of an abrupt change] Consider the observations ${\cal Y}=\{Y_t\}_{t\geq 1}$ of a stochastic dynamical system, where the conditional density function of $Y_t$ given $(Y_1,\dots, Y_{t-1})$ is $f_0$ for $t<\nu$, and $f_1$ otherwise. If $\mathcal{Y}$ satisfies Assumption \ref{assump:lai_assumption}, we define the privacy level of $\mathcal{Y}$ as $\mathcal{I}(\mathcal{Y}) = I^{-1}$. \end{definition} In controlled systems, we can modify the control policy to manipulate $I$ and, in turn, create a trade-off between control performance and information leakage. We can select a policy that increases the privacy (i.e., minimizes $I$), but this may come at the expense of decreased utility. \subsection{Markov Decision Processes (MDPs)} We study stochastic systems that can be modeled using the MDP framework. An MDP $M$ is a controlled Markov chain, described by a tuple $M=({\cal X}, {\cal U}, P, r)$, where ${\cal X}$ and ${\cal U}$ are the state and action spaces, respectively. $P: {\cal X}\times {\cal U} \to \Delta({\cal X})$ denotes the conditional state transition probability distributions ($\Delta({\cal X})$ denotes the set of distributions over ${\cal X}$), i.e., $P(x'|x,u)$ is the probability of moving from state $x$ to state $x'$ given that action $u$ is selected. Finally, $r: {\cal X}\times {\cal U}\to \mathbb{R}$ is the reward function. A (randomized) control policy $\pi: {\cal X}\to \Delta({\cal U})$ determines the selected actions, and $\pi(u|x)$ denotes the probability of choosing $u$ in state $x$ under $\pi$.
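In the finite case, these objects have a direct array representation. The sketch below (illustrative code, with arbitrary dimensions and random kernels) stores $P$ as an array of shape $|{\cal X}|\times|{\cal U}|\times|{\cal X}|$ and $\pi$ as an array of shape $|{\cal X}|\times|{\cal U}|$, and rolls out the controlled Markov chain.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X, U = 3, 2                                 # |state space|, |action space|
P = rng.dirichlet(np.ones(X), size=(X, U))  # P[x, u] = P(. | x, u)
pi = rng.dirichlet(np.ones(U), size=X)      # pi[x] = pi(. | x)

def sample_trajectory(P, pi, x0=0, T=10):
    # Roll out x -> u ~ pi(.|x) -> x' ~ P(.|x,u) for T steps.
    x, traj = x0, []
    for _ in range(T):
        u = rng.choice(U, p=pi[x])
        x_next = rng.choice(X, p=P[x, u])
        traj.append((x, u))
        x = x_next
    return traj

print(sample_trajectory(P, pi))
\end{verbatim}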
For simplicity, we focus on ergodic MDPs, where ${\cal X}$ and ${\cal U}$ are finite, and where any policy $\pi$ generates a positive recurrent Markov chain with stationary distribution $\mu^\pi$. The value of a policy $\pi$ is defined as $V_M^\pi = \lim_{N\to \infty} \mathbb{E}_M^\pi\left[\frac{1}{N} \sum_{t=1}^{N} r(X_t, U_t)\right]$ (here $U_t$ is distributed as $\pi(\cdot|X_t)$). In ergodic MDPs, the objective is to find a policy $\pi$ with maximal value $ V_M^\star = \max_\pi V_M^\pi$. In the sequel, we denote by $D(P,Q)=\mathbb{E}_{\omega\sim P}[\ln\frac{P}{Q}(\omega)]$ the KL-divergence between two distributions $P$ and $Q$, and by $d(p,q) = p\ln\frac{p}{q} + (1-p)\ln\frac{1-p}{1-q}$ the KL-divergence between two Bernoulli distributions of parameters $p$ and $q$. For two probability measures $P$ and $Q$ we write $P\ll Q$ if $P$ is absolutely continuous with respect to $Q$, i.e., for every measurable set $A$, $Q(A) = 0\Rightarrow P(A)=0$. \subsection{Problem formulation}\label{sec:body} In this paper, we investigate the utility-privacy trade-off in controlled dynamical systems with one change point. One can also extend the analysis to multiple change points, but for simplicity of exposition, we restrict our attention to a single change point, always denoted by $\nu$. We formulate the problem for ergodic MDPs with finite state and action spaces (however, our results hold for other types of systems, e.g., classical linear systems). Consider two ergodic MDPs $M_0$ and $M_1$, and assume that the main agent faces $M_0$ before $\nu$ and $M_1$ after. Let $M_i = ({\cal X},{\cal U}, P_i, r_i), i=0,1$, and assume that $P_1$ is absolutely continuous w.r.t. $P_0$, which means that for every pair $(x,u)$, $P_1(x,u)\ll P_0(x,u)$.\\ \noindent We make the following assumptions for the two agents: \begin{itemize} \item The {\bf main agent} {\it knows} the time at which the MDP changes, and applies the control policy $\pi_0$ (resp. $\pi_1$) for $t<\nu$ (resp. $t\ge \nu$). We assume that just before the change occurs, the system state distribution is $\mu_0^{\pi_0}$, the stationary distribution of the Markov chain induced in $M_0$ by $\pi_0$ (resp. $\mu_1^{\pi_1}$ is the stationary distribution induced by $\pi_1$ on $M_1$). \item The {\bf eavesdropper} wishes to infer the change point $\nu$ by observing the system's dynamics. \end{itemize} \noindent Then, based on what the eavesdropper can observe, we consider two possible scenarios (depicted in \cref{fig:mdp_scheme}): \begin{enumerate} \item The \textit{full information} scenario, where the eavesdropper is able to observe $Y_t:=(X_t,U_t)$ at time $t$. \item The \textit{limited information} case, where the eavesdropper is able to observe only $Y_t:=X_t$ at time $t$. \end{enumerate} In the full information case, we denote by $I_F(\pi_0,\pi_1)$ the inverse of the privacy level. Similarly, $I_L(\pi_0,\pi_1)$ is the inverse of the privacy level in the limited information scenario. We will prove that these levels are well-defined (in the sense that Assumption 1 holds). The objective of the main agent is to design the control policies $\pi_0$ and $\pi_1$ realizing an appropriate trade-off between their rewards and privacy level. The utility of $(\pi_0,\pi_1)$ is a linear combination, parametrized by $\rho\in [0,1]$, of the ergodic rewards before and after the change point: $V(\rho,\pi_0,\pi_1):= (1-\rho) V_{M_0}^{\pi_0} +\rho V_{M_1}^{\pi_1}$.
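For finite ergodic MDPs these quantities are easy to evaluate numerically. Reusing the array convention of the previous sketch, the code below computes $\mu^\pi$ as the left Perron eigenvector of the policy-induced transition matrix and the ergodic value $V_M^\pi$; it also evaluates, for the full-information scenario, the stationary average of the one-step log-likelihood-ratio increments, which is a natural candidate expression for $I_F(\pi_0,\pi_1)$ in view of Assumption \ref{assump:lai_assumption} (this is an illustration, independent of the library used in the experiments below).
\begin{verbatim}
import numpy as np

def stationary_dist(P, pi):
    # Left Perron eigenvector of P_pi(y|x) = sum_u pi(u|x) P(y|x,u).
    P_pi = np.einsum('xu,xuy->xy', pi, P)
    evals, evecs = np.linalg.eig(P_pi.T)
    mu = np.real(evecs[:, np.argmax(np.real(evals))])
    return mu / mu.sum()

def ergodic_value(P, r, pi):
    # V = sum_x mu(x) sum_u pi(u|x) r(x,u), with r of shape (X, U).
    mu = stationary_dist(P, pi)
    return float(np.einsum('x,xu,xu->', mu, pi, r))

def full_info_rate(P0, P1, pi0, pi1):
    # Ergodic average of ln[pi1 P1 / (pi0 P0)]; requires P1 << P0 and
    # pi1 << pi0 componentwise (no division by zero).
    mu = stationary_dist(P1, pi1)
    kl_P = np.sum(P1 * np.log(P1 / P0), axis=2)  # D(P1(.|x,u)||P0(.|x,u))
    kl_pi = np.log(pi1 / pi0)
    return float(np.einsum('x,xu,xu->', mu, pi1, kl_P + kl_pi))
\end{verbatim}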
To assess the trade-off between utility and privacy of the main agent, we will analyze the solution of the following optimization problem for different values of $\lambda\ge 0$: \begin{equation}\label{eq:optpb} \sup_{\pi_0,\pi_1} (1-\rho) V_{M_0}^{\pi_0} +\rho V_{M_1}^{\pi_1} -\lambda I(\pi_0,\pi_1), \end{equation} where $I(\pi_0,\pi_1)= I_F(\pi_0,\pi_1)$ (resp. $=I_L(\pi_0,\pi_1)$) in the full (resp. limited) information scenario. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{Figures/schemes/mdp_scheme-cropped.pdf} \caption{Scenarios considered} \label{fig:mdp_scheme} \end{figure} \section{Examples and numerical results}\label{sec:simulations} \noindent We implemented a library\footnote{The code and instructions to run the simulations can be found at \url{https://github.com/rssalessio/PrivacyStochasticSystems}.} built on top of the DCCP library \cite{shen2016disciplined} to solve the concave problems presented above. Due to space constraints, we restrict our attention to specific linear systems and a simple MDP with three states. \subsection{Additive changes in linear systems} \noindent Consider the following linear model $x_{t+1} = Ax_t + Bu_t + F\theta \mathds{1}_{\{t\geq \nu\}} + w_t$, where $x_t\in \mathbb{R}^n$ is the state, $u_t\in \mathbb{R}^m$ is the control signal, and $w_t\in \mathbb{R}^n$ is a white noise sequence with zero mean and covariance $Q$. The parameter $\theta \in \mathbb{R}^k$ models the exogenous input, unknown to the eavesdropper.\\ \noindent\textbf{Full information.} In this case, the best level of privacy is obtained with $\pi_0=\pi_1$, and we can prove that it does not depend on $\pi_0$. This is a simple consequence of Proposition 1 and the fact that $ D(P_1(x,u), P_0(x,u))=(1/2)\theta^\top F^\top Q^{-1} F\theta=\underline{I}_F. $ In turn, the privacy level depends solely on the signal-to-noise ratio (SNR) $\frac{1}{2}\theta^\top F^\top Q^{-1} F\theta$: the privacy level increases as the minimum eigenvalue of $Q$ increases, since the SNR then decreases. This result agrees with the conclusions of \cite{alisic2020ensuring}. Next, to investigate the privacy-utility trade-off, we assume that the columns of $B$ are linearly independent and that there exists $K \in \mathbb{R}^{m\times n}$ such that $A+BK$ is a Schur matrix. We assume the reward function is $r(x,u) = -x^\top x$. To shorten the notation, we use the following definitions: $L\coloneqq I-A-BK$, $E\coloneqq (L^{-1})^{\top}L^{-1}$. \begin{proposition}\label{proposition:full_info_utility_privacy_linear_case} Suppose the control laws are of the type $u_t = Kx_t+ \beta_t^0$ for $t<\nu$, and $u_t=Kx_t+\beta_t^1$ otherwise, where $\beta_t^i$ is i.i.d. white Gaussian noise distributed according to $\mathcal{N}(\alpha_i, R)$, with $R \succ 0$. Then, the utility-privacy value function is \begin{align*} V_F(\rho, \lambda,\alpha_0,\alpha_1)&=-(1-\rho)\alpha_0^\top B^\top EB\alpha_0 -\mathrm{Tr}(\Sigma)\\ &\quad - \rho(B\alpha_1+F\theta)^\top E(B\alpha_1+F\theta)\\ &\quad -\lambda \frac{c_\theta+ (\alpha_1-\alpha_0)^\top R^{-1}(\alpha_1-\alpha_0)}{2}, \end{align*} where $c_\theta=\theta^\top F^\top Q^{-1} F\theta$ and $\Sigma$ satisfies the Lyapunov equation $\Sigma = (A+BK)\Sigma (A+BK)^\top + Q+BRB^\top$.
For $\lambda >0, \rho\in[0,1]$, the solutions to $\max_{\alpha_0,\alpha_1} V_F(\rho,\lambda,\alpha_0,\alpha_1)$ are given by \begin{align*} \alpha_0^\star(\rho,\lambda) &= -\rho\left(B^\top T(\rho,\lambda )B \right)^{-1}B^\top E F\theta,\\ \alpha_1^\star(\rho,\lambda)&=\left( 2\frac{1-\rho}{\lambda} RB^\top EB +I\right)\alpha_0^\star(\rho,\lambda), \end{align*} where $T(\rho,\lambda) = \left(I + \frac{2(1-\rho)\rho}{\lambda} EBRB^\top\right)E$. Moreover, the solution to $\max_{\alpha_1} V_F(1,\lambda,0,\alpha_1)$ is given by $\alpha_1^\star(\lambda) = -\left(RB^\top E B + \frac{\lambda}{2}I\right)^{-1}RB^\top E F\theta$. \end{proposition} \ifdefined\shortpaper The proof is omitted for brevity. \else \begin{proof} If $A+BK$ is Schur, then the system converges to a stationary distribution before and after the change. Specifically, the two distributions are $\mu_0 \sim \mathcal{N}(L^{-1}B\alpha_0, \Sigma)$ and $\mu_1\sim \mathcal{N}(L^{-1}(B\alpha_1+F\theta), \Sigma)$, where $\Sigma$ satisfies the Lyapunov equation $ \Sigma = (A+BK)\Sigma (A+BK)^\top + Q+BRB^\top $. Therefore the values of the policy before and after the change are $V_{M_0}(\alpha_0) = -\mathbb{E}_{x\sim\mu_0}[x^\top x]$ and $V_{M_1}(\alpha_1) = -\mathbb{E}_{x\sim\mu_1}[x^\top x]$. For a normal random variable $y\sim \mathcal{N}(m,W)$ it holds that $\mathbb{E}[y^\top y] = m^\top m+\mathrm{Tr}(W)$. Therefore we have $V_{M_0}(\alpha_0) = -\alpha_0^\top B^\top EB\alpha_0-\mathrm{Tr}(\Sigma)$ and $ V_{M_1}(\alpha_1)=-(B\alpha_1+F\theta)^\top E (B\alpha_1+F\theta)-\mathrm{Tr}(\Sigma)$. The information term, on the other hand, is \[I_F = \frac{\overbrace{\theta^\top F^\top Q^{-1} F\theta}^{c_\theta} + (\alpha_1-\alpha_0)^\top R^{-1}(\alpha_1-\alpha_0)}{2}. \] Then $-V_F(\rho, \lambda,\alpha_0,\alpha_1)$ is convex, and maximizing $V_F$ is equivalent to minimizing $-V_F$. Taking the gradient of $V_F$ with respect to $\alpha_0,\alpha_1$ yields \begin{align*} -\nabla_{\alpha_{0}} V_F&=2(1-\rho) B^\top EB\alpha_0 -\lambda R^{-1}(\alpha_1-\alpha_0),\\ -\nabla_{\alpha_{1}} V_F&=2\rho B^\top E(B\alpha_1+F\theta)+\lambda R^{-1}(\alpha_1-\alpha_0). \end{align*} Setting the gradients to zero, $\rho=0$ implies $\alpha_0=\alpha_1$ and $\alpha_0=0$, since $B^\top EB$ is full rank. Similarly, $\rho=1$ implies $\alpha_0=\alpha_1$ and $B^\top EB\alpha_1=-B^\top EF\theta$, hence $\alpha_1 = -(B^\top EB)^{-1}B^\top EF\theta$. For the general case, using the first equation, one can write the following expression for $\alpha_1$: \begin{align*} &\left( 2\frac{1-\rho}{\lambda} RB^\top EB +I\right)\alpha_0 =\alpha_1. \end{align*} Now consider $-(\nabla_{\alpha_{1}} V_F+\nabla_{\alpha_{0}} V_F)=0$ and plug in the expression found for $\alpha_1$: \begin{align*} &(1-\rho) B^\top EB\alpha_0 \\ &\qquad +\rho B^\top E\left(B\left( 2\frac{1-\rho}{\lambda} RB^\top EB +I\right)\alpha_0+F\theta\right)=0, \end{align*} which is equivalent to \begin{align*} (1-\rho) B^\top EB\alpha_0 +\rho B^\top EB&\left( 2\frac{1-\rho}{\lambda} RB^\top EB +I\right)\alpha_0\\ &\qquad =-\rho B^\top E F\theta. \end{align*} Then, we can conclude that the left-hand side is equal to \[B^\top \left[(1-\rho) E + \frac{2(1-\rho)\rho}{\lambda} EBRB^\top E + \rho E\right]B\alpha_0.\] Let now $T(\rho,\lambda) = \left(I + \frac{2(1-\rho)\rho}{\lambda} EBRB^\top\right)E $, hence \[ \alpha_0 = -\rho\left(B^\top T(\rho,\lambda )B \right)^{-1}B^\top E F\theta,\] from which the expression for $\alpha_1$ also follows.
Finally, notice that the solution to $\max_{\alpha_1} V_F(1,\lambda,0,\alpha_1)$ can be easily derived by using the equation $-\nabla_{\alpha_1} V_F(1,\lambda,0,\alpha_1)=2B^\top E (B\alpha_1+F\theta)+\lambda R^{-1}\alpha_1=0$. \end{proof}\fi \noindent Proposition \ref{proposition:full_info_utility_privacy_linear_case} uses stochastic policies to ensure absolute continuity between the pre- and post-change policies. Consequently, one can optimize over the mean of the policy while keeping the covariance term $R$ fixed. The larger the eigenvalues of $R$, the better the privacy (at the expense of performance).\\ \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Figures/linear_sys_limited_information/limited_information_plot2.pdf} \caption{Limited information case (example in \eqref{eq:linear_sys_example}): plot of $V$ and $I_L^{-1}$ as a function of the optimal solutions $\alpha_0^\star, \alpha_1^\star$. The larger the values (in blue), the better.} \label{fig:linear_sys_limited_case} \end{figure} \noindent \textbf{Limited information.} To find the best privacy level in the limited information case, we exploit the presence of process noise and consider only deterministic policies. Let the policies be described by $u_t = g_0(x_t) + \Delta g(x_t)$ for $t\geq\nu$ and $u_t=g_0(x_t)$ for $t<\nu$, where $g_0,\Delta g$ are deterministic mappings from $\mathbb{R}^n$ to $\mathbb{R}^m$, and let $g_1\coloneqq g_0+\Delta g$. \ifdefined\shortpaper \else Therefore it follows that the two densities are $P_1^{\pi_1}(x'|x)=\mathcal{N}(Ax+Bg_1(x)+F\theta, Q)$ and $P_0^{\pi_0}(x'|x)=\mathcal{N}(Ax+Bg_0(x), Q)$. Consequently, we obtain that the KL-divergence is \begin{equation*} \int_{\mathbb{R}^n}P_1^{\pi_1}(x'|x)\ln \frac{P_1^{\pi_1}(x'|x)}{P_0^{\pi_0}(x'|x)} \textrm{d}x'=\frac{1}{2}h(\theta,x)^\top Q^{-1} h(\theta, x), \end{equation*} where $h(\theta,x) = F\theta + B\Delta g(x).$ \fi One easily deduces that the infimum of $I_L$ is attained for $\Delta g =- (B^\top Q^{-1} B)^{-1}B^\top Q^{-1}F\theta$, which means that at the minimum, the difference in the control laws does not depend on $x$, and that the minimum is attained for a control law $g_1(x)$ that cancels out the effect of the additive change. Hence $\underline{I}_L = \frac{1}{2}\theta^\top F^\top G^\top Q^{-1}GF\theta$, where $G=I- B(B^\top Q^{-1}B)^{-1}B^\top Q^{-1}$. Next, we investigate the privacy-utility trade-off. As previously mentioned, we consider deterministic policies. Define $\tilde B_T(M) \coloneqq ( B^\top (M + T) B)^{-1}B^\top M$ for any symmetric invertible matrix $M\in \mathbb{R}^{n\times n}$ and symmetric positive semi-definite matrix $T$. Then, we have the following result. \begin{proposition}\label{proposition:limited_info_utility_privacy_linear_case} Consider the limited-information case, with deterministic control laws of the type $u_t=Kx_t+\alpha_0$ for $t<\nu$ and $u_t=Kx_t+\alpha_1$ for $t\geq \nu$. The utility-privacy value function $V_L$ is \begin{align*} V_L(&\rho,\lambda,\alpha_0,\alpha_1)=-(1-\rho)\alpha_0^\top B^\top EB\alpha_0 -\mathrm{Tr}(\Sigma)\\ & - \rho(B\alpha_1+F\theta)^\top E(B\alpha_1+F\theta)\\ & -\lambda \frac{1}{2}\left(F\theta + B(\alpha_1-\alpha_0)\right)^\top Q^{-1} \left(F\theta + B(\alpha_1-\alpha_0)\right), \end{align*} where $\Sigma$ satisfies $\Sigma = (A+BK)\Sigma (A+BK)^\top + Q$.
For $\lambda>0, \rho\in[0,1],$ the solutions $\alpha_0^\star(\rho,\lambda)$ and $\alpha_1^\star(\rho,\lambda)$ are \begin{align*} \alpha_1^\star(\rho,\lambda) &=-\tilde{B}_0\left( (1-\rho) EB\tilde{B}_{2(1-\rho) E/\lambda}(Q^{-1}) +\rho E\right)F \theta,\\ \alpha_0^\star(\rho,\lambda) &= \tilde{B}_{2(1-\rho) E/\lambda}(Q^{-1})(F\theta+ B\alpha_1^\star(\rho,\lambda)), \end{align*} which simplify to $\alpha_0=0,\alpha_1=-\tilde{B}_0(Q^{-1})F\theta$ if $\rho=0$, and to $\alpha_0=\tilde{B}_0(Q^{-1})(I -B\tilde{B}_0(E))F\theta,\alpha_1=-\tilde{B}_0(E)F\theta$ if $\rho=1$. Moreover, the solution to $\max_{\alpha_1} V_L(1,\lambda,0,\alpha_1)$ is given by $\alpha_1^\star(\lambda) = -\tilde{B}_0\left(\frac{2}{\lambda}E+Q^{-1}\right)F\theta$. \end{proposition} \ifdefined\shortpaper The proof is omitted for brevity. \else \begin{proof} The first part of the proof is identical to that of Proposition \ref{proposition:full_info_utility_privacy_linear_case}. One then finds that $I_L$ is \[I_L = \frac{\left(F\theta + B(\alpha_1-\alpha_0)\right)^\top Q^{-1} \left(F\theta + B(\alpha_1-\alpha_0)\right)}{2}. \] Taking the gradient of $V_L$ with respect to $\alpha_0,\alpha_1$ yields \begin{align*} -\nabla_{\alpha_{0}} V_L&=2(1-\rho) B^\top EB\alpha_0\\ &\qquad -\lambda B^\top Q^{-1} \left(F\theta + B(\alpha_1-\alpha_0)\right),\\ -\nabla_{\alpha_{1}} V_L&=2\rho B^\top E(B\alpha_1+F\theta)\\ &\qquad+\lambda B^\top Q^{-1} \left(F\theta + B(\alpha_1-\alpha_0)\right). \end{align*} Therefore, it is possible to conclude that for $\rho=0$ the solution is given by $\alpha_0=0$, since $B^\top E B$ is full rank, and $\alpha_1= -(B^\top Q^{-1} B)^{-1}B^\top Q^{-1} F\theta = -\tilde{B}_0(Q^{-1})F\theta$. Similarly, for $\rho=1$ one has \begin{equation*} \begin{cases} -\lambda B^\top Q^{-1} \left(F\theta + B(\alpha_1-\alpha_0)\right)=0,\\ 2B^\top E(B\alpha_1+F\theta)+\lambda B^\top Q^{-1} \left(F\theta + B(\alpha_1-\alpha_0)\right)=0. \end{cases} \end{equation*} Substituting the first equation into the second, one concludes that $2B^\top E(B\alpha_1+F\theta)=0$ and hence $\alpha_1=- \tilde{B}_0(E)F\theta$. From the first equation one obtains $ B^\top Q^{-1}B \alpha_0= B^\top Q^{-1} \left(F\theta + B\alpha_1\right)$ and consequently $\alpha_0 = \tilde{B}_0(Q^{-1})(F\theta+B\alpha_1)=\tilde{B}_0(Q^{-1})(I-B \tilde{B}_0(E))F\theta$. For the general case, using $\nabla_{\alpha_{0}} V_L=0$, one can write \[ B^\top\left( 2\frac{1-\rho}{\lambda} E + Q^{-1} \right)B\alpha_0= B^\top Q^{-1}(F\theta + B\alpha_1), \] which results in $\alpha_0 =\tilde{B}_{2(1-\rho) E /\lambda}(Q^{-1})(F\theta + B\alpha_1).$ Replacing this expression in $-(\nabla_{\alpha_{0}} V_L+\nabla_{\alpha_{1}} V_L)=0$ gives \[((1-\rho) B^\top E B\tilde{B}_{2(1-\rho) E/\lambda}(Q^{-1}) +\rho B^\top E)(B\alpha_1+F\theta)=0, \] that is, \begin{align*} B^\top E((1-\rho) &B\tilde{B}_{2(1-\rho) E/\lambda}(Q^{-1}) +\rho I)B\alpha_1=\\ &- B^\top E( (1-\rho) B\tilde{B}_{2(1-\rho) E/\lambda}(Q^{-1}) +\rho I)F\theta, \end{align*} and, consequently, \begin{align*} \alpha_1 &=-\tilde{B}_0\left( (1-\rho) EB\tilde{B}_{2(1-\rho) E/\lambda}(Q^{-1}) +\rho E\right)F \theta. \end{align*} Finally, the solution to $\max_{\alpha_1} V_L(1,\lambda,0,\alpha_1)$ can be easily derived by using the equation $-\nabla_{\alpha_1} V_L(1,\lambda,0,\alpha_1)=2B^\top E (B\alpha_1+F\theta)+\lambda B^\top Q^{-1}(B\alpha_1+F\theta)=0$. \end{proof}\fi
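\noindent As a sanity check, the stationarity conditions above can be verified numerically. The following Python sketch uses small illustrative matrices ($A$ is chosen Schur so that $K=0$ suffices, and the vector \texttt{ft} plays the role of $F\theta$); it evaluates both gradients of $V_L$ at the closed-form solutions of Proposition \ref{proposition:limited_info_utility_privacy_linear_case}, and both norms vanish up to machine precision.
\begin{verbatim}
import numpy as np

A = np.array([[0.2, 0.1, 0.0], [0.0, 0.1, 0.05], [0.05, 0.0, 0.3]])
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
ft = np.array([0.5, 0.7, -0.2])            # plays the role of F @ theta
n = A.shape[0]
Q = np.eye(n); Qinv = np.linalg.inv(Q)
rho, lam = 0.4, 1.5

L = np.eye(n) - A                          # K = 0 since A is already Schur
E = np.linalg.inv(L).T @ np.linalg.inv(L)

def Btilde(M, T):
    # \tilde{B}_T(M) = (B^T (M + T) B)^{-1} B^T M
    return np.linalg.solve(B.T @ (M + T) @ B, B.T @ M)

Bt_q = Btilde(Qinv, 2*(1 - rho)/lam * E)
M = (1 - rho) * E @ B @ Bt_q + rho * E
a1 = -Btilde(M, np.zeros((n, n))) @ ft     # alpha_1^*(rho, lambda)
a0 = Bt_q @ (ft + B @ a1)                  # alpha_0^*(rho, lambda)

g0 = 2*(1-rho)*B.T @ E @ B @ a0 - lam*B.T @ Qinv @ (ft + B @ (a1 - a0))
g1 = 2*rho*B.T @ E @ (B @ a1 + ft) + lam*B.T @ Qinv @ (ft + B @ (a1 - a0))
print(np.linalg.norm(g0), np.linalg.norm(g1))  # both ~ 0 (machine precision)
\end{verbatim}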
\noindent Observe that both the value and the information term contain $B\alpha_1+F\theta$. This suggests choosing $\alpha_0=0$ and minimizing the impact of $F\theta$ using an appropriate $\alpha_1$. This choice of $(\alpha_0,\alpha_1)$ corresponds to the case $\rho=0$. Numerical results confirm that this choice yields better performance. \medskip \noindent\textbf{Numerical example.} Here we consider the linear system in the limited-information scenario, with parameters \begin{equation}\label{eq:linear_sys_example} Q=I_2, A=\begin{bmatrix}0 & 1\\1 & 1\end{bmatrix}, B=\begin{bmatrix}0.01\\1\end{bmatrix}, F=\begin{bmatrix}0.5 \\ 0.7\end{bmatrix} \end{equation} and $\theta=1$. The control law is $u_t=Kx_t + \alpha_t$, where $\alpha_t=\alpha_0$ for $t<\nu$ and $\alpha_t=\alpha_1$ for $t\geq \nu$. The control gain, chosen as $K=\begin{bmatrix}-0.7 & -0.9\end{bmatrix}$, stabilizes the system. Fig. \ref{fig:linear_sys_limited_case} shows results for the privacy-utility trade-off as a function of $(\rho, \lambda)$. Notice that for $\rho=0$ we obtain the best result, as previously observed in the discussion following Proposition \ref{proposition:limited_info_utility_privacy_linear_case}. \ifdefined\shortpaper\else Fig. \ref{fig:linear_sys_limited_case_example} shows the average value of $\|x\|_2^2$, computed over $10^3$ simulations, with $95\%$ confidence intervals (shaded area). \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Figures/linear_sys_limited_information/limited_information_plot3.pdf} \caption{Limited information case (example in \eqref{eq:linear_sys_example}): plot of the average value of $\|x\|_2^2$ for different values of $\rho$ and $\lambda=1.5$. The shaded area depicts the $95\%$ confidence interval.} \label{fig:linear_sys_limited_case_example} \end{figure} \subsection{3-State MDP} We illustrate our results in an MDP with $3$ states and $2$ actions (the details can be found in the code). The densities are as follows \begin{small} \begin{alignat*}{2} &P_0(a_1) = \begin{bmatrix} .6 &.3 &.1\\ .05 & .85 & .1\\ .15 & .15 &.7 \end{bmatrix},\quad &&P_0(a_2) = \begin{bmatrix} .5 &.2 &.3\\ .5 & .3 & .2\\ .3 & .3 &.4 \end{bmatrix}\\ &P_1(a_1) = \begin{bmatrix} .3 &.3 &.4\\ .35 & .5 & .15\\ .8 & .05 &.15 \end{bmatrix},\quad &&P_1(a_2) = \begin{bmatrix} .3 &.55 &.15\\ .8 & .1 & .1\\ .5 & .3 &.2 \end{bmatrix}. \end{alignat*}\end{small} \begin{figure}[b] \centering \includegraphics[width=0.75\columnwidth]{Figures/mdp/mdp_3states_best_privacy_level.pdf} \caption{Scaling of the best achievable privacy between $P_0$ and $P_\theta$ as a function of $\theta$ for both the full and limited information cases. Notice that the $y$-scale is logarithmic.} \label{fig:sim_mdp3states_best_privacy} \end{figure} Using this example, we analyze how privacy changes according to how ``similar'' the two models are. For that purpose, we examine the best level of privacy between $P_0$ and $P_\theta$, where $P_\theta(y|x,a) = \theta P_0(y|x,a) + (1-\theta) P_1(y|x,a)$, and let $\theta$ range between $0$ and $1$. Results are shown in Fig. \ref{fig:sim_mdp3states_best_privacy}. As one may expect, for $\theta=1$ the privacy level tends to $\infty$, since the two models coincide. For $\theta=0$ we have the level of privacy between $P_0$ and $P_1$. \fi
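The scaling in Fig. \ref{fig:sim_mdp3states_best_privacy} can be probed with a crude computation. The sketch below is illustrative only (the actual privacy levels require solving the concave programs with our DCCP-based library): it evaluates the per-state-action divergences $D(P_\theta(\cdot|x,a)\,\|\,P_0(\cdot|x,a))$. Since any full-information rate is a weighted average of such terms, their minimum yields a simple upper bound on the achievable full-information privacy, and this bound already diverges as $\theta\to 1$.
\begin{verbatim}
import numpy as np

# Transition kernels of the 3-state, 2-action example, shape (U, X, X).
P0 = np.array([[[.6, .3, .1], [.05, .85, .1], [.15, .15, .7]],
               [[.5, .2, .3], [.5, .3, .2], [.3, .3, .4]]])
P1 = np.array([[[.3, .3, .4], [.35, .5, .15], [.8, .05, .15]],
               [[.3, .55, .15], [.8, .1, .1], [.5, .3, .2]]])

def kl_rows(P, Q):
    # D(P(.|x,a) || Q(.|x,a)) for every state-action pair
    return np.sum(P * np.log(P / Q), axis=-1)

for theta in [0.0, 0.5, 0.9, 0.99]:
    P_theta = theta * P0 + (1 - theta) * P1
    min_kl = kl_rows(P_theta, P0).min()
    print(f"theta={theta:.2f}  min KL={min_kl:.4f}  "
          f"privacy bound={1/min_kl:.1f}")
\end{verbatim}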
\section*{Introduction} \label{sec:Introduction} Ultrafast lasers were recognized very early as a powerful tool to process transparent materials. This is because the transfer of laser energy to the solid starts from nonlinear absorption by ionization. Nonlinear absorption is efficient only around the focal point of the beam. This way, absorption can be completely negligible on the surface of a dielectric while a large amount of the input pulse energy is absorbed within the bulk of the material. After a number of different physical processes \cite{Mao2004, Gattass2008}, this eventually yields a modification of the bulk material that can range from a simple index change to the formation of an empty cavity. Longer laser pulses, such as nanosecond pulses, can also be used to process transparent materials, but in this case, absorption of the laser pulse initiates from defects. This is random and poorly reproducible from shot to shot. In contrast, ultrafast pulses, {\it i.e.} sub $\sim$10~ps pulses, modify materials with high reproducibility and with a much reduced heat-affected zone. In the last decade, structuring with high aspect ratio has emerged as a new field in ultrafast laser processing of transparent materials. In early works, most of the attention was focused on creating extremely localized damage in three dimensions (index change, nanogratings, nano-voids) with high numerical aperture illumination. Nonlinear propagation effects, in the general sense ``filamentation'', were regarded as highly detrimental. However, it progressively appeared that filamentation of Gaussian and shaped beams could create extended modifications along the propagation direction. This was possible even in the single-shot illumination regime. Controlling these phenomena is a great challenge, but it allows for creating modifications that are impossible to generate with conventional point-by-point processing. These propagation regimes create high aspect ratio modifications: their length is much greater than their typical diameter. High aspect ratio processing is required for a number of applications such as microfluidics, photonics or material separation. As we will describe in section \ref{sec:applications}, an important field of application is glass separation with a non-ablative technique. This technique is based on generating a high aspect ratio structure in the bulk of a transparent brittle material, organized on a plane, which allows the material to cleave along this weakened plane. The process is high-speed and is obviously very well suited to mass fabrication of cover glass for touchscreens and consumer electronics. However, structuring with high aspect ratio is challenging because the propagation of intense ultrafast pulses inside a transparent solid is inherently nonlinear. This chapter is intended as a guide to this field of research and technology. We will first briefly review the phenomena occurring during high-intensity pulse propagation in dielectrics as well as the means to characterize plasma formation inside their bulk. We will point out the specificity of bulk propagation and associated numerical simulations. In section \ref{sec:multishot}, a review of high aspect ratio processing in the multiple-shot regime will point out the difficulties faced in bulk processing. In contrast, single-shot void formation seems simpler and faster. In the related section \ref{sec:void}, we will review the main results, which open new routes for laser processing of transparent materials.
Increasing the length of the empty cavities can be done with filamentation of Gaussian beams, which we will review in section \ref{sec:filamentation}. But due to the nonlinearities, this process is difficult to predict or scale. In section \ref{sec:Bessel}, we will show that a specific class of beam shapes allows for seeding filamentation such that the propagation is free from distortion in space and time and is therefore much more predictable. This beam shape is the zeroth-order Bessel beam. It induces high aspect ratio index modifications, nanochannels or nano-voids in the bulk of a number of transparent materials. In section \ref{sec:applications}, we will review the applications of both filamentation of Gaussian-like beams and of Bessel beams. These are numerous, and we will describe the technologies as a function of the processed material (glass, sapphire, diamond, or silicon). \newpage \section{Ultrafast phenomena and nonlinear propagation in transparent materials} \label{sec:phenomena} In this section, we will briefly review the main physical phenomena occurring during high-intensity pulse propagation in the bulk of transparent materials. Our objective is to point out that there are a number of differences in the experimental techniques and numerical modelling in comparison with surface ablation, which has been described in detail in the preceding chapters. In brief, the physical sequence of material modification by an ultrafast laser pulse can be split into two main steps: first, nonlinear propagation generates a plasma of free electrons and holes via nonlinear ionization. This is the energy deposition step, which is terminated at the end of the laser pulse, typically in the sub-picosecond range. Then, the second step is the relaxation. It involves a sequence of different physical phenomena extending up to the microsecond scale. These phenomena are identical to the ones occurring at the surface of dielectrics. We will therefore focus on the first step, where the nonlinear propagation plays a determinant role in the modifications of the material. For the numerical modelling, propagation in the bulk of transparent materials imposes a number of additional constraints in comparison with surface ablation. First, the propagation distances considered in bulk processing of materials are orders of magnitude longer than those simulated for surface ablation. Second, while surface ablation modeling can sometimes be reduced to 0 or 1 dimension, bulk processing requires models in at least two dimensions. We will emphasize in the following how the physics of optical breakdown over long distances can be simulated with reasonably powerful computers. As for experimental characterizations, specific techniques have been developed to characterize plasma formation within the bulk of the materials. We will describe them in the second part of this section. \subsection{Linear and Kerr effects} Propagation in transparent materials is determined by several linear and nonlinear effects. As for the linear contributions, the pulse shape is affected by diffraction, dispersion and aberrations such as chromatism or spherical aberration\index{spherical aberration}. Diffraction and aberrations are important effects which explain a number of experimental results \cite{Song2008}. We note that dispersion in the material can generally be safely neglected, because material thicknesses on the order of a few millimeters only weakly affect ultrashort pulses of duration longer than 100~fs.
The Kerr effect\index{Kerr effect} mainly contributes to the transient index of refraction, as well as to the self-steepening of the pulse. Cross-phase modulation effects usually have a negligible impact on the pulse intensity. For sufficiently long pulse durations, the Raman contribution to the Kerr effect can be included. Kerr self-focusing\index{self-focusing} shifts the focusing point backwards as the peak intensity increases. \subsection{Nonlinear absorption, plasma absorption and plasma defocusing } The interaction between a laser pulse and a photo-excited solid dielectric is threefold: i) nonlinear absorption\index{nonlinear absorption} occurs because of the high-intensity field, and the excited electrons interact ii) with the field and iii) between themselves in the field via collisions\index{collisions}. From first principles, the description of nonlinear absorption and of the subsequent electron dynamics should be based on a quantum model of the band system in the periodic high electric field \cite{Otobe2008}. Despite the number of advances in theoretical chemistry, it is still a challenge to accurately describe the ground state of solids such as sapphire, and it is obviously even more difficult for amorphous solids like fused silica. Transition rates between excited states are mostly out of experimental reach, yet a number of highly excited states should be taken into account \cite{Barilleau2016}. Time-Domain Density Functional Theory (TD-DFT) can model the high-field ionization, but collisional absorption is still difficult to describe in this framework \cite{Yabana2012}. For the numerical simulation of pulses of several hundred femtoseconds propagating over a few millimeters, these approaches are computationally too demanding. To first approximation, the excited electrons in the upper bands can be considered as free electrons with effective masses\index{effective mass} (parabolic band). This is why, in the rest of the chapter, the excited electrons will be referred to as ``free electrons'', and nonlinear transitions to the excited states will be referred to as nonlinear ionization\index{ionization}. Thus, the ionization phenomena in dielectrics are described in a similar way as the ionization of atoms. The modeling of pulse propagation will therefore follow the work that was developed for filamentation in air. The description of Keldysh\index{Keldysh model} for ionization is computationally efficient. In this framework, multiphoton and tunnel ionization are asymptotic behaviors \cite{Sudrie2002, Couairon2005}. In basic models, the distribution of the free electrons\index{free electrons} in the excited levels is neglected and the free electrons are described only via the number density $\rho$\index{plasma density}. More refined models insert additional spectroscopic levels to describe the energy distribution of the free electrons. This is the multiple rate equations model\index{multiple rate equations} (MRE) \cite{Rethfeld2006}. The interaction of the laser pulse with the plasma is twofold: i) absorption by the free-electron gas excites the free electrons to upper levels, and when the energy of the free electrons is sufficiently large, impact ionization occurs and the free-electron density increases; ii) the presence of plasma reduces the effective index of refraction, yielding a defocusing effect on the laser pulse. The Drude model\index{Drude model} can be used to efficiently describe this interaction. The plasma conductivity $\sigma (\omega)$ is derived from the plasma number density $\rho$.
The plasma can be described as a contribution to the complex permittivity. A frequency-dependent description is valid as long as the number density does not vary too fast in time. Meanwhile, the evolution of the free-electron distribution and impact ionization effects can be described either within the MRE model or by simply considering that every $K$ photons absorbed by the plasma will contribute to the ionization of an additional free electron (see Equation \ref{eq:plasmaeq}). In detail, the plasma susceptibility\index{plasma susceptibility} is: \begin{equation} \chi(\omega) = -\frac{\rho e^2}{\varepsilon_0 m \left(\omega^2 + i\omega/\tau_c\right)} \end{equation} \noindent with $\varepsilon _0$ the vacuum permittivity, $e$ the unsigned electron charge, $m$ the effective electron mass and $\tau_c$ the collision time. If the plasma-free permittivity of the medium is $\varepsilon_{SiO_2}$, the combined permittivity\index{plasma permittivity} of the medium and plasma reads \cite{Mao2004}: \begin{equation} \label{eq:epsilon_of_omega} \varepsilon(\omega) = \varepsilon_{SiO_2}(\omega)- \omega_p^2 \bigg[ \frac{\tau_c^2}{1+\omega^2\tau_c^2}-i \frac{\tau_c}{\omega (1+\omega^2\tau_c^2)}\bigg] =n^2(\omega) \end{equation} \noindent $\omega_p=\sqrt{\frac{e^2 \rho}{\varepsilon_0 m}}$ is the plasma frequency and $n$ the complex index of refraction of the medium with the plasma. We see in Equation \ref{eq:epsilon_of_omega} that the plasma contribution reduces the permittivity, and therefore reduces the index of refraction. This is why a plasma defocuses an incoming laser pulse. The collision time\index{collision time} $\tau_c$ is a parameter that should be derived from the free-electron plasma density and distribution in momentum space (or temperature for a Maxwellian distribution). In practice, the collision time is usually considered fixed, with values typically ranging from 0.5~fs (dense plasmas \cite{Velpula2016}) to $\sim$10~fs (low density plasmas \cite{Sudrie2002}). Although imperfect, this model still describes with reasonable accuracy the absorption of the plasma and the change of the local permittivity due to the presence of plasma \cite{Gamaly2011}. Finally, other mechanisms, such as recombination or Auger decay, do exist and can also be included in models depending on computational capability.
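As an illustration, the index change described by Eq. \eqref{eq:epsilon_of_omega} is straightforward to evaluate. The short Python sketch below uses illustrative values (800~nm wavelength, $n_{SiO_2}=1.45$, $\tau_c=1$~fs, effective mass equal to the free-electron mass); it shows the drop of the real index (defocusing) and the growth of absorption as the plasma density approaches the critical density ($\sim 1.7\times 10^{21}$~cm$^{-3}$ at 800~nm).
\begin{verbatim}
import numpy as np

e, eps0, me = 1.602e-19, 8.854e-12, 9.109e-31   # SI units

def drude_index(rho, wavelength=800e-9, n0=1.45, tau_c=1e-15):
    # Complex refractive index from the Drude permittivity above.
    w = 2*np.pi*3e8/wavelength          # laser angular frequency [rad/s]
    wp2 = rho*e**2/(eps0*me)            # squared plasma frequency
    return np.sqrt(n0**2 - wp2/(w**2 + 1j*w/tau_c))

for rho_cm3 in [1e19, 1e20, 1e21]:      # plasma densities in cm^-3
    n = drude_index(rho_cm3*1e6)        # convert to m^-3
    print(f"rho = {rho_cm3:.0e} cm^-3 -> n = {n.real:.3f} + {n.imag:.4f}i")
\end{verbatim}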
\subsection{Numerical simulations of pulse propagation in transparent dielectrics} A number of different physical models for ultrafast laser pulse propagation have been developed. Here, we discuss a basic model of propagation to give the reader a first view of how the different mechanisms described above can be numerically simulated. More detailed models can be found, for instance, in references \cite{Couairon2011,Bulgakova2014}. An early model is based on solving simultaneously a NonLinear Schr\"odinger Equation (NLSE\index{NLSE}) and a rate equation for the plasma density \cite{Feit1974,Tzortzakis2001,Sudrie2002,Wu2002,Couairon2005,Winkler2006}. The following NLSE is derived from Maxwell's equations using a scalar, paraxial, unidirectional model for the field envelope $A(x,y,z,\tau)$ describing the laser pulse with central frequency $\omega_0$, in a temporal reference frame defined by $\tau = t-z/v_g$. $t$ and $z$ are time and propagation distance in the laboratory reference frame, and $v_g =c/n_{SiO_2}$ is the group velocity. The NLSE reads\index{NLSE}: \begin{multline} \label{eq:NLSE} \frac{\partial A}{\partial z} = \frac{i}{2k}\bigg( \frac{\partial^2}{\partial r^2}+\frac{1}{r}\frac{\partial}{\partial r}\bigg) A-\frac{ik''}{2}\frac{\partial^2 A}{\partial\tau^2}+ik_0 n_2 |A|^2A \\ -\frac{\sigma}{2}(1+i\omega_0\tau_c)\rho A-\frac{W_{PI}(|A|) U_{gap}}{2|A|^2}A \end{multline} \noindent and it has to be solved together with the plasma equation: \begin{equation} \label{eq:plasmaeq} \frac{\partial \rho}{\partial t} = \bigg(W_{PI}(|A|)+ \frac{\sigma}{U_{gap}}\rho|A|^2\bigg) \bigg(1-\frac{\rho}{\rho_{nt}}\bigg) -\frac{\rho}{\tau_t} \end{equation} \noindent where $k=n_0k_0$ is the wavevector inside the medium of refractive index $n_0=\sqrt{\varepsilon_{SiO_2}}$, $k_0$ the wavevector in vacuum, $k'' = \partial^2 k(\omega) /\partial \omega ^2 |_{\omega=\omega_0}$ the group velocity dispersion coefficient, $U_{gap}$ the bandgap of the dielectric medium, $n_2$ the nonlinear Kerr index, $W_{PI}$ the nonlinear photoionization rate, and $\tau_t$ the free-electron trapping time. $\sigma$ is the plasma conductivity evaluated by the Drude model: \begin{equation} \sigma(\omega) = \frac{k\omega_0 e^2 \tau_c}{n_0^2 \varepsilon_0 m \omega_0^2 (1+\omega_0^2\tau_c^2)} \end{equation} The first term in equation \ref{eq:NLSE} corresponds to diffraction, the second to dispersion, the third to the Kerr effect, the fourth to plasma absorption (real part) and plasma defocusing (imaginary part), and the last one to nonlinear absorption. The NLSE is usually solved numerically via a split-step algorithm, calculating the pulse shape in $(x,y,t)$ at each propagation step. Simultaneously, the plasma rate equation is solved for the free-electron number density $\rho(x,y,z,t)$. Multiple rate equations can be added to describe more accurately the avalanche process \cite{Rethfeld2006}. In addition, for pulses of several hundred femtoseconds to a few picoseconds, the transient generation of self-trapped excitons (STEs), at timescales of $\sim$150~fs \cite{Guizard1996,Mao2004}, as well as structural defects left by previous pulses, can be described by including rate equations for additional spectroscopic levels \cite{Bulgakova2014}. We note that the term $(1-\rho / \rho_{nt})$ describes the saturation due to the finite number of electrons available for promotion to the conduction band. In practice the number density $\rho_{nt}$ is estimated by the density of states (typically $2.2\times 10^{22}$ cm$^{-3}$ for fused silica). \begin{figure} \centering \includegraphics[width =\columnwidth]{Simul_filament.jpg} \caption{Simulation of an ultrafast IR pulse propagating in fused silica. (left) Evolution of the filament diameter along the propagation; (right) evolution of the pulse temporal profile over the first 4~mm of propagation. Reprinted figure with permission from \cite{Tzortzakis2001} with courtesy of A. Couairon. Copyright (2001) by the American Physical Society. } \label{fig:Simul_filament} \end{figure} Figure \ref{fig:Simul_filament} shows such a simulation result for the evolution of the beam diameter and the pulse temporal profile. The temporal distortion of the pulse is particularly apparent.
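To illustrate the split-step approach, the following reduced sketch propagates a one-dimensional transverse profile with only the diffraction and Kerr terms of Eq. \eqref{eq:NLSE} (dispersion, nonlinear absorption and plasma feedback are omitted, and all parameter values are merely indicative of fused silica at 800~nm):
\begin{verbatim}
import numpy as np

lam, n0, n2 = 800e-9, 1.45, 2.5e-20   # n2 in m^2/W (typical literature value)
k0 = 2*np.pi/lam; k = n0*k0

Nx, Lx = 2048, 800e-6                 # transverse grid
x = (np.arange(Nx) - Nx//2)*(Lx/Nx)
kx = 2*np.pi*np.fft.fftfreq(Nx, d=Lx/Nx)

w0, I0 = 30e-6, 2e16                  # waist [m], peak intensity [W/m^2]
A = np.sqrt(I0)*np.exp(-(x/w0)**2)    # Gaussian input; |A|^2 is the intensity

dz = 10e-6
lin = np.exp(-1j*kx**2/(2*k)*dz)      # exact linear (diffraction) step
for _ in range(300):                  # 3 mm of propagation
    A = np.fft.ifft(lin*np.fft.fft(A))
    A *= np.exp(1j*k0*n2*np.abs(A)**2*dz)   # Kerr nonlinear phase step
print("peak intensity gain:", np.abs(A).max()**2/I0)
\end{verbatim}
In a full solver, the plasma absorption and photoionization terms are applied in the same nonlinear sub-step, while Eq. \eqref{eq:plasmaeq} is integrated in time at every transverse grid point.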
Finally, we note that there is no straightforward link between the plasma density and the final modification in the transparent material. This is because a number of physical effects occur after the energy deposition: electron-electron scattering, electron-phonon scattering, recombination, structural changes, phase changes, shockwaves, thermal diffusion, etc. \cite{Gattass2008}. Void formation occurs approximately when the plasma density approaches the critical plasma density, but this is a crude estimate \cite{Papazoglou2011, Gamaly2006}. Several other parameters have been used to predict the threshold for melting or vaporisation \cite{Grehn2014,Bulgakova2015}. \subsection{Experimental diagnostics} Experimental characterization is crucial to understand the physics and to identify the regimes in which the modifications are created. Here, we review experimental diagnostics so as to make the reader aware of the potential limitations of the techniques and of the conclusions drawn from the results obtained. \subsubsection{Post-mortem diagnostics} ``Post-mortem'' diagnostics refer to characterizations performed well after the photo-induced phenomena have relaxed. In-bulk material modifications can be characterized non-destructively only by optical microscopy, including phase contrast, polarized microscopy and Raman techniques. Optical characterization has, however, a poor spatial resolution ($\sim$0.5~$\mu$m in the best cases, depending on the probe wavelength and the numerical aperture\index{numerical aperture} of the imaging). In addition, spherical aberration\index{spherical aberration} must be compensated to reach high spatial resolution. In most cases, the sub-micron structures described in this chapter are not resolved by optical means. For higher resolution, only destructive characterization means are available. Mechanical cleavage, polishing or Focused Ion Beam (FIB) milling are used to provide physical access for Scanning Electron Microscopy (SEM). These techniques are extremely delicate because the processing technique should not affect the nanostructure itself (filling of empty voids by particles, modification of a channel by FIB, ``curtain effect'', etc.). \subsubsection{Plasma dynamics characterization} Plasma dynamics is characterized by pump-probe measurements of the transient distribution of refractive index change. A number of different techniques have been implemented to measure this index change. Several of them are adaptations of techniques initially developed to characterize plasma plumes expanding in vacuum from the surface of solids. \begin{itemize} \item {\it Shadowgraphy}\index{shadowgraphy} is based on transversely illuminating the plasma with a probe pulse. In this case, the complex refractive index change $\Delta n$ is estimated from the transmission $T$ of the probe through the plasma of thickness $L$: $T(x,z)=\exp \left[ -4\pi \texttt{Im}(\Delta n) L/\lambda_p \right]$, where $x$ and $z$ are the spatial coordinates in the transverse and longitudinal directions respectively, and $\lambda_p$ is the central wavelength of the probe \cite{Papazoglou2007,Grossmann2016}. This still requires estimating the plasma thickness $L$ a priori, and assuming that the probe propagation through the plasma is perfectly straight ({\it i.e.}, diffraction effects negligible). Recently, a new tomography\index{tomography} approach was developed to enable the retrieval of the 3D distribution of the extinction coefficient. This approach removes the need to assume the value of the thickness $L$. It is based on multiple shadowgraphy experiments where the beam is rotated around the optical axis between each illumination \cite{Bergner2018}. Spectrally resolved shadowgraphy is a powerful technique providing access to the laser-deposited energy under certain approximations \cite{Minardi2014}.
Relatedly, Hashimoto {\it et al.} used time-resolved micro-Raman spectroscopy to determine the evolution of the temperature distribution after ultrafast excitation of glass \cite{Hashimoto2015}. \item {\it Pump-probe interferometry}\index{pump-probe measurement} is a technique that retrieves amplitude and phase variations. Depending on the implementation, simplified versions of the setup provide access only to the phase measurement, hence to the real part of the refractive index change. The interferometry can be performed with a reference signal that does not cross the interaction medium. Spectral interferometry\index{spectral interferometry} is a technique where two probe pulses interfere within a spectrometer, as shown in Figure \ref{fig:SpectralInterferometry}. The reference pulse passes through the medium before the pump, and the second probe records amplitude and phase changes at a variable delay with respect to the pump, but at a fixed delay with respect to the reference. Amplitude and phase can be retrieved from the spectral fringes. This technique is extremely precise, yet it is restricted to characterizing a single spatial dimension \cite{Mao2004}. \begin{figure} \includegraphics[width=0.5\columnwidth]{SpectralInterferometry.jpg} \caption{ Example of a pump-probe spectral interferometry setup. Reprinted figure with permission from \cite{Mao2004} with courtesy of Prof. S. S. Mao. Copyright (2004) by Springer Nature.} \label{fig:SpectralInterferometry} \end{figure} It is also possible to use the interference between the probe wave and the scattered wave emitted by the plasma to characterize the plasma density with holography \cite{Papazoglou2008,Papazoglou2014}. This provides quantitative measurements of phase and amplitude. Again, although quantitative measurements can be performed, one must keep in mind that the characterization is convolved with the optical response function of the imaging apparatus. This actually imposes a severe constraint on the effective spatial resolution of the measurements. After retrieving the distribution of the complex index of refraction, the Drude model is used to link the index change to the plasma density, following Eq. \eqref{eq:epsilon_of_omega}. \item {\it Two-color probing}. The retrieval of the plasma density from the index change distribution, using Eq. \eqref{eq:epsilon_of_omega}, requires an assumption on the collision time $\tau_c$. Repeating the probe measurement with another probe wavelength removes the ambiguity on $\tau_c$ \cite{Velpula2016}. \item {\it Phase contrast microscopy} records images that are proportional to the variation of the index of refraction \cite{Zernike1942}. This does not require a second reference probe, and makes it straightforward to image the sign of index variations associated with densification or material expansion \cite{Mermillod-Blondin2011}. \end{itemize} \subsubsection{Characterization of the pulse distribution } The pump pulse can also be characterized after its interaction with the solid; such measurements can readily be compared quantitatively with numerical simulation results. This has been performed as spatially resolved cross-correlation \cite{Grazuleviciute2015}. To follow the evolution of the spatiotemporal dynamics along the propagation, the pulse has to be measured at intermediate propagation distances. This is, however, possible only if the nonlinear propagation stops. This is feasible, for instance, if the medium has a variable length, because further propagation in air is linear for sufficiently low pulse energies.
Jarnac {\it et al.} have used a variable-size water cuvette \cite{Jarnac2014}. Xie {\it et al.} have reconstructed the 3D evolution of the time-integrated fluence distribution by controlling the relative position of the beam with respect to the exit side of the sample \cite{XieSR2015}. \subsubsection{Plasma luminescence} Since the temperature of the plasma phase is typically several thousand kelvin, blackbody emission falls in the visible region of the spectrum. Side imaging of the plasma luminescence provides a semi-quantitative characterization of the plasma distribution and temperature. This also allows for characterizing the dynamics in the multiple-shot regime to follow the drilling mechanism \cite{Wu2002,Hwang2008}. Plasma emission can also include fluorescence from two-photon absorption \cite{Tzortzakis2006}, and fluorescence from the relaxation of self-trapped excitons (STEs) and transient color centers (NBOHCs) \cite{Papazoglou2011}. \subsubsection{Acoustic waves} Direct recording of the amplitude of acoustic waves also provides indications on the laser-deposited energy that was eventually converted to mechanical waves, and on the dynamics of the shockwave \cite{Noack1998,Kudryashov2008}. Imaging of the dynamics of the acoustic waves that follow shockwaves can be performed by shadowgraphy. The evolution of the wave speed in time provides estimates of the laser-deposited energy, using Sedov's theory. This is, however, restricted to very specific geometrical conditions. \section{Microstructuring of transparent materials in the multiple-shot regime} \label{sec:multishot} In-volume structuring of transparent materials is required for a number of applications where channels or deep trenches need to be processed: micro- and nanofluidics, biosensors, fabrication of mechanical pieces with micron resolution, microelectronics, MEMS and micro-optics are typical fields of application. When the typical transverse dimension exceeds 1~$\mu$m, multiple pulses are required for the structuring. In this regime, the propagation and absorption of a laser pulse are strongly affected by the modifications of the material produced by the previous pulses. Indeed, the irradiation by an initial ultrashort laser pulse leaves index modifications, color centers, highly absorbing defects, voids, ripples or a rough surface. On these structural defects, the next pulses can be scattered, absorbed, diffracted or guided, depending on the type of modification, and depending on the numerical aperture and repetition rate. Microstructuring with femtosecond lasers in the multiple-shot regime has been realized from either the front or the rear surface of transparent materials. Front surface processing usually corresponds to ablation, drilling, trepanning or, for much weaker modifications, waveguide writing (see next chapter). This is a delicate geometrical configuration for the reasons mentioned above. Scattering by structural defects can be extremely deleterious because ablation can happen even in regions that were not supposed to be illuminated. In contrast, rear surface processing is very attractive because the problem of plasma plume shielding and the structural modifications induced by previous pulses have no impact on the propagation of the following pulses, except at the ablation site (see Figure \ref{fig:multishot}). The drilling strategy consists in illuminating the exit surface up to ablation and progressively translating the beam at the same speed as that of debris removal.
It has been successfully used to write high aspect ratio structures in glass and other transparent materials \cite{Zhao2011}. Processing with high-repetition-rate trains of pulses, {\it i.e.} bursts, takes advantage of the heat accumulation\index{heat accumulation} process \cite{Eaton2005} to increase the efficiency of the ablation or modification, so as to reduce the total amount of energy deposited in the material. This can reduce the size of the heat-affected zone. A comprehensive comparison between front and rear surface processing with picosecond pulses demonstrated interesting processing parameter windows where high aspect ratio structures could be drilled at high speed with a reduced heat-affected zone. Drilling performance was evaluated to be better with picosecond pulse durations in comparison with femtosecond ones, at comparable channel quality \cite{Karimelahi2013}. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{multishot.jpg} \caption{Concepts of the different laser drilling methodologies for high aspect ratio processing of transparent materials, involving static or dynamic sample positioning, and front or rear side processing. Reprinted figure with permission from \cite{Karimelahi2013} with courtesy of Prof. P. Herman. Copyright (2013) by Springer Nature.} \label{fig:multishot} \end{figure} When the aspect ratio is high, the energy density of the explosions at the ablation sites might not be sufficient to eject material out of the channel\index{water-assisted drilling}. To solve this issue, water assistance can help remove debris from the channels. This also allows for drilling non-straight channels in three dimensions \cite{Li2001,Kim2005}. The aspect ratio of the channels drilled inside transparent solids can reach 1000:1, with diameters on the order of several microns, down to a few hundred nanometers. In some configurations, the channel filled with water behaves as a resonance tube, and acoustic limitations are found when processing very high aspect ratio nanochannels. In this case, the length of the channel is limited by the node of the velocity inside the tube \cite{Lee2007}. As a remark, by using a high aspect ratio beam, such as a filament or a non-diffracting Bessel beam, the drilling can be performed without scanning. This technique is called self-guided drilling. It removes the constraint of translating the focal point at the same speed as that of material removal \cite{Hwang2008,Bhuyan2010}. A different strategy is based on a two-step process\index{etching}. The first step consists in femtosecond laser modification of the fused silica matrix. Then, wet etching with hydrofluoric acid (HF) or potassium hydroxide (KOH) removes the modified glass \cite{Marcinkevicius2001}. This technique allows for creating various shapes in 3D, including high aspect ratio micro- and nanochannels \cite{Bellouard2004,Ke2005,An2008,Wortmann2008}. Some groups have also taken advantage of both regimes, by laser processing the rear side of the transparent workpiece set in contact with an etching liquid (HF or KOH) \cite{Cao2018}. However, the two-step approach is for now restricted to fused silica, several glasses and sapphire. The capability of the process relies on the difference in etching rates between laser-processed and pristine material. In conclusion, the multiple-shot drilling regime is unavoidable for forming wide structures in transparent materials.
Rear-side structuring removes some of the difficulties associated with the structural changes induced by previous pulses, which are not easily predictable or controllable. For smaller-scale structures, the situation is different, and one can benefit from generating voids in the single-shot regime. \section{Single shot void formation under high numerical aperture focusing} \label{sec:void} In 1997, Glezer {\it et al.} demonstrated for the first time that single pulses could generate nanometric cavities in the bulk of fused silica, quartz, BK7 glass, sapphire, diamond and acrylic \cite{Glezer1997}\index{void}\index{nanovoid}. 100~fs, 0.5~$\mu$J pulses were focused with a numerical aperture (NA) of 0.65 below the surface of the transparent materials. The diameter of the cavities was on the order of 200~nm, and the material surrounding the cavity was compressed. The authors suggested that the extreme TPa pressures reached after laser energy deposition could explain the formation of such voids in hard solid materials. In silica glass, the voids could be moved and merged, or be induced even with multiple shots \cite{Watanabe2000}. Besides the discovery of a new mechanism, these results opened a new route for laser processing of transparent materials. Indeed, they demonstrated the possibility of processing materials directly in the bulk instead of progressively processing channels or long structures, shot after shot, from one of the surfaces. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{voids.jpg} \caption{[left] SEM imaging of a pattern of nano-cavities created in sapphire by single 150 fs, 800 nm, 120 nJ pulses. The sample has been mechanically cleaved before imaging. Scale bar is 1~$\mu$m. Reprinted figure with permission from \cite{Juodkazis2006} with courtesy of Prof. S. Juodkazis. Copyright (2006) by the American Physical Society. [right] Concept of the micro-explosion, ion separation and ultrafast quenching developed by Vailionis {\it et al.} Reprinted figure with permission from \cite{Vailionis2011} with courtesy of Prof. A. Vailionis.} \label{fig:voids} \end{figure} Figure \ref{fig:voids}(left) shows an SEM image of such nano-cavities produced in sapphire, visualized after mechanically cleaving the sample. A modified region can be observed around the spherical cavities. This modified region can be etched using an HF solution. The formation of a cavity was interpreted in terms of shockwave release after the generation of a dense plasma, followed by a rarefaction wave. Figure \ref{fig:voids}(right) illustrates the concept. Hydrodynamic numerical simulations based on equations of state for fused silica \cite{Gamaly2006,Hallo2007} show that the formation of the void occurs on a typical timescale of a few hundred picoseconds after illumination. The shockwave\index{shockwave} stops when the internal pressure equals Young's modulus of the material. Separate experimental results, based on phase contrast microscopy characterization of the plasma dynamics, were compatible with this theory \cite{Mermillod-Blondin2009}. Another potential formation mechanism is cavitation\index{cavitation} by material retraction under GPa pressure, in a similar way as what happens in water \cite{Vogel2008}. In the model of nano-cavity formation after a high-pressure shockwave and rarefaction wave, the pressures transiently reach terapascals (TPa), and the compression of material around the void leaves densified material. The density increase typically reaches 14\% in sapphire \cite{Juodkazis2006}.
The state corresponding to these extreme pressures and temperatures is the Warm Dense Matter (WDM) state, which lasts less than $\sim 1$~ns. The fast cooling can quench relaxation and can generate new material phases around the nano-cavity. Theoretical studies predict phase transitions of aluminum into hcp-Al and bcc-Al at pressures in the multi-hundred-GPa range, recently confirmed by diamond anvil cell compression experiments \cite{Fiquet2018}. These phases of aluminum have been discovered around nano-cavities produced in Al$_2$O$_3$ \cite{Vailionis2011}, demonstrating the compatibility of the high-pressure shockwave mechanism with experimental results. In conclusion for this section, the formation of voids inside transparent materials reflects the potential for high energy density deposition within the bulk of transparent materials. A wide range of different structures is then possible, provided that the propagation and energy deposition can be controlled. This is what will be discussed in the following sections. \section{Filamentation and optical breakdown of Gaussian beams} \label{sec:filamentation} \index{filamentation}Filamentary structures, {\it i.e.} elongated damage tracks, were identified very early in the wake of high peak power laser illumination of dielectrics \cite{Hercher1964, Yablonovitch1972}. This was in fact a severe problem for ultrashort pulse amplification until the invention of Chirped Pulse Amplification \cite{Strickland1985}. For a long time, however, optical breakdown\index{optical breakdown} and filamentation were opposed: optical breakdown was regarded as the regime where dielectrics undergo sufficient nonlinear ionization to induce a strong permanent modification \cite{Ashcom2006,Nguyen2003}, while filamentation was regarded as a dynamical mechanism transforming the initial Gaussian beam into a quasi-soliton. The latter regime was identified by strong supercontinuum emission and low plasma density formation, such that the modifications generated are weak index changes, such as waveguides. \begin{figure} \centering \includegraphics[width =0.9\columnwidth]{DiverseForms.jpg} \caption{Diversity of damages produced in single shot by ultrashort pulses in the filamentation regime. (a) Non-periodic series of voids formed in fused silica. Reprinted figure with permission from \cite{Luo2001} with courtesy of Prof. Q. Gong. Copyright (2001) by the Institute of Physics. (b-d) Void channel formation in PMMA. (b) Side view optical imaging of void channels formed after single shot illumination for different input pulse energies. (c-d) Scanning Electron Microscopy (SEM) of the void formed at 2~$\mu$J: (c) transverse cross-section, (d) longitudinal cross-section. Reprinted figures (b-d) with permission from \cite{Sowa2005} with courtesy of Prof. W. Watanabe. Copyright (2005) by Springer Nature.} \label{fig:DiverseForms} \end{figure} However, filamentation has no precise definition \cite{Couairon2007}. Not only is there no precise boundary between optical breakdown and filamentation, but these regimes in fact do overlap \cite{Luo2001,Sowa2005}. This overlap is precisely the regime of interest for this chapter. We can refer to filamentation as the regime of nonlinear pulse propagation in dielectrics where dynamical reshaping of the pulse occurs in space and time under the influence of, among others, Kerr effects, nonlinear ionization and plasma defocusing.
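A useful order of magnitude for the onset of this regime is the critical power for self-focusing, above which the Kerr lens overcomes diffraction. As a hedged numerical sketch, with commonly quoted values for fused silica at 800~nm ($n_0\approx1.45$, $n_2\approx2.5\times10^{-16}$~cm$^2$/W, assumptions of this aside rather than values from the references above):
\begin{equation}
P_{cr}=\frac{3.77\,\lambda_0^2}{8\pi n_0 n_2}\approx\frac{3.77\times(0.8\times10^{-4}~\text{cm})^2}{8\pi\times1.45\times2.5\times10^{-16}~\text{cm}^2/\text{W}}\approx 2.6~\text{MW}.
\end{equation}
\noindent A 100~fs pulse carrying only a fraction of a microjoule therefore already exceeds $P_{cr}$, which is why sub-$\mu$J ultrashort pulses routinely filament in solid dielectrics.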
Because the Kerr effect\index{Kerr effect} in transparent dielectrics is roughly three orders of magnitude stronger than in air, and because plasma densities in solids easily reach 100 times the densities observed in air, filamentation in solids is much more confined than in air, and the filaments survive only over a few millimeters. While the diameter of plasma channels in gases is on the order of 100~$\mu$m, it is typically less than 10~$\mu$m in solid dielectrics \cite{Couairon2007}. Supercontinuum generation is not a necessary condition for filamentation, since this process is efficient only over very long propagation distances ($\sim$centimeters), where frequency mixing can build up. In transparent dielectrics, a very wide family of modifications can be generated because the irradiation regime is complex. Figure \ref{fig:DiverseForms} assembles typical results found in the literature. The morphology of these strong modifications (strong index changes, cavities) cannot be straightforwardly explained from the linear propagation of the Gaussian beam with which they have been produced. It is filamentation that has reshaped the beam and induced these {\it a priori} unexpected morphologies: elongated voids, channels, and series of voids, periodic or non-periodic \cite{Luo2001}. The filamentation process can be understood from figure \ref{fig:Papazoglou}, which shows a measurement of the transient change of the index of refraction in air during pulse propagation. We note that a similar behavior can be observed in solids \cite{Papazoglou2014}. The positive index change, shown in purple, corresponds to Kerr self-focusing at the rising edge of the pulse. Then, when the pulse intensity is sufficiently high, the medium is ionized and the plasma channel decreases the index of refraction. The negative index change tends to defocus the pulse, acting as a negative lens, so that the following part of the pulse generates slightly less plasma\index{plasma defocusing}. Because less plasma is generated, the defocusing effect is in turn reduced, the beam refocuses, and a higher plasma density is generated again. This process repeats itself as long as the intensity is sufficiently high\footnote{We note that this reasoning is somewhat simplistic because it is based only on a spatial description, whereas in reality the rising and trailing edges of the laser pulse do not experience the same effects.}. Depending on the exact spatial phase profile, the process of plasma generation might be quasi-periodic, very homogeneous or quite complex. It leaves a plasma channel which relaxes by generating a modification of the material, whose morphology depends on the plasma density distribution. \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{Papazoglou_holographicMeasurement.jpg} \caption{Holographic measurement of the spatial distribution of the plasma density at different pump-probe temporal intervals. Reprinted figure with permission from \cite{Papazoglou2008} with courtesy of Prof. P. Papazoglou and Prof. S. Tzortzakis. Copyright (2008) AIP Publishing.} \label{fig:Papazoglou} \end{figure} The competition between the different nonlinear effects that sustain the filamentation process can be evaluated with characteristic distances \cite{Couairon2007}. The nonlinear length is $L_{NL}=1/(n_2 k_0 I_0)$, where $n_2$ is the nonlinear Kerr index and $I_0$ the peak intensity of the pulse.
The plasma characteristic length is $L_{plasma} = 2 n_0 \rho_c/(k_0\rho)$, where we use the notations of pages~\pageref{eq:epsilon_of_omega} and \pageref{eq:plasmaeq} and $\rho_c =\varepsilon _0 m_e \omega_0^2 /e^2 $ is the critical plasma density at the laser central frequency $\omega_0$. When these distances are of the same order of magnitude as the Rayleigh range inside the transparent material, a rich dynamics can be induced. As an example for fused silica, for peak intensities of 10$^{12}$ to 10$^{13}$~W.cm$^{-2}$, the characteristic nonlinear length is on the order of some tens of microns, and the plasma length shrinks from some 40~$\mu$m to a few microns when the plasma density increases from 10$^{19}$ to 10$^{20}$~cm$^{-3}$, as is the case during plasma buildup (a worked evaluation of these lengths is given after figure \ref{fig:PeriodicVoids_Song}). Therefore, focusing with a numerical aperture below $\sim$0.4 will trigger a long filamentation process, when spherical aberration is neglected. These numbers match, for instance, the experimental results of reference \cite{Papazoglou2014}. We note that the Marburger formula\index{Marburger} for filamentation is most often inapplicable\footnote{The Marburger formula is derived for a collimated beam: it does not take into account any spatial phase, such as spherical aberration or focusing conditions. The Dawes and Marburger formula is also semi-empirical \cite{Couairon2007} and therefore has a very narrow range of applicability.}. Therefore, low focusing numerical apertures, short input pulse durations and high peak powers are prone to seed a filamentation regime with rich dynamics over long distances, where a number of four-wave mixing processes, among others, can take place. Spherical aberration\index{spherical aberration} makes an important contribution to triggering the filamentation process. This is particularly the case when a Gaussian beam is focused at the rear side of a thick sample. Under spherical aberration, paraxial rays are focused at a much farther point than the focal position of non-paraxial rays. This drastically elongates the effective linear focal zone; in turn, the filamentation process can be triggered. As an example, Ahmed {\it et al} inserted thick glass plates between the focusing microscope objective and the workpiece to induce long filaments in the glass workpiece \cite{Ahmed2013}. This is also the case for rear-side focusing. In reference \cite{Luo2001}, an NA of 0.65 associated with rear-side focusing formed a series of periodic voids (see Figure \ref{fig:PeriodicVoids_Song}). We note that this is the same numerical aperture that Glezer {\it et al} used to generate single, well-confined nano-voids \cite{Glezer1997}. Similarly, Kanehira {\it et al} used NA 0.9, focusing through 750~$\mu$m thick borosilicate glass, and produced periodically spaced voids \cite{Kanehira2005}. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{PeriodicVoids_Song.jpg} \caption{Numerical simulation of the fluence distribution of a Gaussian beam focused with NA 0.9 through 200~$\mu$m of fused silica. Several high fluence spots appear along the optical axis. Reprinted figure with permission from \cite{Song2008} with courtesy of Prof. J. Qiu and Prof. Z. Xu. Copyright (2008) AIP Publishing.} \label{fig:PeriodicVoids_Song} \end{figure}
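To complement the estimates quoted above, the two characteristic lengths can be evaluated explicitly. This worked example is for fused silica at 800~nm and assumes $n_0=1.45$, $n_2\approx2.5\times10^{-16}$~cm$^2$/W, $I_0=10^{13}$~W.cm$^{-2}$ and the bare electron mass $m_e$ in $\rho_c$; these are commonly quoted, illustrative values rather than numbers taken from the cited studies:
\begin{align}
L_{NL}&=\frac{1}{n_2 k_0 I_0}\approx\frac{1}{(2.5\times10^{-20}~\text{m}^2/\text{W})\,(7.85\times10^{6}~\text{m}^{-1})\,(10^{17}~\text{W.m}^{-2})}\approx 50~\mu\text{m},\\
\rho_c&=\frac{\varepsilon_0 m_e \omega_0^2}{e^2}\approx 1.7\times10^{21}~\text{cm}^{-3},\qquad
L_{plasma}=\frac{2 n_0 \rho_c}{k_0 \rho}\approx 60~\mu\text{m}~~\text{at}~\rho=10^{19}~\text{cm}^{-3}.
\end{align}
\noindent Taking a reduced effective electron mass lowers $L_{plasma}$ towards the $\sim$40~$\mu$m quoted above; and since $L_{plasma}\propto 1/\rho$, it shrinks to a few microns as the density builds up to 10$^{20}$~cm$^{-3}$.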
Filamentation in transparent materials has been demonstrated for a number of different laser wavelengths, ranging from the IR to the UV \cite{Tzortzakis2006}. The operable window is limited by the transparency range of the material. Shorter wavelengths tend to generate denser plasmas and reduce the filament length. A detailed study compares filament formation at IR and visible wavelengths \cite{Karimelahi2013}. In the case of illumination with a pulse train, {\it i.e.} a burst, thermal effects play a role. Indeed, the typical cooling time after laser pulse irradiation is in the $\mu$s range (strongly depending on the focusing conditions), such that, at repetition rates of several MHz, the pulses within a burst influence one another via the thermo-optical effect. This effect increases the local index of refraction of glasses at the locations where the temperature is high \cite{Ghosh1995}. The heat accumulation\index{heat accumulation} can lead to laser beam trapping and guiding. With a low repetition rate laser, in contrast, the photo-excitation has completely relaxed before the arrival of the subsequent pulse. The latter diffracts on the structures left by the previous pulses. This regime was used to induce periodic damages \cite{Luo2001,Zhu2005,Kanehira2005,Sowa2005}. In this regard, we note that even a surface crater does not hamper the occurrence of filamentation \cite{Luo2001}. In conclusion of this section, filamentation is a complex dynamical phenomenon, highly dependent on the input conditions and on the precise dynamics of the ionization process. It is therefore extremely difficult to predict and to scale. Filamentation can generate plasma tracks with diverse morphologies. In application fields such as "filamentation cutting" or "filamentation welding", as we will see in section \ref{sec:applications}, the state of the art usually refers to filamentation for the formation of long, homogeneous, high aspect ratio plasma channels. Interestingly, the filamentation process, when it creates long uniform plasma channels, spontaneously transforms a Gaussian beam into a nonlinear conical wave \cite{Dubietis2004,Porras2008}. Nonlinear Bessel beams, characterized by a conical energy flow from the lateral rings to the central core, have been proposed as attractors of the filamentation regime \cite{Porras2004}. It is therefore natural to generate plasma filaments directly from Bessel beams, as we describe in the next section. \section{Nonlinear propagation of ultrafast Bessel beams} \label{sec:Bessel} Zeroth order Bessel beams\index{Bessel beam}\index{nondiffracting beams}\index{diffraction-free beams} are propagation-invariant solutions of the Helmholtz equation. Bessel beams can seed a nonlinear propagation regime in which a homogeneous plasma channel is generated \cite{Courvoisier2016,Duocastella2012}. In this section, we will review what Bessel beams are and highlight the properties of their nonlinear propagation that are most relevant for laser materials processing. We will then review basic applications, particularly high aspect ratio nanochannel processing. \begin{figure} \centering \includegraphics[width= 0.9\columnwidth]{BesselInterf.jpg} \caption{(top) Intensity distribution of a Bessel-Gauss beam; (bottom) corresponding ray-tracing representation, showing that the Bessel beam is an interference field with cylindrical symmetry.} \label{fig:Bessel_Interference} \end{figure} \subsection{Bessel beam structure} Within a scalar model of monochromatic light, Durnin demonstrated that the Helmholtz equation $\left(\nabla ^2+ (\omega/c)^2 \right) A=0$ has a solution that is propagation-invariant, with a hot spot.
This central hot spot can have a diameter down to "a few wavelengths", as he wrote, and in fact even below the wavelength \cite{Durnin1987a,Durnin1987}. The solution found by Durnin is cylindrically symmetric: \begin{equation} A(r,z)=J_0(k_0 r \sin \theta )\, e^{i k_0 z \cos \theta } \end{equation} \noindent where $k_0$ is the wavevector and $\theta$ is the Bessel beam parameter, called the {\it cone angle}. This solution, as is the case for plane waves, carries infinite energy; only apodized solutions can be generated experimentally. Several types of apodization exist, depending on the means of generating the finite energy Bessel beam. In the rest of this chapter, finite energy Bessel beams will be referred to as "Bessel beams" for the sake of simplicity. The first experimental realization of a Bessel beam used an axicon \cite{McLeod1954}\index{axicon}, even before it was realized that this corresponds to a "diffraction-free" solution. Durnin {\it et al} produced Bessel beams from a ring slit placed at the focal plane of a converging lens, which Fourier-transformed the ring aperture into a Bessel beam. Indeed, in the spatial frequency ($k_r$) space, {\it i.e.} the Fourier space, an ideal Bessel beam is a circle of amplitude $A(k_r)=\delta(k_r-k_0\sin\theta)$. Because of the properties of the Fourier transform, the thinner the ring slit, the longer the actual Bessel beam. However, this means of generation has a poor energy throughput, since most of the power is lost. In the field of laser materials processing, it is preferable to shape beams in the {\it direct} space, as opposed to the Fourier space. Bessel beam generation from the direct space can be performed using axicons \cite{Grunwald2004,Grosjean2007,Tsampoula2007,Akturk2009,Xie2012,Duocastella2012}, holograms \cite{Vasara1989}, Spatial Light Modulators \cite{Chattrapiban2003, Courvoisier2009} or, equivalently, Diffractive Optical Elements (DOEs) \cite{Amako2003,Khonina2004}. The shaping technique consists in applying a spatial phase $\phi(r) = k_0 r \sin \theta$. The application of such a phase onto a Gaussian beam creates what is called a Bessel-Gauss beam\index{Bessel-Gauss beam}. The evolution of the on-axis intensity as a function of the propagation distance $z$ can be derived from the stationary phase approximation of the Fresnel diffraction integral: \begin{equation} I(r=0,z)=4 P_0 k_0 z \sin^2\theta\, e^{-2(z \sin\theta/w_0)^2}/w_0^2 \end{equation} \noindent where $P_0$ and $w_0$ are respectively the power and the waist of the input Gaussian beam \cite{Roy1980,Jarutis2000}. High quality axicons enable the generation of high-power Bessel-Gauss beams without spatial filtering \cite{Boucher2018}. Figure \ref{fig:Bessel_Interference} shows a ray-tracing representation of a Bessel-Gauss beam. A Bessel beam is an interference field, whose longitudinal extent, the {\it Bessel zone}, is $Z_{max}\sim w_0/\tan\theta$. It is apparent from this geometrical picture that, since the crossing angle of the waves is invariant along the propagation, the fringe period is invariant as well. In other words, the central spot size does not change along the interference field, hence the denomination "diffraction-free". In contrast with Gaussian beams, Bessel-Gauss beams have two free parameters with which the Bessel zone length and the diameter of the central spot can be adjusted independently. The latter, $d_{FWHM} \sim 0.36\, \lambda_0/\sin \theta$, is determined only by the cone angle, whereas the Bessel beam length can be adjusted independently via the input beam waist.
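As a numerical illustration of these two degrees of freedom (the parameter values are arbitrary but representative of the experiments discussed below): for $\lambda_0=800$~nm, a cone angle $\theta=10^{\circ}$ in air and an input waist $w_0=1.5$~mm,
\begin{equation}
d_{FWHM}\sim\frac{0.36\,\lambda_0}{\sin\theta}\approx\frac{0.36\times0.8~\mu\text{m}}{0.174}\approx 1.7~\mu\text{m},
\qquad
Z_{max}\sim\frac{w_0}{\tan\theta}\approx\frac{1.5~\text{mm}}{0.176}\approx 8.5~\text{mm},
\end{equation}
\noindent i.e. a focal line whose aspect ratio exceeds several thousand.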
It is important to realize that a Bessel beam corresponds to a line focus: each point of the focused segment is topologically linked to a circle in the input plane \cite{Froehly2014}. In this regard, the energy does not flow along the optical axis; instead, the energy flow is conical. The polarization state of a Bessel beam is close to that of the input beam since, for sub-30$^{\circ}$ cone angles, the propagation is close to paraxial \cite{Zhang2007}. The longitudinal component of the electric field is mostly negligible in the experiments described below. We note that upon refraction from air into a dielectric medium, both the wavelength and the cone angle are corrected by the index of refraction of the dielectric, but these corrections cancel out and do not change the value of the central spot size in the material \cite{Brandao2017}. In contrast, the length of the Bessel zone is increased by the factor $n_0$. This is similar to the case of a Gaussian beam, whose Rayleigh range is increased while the waist remains identical upon refraction \cite{Nemoto1988}. Up to here, we have described monochromatic Bessel beams. In principle, the bandwidth of ultrashort pulses has to be taken into account in the description, but since it is less than 1\% for pulses of 100~fs, spatio-temporal aspects of the pulse generation can be neglected. Apart from the fact that the on-axis wavepackets created by Bessel-X-pulses and pulsed Bessel beams do not travel at the same speed (respectively $c/\cos \theta$ and $c \cos\theta$), no impact in terms of plasma generation efficiency has been reported so far. More details can be found in references \cite{Klewitz1998,Froehly2014}. \subsection{Filamentation of Bessel beams} The nonlinear propagation of Bessel beams can be described in terms of three different families \cite{Polesana2008}. {\it Weakly nonlinear Bessel beams} generate a negligible plasma density and are characterized only by a shrinking of the central lobe due to the Kerr effect. {\it Nonstationary Bessel beams}, following the denomination of Polesana {\it et al}, generate a quasi-periodic plasma distribution along the propagation in the material. The third family, {\it stationary Bessel beams}, is characterized by a quasi-invariant propagation that generates a homogeneous plasma channel. In more detail, the second regime is largely driven by the Kerr nonlinearity, which generates, via four-wave mixing processes, two secondary waves with transverse wavevectors $k_r = \sqrt{2} k_{r0}$ and $k_r = 0$, where $k_{r0}=k_0 \sin \theta$ is the radial wavevector of the initial Bessel beam \cite{Gaizauskas2006,Ouadghiri-Idrissi2017}. The interference between these secondary waves and the initial Bessel beam creates periodic oscillations \cite{Polesana2008, Norfolk2010}. Periodic modifications in glass have been demonstrated in this regime by Gaizauskas {\it et al} \cite{Gaizauskas2006}. The third regime is the most interesting one for micro-fabrication. It is indeed the regime where enough losses occur in the central lobe to stabilize the dynamics: a conical flow of energy, oriented from the lateral lobes to the central one, compensates the energy loss. This regime corresponds to the monochromatic Nonlinear Unbalanced Bessel Beams (NLUBB). Indeed, a Bessel beam can be seen as a superposition of two cylindrical Hankel waves with equal weights, one propagating inward and the other propagating outward \cite{Porras2004}.
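Written out, and with the convention (an assumption of this aside) that $H_0^{(1)}$ denotes the outward-propagating wave, this decomposition and its unbalanced generalization read:
\begin{equation}
J_0(k_r r)=\frac{1}{2}\left[H_0^{(1)}(k_r r)+H_0^{(2)}(k_r r)\right]
\quad\longrightarrow\quad
A(r)\propto \alpha_{out}\, H_0^{(1)}(k_r r)+\alpha_{in}\, H_0^{(2)}(k_r r),
\end{equation}
\noindent where the linear Bessel beam corresponds to the balanced weights $\alpha_{out}=\alpha_{in}=1/2$.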
In a NLUBB, the energy loss within the central lobe reduces the weighting coefficient of the outward component. This reduces the contrast of the fringes and implies a net energy flow towards the center. Noticeably, in this regime, the spatio-temporal shape remains quasi-invariant all along the propagation, hence the denomination {\it stationary Bessel beam}. This is in contrast with the second regime, the {\it non-stationary Bessel beam}, where a periodic reshaping strongly modifies the spatio-temporal shape of the pulse \cite{Polesana2008}. The propagation-invariant NLUBB solution was proposed as an attractor of the filamentation regime \cite{Porras2004}. This solution cannot be found for all input parameters, but the operating window, in terms of peak power, is wider when the cone angle is increased. Indeed, for higher peak powers, nonlinear losses are higher, which tends to reduce the impact of the Kerr nonlinear dynamics. This has an important consequence for applications to laser materials processing: high powers are needed to generate high plasma densities, and a stationary regime can be reached at a given high peak power if the Bessel cone angle is sufficiently large. \subsection{High aspect ratio processing with propagation-invariant ultrafast Bessel beams} The stationary filamentation regime was recognized early on for its strong application potential \cite{Porras2004}. Early works with ultrafast Bessel beams in glass - before the theoretical work on Bessel filamentation - showed that it is possible to write index modifications without the need for translating the sample \cite{Marcinkevicius2001a,Amako2005}. With much higher cone angles, we demonstrated the ability to create high aspect ratio nanochannels\index{nanochannel} with a single femtosecond pulse \cite{Bhuyan2010}. In borosilicate glass, the channels could be drilled either from the entrance or from the exit surface, with diameters ranging from $\sim$200~nm to 800~nm. The diameter could be tuned quasi-linearly with the input pulse energy. The aspect ratio reached 100:1 at that time, in line with the aspect ratio of the beam. Through-channels could also be drilled with a single pulse, and periodic arrangements of nanochannels could be realized. \begin{figure} \centering \includegraphics[width=\columnwidth]{ChannelsBessel.jpg} \caption{Open channels formed after femtosecond illumination ($\sim$230~fs) by Bessel beams with cone angle $\theta_{\texttt{glass}} = 17^{\circ}$ in Corning 0211 glass, for two pulse energies. SEM imaging is performed after mechanical cleaving. Reprinted figure with permission from \cite{Bhuyan2010}. Copyright (2010) AIP Publishing.} \label{fig:ChannelsBessel} \end{figure} In borosilicate glass, it was possible to generate a channel only if the Bessel beam was crossing one of the sample surfaces, {\it i.e.} only if the channel was open on one of the sides. In contrast, in sapphire, it was possible to create a high aspect ratio nano-void fully enclosed within the bulk of the material. In this case, the void is formed only by compression of the surrounding material. While the void formation process in this configuration is not yet fully understood, we infer that the 10-fold higher thermal diffusion coefficient of sapphire allows for fast cooling. This can prevent cavity closing, in contrast with the case of borosilicate glass.
Further investigations with picosecond pulses have been performed independently by several groups in a number of different glass materials. Interestingly, it seems that picosecond pulses generate channels that are more visible under optical microscopy than those created by shorter, femtosecond pulses (see Figure \ref{fig:ChannelsPico}(left)). A parametric study of the channel morphology as a function of input pulse energy and pulse duration has been reported in reference \cite{Bhuyan2014}. The pulse duration was adjusted by temporally stretching a femtosecond pulse, and the Bessel beam aspect ratio was $\sim$1000:1. It was found that, in this case, multi-picosecond pulse durations could create uniform voids. For too high pulse energies, fragmentation of the voids was observed. For very short pulses, below 100~fs, the formation of empty channels was less clear; we stress that in this case the characterization techniques are at the limit of their resolution. \begin{figure} \centering \includegraphics[width=\columnwidth]{ChannelsPico.png} \caption{(left) Phase contrast images of high aspect ratio structures formed after illumination by Bessel beams with cone angle $\theta_{\texttt{glass}} = 8^{\circ}$ in 7980-5 F Corning glass, for different pulse durations. Note the large difference in cone angle with respect to reference \cite{Bhuyan2010}. Reprinted figure with permission from \cite{Bhuyan2014} with courtesy of Dr. R. Stoian. Copyright (2014) AIP Publishing. (right) High aspect ratio void formed in sapphire after illumination by a Bessel beam of cone angle $\theta_{\texttt{sapphire}} = 15^{\circ}$ and pulse duration 3~ps. The heat affected zone is far more pronounced than in the femtosecond case, and the sides of the channel show evidence of phase transformations during the cavity formation process. From \cite{Rapp2016}, Creative Commons licence.} \label{fig:ChannelsPico} \end{figure} In parallel, nanovoids induced by 3~ps pulses in sapphire\index{sapphire} were characterized by a FIB milling process. The result is shown in figure \ref{fig:ChannelsPico}(right). It is apparent that the morphology of the cavity is very different from the case of femtosecond pulse illumination. Nanoparticles accumulated on the walls of the cavity are clearly observable, as well as a very wide heat affected zone \cite{Rapp2016}. It is too early to determine whether the more apparent damage produced by picosecond pulses with respect to femtosecond ones arises from a different deposited energy density and/or from a different photo-excitation pathway. Experimental time-resolved phase contrast microscopy has opened new perspectives on the formation of the void. Bhuyan {\it et al} imaged the transient index distribution at timescales ranging from nanoseconds to microseconds \cite{Bhuyan2017}. They concluded that the void opening is slow in comparison with the predictions of the shockwave theory, and inferred that void formation in this 2D case arises from the cavitation of a low-viscosity liquid phase. The main difference with the shockwave\index{shockwave} theory is that the estimated deposited energy density is $\sim$7~kJ.cm$^{-3}$, in sharp contrast with the values estimated in the case of spherical void formation, on the order of 90~kJ.cm$^{-3}$ \cite{Hallo2007}. Wang {\it et al} investigated by shadowgraphy the mechanical wave ejected after the plasma formation in PMMA. They observed a wave whose speed corresponds to the speed of sound in PMMA, whatever the input pulse energy.
This is compatible with both theories of cavity formation since, in the shockwave case, the shockwave is expected to propagate over less than a few microns, {\it i.e.} below the resolution of the shadowgraphy experiment \cite{Wang2017}. \section{Applications} \label{sec:applications} \index{filamentation} \index{Bessel beam} The single shot generation of plasma columns of $\sim$1~$\mu$m diameter and of length from several tens to hundreds of micrometers has a number of different applications, which we review here. As mentioned earlier, the plasma channel generated by a smooth filamentation regime from a Gaussian beam is quite close to the one generated by an ultrafast Bessel pulse. As we have seen above, the difference lies in the ability to control the parameters (length, diameter, pulse duration) independently, which makes Bessel beams attractive. We will therefore treat the applications of both types of filaments in a single section. Most of the applications were initiated with Gaussian filaments and refined more recently with Bessel or Bessel-like beams. \subsection{High aspect ratio refractive index modifications} At relatively low peak power, long plasma channels have been used to write series of index modifications in glasses and polymers. The process is applicable in most transparent materials. However, as for Gaussian beam focusing, the positive or negative sign of the photo-induced refractive index change depends on the material itself.\index{grating} Long plasma tracks have been used, for instance, to fabricate gratings in a number of different materials: PMMA\index{PMMA}, silica glass\index{glass}, or even chalcogenides\index{chalcogenides} (see Figure \ref{fig:grating}) \cite{Mikutis2013,Matushiro2016,Zhang2018}. Empty channels formed by Bessel beam illumination have also been used as scatterers in the vicinity of a waveguide \cite{Martin2017}. \begin{figure} \centering \includegraphics[width = \columnwidth]{grating.jpg} \caption{(left) Concept of Bragg grating writing by a Bessel-Gauss beam. Several layers form a thick grating. (right) Optical view of gratings written in fused silica with different parameters. Reprinted figure with permission from \cite{Mikutis2013} with courtesy of M. Mikutis, Workshop of Photonics. Copyright (2013) by the Optical Society of America.} \label{fig:grating} \end{figure} \subsection{Ultrafast laser welding} \index{welding} Joining transparent materials, such as two types of glasses, or joining a transparent material onto silicon or metal, is needed in a very large number of application fields: opto-fluidics, biological analysis, microelectronics, MEMS and MOEMS require that structured glasses, silicon and metals be sealed together after microstructuration. Although a number of different joining techniques exist, none allows for joining over a width of only a few micrometers. Ultrashort pulse lasers are ideal tools for this application, because they can melt the transparent material with very high spatial resolution, while preserving the optical, mechanical and electrical properties of the surrounding components. Before welding, the two parts to be welded have to be set in tight contact. Then, laser illumination is used to melt the transparent material, which expands and fills the empty space. After cooling, on a timescale of microseconds, the two pieces are welded together.
The filamentation welding technique benefits from the relatively high volume of heated material in the plasma column, together with the relaxation of the positioning constraint \cite{Tamaki2005}, as shown in Figure \ref{fig:welding}(left). Dissimilar materials have been welded \cite{Watanabe2006}, even glass on silicon or on metals \cite{Tamaki2006}. Welding with gaps up to $\sim$3~$\mu$m has been successfully achieved using bursts and the heat accumulation effect \cite{Richter2015}, as shown in Figure \ref{fig:welding}(right). \begin{figure} \centering \includegraphics[width=\columnwidth]{welding.jpg} \caption{(Left) Concept of ultrafast laser welding of glass with filamentation. Reprinted figure with permission from \cite{Tamaki2006} with courtesy of Prof. W. Watanabe. Copyright (2006) by the Optical Society of America. (Right) Example of side view imaging of welded glasses. Depending on the position of the melted pool, the molten glass could fill the gap even between irradiation sites. Reprinted figure with permission from \cite{Richter2015} with courtesy of Prof. S. Nolte. Copyright (2015) by Springer Nature.} \label{fig:welding} \end{figure} Mechanical characterizations demonstrate that this technique is extremely powerful: in terms of traction, the welded parts can be as strong as the bulk material itself. The strength of the weld depends on the difference between the thermal and mechanical properties of the two materials. Large differences obviously have a negative impact on the strength of the bonding\index{bonding}. We note that the use of bursts and of the heat accumulation\index{heat accumulation} effect tends to relax the stresses left in the material and to provide stronger welds \cite{Richter2015}. \subsection{Stealth dicing of brittle transparent materials} \index{stealth dicing} \index{glass cutting} \index{cutting} High speed separation of materials is a key technology for a number of applications, specifically for the mass fabrication of screen covers, touchscreens, electronics or lighting technologies. A specific need is to separate, at high speed, glass sheets with thicknesses of several hundreds of micrometers. In order to preserve the resistance of glass to bending and other types of stresses (thermal shocks), the cut needs to be free of chipping, with limited defects in the vicinity of the cut surface. "Stealth dicing" is a technology initially developed for high speed, ablation-free silicon wafer cutting for the microelectronics industry \cite{Kumagai2007}. The concept is that a laser, whose wavelength is chosen within the transparency window of the material ({\it i.e.} an IR laser for silicon), generates a plane of defects within the depth of the material. When the material is set under thermal or mechanical stress, it cleaves along this plane. The initial technology was based on nanosecond IR lasers, and the morphological damages in silicon typically extended over scales of tens of micrometers. \begin{figure} \centering \includegraphics[width =\columnwidth]{StealthDicing2.jpg} \caption{(left) Concept of stealth dicing: in a first step, high speed laser processing creates a series of nanochannels aligned in a plane, which guides cleaving under mechanical stress. Courtesy of R. Meyer, {\it FEMTO-ST Institute}, France. (right) Example of optical microscopy view of cleaved glass after processing. With courtesy of J.
Safioui, {\it FEMTO-Engineering}, France.} \label{fig:StealthDicing} \end{figure} A similar technique was developed to separate glass, based on filamentation and plasma channel formation, leaving high aspect ratio nanovoids in the glass. A periodic pattern of voids, separated by $\sim$5 to $\sim$25~$\mu$m, allows the material to be mechanically cleaved. This can also be performed with Bessel beams \cite{Bhuyan2015,Mishchik2017}. Using commercial ultrafast lasers with multi-100~kHz repetition rates, it is feasible to irradiate at speeds on the order of, or exceeding, 1~m.s$^{-1}$. A small mechanical stress is enough to separate the glass pieces, as shown in figure \ref{fig:StealthDicing}. This technology is particularly attractive in the case of chemically strengthened glass, such as the glasses used for the cover screens of smartphones, since this glass self-cleaves after laser processing. After cleaving, the walls are straight and free from chipping. The technique is mostly non-ablative, which avoids the issue of cleaning debris. Noticeably, it is also possible to cleave along curved paths. To shape the processed glass or to cut at angles different from 90$^{\circ}$, illumination at non-perpendicular incidence is desirable. But in this case, the non-uniform optical path difference over the beam cross-section restricts the length of a Bessel beam inside the transparent workpiece. Jenne {\it et al} have developed an approach where the optical phase profile of the initial Bessel beam is compensated by a secondary mask; cuts at angles tilted by up to 30 degrees were demonstrated \cite{Jenne2018}. At high input pulse energies, the energy stored in the material is sufficient to generate cracks. A slight asymmetry in the input beam is then sufficient to make the crack direction deterministic instead of random. This property has been exploited by Dudutis {\it et al}, who used an imperfect axicon\index{axicon} generating a non-circularly symmetric Bessel beam. It was used to generate cracks extending transversely up to 100~$\mu$m away from the central nanochannel \cite{Dudutis2016}. This brings the potential to increase the inter-channel distance for stealth dicing of glass at even higher speeds. Heat accumulation using burst mode with Bessel beams was also used to initiate the cracks\index{cracks} \cite{Mishchik2017}. Instead of relying on crack formation guided by an imperfection of the axicon, it is also possible to create an asymmetry in the Bessel beam using spatial filtering, so that the generated non-diffracting beam has an elliptical cross-section. Using $\sim$3~ps single pulse illumination, such beams generate nanochannels in glass that also have elliptical cross-sections, whose major-to-minor axis ratio is the same as that of the beam \cite{Meyer2017}. The elliptical cross-section enhances the mechanical stress at the tips of the ellipses and increases the reliability of stealth dicing. A detailed statistical study also demonstrated that cleaving requires less mechanical deformation in this case, with the second benefit of leaving fewer defects in the processed glass, since all laser-induced channels are perfectly cleaved through \cite{Meyer2017a}; see figure \ref{fig:EllipticalBessel}. \index{Bessel beam, elliptical} \begin{figure} \centering \includegraphics[width = 0.9\columnwidth]{EllipticalBessel.jpg} \caption{(top row) Transverse cross-section of an elliptical Bessel beam and corresponding SEM image of an elliptical channel produced by the beam with single pulse illumination in glass.
(bottom) The beam and the red arrow show the laser scanning configuration. SEM image: top view of a glass sample cleaved by the stealth dicing technique, where it is apparent that all elliptical channels have been cleaved through. With courtesy of R. Meyer, {\it FEMTO-ST Institute}, France.} \label{fig:EllipticalBessel} \end{figure} \subsection{Separation of sapphire} \index{sapphire} Sapphire is an important technological material. Its high hardness, just below that of diamond, makes it an ideal cover for screens or for watches. This crystal is even more importantly used as a substrate for the growth of LEDs. Sapphire has also been processed with the same stealth dicing technique as described in the previous subsection. A complementary approach is to take advantage of the crystalline structure of sapphire to guide the fractures. As in the case of glass, laser illumination with high pulse energies generates cracks even in the single shot regime. For C-cut sapphire, three crack directions are usually observed with Bessel beam illumination along the c-axis. However, below a pulse duration of $\sim$600~fs, the fracture can occur in a single direction, jointly determined by the laser pulse polarization and the scanning direction. This was exploited to initiate a series of cracks with a very large inter-pulse distance (25~$\mu$m), paving the way for higher speed cutting \cite{Rapp2017}.\index{cracks} \subsection{Structuration of diamond} \index{diamond} \index{graphitization} \index{electrode} Diamond is the hardest material and is extremely difficult to process. It has a number of applications, particularly because it is bio-compatible. It is also increasingly used in quantum photonics. Ablation of diamond is for now still performed from the surface \cite{Kumar2018}; no high aspect ratio void formation has been reported yet, to the best of the author's knowledge. Diamond has also been proposed as a new material to build high-energy particle detectors. For this application, conductive graphite wires are needed in the bulk of the material. Graphitization of the bulk material has been successfully achieved with ultrafast Bessel beams: a single 10~$\mu$J pulse was sufficient to create a conductive column through a 500~$\mu$m diamond sample \cite{Canfield2017}. We remark that surface and bulk graphitization is a phenomenon that builds up from pulse to pulse, as described in reference \cite{Kumar2017}. \begin{figure}[htb] \centering \includegraphics[width =\columnwidth]{diamondGraphitization.jpg} \caption{Optical microscopy views of graphitization marks created in the bulk of 500~$\mu$m thick diamond, with Bessel pulses of energy 3.5~$\mu$J. The number of pulses at 20~Hz repetition rate is indicated below each graphitized column. Reprinted figure with permission from \cite{Kumar2017} with courtesy of Dr. O. Jedrkiewicz. Copyright (2017) by Springer Nature.} \label{fig:diamondGraphitization} \end{figure} \subsection{Processing of silicon} \index{silicon} Silicon is a material of major interest for microelectronics and has an immense field of applications. Specifically, there are needs for writing waveguides for silicon photonics, as well as micro- and nanochannels for the cooling of silicon chips or for inserting conducting electrodes transmitting signals from one side to the other. These Through Silicon Vias (TSV)\index{Through Silicon Via (TSV)}\index{TSV, Through Silicon via} are particularly important for next generation 3D microelectronic chips.
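The working wavelengths for bulk silicon processing follow directly from its bandgap. As a quick check, taking the standard room-temperature value of the indirect gap, $E_g \approx 1.12$~eV (an assumption of this aside, not a number from the references below):
\begin{equation}
\lambda_{gap}=\frac{hc}{E_g}\approx\frac{1240~\text{eV\,nm}}{1.12~\text{eV}}\approx 1.1~\mu\text{m}.
\end{equation}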
Silicon is indeed transparent in the infrared region of the spectrum, for wavelengths longer than $\sim$1.1~$\mu$m. In this context, attempts to reproduce the results obtained in dielectrics were performed with femtosecond Bessel beams at a central wavelength of 1.3~$\mu$m. However, an absence of morphological modification was observed for bulk focusing with ultrafast pulses. This was explained by the authors as originating from the strong two-photon absorption \cite{Grojo2015}. Recently, Tokel {\it et al} processed modifications in 3D, opening routes to processing similar to that achieved in glass, but this was done with nanosecond pulse durations and required a nonlinear feedback mechanism involving the rear surface of the silicon \cite{Tokel2017}. Bessel beams were also investigated for TSV drilling at a laser central wavelength of 1.5~$\mu$m (Figure \ref{fig:silicon_Bessel}). As drilling with a conventional Bessel beam did not provide enough contrast between the lobes, an apodized version of the Bessel beam was developed, and a 10~$\mu$m diameter TSV in a 100~$\mu$m thick silicon wafer was drilled with $\sim$1200 laser pulses at a repetition rate of 1~kHz \cite{He2017}. More recently, bulk modifications in the more conventional Gaussian-beam approach were demonstrated based on three different processes. In reference \cite{Chanal2017}, illumination at a numerical aperture close to 3 could induce an index change in the bulk of silicon with a single pulse. In the multiple shot regime, 250~kHz repetition rate illumination with 350~fs pulses enabled the writing of waveguides in silicon \cite{Pavlov2017}. The buildup of the index modification was shown to be more reliable with 10~ps pulses than with shorter pulses \cite{Kaemmer2018}. The understanding of the mechanisms leading to modification of the silicon bulk is still incomplete, and more experiments are needed to provide a clear overview of bulk laser processing of silicon. \begin{figure} \centering \includegraphics[width = \columnwidth]{SiliconBessel.jpg} \caption{Silicon processing with Bessel beams in the multishot regime. (left) SEM view of a Through Silicon Via (TSV) in silicon processed with a conventional Bessel beam (CBB). (right) Same view for a Tailored Bessel beam (TBB), where the side lobes of the Bessel beam have been removed. Reprinted figure with permission from \cite{He2017} with courtesy of Prof. K. Sugioka and Prof. Y. Cheng. Copyright (2017) Creative Commons licence.} \label{fig:silicon_Bessel} \end{figure} \newpage \section*{Conclusion} In conclusion, the extremely high peak power of ultrashort laser pulses makes it possible to deposit energy with 3D control inside the bulk of transparent materials. This can be used to generate waveguides, nanogratings or even nano-cavities. Ultrashort laser pulses are therefore well suited to answering the needs for high aspect ratio micro- and nano-processing: drilling, cutting, and producing channels for micro- and nano-fluidics or microelectronics. We have reviewed the basic mechanisms of pulse propagation and plasma formation inside transparent materials, as well as the corresponding experimental characterizations. For wide structures, high aspect ratio laser processing requires the multiple shot illumination regime. The best condition generally corresponds to processing from the exit surface of the workpiece, potentially with the assistance of a liquid or an etchant. Breakthroughs in the field have been made with single shot or single burst processing with filamented beams, which create long, thin and homogeneous plasma channels.
The structures generated in this way therefore possess a very high aspect ratio. Predictable filamentation is made possible with "nondiffracting" Bessel and Bessel-like beams. Control of single shot filamentation has enabled a number of novel applications, ranging from the writing of index modifications and high precision welding to the high speed cutting of transparent materials. A number of efforts are still required to understand the physical processes generating the cavities; this is particularly relevant for silicon. The propagation-invariant properties of Bessel beams are fundamentally at the origin of the possibility of homogeneously depositing energy inside transparent materials with high aspect ratio. We expect that other beam shapes that are also propagation-invariant in the nonlinear regime will be very attractive in the future to process materials with other geometries and to develop novel applications of high-intensity ultrashort pulses. \bibliographystyle{apalike} \section*{Introduction} \label{sec:Introduction} Ultrafast lasers were recognized very early as a powerful tool to process transparent materials. This is because the transfer of laser energy to the solid starts with nonlinear absorption by ionization. Nonlinear absorption is very efficient only around the focal point of the beam. This way, absorption can be completely negligible at the surface of a dielectric while a large fraction of the input pulse energy is absorbed within the bulk of the material. After a number of different physical processes \cite{Mao2004, Gattass2008}, this eventually yields a modification of the bulk material that can range from a simple index change to the formation of an empty cavity. Longer laser pulses, such as nanosecond pulses, can also be used to process transparent materials, but in this case the absorption of the laser pulse initiates from defects. This is random and poorly reproducible from shot to shot. In contrast, ultrafast pulses, {\it i.e.} sub-$\sim$10~ps pulses, modify materials with high reproducibility and with a very reduced heat affected zone. In the last decade, structuring with high aspect ratio has emerged as a new field in the ultrafast laser processing of transparent materials. In early works, most of the attention was focused on creating extremely localized damage in 3 dimensions (index changes, nanogratings, nano-voids) with high numerical aperture illumination. Nonlinear propagation effects, in the general sense "filamentation", were regarded as highly detrimental. However, it progressively appeared that the filamentation of Gaussian and shaped beams could create extended modifications along the propagation direction. This was possible even in the single shot illumination regime. Controlling these phenomena is a great challenge, but it allows for creating modifications that are impossible to generate with conventional point-by-point processing. These propagation regimes create high aspect ratio modifications: their length is much longer than their typical diameter. High aspect ratio processing is required for a number of applications such as microfluidics, photonics or material separation. As we will describe in section \ref{sec:applications}, an important field of application is glass separation with a non-ablative technique. This technique is based on generating high aspect ratio structures in the bulk of a transparent brittle material, organized along a plane, which allows the material to cleave along this weakened plane.
The process is high speed and is obviously very well suited to the mass fabrication of cover glass for touchscreens and consumer electronics. However, structuring with high aspect ratio is challenging because the propagation of an intense ultrafast pulse inside a transparent solid is nonlinear in essence. This chapter is intended as a guide in this field of research and technology. We will first briefly review the phenomena occurring during high-intensity pulse propagation in dielectrics, as well as the means to characterize plasma formation inside their bulk. We will point out the specificities of bulk propagation and of the associated numerical simulations. In section \ref{sec:multishot}, a review of high aspect ratio processing in the multiple shot regime will point out the difficulties to be faced in bulk processing. In contrast, single shot void formation appears simpler and faster. In the related section \ref{sec:void}, we will review the main results, which open new routes for the laser processing of transparent materials. Increasing the length of the empty cavities can be done with the filamentation of Gaussian beams, which we will review in section \ref{sec:filamentation}. But due to the nonlinearities, this process is difficult to predict or scale. In section \ref{sec:Bessel}, we will show that a specific class of beam shapes allows for seeding filamentation such that the propagation is free from distortion in space and time, and is therefore much more predictable. This beam shape is the zeroth-order Bessel beam. It induces high aspect ratio index modifications, nanochannels or nano-voids in the bulk of a number of transparent materials. In section \ref{sec:applications}, we will review the applications of the filamentation of both Gaussian-like beams and Bessel beams. These are numerous, and we will describe the technologies as a function of the processed material (glass, sapphire, diamond, or silicon). \newpage \section{Ultrafast phenomena and nonlinear propagation in transparent materials} \label{sec:phenomena} In this section, we briefly review the main physical phenomena occurring during high intensity ultrashort laser pulse propagation in the bulk of transparent materials. Our objective is to point out that there are a number of differences in the experimental techniques and in the numerical modelling in comparison with surface ablation, which has been described in detail in the preceding chapters. In brief, the physical sequence of material modification by an ultrafast laser pulse can be split into two main steps. First, nonlinear propagation generates a plasma of free electrons and holes via nonlinear ionization. This is the energy deposition step, which is terminated at the end of the laser pulse, typically in the sub-picosecond range. The second step is the relaxation. It involves a sequence of different physical phenomena extending up to the microsecond scale. These phenomena are identical to those occurring at the surface of dielectrics. We will therefore focus on the first step, where the nonlinear propagation plays a determinant role in the modification of the material. For the numerical modelling, propagation in the bulk of transparent materials imposes a number of additional constraints in comparison with surface ablation. First, the propagation distances considered in the bulk processing of materials are orders of magnitude longer than those simulated for surface ablation.
Second, while surface ablation modeling can sometimes be reduced to 0 or 1 dimension, bulk processing requires models in at least two dimensions. We will emphasize in the following how the physics of optical breakdown over long distances can be simulated with reasonably powerful computers. As for experimental characterizations, specific techniques have been developed to characterize plasma formation within the bulk of materials. We will describe them in the second part of this section. \subsection{Linear and Kerr effects} Propagation in transparent materials is determined by several linear and nonlinear effects. As for the linear contributions, the pulse shape is affected by diffraction, dispersion and aberrations such as chromatism or spherical aberration\index{spherical aberration}. Diffraction and aberrations are important effects that explain a number of experimental results \cite{Song2008}. We note that dispersion in the material can generally be safely neglected, because material thicknesses of a few millimeters only weakly affect ultrashort pulses of duration longer than 100~fs. The Kerr\index{Kerr effect} effect mainly contributes to the transient index of refraction, as well as to the self-steepening of the pulse. Cross-phase modulation effects usually have a negligible impact on the pulse intensity. For sufficiently long pulse durations, the Raman contribution to the Kerr effect can be included. Kerr self-focusing\index{self-focusing} shifts the focusing point backwards as the peak intensity increases. \subsection{Nonlinear absorption, plasma absorption and plasma defocusing} The interaction between a laser pulse and a photo-excited solid dielectric is threefold. i) Nonlinear absorption\index{nonlinear absorption} occurs because of the high intensity field. The excited electrons interact ii) with the field and iii) in the field via collisions\index{collisions}. From first principles, the description of nonlinear absorption and of the interaction with the laser pulse should be based on a quantum model of the band system in the periodic high electric field \cite{Otobe2008}. Despite the number of advances in theoretical chemistry, it is still a challenge to accurately describe the ground state of solids such as sapphire, and it is obviously even more difficult for amorphous solids like fused silica. Transition rates between excited states are mostly out of experimental reach, yet a number of highly excited states should be taken into account \cite{Barilleau2016}. Time-Dependent Density Functional Theory (TD-DFT) can model the high-field ionization, but collisional absorption is still difficult to describe in this framework \cite{Yabana2012}. In the framework of numerical simulation of laser pulse propagation of several 100's of fs over a few millimeters of propagation, these approaches are computationally too demanding. To a first approximation, the excited electrons in the upper bands can be considered as free electrons with effective masses\index{effective mass} (parabolic band approximation). This is why, in the rest of the chapter, the excited electrons will be referred to as "free electrons", and nonlinear transitions to the excited states will be referred to as nonlinear ionization\index{ionization}. Thus, the ionization phenomena in dielectrics are described in a similar way as the ionization of atoms, and the modeling of pulse propagation therefore follows the work that was developed for filamentation in air. The description of Keldysh\index{Keldysh model} for ionization is computationally efficient; in this framework, multiphoton and tunnel ionization appear as asymptotic limits \cite{Sudrie2002, Couairon2005}.
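In the multiphoton limit, the ionization rate scales as $W_{PI}\propto\sigma_K I^K$, where $K$ is the minimum number of photons needed to bridge the bandgap. As a quick sketch for fused silica at 800~nm, with the commonly used values $U_{gap}\approx 9$~eV and $\hbar\omega_0\approx 1.55$~eV (illustrative assumptions of this aside):
\begin{equation}
K=\left\lceil \frac{U_{gap}}{\hbar\omega_0}\right\rceil=\left\lceil\frac{9}{1.55}\right\rceil=6,
\end{equation}
\noindent so the photoionization rate scales as $I^6$ at moderate intensities. This extreme nonlinearity is what confines the energy deposition to the focal volume.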
In basic models, the distribution of the free electrons\index{free electrons} over the excited levels is neglected and the free electrons are described only via their number density $\rho$\index{plasma density}. More refined models insert additional spectroscopic levels to describe the energy distribution of the free electrons; this is the multiple rate equations model\index{multiple rate equations} (MRE) \cite{Rethfeld2006}. The interaction of the laser pulse with the plasma is twofold: i) absorption by the free-electron gas excites the free electrons to upper levels; when the energy of the free electrons is sufficiently large, impact ionization occurs and the free-electron density increases; ii) the presence of the plasma reduces the effective index of refraction, yielding a defocusing effect on the laser pulse. The Drude model\index{Drude model} can be used to describe this interaction efficiently. The plasma conductivity $\sigma (\omega)$ is derived from the plasma number density $\rho$, and the plasma can be described as a contribution to the complex permittivity. A frequency-dependent description is valid as long as the number density does not vary too fast in time. Meanwhile, the evolution of the free-electron distribution and impact ionization effects can be described either within the MRE model or by simply considering that every $K$ photons absorbed by the plasma contribute to the ionization of one additional free electron (see Equation \ref{eq:plasmaeq}). In detail, the plasma susceptibility\index{plasma susceptibility} is: \begin{equation} \chi(\omega) = -\frac{\rho e^2}{\varepsilon_0 m\,\omega(\omega + i/\tau_c)} \end{equation} \noindent with $\varepsilon _0$ the vacuum permittivity, $e$ the unsigned electron charge and $m$ the effective electron mass. If the plasma-free permittivity of the medium is $\varepsilon_{SiO_2}$, the combined permittivity\index{plasma permittivity} of the medium and plasma reads \cite{Mao2004}: \begin{equation} \label{eq:epsilon_of_omega} \varepsilon(\omega) = \varepsilon_{SiO_2}(\omega)- \omega_p^2 \bigg[ \frac{\tau_c^2}{1+\omega^2\tau_c^2}-i \frac{\tau_c^2}{\omega\tau_c (1+\omega^2\tau_c^2)}\bigg] =n^2(\omega) \end{equation} \noindent where $\omega_p=\sqrt{\frac{e^2 \rho}{\varepsilon_0 m}}$ is the plasma frequency and $n$ is the complex index of refraction of the medium with the plasma. We see in Equation \ref{eq:epsilon_of_omega} that the plasma contribution reduces the permittivity, and therefore reduces the index of refraction. This is why a plasma defocuses an incoming laser pulse. The collision time\index{collision time} $\tau_c$ is a parameter that should be derived from the free-electron plasma density and from the distribution in momentum space (or the temperature, for a Maxwellian distribution). In practice, the collision time is usually considered as fixed, with values typically ranging from 0.5~fs (dense plasmas \cite{Velpula2016}) to $\sim$10~fs (low density plasmas \cite{Sudrie2002}). Yet imperfect, this model still describes with reasonable accuracy the absorption by the plasma and the change of the local permittivity due to its presence \cite{Gamaly2011}. Finally, other mechanisms such as recombination or Auger decay do exist and can also be included in the models, depending on the computational capability. \subsection{Numerical simulations of pulse propagation in transparent dielectrics} A number of different physical models for ultrafast laser pulse propagation have been developed.
\subsection{Numerical simulations of pulse propagation in transparent dielectrics} A number of different physical models for ultrafast laser pulse propagation have been developed. Here, we discuss a basic model of propagation to provide the reader a first view on how the different mechanisms described above can be numerically simulated. More detailed models can be found for instance in references \cite{Couairon2011,Bulgakova2014}. An early model is based on solving simultaneously a NonLinear Schr\"odinger Equation (NLSE\index{NLSE}) and a rate equation for the plasma density \cite{Feit1974,Tzortzakis2001,Sudrie2002,Wu2002,Couairon2005,Winkler2006}. The following NLSE is derived from Maxwell's equations using a scalar, paraxial, unidirectional model for the field envelope $A(x,y,z,\tau)$ describing the laser pulse with central frequency $\omega_0$, in a temporal reference frame defined by $\tau = t-z/v_g$. Here $t$ and $z$ are time and propagation distance in the laboratory reference frame and $v_g =c/n_{SiO_2}$ is the group velocity. The NLSE reads\index{NLSE}: \begin{multline} \label{eq:NLSE} \frac{\partial A}{\partial z} = \frac{i}{2k}\bigg( \frac{\partial^2}{\partial r^2}+\frac{1}{r}\frac{\partial}{\partial r}\bigg) A-\frac{ik''}{2}\frac{\partial^2 A}{\partial\tau^2}+ik_0 n_2 |A|^2A \\ -\frac{\sigma}{2}(1+i\omega_0\tau_c)\rho A-\frac{W_{PI}(|A|) U_{gap}}{2|A|^2}A \end{multline} \noindent and it has to be solved together with the plasma equation: \begin{equation} \label{eq:plasmaeq} \frac{\partial \rho}{\partial t} = \bigg(W_{PI}(|A|)+ \frac{\sigma |A|^2}{U_{gap}}\rho\bigg) \bigg(1-\frac{\rho}{\rho_{nt}}\bigg) -\frac{\rho}{\tau_t} \end{equation} \noindent where $k=n_0k_0$ is the wavevector inside the medium of refractive index $n_0=\sqrt{\varepsilon_{SiO_2}}$, $k_0$ is the wavevector in vacuum, $k'' = \partial^2 k(\omega) /\partial \omega ^2 |_{\omega=\omega_0}$ is the group velocity dispersion coefficient, $U_{gap}$ is the bandgap of the dielectric medium, $n_2$ is the nonlinear Kerr index and $W_{PI}$ is the nonlinear photoionization rate. $\sigma$ is the plasma conductivity evaluated from the Drude model: \begin{equation} \sigma(\omega_0) = \frac{k\omega_0 e^2 \tau_c}{n_0^2 \varepsilon_0 m \omega_0^2 (1+\omega_0^2\tau_c^2)} \end{equation} The first term in equation \ref{eq:NLSE} corresponds to diffraction, the second to dispersion, the third to the Kerr effect, the fourth to plasma absorption (real part) and plasma defocusing (imaginary part), and the last one to nonlinear absorption. The NLSE is usually solved numerically via a split-step algorithm to calculate the pulse shape in $(x,y,t)$ at each propagation step. Simultaneously, the plasma rate equation is solved for the free electron number density $\rho(x,y,z,t)$. Multiple rate equations can be added to describe more accurately the avalanche process \cite{Rethfeld2006}. In addition, for pulses on the order of several hundred femtoseconds to some picoseconds, the transient generation of self-trapped excitons (STEs), at timescales of $\sim$150~fs \cite{Guizard1996,Mao2004}, as well as structural defects left by previous pulses, can be described by including rate equations for new spectroscopic levels \cite{Bulgakova2014}. We note that the term $(1-\rho / \rho_{nt})$ describes the saturation due to the finite number of electrons available for promotion to the conduction band. In practice, the number density $\rho_{nt}$ is estimated by the density of states (typically $2.2\times 10^{22}$~cm$^{-3}$ for fused silica).
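To give a concrete flavor of this split-step approach, the sketch below propagates only the diffraction and Kerr terms of Equation \ref{eq:NLSE}, in 1D slab geometry and without the temporal dimension; the plasma and nonlinear absorption terms would enter the nonlinear step as indicated in the comments, with $\rho$ updated at each step from Equation \ref{eq:plasmaeq}. All parameter values are illustrative. \begin{verbatim}
import numpy as np

# Minimal split-step sketch: diffraction + Kerr terms of the NLSE, in 1D
# slab geometry, without time, dispersion, plasma or nonlinear absorption.
lam0 = 800e-9; n0 = 1.45
k0 = 2*np.pi/lam0; k = n0*k0
n2 = 3e-20                           # Kerr index [m^2/W], typical for silica

N, Lx = 2048, 400e-6                 # grid points, transverse window [m]
x = (np.arange(N) - N//2)*(Lx/N)
kx = 2*np.pi*np.fft.fftfreq(N, d=Lx/N)

w0, I0 = 20e-6, 1e16                 # waist [m], peak intensity [W/m^2]
A = np.sqrt(I0)*np.exp(-(x/w0)**2)   # envelope with |A|^2 = intensity

dz = 2e-6
D = np.exp(-1j*kx**2/(2*k)*(dz/2))   # half-step diffraction propagator

for _ in range(1000):                # 2 mm of propagation
    A = np.fft.ifft(D*np.fft.fft(A))           # diffraction, half step
    A *= np.exp(1j*k0*n2*np.abs(A)**2*dz)      # Kerr phase, full step
    # The plasma terms would be applied here as extra exponential factors,
    # exp(-(sigma/2)*(1 + 1j*omega0*tau_c)*rho*dz - ...), with rho(x)
    # updated at each step by integrating the plasma rate equation.
    A = np.fft.ifft(D*np.fft.fft(A))           # diffraction, half step

print("peak intensity gain:", (np.abs(A)**2).max()/I0)
\end{verbatim}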
\begin{figure} \centering \includegraphics[width =\columnwidth]{Simul_filament.jpg} \caption{Simulation of an ultrafast IR pulse propagating in fused silica. (left) Evolution of the filament diameter along the propagation; (right) evolution of the pulse temporal profile over the first 4~mm of propagation. Reprinted figure with permission from \cite{Tzortzakis2001} with courtesy of A. Couairon. Copyright (2001) by the American Physical Society. } \label{fig:Simul_filament} \end{figure} Figure \ref{fig:Simul_filament} shows such a simulation result for the evolution of the beam diameter and the pulse temporal profile. The temporal distortion of the pulse is particularly apparent. Finally, we note that there is no straightforward link between the plasma density and the final modification in the transparent material. This is because a number of physical effects occur after the energy deposition: electron-electron scattering, electron-phonon scattering, recombination, structural changes, phase changes, shockwaves, thermal diffusion, etc. \cite{Gattass2008}. Void formation occurs approximately when the plasma density approaches the critical plasma density, but this is a crude estimate \cite{Papazoglou2011, Gamaly2006}. Several other parameters have been used to predict the threshold for melting or vaporisation \cite{Grehn2014,Bulgakova2015}. \subsection{Experimental diagnostics} Experimental characterization is crucial to understand the physics and to identify the regimes in which the modifications are created. Here, we review experimental diagnostics so as to make the reader aware of the potential limitations of the techniques and of the conclusions drawn from the results obtained. \subsubsection{Post-mortem diagnostics} ``Post-mortem'' diagnostics refer to characterizations performed well after the photo-induced phenomena have relaxed. In-bulk material modifications can be characterized non-destructively only by optical microscopy, including phase contrast, polarized microscopy and Raman techniques. Optical characterization has however a poor spatial resolution ($\sim$0.5~$\mu$m in the best cases, depending on the probe wavelength and the numerical aperture\index{numerical aperture} of the imaging). In addition, spherical aberration\index{spherical aberration} must be compensated to reach this high spatial resolution. In most cases, the sub-micron structures described in this chapter are not resolved by optical means. For higher resolution, only destructive characterization means are available. Mechanical cleavage, polishing or Focused Ion Beam (FIB) milling are used to provide physical access for Scanning Electron Microscopy (SEM). These techniques are extremely delicate because the processing technique should not affect the nanostructure itself (filling of empty voids by particles, modification of a channel by FIB, ``curtain effect'', etc.). \subsubsection{Plasma dynamics characterization} Plasma dynamics is characterized by pump-probe measurements of the transient distribution of refractive index change. A number of different techniques have been implemented to measure this index change. Several of them are adaptations of techniques initially developed to characterize plasma plumes expanding in vacuum from the surface of solids. \begin{itemize} \item {\it Shadowgraphy}\index{shadowgraphy} is based on transversely illuminating the plasma with a probe pulse.
In this case, the complex refractive index change $\Delta n$ is estimated from the transmission $T$ of the probe through the plasma of thickness $L$: $T(x,z)=\exp \left[ -4\pi \,\mathrm{Im}(\Delta n) L/\lambda_p \right]$, where $x$ and $z$ are the spatial coordinates in the transverse and longitudinal directions respectively, and $\lambda_p$ is the central wavelength of the probe \cite{Papazoglou2007,Grossmann2016}. This still requires estimating {\it a priori} the plasma thickness $L$, and assuming that the probe propagation through the plasma is perfectly straight ({\it i.e.} negligible diffraction effects). Recently, a new tomography\index{tomography} approach was developed to enable the retrieval of the 3D distribution of the extinction coefficient, which removes the need to assume the value of the thickness $L$. It is based on multiple shadowgraphy experiments where the beam is rotated around the optical axis between each illumination \cite{Bergner2018}. Spectrally resolved shadowgraphy is a powerful technique providing access to the laser-deposited energy under certain approximations \cite{Minardi2014}. In a related approach, Hashimoto {\it et al} used time-resolved micro-Raman spectroscopy to determine the evolution of the temperature distribution after ultrafast excitation of glass \cite{Hashimoto2015}. \item {\it Pump-probe interferometry}\index{pump-probe measurement} is a technique that retrieves amplitude and phase variations. Depending on the implementation, simplified versions of the setup provide access only to the phase measurement, hence to the real part of the refractive index change. The interferometry can be performed with a reference signal that does not cross the interaction medium. Spectral interferometry\index{spectral interferometry} is a technique where two probe pulses interfere within a spectrometer, as shown in Figure \ref{fig:SpectralInterferometry}. The reference pulse passes through the medium before the pump, and the second probe records the amplitude and phase change with a variable delay with respect to the pump, but a fixed delay with respect to the reference. Amplitude and phase can be retrieved from the spectral fringes. This technique is extremely precise, yet it is restricted to characterizing a single spatial dimension \cite{Mao2004}. \begin{figure} \includegraphics[width=0.5\columnwidth]{SpectralInterferometry.jpg} \caption{ Example of a pump-probe spectral interferometry setup. Reprinted figure with permission from \cite{Mao2004} with courtesy of Prof. S. S. Mao. Copyright (2004) by Springer Nature.} \label{fig:SpectralInterferometry} \end{figure} It is also possible to use the interference between the probe wave and the scattered wave emitted by the plasma to characterize the plasma density with holography \cite{Papazoglou2008,Papazoglou2014}. This provides quantitative measurements of phase and amplitude. Still, although quantitative measurements can be performed, one must keep in mind that the characterization is convoluted with the optical response function of the imaging apparatus. This actually imposes a severe constraint on the effective spatial resolution of the measurements. After retrieving the distribution of the complex index of refraction, the Drude model is used to link the index change with the plasma density, following Eq. \eqref{eq:epsilon_of_omega}. \item {\it Two-color probing}. The retrieval of the plasma density from the index change distribution using Eq. \eqref{eq:epsilon_of_omega} requires an assumption on the collision time $\tau_c$.
Repeating the probe measurement with another probe wavelength removes the ambiguity on $\tau_c$ \cite{Velpula2016}. \item {\it Phase contrast microscopy} records images that are proportional to the variation of the index of refraction \cite{Zernike1942}. This does not require a second reference probe, and makes it straightforward to image the sign of the index variations associated with densification or material expansion \cite{Mermillod-Blondin2011}. \end{itemize} \subsubsection{Characterization of the pulse distribution} The pump pulse can also be characterized after its interaction with the solid, which can easily be compared quantitatively to numerical simulation results. This has been performed as a spatially resolved cross-correlation \cite{Grazuleviciute2015}. To provide the evolution of the spatiotemporal dynamics along the propagation, the pulse has to be measured at intermediate propagation distances. This is however possible only if the nonlinear propagation stops. It is feasible, for instance, if the medium has a variable length, because further propagation in air is linear for sufficiently low pulse energies. Jarnac {\it et al} have used a variable size water cuvette \cite{Jarnac2014}. Xie {\it et al} have reconstructed the 3D evolution of the time-integrated fluence distribution by controlling the relative position of the beam with respect to the exit side of the sample \cite{XieSR2015}. \subsubsection{Plasma luminescence} Since the temperature of the plasma phase is typically several thousand Kelvin, the blackbody emission is in the visible region of the spectrum. Side imaging of the plasma luminescence provides a semi-quantitative characterization of the plasma distribution and temperature. This also allows for characterizing the dynamics in the multiple shot regime, to follow the drilling mechanism \cite{Wu2002,Hwang2008}. Plasma emission can also include fluorescence from two-photon absorption \cite{Tzortzakis2006}, and fluorescence from the relaxation of self-trapped excitons (STEs) and transient color centers (NBOHCs) \cite{Papazoglou2011}. \subsubsection{Acoustic waves} Direct recording of the amplitude of acoustic waves also provides indications on the laser-deposited energy that was eventually converted into mechanical waves, and on the dynamics of the shockwave \cite{Noack1998,Kudryashov2008}. Imaging of the dynamics of the acoustic waves that follow shockwaves can be performed by shadowgraphy. The evolution of the wave speed in time provides estimates of the laser-deposited energy, using Sedov's theory. This is however restricted to very specific geometrical conditions. \section{Microstructuring of transparent materials in multiple shot regime} \label{sec:multishot} In-volume structuring of transparent materials is required for a number of applications where channels or deep trenches need to be processed: micro- or nano-fluidics, biosensors, fabrication of mechanical pieces with micron resolution, microelectronics, MEMS and micro-optics are typical fields of application. When the typical transverse dimension exceeds 1~$\mu$m, multiple pulses are required for the structuring. In this regime, the propagation and absorption of a laser pulse are strongly affected by the modification of the material performed by the previous pulses. Indeed, the irradiation by an initial ultrashort laser pulse leaves index modifications, color centers, highly absorbing defects, voids, ripples or a rough surface.
On these structural defects, the next pulses can be scattered, absorbed, diffracted or guided, depending on the type of modification, and depending on the numerical aperture and repetition rate. Microstructuring with femtosecond lasers in the multiple shot regime has been realized either from the front or from the rear surface of transparent materials. Front surface processing usually corresponds to ablation, drilling, trepanning or, for much weaker modifications, waveguide writing (see next chapter). This is a delicate geometrical configuration for the reasons mentioned above: scattering by structural defects can be extremely deleterious, because ablation can happen even in regions that were not supposed to be illuminated. In contrast, rear surface processing is very attractive because the problem of plasma plume shielding and the structural modifications induced by previous pulses have no impact on the propagation of the following pulses, except at the ablation site (see Figure \ref{fig:multishot}). The drilling strategy consists in illuminating the exit surface up to ablation and progressively translating the beam at the same speed as that of debris removal. It has been successfully used to write high aspect ratio structures in glass and other transparent materials \cite{Zhao2011}. Processing with high-repetition rate trains of pulses, {\it i.e.} bursts, takes advantage of the heat accumulation\index{heat accumulation} process \cite{Eaton2005} to increase the efficiency of the ablation or modification, so as to reduce the total amount of energy deposited in the material. This can reduce the size of the heat affected zone. A comprehensive comparison between front and rear surface processing with picosecond pulses demonstrated interesting processing parameter windows where high aspect ratio structures could be drilled at high speed with a reduced heat affected zone. The drilling performance was evaluated to be better with picosecond pulse durations than with femtosecond ones, with comparable channel quality \cite{Karimelahi2013}. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{multishot.jpg} \caption{Concepts of the different laser drilling methodologies for high aspect ratio processing of transparent materials, involving static or dynamic sample positioning, and front or rear side processing. Reprinted figure with permission from \cite{Karimelahi2013} with courtesy of Prof. P. Herman. Copyright (2013) by Springer Nature.} \label{fig:multishot} \end{figure} When the aspect ratio is high, the energy density of the explosions at the ablation sites might not be sufficient to eject material out of the channel\index{water-assisted drilling}. To solve this issue, water assistance can help remove debris out of the channels. This also allows for drilling non-straight channels in three dimensions \cite{Li2001,Kim2005}. The aspect ratio of the channels drilled inside transparent solids can reach 1000:1, with diameters on the order of several microns down to a few hundred nanometers. In some configurations, the water-filled channel behaves as a resonance tube, and acoustic limitations are found when processing very high aspect ratio nanochannels. In this case, the length of the channel is limited by the node of the velocity inside the tube \cite{Lee2007}. As a remark, by using a high aspect ratio beam, such as a filament or a non-diffracting Bessel beam, the drilling can be performed without scanning. This technique is called self-guided drilling.
This removes the constraint of translating the focal point at the same speed as that of material removal \cite{Hwang2008,Bhuyan2010}. A different strategy is based on a two-step process\index{etching}. The first step consists of femtosecond laser modification of the fused silica matrix; in the second step, wet etching with hydrofluoric acid (HF) or potassium hydroxide (KOH) removes the modified glass \cite{Marcinkevicius2001}. This technique allows for creating various shapes in 3D, including high aspect ratio micro- and nanochannels \cite{Bellouard2004,Ke2005,An2008,Wortmann2008}. Some groups have also taken advantage of both regimes, by laser processing the rear side of a transparent workpiece set in contact with an etching liquid (HF or KOH) \cite{Cao2018}. However, the two-step approach is for now restricted to fused silica, several glasses and sapphire. The capability of the process relies on the difference in etching rates between laser-processed and pristine material. In conclusion, the multishot drilling regime is unavoidable for forming wide structures in transparent materials. Rear-side structuring removes some of the difficulties associated with the structural changes induced by previous pulses, which are not easily predictable or controllable. For smaller scale structures, the situation is different, and one can take benefit of generating voids in the single shot regime. \section{Single shot void formation under high numerical aperture focusing} \label{sec:void} In 1997, Glezer {\it et al} demonstrated for the first time that single pulses could generate nanometric cavities in the bulk of fused silica, quartz, BK7 glass, sapphire, diamond and acrylic \cite{Glezer1997}\index{void}\index{nanovoid}. 100~fs, 0.5~$\mu$J pulses were focused with a numerical aperture (NA) of 0.65 below the surface of the transparent materials. The diameter of the cavities was on the order of 200~nm, and the material surrounding the cavity was compressed. The authors suggested that the extreme TPa pressures reached after laser energy deposition could explain the formation of such voids in hard solid materials. In silica glass, the voids could be moved and merged, or be induced even with multiple shots \cite{Watanabe2000}. Besides the discovery of a new mechanism, these results opened a new route for laser processing of transparent materials. Indeed, they demonstrated the opportunity to process materials directly in the bulk instead of progressively processing channels or long structures, shot after shot, from one of the surfaces. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{voids.jpg} \caption{[left] SEM imaging of a pattern of nano-cavities created in sapphire by single 150~fs, 800~nm, 120~nJ pulses. The sample has been mechanically cleaved before imaging. Scale bar is 1~$\mu$m. Reprinted figure with permission from \cite{Juodkazis2006} with courtesy of Prof. S. Juodkazis. Copyright (2006) by the American Physical Society. [right] Concept of the micro-explosion, ion separation and ultrafast quenching developed by Vailionis {\it et al}. Reprinted figure with permission from \cite{Vailionis2011} with courtesy of Prof. A. Vailionis.} \label{fig:voids} \end{figure} Figure \ref{fig:voids}(left) shows an SEM image of such nano-cavities produced in sapphire, visualized after mechanically cleaving the sample. A modified region can be observed around the spherical cavities. This modified region can be etched using an HF solution.
The formation of a cavity was interpreted in terms of the release of a shockwave after the generation of a dense plasma, followed by a rarefaction wave. Figure \ref{fig:voids}(right) illustrates the concept. Hydrodynamic numerical simulations based on equations of state for fused silica \cite{Gamaly2006,Hallo2007} show that the formation of the void occurs on a typical timescale of a few hundred picoseconds after illumination. The shockwave\index{shockwave} stops when the internal pressure equals the Young's modulus of the material. Separate experimental results, based on phase contrast microscopy characterizing the plasma dynamics, were compatible with this theory \cite{Mermillod-Blondin2009}. Another potential formation mechanism is cavitation\index{cavitation} by material retraction under GPa pressure, in a similar way to what happens in water \cite{Vogel2008}. In the model of nano-cavity formation after a high-pressure shockwave and rarefaction wave, the pressures transiently reach terapascals (TPa), and the compression of the material around the void leaves a densified material. The density increase typically reaches 14\% in sapphire \cite{Juodkazis2006}. This value was confirmed later in another geometrical configuration \cite{Rapp2016}. The state corresponding to such extreme pressures and temperatures is the Warm Dense Matter (WDM) state, which lasts less than $\sim 1$~ns. The fast cooling can quench relaxation and can generate new material phases around the nano-cavity. Theoretical studies predict phase transitions of aluminum into hcp-Al and bcc-Al at pressures in the multi-hundred GPa range, confirmed recently by diamond anvil cell compression experiments \cite{Fiquet2018}. These phases of aluminium have been discovered around nano-cavities produced in Al$_2$O$_3$ \cite{Vailionis2011}, demonstrating the compatibility of the high-pressure shockwave mechanism with experimental results. In conclusion for this section, the formation of voids inside transparent materials reflects the potential for high energy density deposition within the bulk of transparent materials. A wide range of different structures is then possible, provided that the propagation and energy deposition can be controlled. This is what will be discussed in the following sections. \section{Filamentation and optical breakdown of Gaussian beams} \label{sec:filamentation} \index{filamentation}Filamentary structures, {\it i.e.} elongated damage tracks, were identified very early in the wake of high peak power laser illumination of dielectrics \cite{Hercher1964, Yablonovitch1972}. This was in fact a severe problem for ultrashort pulse amplification until the invention of Chirped Pulse Amplification \cite{Strickland1985}. For a long time, however, optical breakdown and filamentation were opposed: optical breakdown was the regime where dielectrics undergo sufficient nonlinear ionization to induce a strong permanent modification \cite{Ashcom2006,Nguyen2003}. In contrast, filamentation was regarded as a dynamical mechanism transforming the initial Gaussian beam into a quasi-soliton. This regime was identified by a strong supercontinuum emission and a low plasma density, such that the modifications generated are weak index changes, such as waveguides. \begin{figure} \centering \includegraphics[width =0.9\columnwidth]{DiverseForms.jpg} \caption{Diversity of damage produced in single shot by ultrashort pulses in the filamentation regime. (a) Non-periodic series of voids formed in fused silica.
Reprinted figure with permission from \cite{Luo2001} with courtesy of Prof. Q. Gong. Copyright (2001) by the Institute Of Physics. (b-d) Void channel formation in PMMA. (b) Side view optical imaging of void channels formed after single shot illumination for different input pulse energies. (c-d) Scanning Electron Microscopy (SEM) of the void formed for 2~$\mu$J: (c) transverse cross-section, (d) longitudinal cross-section. Reprinted figures (b-d) with permission from \cite{Sowa2005} with courtesy of Prof. W. Watanabe. Copyright (2005) by Springer Nature.} \label{fig:DiverseForms} \end{figure} However, filamentation has no precise definition \cite{Couairon2007}. Not only is there no precise boundary between optical breakdown\index{optical breakdown} and filamentation, but these regimes in fact overlap \cite{Luo2001,Sowa2005}. This overlapping regime is specifically the regime of interest for this chapter. We can refer to filamentation as the regime of nonlinear pulse propagation in dielectrics where dynamical reshaping of the pulse occurs in space and time under the influence of, among others, Kerr effects, nonlinear ionization and plasma defocusing. Because the Kerr effect\index{Kerr effect} in transparent dielectrics is three orders of magnitude stronger than in air, and because plasma densities in solids easily reach 100 times those observed in air, filamentation in solids is much more confined than in air, and the filaments survive only over some millimeters. While the diameter of plasma channels in gases is on the order of 100~$\mu$m, it is typically less than 10~$\mu$m in solid dielectrics \cite{Couairon2007}. Supercontinuum generation is not necessarily a condition for filamentation, since this process is efficient only over very long propagation distances ($\sim$centimeters), where frequency mixing can become efficient. In transparent dielectrics, a very wide family of modifications can be generated in this complex irradiation regime. Figure \ref{fig:DiverseForms} assembles typical results found in the literature. The morphology of these strong modifications (strong index changes, cavities) cannot be straightforwardly explained from the linear propagation of the Gaussian beam with which they have been produced. It is the filamentation that has reshaped the beam and induced these {\it a priori} unexpected morphologies: elongated voids, channels, and series of voids (periodic or non-periodic) \cite{Luo2001}. The filamentation process can be understood from figure \ref{fig:Papazoglou}. The figure shows a measurement of the transient change of the index of refraction in air during pulse propagation; similar behavior can be observed in solids \cite{Papazoglou2014}. The positive index change is shown in purple, and it corresponds to Kerr self-focusing at the rising edge of the pulse. Then, when the pulse intensity is sufficiently high, the medium is ionized and the plasma channel decreases the index of refraction. The negative index change tends to defocus the pulse, acting as a negative lens, so that the following part of the pulse generates slightly less plasma\index{plasma defocusing}. Because of the reduction of plasma generation, the defocusing effect is then reduced and a higher plasma density is generated again. This process repeats itself as long as the intensity is sufficiently high \footnote{We note that this reasoning is somewhat simplistic because it is based only on a spatial description, whereas in reality the rising and trailing edges of the laser pulse do not experience the same effects.}.
Depending on the exact spatial phase profile, the process of plasma generation might be quasi-periodic, very homogeneous or quite complex. It leaves a plasma channel which relaxes by generating a modification of the material, whose morphology depends on the plasma density distribution. \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{Papazoglou_holographicMeasurement.jpg} \caption{Holographic measurement of the spatial distribution of the plasma density at different pump-probe temporal intervals. Reprinted figure with permission from \cite{Papazoglou2008} with courtesy of Prof. P. Papazoglou and Prof. S. Tzorzakis. Copyright (2008) AIP Publishing.} \label{fig:Papazoglou} \end{figure} The competition between the different nonlinear effects that sustain the filamentation process can be evaluated with characteristic distances \cite{Couairon2007}. The nonlinear length is $L_{NL}=1/(n_2 k_0 I_0)$, where $n_2$ is the nonlinear Kerr index and $I_0$ the peak intensity of the pulse. The plasma characteristic length is $L_{plasma} = 2 n_0 \rho_c/(k_0\rho)$, where we use the notations of pages~\pageref{eq:epsilon_of_omega} and \pageref{eq:plasmaeq} and $\rho_c =\varepsilon _0 m_e \omega_0^2 /e^2 $ is the critical plasma density at the laser central frequency $\omega_0$ (see the numerical sketch below). When these distances are of the same order of magnitude as the Rayleigh range inside the transparent material, a rich dynamics can be induced. As an example for fused silica, for peak intensities of 10$^{12}$ to 10$^{13}$~W.cm$^{-2}$, the characteristic nonlinear length is on the order of some tens of microns, while the plasma length shrinks from some 40~$\mu$m to some microns when the plasma density increases from 10$^{19}$ to 10$^{20}$~cm$^{-3}$, as is the case during the plasma buildup. Therefore, focusing with a numerical aperture below $\sim$0.4 will trigger a long filamentation process, when spherical aberration is neglected. These numbers match, for instance, the experimental results of reference \cite{Papazoglou2014}. We note that the Marburger formula\index{Marburger} for filamentation is most of the time inapplicable \footnote{The Marburger formula is calculated for a collimated beam. It does not take into account any spatial phase, like spherical aberration or focusing conditions. The Dawes and Marburger formula is also semi-empirical \cite{Couairon2007} and therefore has a very narrow range of applicability.}. Therefore, low focusing numerical apertures, short input pulse durations and high peak powers are prone to seed a filamentation regime with rich dynamics over long distances, where a number of four-wave mixing processes can take place, among others. Spherical aberration contributes significantly to triggering the filamentation process\index{spherical aberration}. This is particularly the case when a Gaussian beam is focused at the rear side of a thick sample. Under spherical aberration, paraxial rays are focused at a much farther point than the focal position of non-paraxial rays. This drastically elongates the effective linear focal zone and, in turn, the filamentation process can be triggered. As an example, Ahmed {\it et al} inserted thick glass plates between the focusing microscope objective and the workpiece to induce long filaments in the glass workpiece \cite{Ahmed2013}. This is also the case for rear side focusing. In reference \cite{Luo2001}, the NA of 0.65 associated with rear side focusing formed a series of periodic voids (see Figure \ref{fig:PeriodicVoids_Song}). We note that this is the same numerical aperture that Glezer {\it et al} used to generate single, well-confined nano-voids \cite{Glezer1997}. Similarly, Kanehira {\it et al} used NA 0.9 focusing through 750~$\mu$m thick borosilicate glass and produced periodically spaced voids \cite{Kanehira2005}.
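As announced, a short numerical sketch of these characteristic scales for fused silica at 800~nm follows; it uses the bare electron mass, whereas a reduced effective mass would lower $\rho_c$ and hence $L_{plasma}$. \begin{verbatim}
import numpy as np

# Characteristic lengths of filamentation in fused silica at 800 nm.
lam0 = 800e-9
k0 = 2*np.pi/lam0
omega0 = 2*np.pi*3e8/lam0
n0, n2 = 1.45, 3e-20              # linear and Kerr indices (typical values)
eps0, e, m_e = 8.854e-12, 1.602e-19, 9.109e-31

rho_c = eps0*m_e*omega0**2/e**2   # critical plasma density [m^-3], ~1.7e27
I0 = 1e17                         # peak intensity [W/m^2] = 1e13 W/cm^2
L_NL = 1/(n2*k0*I0)               # nonlinear length, some tens of microns

for rho in (1e25, 1e26):          # plasma densities 1e19..1e20 cm^-3
    L_plasma = 2*n0*rho_c/(k0*rho)
    print(f"rho = {rho:.0e} m^-3 -> L_plasma = {L_plasma*1e6:.0f} um")
print(f"L_NL = {L_NL*1e6:.0f} um")
\end{verbatim}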
\begin{figure} \centering \includegraphics[width=0.8\columnwidth]{PeriodicVoids_Song.jpg} \caption{Numerical simulation of the fluence distribution when focusing a Gaussian beam with NA 0.9 through 200~$\mu$m of fused silica. Several high fluence spots appear along the optical axis. Reprinted figure with permission from \cite{Song2008} with courtesy of Prof. J. Qiu and Prof. Z. Xu. Copyright (2008) AIP Publishing.} \label{fig:PeriodicVoids_Song} \end{figure} Filamentation in transparent materials has been demonstrated for a number of different laser wavelengths, ranging from IR to UV \cite{Tzortzakis2006}. The operable window is limited by the transparency window of the material. Shorter wavelengths tend to generate denser plasmas and reduce the filament length. A detailed study compares filament formation for IR and visible wavelengths \cite{Karimelahi2013}. In the case of illumination with a pulse train, {\it i.e.} a burst, thermal effects play a role. Indeed, the typical cooling time after laser pulse irradiation is in the $\mu$s range (strongly depending on focusing conditions), such that the pulses within a burst at several MHz repetition rate influence one another via the thermo-optic effect. This effect increases the local index of refraction of glasses at the locations where the temperature is high \cite{Ghosh1995}. The heat accumulation\index{heat accumulation} can lead to laser beam trapping and guiding. With a low repetition rate laser, the photo-excitation has completely relaxed before the arrival of the subsequent pulse. The latter diffracts on the structures left by the previous pulses. This regime was used to induce periodic damage \cite{Luo2001,Zhu2005,Kanehira2005,Sowa2005}. In this regard, we note that even a surface crater does not prevent the occurrence of filamentation \cite{Luo2001}. In conclusion of this section, filamentation dynamics is a complex phenomenon, highly dependent on the input conditions and on the precise dynamics of the ionization process. It is therefore extremely difficult to predict and to scale. Filamentation can generate plasma tracks with diverse morphologies. In the field of applications such as ``filamentation cutting'' or ``filamentation welding'', as we will see in section \ref{sec:applications}, the state of the art usually refers to filamentation for the formation of long, homogeneous, high aspect ratio plasma channels. Interestingly, the filamentation process, when it creates long uniform plasma channels, spontaneously transforms a Gaussian beam into a nonlinear conical wave \cite{Dubietis2004,Porras2008}. Nonlinear Bessel beams, characterized by a conical energy flow from the lateral rings to the central core, have been proposed as attractors to the filamentation regime \cite{Porras2004}. It is therefore natural to generate plasma filaments from Bessel beams, as we will describe in the next section. \section{Nonlinear propagation of ultrafast Bessel beams} \label{sec:Bessel} Zeroth order Bessel beams\index{Bessel beam}\index{nondiffracting beams}\index{diffraction-free beams} are invariant solutions to the Helmholtz equation. Bessel beams can seed a nonlinear propagation regime where a homogeneous plasma channel is generated \cite{Courvoisier2016,Duocastella2012}.
In this section, we will review what Bessel beams are and highlight the properties of their nonlinear propagation that are most relevant for laser materials processing. Then we will review basic applications, particularly high aspect ratio nanochannel processing. \begin{figure} \centering \includegraphics[width= 0.9\columnwidth]{BesselInterf.jpg} \caption{ (top) Intensity distribution of a Bessel-Gauss beam (bottom) corresponding ray-tracing representation, showing that the Bessel beam is an interference field with cylindrical symmetry.} \label{fig:Bessel_Interference} \end{figure} \subsection{Bessel beam structure} Within a scalar model of monochromatic light, Durnin demonstrated that the Helmholtz equation $\left(\nabla ^2+ (\omega/c)^2 \right) A=0$ has a solution that is propagation-invariant with a hot spot. This central hot spot can have a diameter down to ``a few wavelengths'', as he wrote, and in fact even below the wavelength \cite{Durnin1987a,Durnin1987}. The solution found by Durnin is cylindrically symmetric: \begin{equation} A(r,z)=J_0(k_0 r \sin \theta ) e^{i k_0 z \cos \theta } \end{equation} \noindent where $k_0$ is the wavevector and $\theta$ is the Bessel beam parameter, which is called the {\it cone angle}. This solution, as is the case for plane waves, is of infinite energy. We can experimentally generate only apodized solutions. Several types of apodization exist, which depend on the means of generating the finite energy Bessel beam. In the rest of this chapter, finite energy Bessel beams will be referred to as ``Bessel beams'' for the sake of simplicity. The first experimental realization of a Bessel beam used an axicon \cite{McLeod1954}\index{axicon}, even before it was realized that this corresponds to a ``diffraction-free'' solution. Durnin {\it et al} produced Bessel beams from a ring slit placed at the focal plane of a converging lens, which Fourier transforms the ring aperture into a Bessel beam. Indeed, in the spatial frequency ($k_r$) space, {\it i.e.} the Fourier space, an ideal Bessel beam is a circle of amplitude $A(k_r)=\delta(k_r-k_0\sin\theta)$. Because of the properties of the Fourier transform, the thinner the ring slit, the longer the actual Bessel beam. However, this means of generation has a poor energy throughput, since most of the power is lost. In the field of laser materials processing, it is preferable to shape beams in the {\it direct} space, as opposed to the Fourier space. Bessel beam generation in the direct space can be performed using axicons \cite{Grunwald2004,Grosjean2007,Tsampoula2007,Akturk2009,Xie2012,Duocastella2012}, holograms \cite{Vasara1989}, Spatial Light Modulators \cite{Chattrapiban2003, Courvoisier2009} or, equivalently, Diffractive Optical Elements (DOEs) \cite{Amako2003,Khonina2004}. The shaping technique consists in applying a spatial phase $\phi(r) = k_0 r \sin \theta$. The application of such a phase onto a Gaussian beam creates what is called a Bessel-Gauss beam\index{Bessel-Gauss beam}. The evolution of the on-axis intensity as a function of the propagation distance $z$ can be derived from the stationary phase approximation of the Fresnel diffraction integral: \begin{equation} I(r=0,z)=\frac{4 P_0 k_0 \sin^2\theta}{w_0^2}\, z\, e^{-2(z \sin\theta/w_0)^2} \end{equation} \noindent where $P_0$ and $w_0$ are respectively the power and the waist of the input Gaussian beam \cite{Roy1980,Jarutis2000}.
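The following short sketch evaluates this on-axis profile, together with the two geometric parameters discussed in the next paragraph (central spot size and Bessel zone length); the values are illustrative. \begin{verbatim}
import numpy as np

# On-axis intensity of a Bessel-Gauss beam (equation above).
lam0 = 800e-9
k0 = 2*np.pi/lam0
theta = np.deg2rad(10)            # cone angle (inside the medium)
w0 = 1e-3                         # input Gaussian waist [m]
P0 = 1.0                          # input power [W]; intensity scales linearly

z = np.linspace(1e-6, 3*w0/np.tan(theta), 4000)
I_axis = 4*P0*k0*np.sin(theta)**2/w0**2 * z * np.exp(-2*(z*np.sin(theta)/w0)**2)

d_fwhm = 0.36*lam0/np.sin(theta)  # central spot diameter (next paragraph)
z_max = w0/np.tan(theta)          # Bessel zone length (next paragraph)
z_peak = z[np.argmax(I_axis)]     # analytically w0/(2*sin(theta))
print(f"spot = {d_fwhm*1e6:.2f} um, zone = {z_max*1e3:.1f} mm, "
      f"peak at {z_peak*1e3:.1f} mm")
\end{verbatim}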
High quality axicons enable the generation of high-power Bessel-Gauss beams without spatial filtering \cite{Boucher2018}. Figure \ref{fig:Bessel_Interference} shows a ray-tracing representation of a Bessel-Gauss beam. A Bessel beam is an interference field, whose longitudinal extent, the {\it Bessel zone}, is: $Z_{max}\sim w_0/\tan\theta$. It is apparent from this geometrical picture that the invariance of the wave-crossing angle along the propagation makes the fringe period invariant. In other words, the central spot size does not change along the interference field, hence the denomination ``diffraction-free''. In contrast with Gaussian beams, Bessel-Gauss beams have two free parameters with which the Bessel zone length and the diameter of the central spot can be adjusted independently. The central spot diameter, $d_{FWHM} \sim 0.36 \lambda_0/\sin \theta$, is determined only by the cone angle, whereas the Bessel beam length can be independently adjusted via the input beam waist. It is important to realize that a Bessel beam corresponds to a line focus. Each point on the focused segment is topologically linked to a circle in the input plane \cite{Froehly2014}. In this regard, the energy does not flow along the optical axis; instead, the energy flow is conical. The polarization state of a Bessel beam is close to the one of the input beam since, for sub-30$^{\circ}$ cone angles, the propagation is close to paraxial \cite{Zhang2007}. The longitudinal component of the electric field is mostly negligible in the experiments described below. We note that upon refraction from air to a dielectric medium, both the wavelength and the cone angle are corrected by the index of refraction of the dielectric. These corrections cancel out and do not change the value of the central spot size in the material \cite{Brandao2017}. In contrast, the length of the Bessel zone is increased by the factor $n_0$. This is similar to the case of a Gaussian beam, where the Rayleigh range is increased while the waist remains identical upon refraction \cite{Nemoto1988}. Up to here, we have described monochromatic Bessel beams. In principle, the bandwidth of ultrashort pulses has to be taken into account in the description. But since it is less than 1\% for pulses of 100~fs, the spatio-temporal aspects of the pulse generation can be neglected. Apart from the fact that the on-axis wavepackets created by Bessel-X-Pulses and Pulsed Bessel Beams do not travel at the same speed (respectively $c/\cos \theta$ and $c \cos\theta$), no impact in terms of plasma generation efficiency has been reported so far. More details can be found in references \cite{Klewitz1998,Froehly2014}. \subsection{Filamentation of Bessel beams} The nonlinear propagation of Bessel beams can be described in terms of three different families \cite{Polesana2008}. {\it Weakly nonlinear Bessel beams} generate negligible plasma density and are only characterized by a shrinking of the central lobe due to the Kerr effect; {\it nonstationary Bessel beams}, following the denomination of Polesana {\it et al}, generate a quasi-periodic plasma distribution along the propagation in the material. The third family, {\it stationary Bessel beams}, is characterized by a quasi-invariant propagation that generates a homogeneous plasma channel.
In more detail, the second regime is largely driven by the Kerr nonlinearity, which generates via four-wave mixing processes two secondary waves with transverse wavevectors $k_r = \sqrt{2} k_{r0}$ and $k_r = 0$, where $k_{r0}=k_0 \sin \theta$ is the radial wavevector of the initial Bessel beam \cite{Gaizauskas2006,Ouadghiri-Idrissi2017}. The interference between these secondary waves and the initial Bessel beam creates periodic oscillations \cite{Polesana2008, Norfolk2010}. Periodic modifications in glass have been demonstrated in this regime by Gaizauskas {\it et al} \cite{Gaizauskas2006}. The third regime is the most interesting for micro-fabrication. It is indeed the regime where enough losses occur in the central lobe to stabilize the dynamics. A conical flow of energy, oriented from the lateral lobes to the central one, compensates the energy loss. This regime corresponds to the monochromatic Nonlinear Unbalanced Bessel Beams (NLUBB). Indeed, a Bessel beam can be seen as a superposition of two cylindrical Hankel waves with equal weights, one propagating inward and the other propagating outward \cite{Porras2004}. In a NLUBB, energy loss within the central lobe reduces the weighting coefficient of the outward component. This reduces the contrast of the fringes and implies a net energy flow towards the center. Noticeably, in this regime, the spatio-temporal shape remains quasi-invariant all along the propagation, hence the denomination {\it stationary Bessel beam}. This is in contrast with the second regime, the {\it non-stationary Bessel beam}, where a periodic reshaping strongly modifies the spatio-temporal shape of the pulse \cite{Polesana2008}. The propagation-invariant NLUBB solution was proposed as an attractor to the filamentation regime \cite{Porras2004}. This solution cannot be found for all input parameters, but the operating window, in terms of peak power, is wider when the cone angle is increased. Indeed, for higher peak powers, nonlinear losses are higher, which tends to reduce the impact of the Kerr nonlinear dynamics. This has an important impact for applications to laser materials processing: high powers are needed to generate high plasma densities. A stationary regime can be reached for a given high peak power if the Bessel cone angle is sufficiently large. The Hankel decomposition underlying this picture is illustrated in the short sketch below.
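As announced, here is a short numerical check of the balanced Hankel decomposition; the unequal weights in the second part are purely illustrative. \begin{verbatim}
import numpy as np
from scipy.special import j0, hankel1, hankel2

# A Bessel beam is the balanced superposition of two cylindrical Hankel
# waves: J0(x) = [H0^(1)(x) + H0^(2)(x)] / 2.
x = np.linspace(0.1, 20, 512)             # avoid x = 0, where Hankel diverges
balanced = 0.5*(hankel1(0, x) + hankel2(0, x))
print(np.allclose(balanced.real, j0(x)))  # True: equal weights give J0

# NLUBB picture: losses in the core reduce the weight of one component.
a_in, a_out = 0.5, 0.35                   # illustrative unequal weights
unbalanced = a_in*hankel1(0, x) + a_out*hankel2(0, x)
# |unbalanced|^2 shows reduced fringe contrast compared to J0(x)**2,
# the signature of a net radial energy flow towards the axis.
\end{verbatim}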
\subsection{High aspect ratio processing with propagation-invariant ultrafast Bessel beams} The stationary filamentation regime was early recognized to have strong potential for applications \cite{Porras2004}. Early works with ultrafast Bessel beams in glass, prior to the theoretical work on Bessel filamentation, have shown that it is possible to write index modifications without the need for translating the sample \cite{Marcinkevicius2001a,Amako2005}. With much higher cone angles, we demonstrated the ability to create high aspect ratio nanochannels\index{nanochannel} with a single femtosecond pulse \cite{Bhuyan2010}. In borosilicate glass, the channels could be drilled either from the entrance or the exit surface, with diameters ranging from $\sim$200~nm to 800~nm. The diameter could be tuned quasi-linearly with the input pulse energy. The aspect ratio reached 100:1 at that time, in line with the aspect ratio of the beam. Through channels could also be drilled with a single pulse, and periodic arrangements of nanochannels could be realized. \begin{figure} \centering \includegraphics[width=\columnwidth]{ChannelsBessel.jpg} \caption{ Open channels formed after femtosecond illumination ($\sim$230~fs) by Bessel beams with cone angle $\theta_{\texttt{glass}} = 17^{\circ}$ in Corning 0211 glass, for two pulse energies. SEM imaging is performed after mechanical cleaving. Reprinted figure with permission from \cite{Bhuyan2010}. Copyright (2010) AIP Publishing } \label{fig:ChannelsBessel} \end{figure} In borosilicate glass, it was possible to generate a channel only if the Bessel beam was crossing one of the sample surfaces, {\it i.e.} only if the channel was open on one of the sides. In contrast, in sapphire, it was possible to create a high aspect ratio nano-void fully enclosed within the material's bulk. In this case, the void is formed only by compression of the surrounding material. While the void formation process in this configuration is not yet fully understood, we infer that the 10-fold higher thermal diffusion coefficient of sapphire allows for fast cooling, which can prevent cavity closing, in contrast with the case of borosilicate glass. Further investigations with picosecond pulses have been independently performed by several groups in a number of different glass materials. Interestingly, it seems that picosecond pulses generate channels that are more visible under optical microscopy than the ones created by shorter, femtosecond pulses (see Figure \ref{fig:ChannelsPico}(left)). A parametric study of the channel morphology as a function of input pulse energy and pulse duration has been reported in reference \cite{Bhuyan2014}. The pulse duration was adjusted by temporally stretching a femtosecond pulse, and the Bessel beam aspect ratio was $\sim$1000:1. It was found that, in this case, multi-picosecond pulse durations could create uniform voids. For too high pulse energies, fragmentation of the voids was observed. For very short pulses, less than 100~fs, the formation of empty channels was less clear; we stress that in this case the characterization techniques are at the limit of their resolution. \begin{figure} \centering \includegraphics[width=\columnwidth]{ChannelsPico.png} \caption{(left) Phase contrast images of high aspect ratio structures formed after illumination by Bessel beams with cone angle $\theta_{\texttt{glass}} = 8^{\circ}$ in 7980-5F Corning glass, for different pulse durations. Note the large difference in cone angle with respect to reference \cite{Bhuyan2010}. Reprinted figure with permission from \cite{Bhuyan2014} with courtesy of Dr. R. Stoian. Copyright (2014) AIP Publishing. (right) High aspect ratio void formed in sapphire after illumination by a Bessel beam of cone angle $\theta_{\texttt{sapphire}} = 15^{\circ}$ and pulse duration 3~ps. The heat affected zone is far more pronounced than in the femtosecond case, and the sides of the channel evidence the occurrence of phase transformations during the cavity formation process. From \cite{Rapp2016}, Creative Commons licence.} \label{fig:ChannelsPico} \end{figure} In parallel, nanovoids induced by 3~ps pulses in sapphire\index{sapphire} were characterized by FIB milling. The result is shown in figure \ref{fig:ChannelsPico}(right). It is apparent that the morphology of the cavity is very different from the case of femtosecond pulse illumination. Nanoparticles accumulated on the walls of the cavity are clearly observable, as well as a very wide heat affected zone \cite{Rapp2016}.
It is too early to determine whether the more apparent damage produced by picosecond pulses, with respect to femtosecond ones, arises from a different deposited energy density and/or from a different photo-excitation pathway. Experimental time-resolved phase contrast microscopy opened new perspectives on the formation of the void. Bhuyan {\it et al} imaged the transient index distribution at timescales ranging from nanoseconds to microseconds \cite{Bhuyan2017}. They concluded that the void opening is slow in comparison with the shockwave theory, and inferred that the void formation in this 2D case arises from the cavitation of a low viscosity liquid phase. The main difference with the shockwave\index{shockwave} theory is that the estimated deposited energy density is $\sim$7~kJ.cm$^{-3}$, in stark contrast with the values estimated in the case of spherical void formation, on the order of 90~kJ.cm$^{-3}$ \cite{Hallo2007}. Wang {\it et al} have investigated by shadowgraphy the mechanical wave launched after the plasma formation in PMMA. They observed a wave with a speed corresponding to the speed of sound in PMMA, whatever the input pulse energy. This is compatible with both theories of cavity formation since, in the shockwave case, the latter is supposed to propagate only over less than a few microns, {\it i.e.} below the resolution of the shadowgraphy experiment \cite{Wang2017}. \section{Applications} \label{sec:applications} \index{filamentation} \index{Bessel beam} The single shot generation of plasma columns of $\sim$1~$\mu$m diameter and of length ranging from several tens to hundreds of micrometers has a number of different applications that we will review here. As mentioned earlier, the plasma channel generated by a smooth regime of filamentation from a Gaussian beam is quite close to the one generated by an ultrafast Bessel beam. As we have seen above, the difference lies in the ability to independently control the parameters (length, diameter, pulse duration), which makes Bessel beams attractive. We will treat the applications of both types of filaments in a single section. Most of the applications were started with Gaussian filaments and refined more recently with Bessel or Bessel-like beams. \subsection{High aspect ratio refractive index modifications} At relatively low peak power, long plasma channels have been used to write series of index modifications in glasses and polymers. The process is applicable in most transparent materials. However, as for Gaussian beam focusing, the positive or negative sign of the photo-induced refractive index change depends on the material itself.\index{grating} Long plasma tracks have been used for instance to fabricate gratings in a number of different materials: PMMA\index{PMMA}, silica glass\index{glass}, or even chalcogenides\index{chalcogenides} (see Figure \ref{fig:grating}) \cite{Mikutis2013,Matushiro2016,Zhang2018}. Empty channels formed by Bessel beam illumination have also been used as scatterers in the vicinity of a waveguide \cite{Martin2017}. \begin{figure} \centering \includegraphics[width = \columnwidth]{grating.jpg} \caption{(left) Concept of Bragg grating writing by a Bessel-Gauss beam. Several layers form a thick grating. (right) Optical view of gratings written in fused silica with different parameters. Reprinted figure with permission from \cite{Mikutis2013} with courtesy of M. Mikutis, Workshop of Photonics.
Copyright (2013) by the Optical Society of America.} \label{fig:grating} \end{figure} \subsection{Ultrafast laser welding} \index{welding} Joining transparent materials, such as two types of glasses, or joining a transparent material onto silicon or metal, is needed in a very large number of application fields: opto-fluidics, biological analysis, microelectronics, MEMS and MOEMS all require that structured glasses, silicon or metals be joined together after microstructuring. Although a number of different joining techniques exist, none allows joining over widths of only a few micrometers. Ultrashort pulse lasers are ideal tools for this application, because they can melt the transparent material with very high spatial resolution, while preserving the optical, mechanical and electrical properties of the surrounding components. Before welding, the two parts to be welded have to be set in tight contact. Then, laser illumination is used to melt the transparent material, which expands and fills the empty space. After cooling, whose timescale is in microseconds, the two pieces are welded together. The filamentation welding technique benefits from the relatively high volume of heated material in the plasma column, together with a relaxed positioning constraint \cite{Tamaki2005}, as shown in Figure \ref{fig:welding}(left). Dissimilar materials have been welded \cite{Watanabe2006}, even glass on silicon or on metals \cite{Tamaki2006}. Welding with gaps up to $\sim$3~$\mu$m has been successfully achieved using bursts and the heat accumulation effect \cite{Richter2015}, as shown in Figure \ref{fig:welding}(right). \begin{figure} \centering \includegraphics[width=\columnwidth]{welding.jpg} \caption{(Left) Concept of ultrafast laser welding of glass with filamentation. Reprinted figure with permission from \cite{Tamaki2006} with courtesy of Prof. W. Watanabe. Copyright (2006) by the Optical Society of America. (Right) Example of side view imaging of welded glasses. Depending on the melt pool position, the molten glass could fill the gap even between irradiation sites. Reprinted figure with permission from \cite{Richter2015} with courtesy of Prof. S. Nolte. Copyright (2015) by Springer Nature.} \label{fig:welding} \end{figure} Mechanical characterizations demonstrate that this technique is extremely powerful because, in terms of traction, the welded parts can be as strong as the bulk material itself. The strength of the weld depends on the difference between the thermal and mechanical properties of the two materials. Large differences obviously have a negative impact on the strength of the bonding\index{bonding}. We note that the use of bursts and of the heat accumulation\index{heat accumulation} effect tends to relax the stresses left in the material and provides stronger welding \cite{Richter2015}. \subsection{Stealth dicing of brittle transparent materials} \index{stealth dicing} \index{glass cutting} \index{cutting} High speed separation of materials is a key technology for a number of applications, specifically for mass fabrication such as screen covers, touchscreens, electronics or lighting technologies. A specific need is to separate, at high speed, glass sheets with thicknesses of several hundred micrometers. In order to preserve the resistance of glass to bending and other types of stresses (thermal shocks), the cut needs to be free of chipping, with limited defects in the vicinity of the cut surface.
"Stealth-dicing" is a technology initially developed for high speed, ablation-free silicon wafer cutting for the microelectronics industry \cite{Kumagai2007}. The concept is that a laser, which wavelength is chosen in the transparency window of the material ({\it i.e.} IR laser for silicon), generates a plane of defects within the depth of the material. When the material is set under thermal or mechanical stress, it cleaves along this plane. The initial technology was based on nanosecond IR lasers, and the morphological damages in silicon were extending typically on the scale of tens of micrometers. \begin{figure} \centering \includegraphics[width =\columnwidth]{StealthDicing2.jpg} \caption{(left) Concept of stealth dicing : in a first step, high speed laser processing creates a series of nanochannels aligned in a plane, which guides cleaving under mechanical stress. Courtesy of R. Meyer, {\it FEMTO-ST Institute}, France. (right) Example of optical microscopy view of gleaved glass after processing. With courtesy of J. Safioui, {\it FEMTO-Engineering}, France } \label{fig:StealthDicing} \end{figure} A similar technique was developed to separate glass, based on filamentation and plasma channel formation, leaving high aspect ratio nanovoids in glass. A periodic pattern of voids, separated by $\sim$5 to $\sim$25~$\mu$m, allows the material to be mechanically cleaved. This can be performed also with Bessel beams \cite{Bhuyan2015,Mishchik2017}. Using commercial ultrafast lasers with multi-100 kHz repetition rate, it is feasible to irradiate at a speed on the order or exceeding 1m.s$^{-1}$. A small mechanical stress is enough to separate glass pieces, as shown in figure \ref{fig:StealthDicing}. This technology is particularly attractive in the case of chemically strengthened glass, such as the glasses used for cover-screens of smartphones, since the glass self-cleaves after laser processing. After cleaving, the walls are straight and free from chipping. The technique is mostly non-ablative, which avoids the issues of cleaning debris. Noticeably, it is also possible to cleave along curved paths. To shape the processed glass or cut with angles different from 90$^{\circ}$, illumination at non-perpendicular direction is desirable. But in this case, the non-uniform optical path difference over the beam cross-section restricts the length of a Bessel beam inside the transparent workpiece. Jenne {\it et al} have developed an approach where the optical phase profile of an initial Bessel beam is compensated by a secondary mask. Cut with tilted angles were demonstrated up to 30 degrees \cite{Jenne2018}. At high input pulse energies, the energy stored in the material is sufficient to generate cracks. A slight asymmetry in the input beam is sufficient to make the crack direction deterministic instead of random. This property has been exploited by Dudutis {\it et al}, by using an imperfect axicon\index{axicon}, which generates a non-circularly symmetric Bessel beam. It was used to generate cracks extending transversely up to 100~$\mu$m away from the central nanochannel \cite{Dudutis2016}. This brings the potential to increase the inter-channel distance for stealth dicing of glass at even higher speeds. Heat accumulation using burst mode with Bessel beams was also used to initiate the cracks \index{cracks} \cite{Mishchik2017}. 
Instead of using crack formation guided by an imperfection in the axicon, it is also possible to create an asymmetry in the Bessel beam using spatial filtering, so that the generated non-diffracting beam has an elliptical cross-section. Using $\sim$3~ps single pulse illumination, such beams generate nanochannels in glass that also have elliptical cross-sections, whose major/minor axis ratio is the same as that of the beam \cite{Meyer2017}. The elliptical cross-section enhances the mechanical stress at the tips of the ellipse and increases the reliability of stealth dicing. A detailed statistical study also demonstrated that cleaving required less mechanical deformation in this case, with the second benefit of leaving fewer defects in the processed glass, since all laser-induced channels are perfectly cleaved through \cite{Meyer2017a}, see figure \ref{fig:EllipticalBessel}. \index{Bessel beam, elliptical} \begin{figure} \centering \includegraphics[width = 0.9\columnwidth]{EllipticalBessel.jpg} \caption{(top row) Transverse cross-section of an elliptical Bessel beam and corresponding SEM image of an elliptical channel produced by the beam with single pulse illumination in glass. (bottom) The beam and red arrow show the laser scanning configuration. SEM image: top view of a glass sample cleaved by the stealth dicing technique, where it is apparent that all elliptical channels were cleaved through. With courtesy of R. Meyer, {\it FEMTO-ST Institute}, France.} \label{fig:EllipticalBessel} \end{figure} \subsection{Separation of sapphire} \index{sapphire} Sapphire is an important technological material. Its high hardness, just below that of diamond, makes it an ideal cover for screens or for watches. Even more importantly, this crystal is used as a substrate for the growth of LEDs. Sapphire has also been processed with the same stealth dicing technique as described in the previous sub-section. A complementary approach is to take advantage of the crystalline structure of sapphire to guide the fractures. As in the case of glass, laser illumination with high pulse energies generates cracks even in the single shot regime. For C-cut sapphire, three crack directions are usually observed with Bessel beam illumination along the c-axis. However, below a pulse duration of $\sim$600~fs, the fracture can occur in a single direction, jointly determined by the laser pulse polarization and the scanning direction. This was exploited to initiate a series of cracks with a very large inter-pulse distance ($25~\mu$m), paving the way for higher speed cutting \cite{Rapp2017}.\index{cracks} \subsection{Structuration of diamond} \index{diamond} \index{graphitization} \index{electrode} Diamond is the hardest material and is extremely difficult to process. It has a number of applications, particularly because it is bio-compatible. It is also increasingly used in quantum photonics. Ablation of diamond is for now still performed from the surface \cite{Kumar2018}; to the best of the author's knowledge, no high aspect ratio void formation has yet been reported. Diamond has also been proposed as a new material for building high-energy particle detectors. For this application, conductive graphite wires are needed in the bulk of the material. Graphitization of the bulk material has been successfully achieved with ultrafast Bessel beams. A single 10~$\mu$J pulse was sufficient to create a conductive column through a 500~$\mu$m thick diamond sample \cite{Canfield2017}.
We remark that surface and bulk graphitization is a phenomenon that builds up from pulse to pulse, as described in reference \cite{Kumar2017}. \begin{figure}[htb] \centering \includegraphics[width =\columnwidth]{diamondGraphitization.jpg} \caption{Optical microscopy views of graphitization marks created in the bulk of 500~$\mu$m thick diamond, with Bessel pulses of energy 3.5~$\mu$J. The number of pulses at 20~Hz repetition rate is indicated below each graphitized column. Reprinted figure with permission from \cite{Kumar2017} with courtesy of Dr. O. Jedrkiewicz. Copyright (2017) by Springer Nature.} \label{fig:diamondGraphitization} \end{figure} \subsection{Processing of silicon} \index{silicon} Silicon is a material of major interest for microelectronics and has an immense field of applications. Specifically, there is a need for waveguides for silicon photonics, as well as for micro- and nanochannels for cooling silicon chips or for inserting conducting electrodes that transmit signals from one side to the other. These Through Silicon Vias (TSV)\index{Through Silicon Via (TSV)}\index{TSV, Through Silicon via} are particularly important for next generation 3D microelectronic chips. Silicon is transparent in the infrared region of the spectrum, for wavelengths longer than $\sim$1.1~$\mu$m. In this context, attempts to reproduce the results obtained in dielectrics were performed with femtosecond Bessel beams at a central wavelength of 1.3~$\mu$m. However, an absence of morphological modification was observed for bulk focusing with ultrafast pulses. This was explained by the authors as originating from the strong two-photon absorption \cite{Grojo2015}. Recently, Tokel {\it et al.} produced modifications in 3D, opening routes to processing similar to that in glass, but this was with nanosecond pulse durations, and it requires a nonlinear feedback mechanism involving the rear surface of silicon \cite{Tokel2017}. Bessel beams were also investigated for TSV drilling at a laser central wavelength of 1.5~$\mu$m (Figure \ref{fig:silicon_Bessel}). As drilling with a conventional Bessel beam did not provide enough contrast between the lobes, an apodized version of the Bessel beam was developed, and a 10~$\mu$m diameter TSV in a 100~$\mu$m thick silicon wafer was drilled with $\sim$1200 laser pulses at a repetition rate of 1~kHz \cite{He2017}. More recently, bulk modifications with the more conventional Gaussian-beam approach were demonstrated based on three different processes. In reference \cite{Chanal2017}, illumination at a numerical aperture close to 3 could induce an index change in the bulk of silicon with a single pulse. In the multiple shot regime, 250~kHz repetition rate illumination with 350~fs pulses enabled the production of waveguides in silicon \cite{Pavlov2017}. The buildup of index modification was shown to be more reliable with 10~ps pulses than with shorter pulses \cite{Kaemmer2018}. The understanding of the mechanisms leading to modification of the silicon bulk is still incomplete, and more experiments are needed to provide a clear overview of bulk laser processing of silicon. \begin{figure} \centering \includegraphics[width = \columnwidth]{SiliconBessel.jpg} \caption{Silicon processing with Bessel beams in the multishot regime. (left) SEM view of a Through Silicon Via (TSV) in silicon processed with a conventional Bessel beam (CBB). (right) Same view for a Tailored Bessel beam (TBB), where the lobes of the Bessel beam have been removed.
Reprinted figure with permission from \cite{He2017} with courtesy of Prof. K. Sugioka and Prof. Y. Cheng. Copyright (2017) Creative Commons licence.} \label{fig:silicon_Bessel} \end{figure} \newpage \section*{Conclusion} In conclusion, the extremely high peak power of ultrashort laser pulses makes it possible to deposit energy with 3D control inside the bulk of transparent materials. This can be used to generate waveguides, nanogratings, or even nano-cavities. Ultrashort laser pulses are therefore well-suited to answer the needs for high aspect ratio micro- and nano-processing, for drilling, cutting, and producing channels for micro- and nanofluidics or microelectronics. We have reviewed the basic mechanisms of pulse propagation and plasma formation inside transparent materials, as well as the corresponding experimental characterizations. For wide structures, high aspect ratio laser processing requires the multiple-shot illumination regime. The best conditions generally correspond to processing from the exit surface of the workpiece, potentially with the assistance of a liquid or an etchant. Breakthroughs in the field have been made with single shot or single burst processing with filamented beams, which create long and thin homogeneous plasma channels. The structures that are generated therefore possess a very high aspect ratio. Predictable filamentation is made possible with "nondiffracting" Bessel and Bessel-like beams. Control of single shot filamentation has enabled a number of novel applications, ranging from index modification writing and high precision welding to high speed cutting of transparent materials. A number of efforts are still required to understand the physical processes generating the cavities. This is particularly relevant for silicon. The propagation-invariant properties of Bessel beams are fundamentally at the origin of the possibility of homogeneously depositing energy inside transparent materials with high aspect ratio. We expect that other beam shapes, which are also propagation invariant in the nonlinear regime, will be very attractive in the future for processing materials with other geometries and for developing novel applications of high-intensity ultrashort pulses. \bibliographystyle{apalike}
\section{Introduction} Sequences of motions such as aerial reorientation followed by stable landing have been investigated in both animals \cite{KaneFallingCatPhenomenon1969,FukushimaSquirrels2021} and robots \cite{KurtzMiniCheetah2022, RudinCatLike2022}. Such motions are critical for safety and survival when animals or robots are subject to unexpected falls. For example, a falling cat can rotate its front and back bodies and swing its tail and legs to self-right before landing safely with all four feet pointing downwards \cite{KaneFallingCatPhenomenon1969}. Squirrels that were catapulted off a track could stabilize themselves using tail motion, allowing them to land successfully \cite{FukushimaSquirrels2021}. Robots, especially medium-size quadruped robots like Mini Cheetah and Unitree A1, may suffer from the same safety issues when falling. Referring to the famous falling cat problem, we can call this the \textit{falling quadruped robot problem}. Thus, landing quadruped robots safely is a problem that needs to be solved. \begin{figure} \centering \includegraphics[width=50mm]{Figures/Frontpage_V2_Refine.pdf} \caption{A combined motion snapshot of Unitree A1 performing 3D aerial righting and safe landing by utilizing a 3-DoF morphable inertial tail in a fall from 1 m height.} \label{fig:system_integration} \end{figure} There are two paradigms to approach such a problem: 1) designing landing strategies that search for an optimal contact sequence and optimize contact forces against the landing impact; 2) using limbs or extra appendages to right the body to a horizontal pose and then applying a simple landing controller. In the state-of-the-art work \cite{SHJ}, the first paradigm has been implemented, but the results show that Mini Cheetah can only handle \textit{horizontal} drops in hardware. Besides, the robot may be damaged because of an uneven leg force distribution at touch-down when the body has a significant orientation offset from the horizontal. The second paradigm aims to reorient the body to the horizontal, or even to a desired pose (accommodating the terrain and environment), before touching down. This alleviates the burden of landing control and mitigates mechanical damage to the robot. In this paper, we will focus on the second paradigm and integrate a 3-DoF tail into a quadruped robot to enhance its capability of safe landing. Although a few efforts have recently been made to acquire the reorientation-for-landing capability in quadruped robots, their performance is still far from that of their biological counterparts. The work \cite{KurtzMiniCheetah2022} enabled a big robotic cat, Mini Cheetah, to land on its feet from falls with initial pitch within $\pm 90^{\degree}$, but the motion was constrained to the sagittal plane. Similarly, \cite{RudinCatLike2022} presented a combination of 2D reorientation and landing locomotion behaviors on the physical quadruped robot SpaceBok, although the same behaviors in 3D were implemented only in simulation. In the mentioned work, leg swinging is not effective in inducing angular momentum change, because 1) the Moment of Inertia (MoI) of the legs is relatively small compared with that of the body; 2) the workspace of the legs is limited, so reorientation may consume more time compared with using tails, which are common in many quadruped animals. These limitations also explain why \cite{KurtzMiniCheetah2022} added additional mass to the feet to increase the legs' MoI and why \cite{RudinCatLike2022} assumed the drop happened in a low gravity environment to increase the aerial duration.
In addition to using legs \cite{KurtzMiniCheetah2022,RudinCatLike2022, GosselinReorientation2022}, reaction wheels and tails have been added to quadruped robots to enhance locomotion capability in both the flight and stance phases. \cite{KolvenbachMoon2019, ZacharyCMU2022, Roscia2022} used reaction wheels to assist locomotion; however, \cite{KolvenbachMoon2019} can only stabilize the pitch direction in the flight phase, and \cite{ZacharyCMU2022, Roscia2022} showed aerial reorientation capability only in simulations, although they built prototypes. In terms of tails, a simple application is using a tail to reject disturbances along the pitch \cite{YangCMU2021}, yaw \cite{FawcettArticulatedTails2021}, and roll \cite{BriggsTails2012} directions. \cite{YangCMU2022} used a 2-DoF tail with pitch and yaw control capabilities to react to elevation changes; more specifically, the tail's cone motion protected the robot from tipping when falling off a cliff. Besides, some researchers have used a tail for airborne righting and successful landing. \cite{NorbyAerodynamic2021} designed an inertial tail with aerodynamic drag to allow the quadruped robot Minitaur to reorient from a $90$ degree pitch angle before landing. \cite{LiuSerpentine2021} proposed to use a serpentine robotic tail to stabilize the body's pitch and roll to zero while landing. Among the works related to tailed quadruped robots, only a few have provided hardware verification, mostly focusing on planar (aerial) motion \cite{BriggsTails2012, NorbyAerodynamic2021}. \cite{LiuICRA2022} built a reduced-complexity quadruped robot designed for studying the serpentine robotic tail, but no experimental results on the tailed robot have been provided. Considering the multiple functions of tails in quadruped robots, here we constrain our attention to the reorientation-for-landing capability; the study of improving forward velocity (e.g., in \cite{Heim2016}) or facilitating sharp turning (e.g., in \cite{Patelturning2013}) will be our future interest. In this paper, we propose to integrate a 3-DoF morphable inertial tail (pitch, yaw, and telescoping) into a quadruped robot to enable 3D aerial reorientation and thereby induce safe landing. To the best of our knowledge, only a few 3-DoF tails have been designed \cite{AnMorphable2020, AnMorphable2022}; one of these works \cite{AnMorphable2022} investigated the use of the 3-DoF tail for somersault motion with a twist, but only a small-size tethered monopod robotic platform was used. Although a 2-DoF tail is commonly used for 3D aerial reorientation, e.g., \cite{ChuNSA2019}, there is a conflict between \textit{aerial reorientation} and \textit{landing balance} when using a 2-DoF tail, because the 2-DoF tail configuration (or location) at the end of reorientation is uncontrolled and varies with different initial body angles. However, the tail configuration during the stance phase has preferences: the tail's collision with the ground should be avoided, and minimal disturbance should be imposed on body balance. To this end, for the first time, we introduce a 3-DoF tail to a quadrupedal robot, where the tail at its maximal length (degenerating to a 2-DoF tail) can be used for effective self-righting in 3D. Also, the tail can be retracted before touch-down, suppressing the tail's side effects and increasing the landing success. We emphasize that the 3-DoF tail is designed to be modular and is potentially applicable to other robots.
The contributions of this paper are: 1) We integrate a 3-DoF morphable inertial tail into a quadrupedal robot, and the tail increases the quadrupedal robot's 3D aerial righting capability for safe landing. In experiments, the tail helps the robot adjust from a large 3D inclined posture to a desired posture during falling, which provides good preparation for the subsequent safe landing task. 2) To reduce the potential damage to the quadrupedal robot, we design a flight-phase test platform that has a size and weight similar to the quadrupedal robot (Unitree A1) for initial experiments. Experimental results on the platform show the tail's effectiveness in 3D body reorientation and its fast retraction speed ($\sim 2$ m/s) before touch-down. 3) We complete a consecutive large 3D reorientation (zeroing $30^{\degree}$ pitch and $30^{\degree}$ roll offsets, and keeping yaw at zero) and safe landing motion on the tailed A1 robot from $1$ m height. \section{System Modelling} \begin{figure}[t] \centering \includegraphics[width=60mm]{Figures/tsrbd_sketch_crop.pdf} \caption{Simplified system model of the tailed quadrupedal robot.} \label{fig:tSRBD} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=130mm]{Figures/tSRBD_crop.pdf} \caption{Planning and control framework for the tailed quadruped robot. An offline trajectory optimization is employed for the aerial reorientation, and an additional flight tracking controller is designed, as shown in the \textbf{Flight Phase} block. $h_s$ is the remaining height after reorientation. A PD stance controller is used to keep the robot balanced, as shown in the \textbf{Stance Phase} block.} \label{Control_Overview} \end{figure*} The motion patterns of a tailed quadrupedal robot in the flight phase and in the stance phase are different. As mentioned before, the tail keeps its maximum length for effective body reorientation during most of the flight phase. The 3-DoF morphable tail thus degenerates to a 2-DoF tail while airborne, and we can simplify the system to a low-dimensional model, the \textit{tailed Single Rigid Body Dynamics} (tSRBD) model shown in Fig.~\ref{fig:tSRBD}. $L$ is the maximum tail length. $\theta^t_{pitch}$ and $\theta^t_{yaw}$ are the tail swing angles along the pitch and yaw directions, respectively. We focus only on the tail's usage and assume the leg joints are kept at a configuration proper for landing, since the legs' small weight and MoI make them ineffective for aerial righting. Following the conventions in \cite{RDLN}, the system state of the tSRBD is defined as \begin{equation} \begin{aligned} \boldsymbol{q}_f &:= \begin{bmatrix} \boldsymbol{p} & \boldsymbol{\Theta} &\boldsymbol{q}_t \end{bmatrix} \in SE(3) \times \mathbb{R}^2,\\ \boldsymbol{u}_f &:=\begin{bmatrix} \boldsymbol{\dot{p}} &\boldsymbol{\omega} &\boldsymbol{\dot{q}}_t \end{bmatrix}\in \mathbb{R}^8, \end{aligned} \end{equation} where $\boldsymbol{p}=[\boldsymbol{p}_x,\boldsymbol{p}_y,\boldsymbol{p}_z]$ is the position of the body's center of mass (CoM) and $\boldsymbol{q}_t=[\theta^t_{pitch},\theta^t_{yaw}]$ collects the tail's joint positions. $\boldsymbol{\Theta}=[\boldsymbol{q}_x,\boldsymbol{q}_y,\boldsymbol{q}_z,\boldsymbol{q}_w]$ is the unit quaternion representation of the body orientation. Note that the body's position, orientation, and linear velocity are represented in the inertial frame $\{I\}$. The body's angular velocity $\boldsymbol{\omega}$ is expressed in the base coordinates $\{B\}$.
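To make the state bookkeeping above concrete, the following minimal sketch (in Python, with purely illustrative names; not the implementation used in this work) assembles $\boldsymbol{q}_f$ and $\boldsymbol{u}_f$. Note that the unit quaternion uses four numbers for three rotational degrees of freedom, so $\boldsymbol{q}_f$ has nine components while $\boldsymbol{u}_f\in\mathbb{R}^8$:
\begin{verbatim}
import numpy as np

def make_state(p, quat, q_tail, pdot, omega_B, qdot_tail):
    """Stack the tSRBD state; layout and names are illustrative.

    p, pdot     : body CoM position / velocity in the inertial frame {I}
    quat        : body orientation [qx, qy, qz, qw], expressed in {I}
    omega_B     : body angular velocity in the base frame {B}
    q_tail, qdot_tail : tail pitch/yaw joint angles and rates
    """
    quat = np.asarray(quat, dtype=float)
    quat = quat / np.linalg.norm(quat)  # enforce the unit-norm constraint
    q_f = np.concatenate([p, quat, q_tail])           # 3 + 4 + 2 = 9 coordinates
    u_f = np.concatenate([pdot, omega_B, qdot_tail])  # 3 + 3 + 2 = 8 velocities
    return q_f, u_f
\end{verbatim}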
The equations of motion (EoM) can be written as \begin{equation} \boldsymbol{M}_f(\boldsymbol{q}_f)\boldsymbol{\dot{u}}_f+\boldsymbol{b}_f(\boldsymbol{q}_f,\boldsymbol{u}_f) +\boldsymbol{g}_f(\boldsymbol{q}_f)=\boldsymbol{S}^T\boldsymbol{\tau}, \end{equation} where $\boldsymbol{M}_f$ is the inertia matrix, $\boldsymbol{b}_f$ contains the Coriolis and centrifugal terms, and $\boldsymbol{g}_f$ is the gravitational term. $\boldsymbol{\tau}$ collects the joint torques of the tail. $\boldsymbol{S}$ is the selection matrix representing the under-actuation of the base, $$ \boldsymbol{S}=\begin{bmatrix} \boldsymbol{0}_{2\times 6} & \boldsymbol{I}_{2 \times 2} \end{bmatrix} \in \mathbb{R}^{2 \times 8}. $$ After an effective aerial reorientation, the tail quickly retracts to $1/4$ of its maximum length (Fig.~\ref{fig:tSRBD}) before landing, so that the robot has a mass distribution similar to its original state. In this paper, we focus on showing the paradigm of reducing the landing control burden via effective flight-phase control, and thus we do not resort to a controller specific to landing (e.g., \cite{SHJ}). Therefore, a stance-phase dynamic model is not specified, as a simple PD leg controller suffices for the stance-phase safe landing task. \section{Control Framework} As a single controller is challenging to design for such a hybrid system, we develop a control framework to achieve the \textit{falling quadruped robot} task in this section. The planning and control framework is shown in Fig.~\ref{Control_Overview}. The whole motion is divided into two phases: the flight phase and the stance phase. In the reorientation phase, the robot adjusts its body orientation by swinging the tail at its maximum length. Then, after body self-righting, the tail is retracted close to the body in preparation for landing. When contact is detected, the robot mainly uses its legs instead of the tail to keep its balance on the ground in the stance phase. Each phase can use different controllers in the corresponding blocks (green blocks in Fig.~\ref{Control_Overview}). In this paper, we select a trajectory optimization based controller for the flight phase and a compliant joint PD controller for the stance phase. \subsection{Trajectory Optimization for Aerial Reorientation} To realize the reorientation task of the tailed quadruped robot, the internal dynamics (conservation of angular momentum) can be utilized to adjust the body orientation in the air. Trajectory optimization (TO) is an effective way to plan trajectories or design controllers by exploiting the system dynamics and incorporating state/control constraints. Here, we adopt the TO method to obtain an optimized trajectory offline given the height and initial configuration of the robot; the optimal trajectory provides a safe reorientation reference owing to the satisfaction of physical constraints. Specifically, a custom-made differential dynamic programming (DDP) solver is employed in the offline stage. More details of the solver can be found in \cite{HM-DDP}. The trajectory optimization problem is formulated as follows. \subsubsection{Objective Function} \begin{equation} J(\boldsymbol{x}_{0},\boldsymbol{\tau}_{0:N-1}) = \sum_{k=0}^{N-1} \ell (\boldsymbol{x}_k,\boldsymbol{\tau}_{k}) + \ell_{f}(\boldsymbol{x}_N), \end{equation} where $N$ is the horizon length and $\boldsymbol{x}_0$ is the given initial state.
State $\boldsymbol{x}_k$ at each time step is $$ \boldsymbol{x}_k=\begin{bmatrix} \boldsymbol{p} &\boldsymbol{\Theta} &\boldsymbol{q}_t & \boldsymbol{\dot{p}} &\boldsymbol{\omega} &\boldsymbol{\dot{q}}_t \end{bmatrix}_k, $$ and the running and terminal objective functions, $\ell (\boldsymbol{x}, \boldsymbol{\tau})$ and $\ell_f(\boldsymbol{x})$, are smooth functions which encode the reorientation task. To reorient the body, the running/terminal costs can be chosen as \begin{equation} \begin{aligned} \ell(\boldsymbol{x}, \boldsymbol{\tau})&=e(\boldsymbol{\Theta}_d,\boldsymbol{\Theta})+\frac{1}{2}\boldsymbol{u}_f^T\boldsymbol{Q}_{u_{f}}\boldsymbol{u}_f + \frac{1}{2}\boldsymbol{\tau}^T\boldsymbol{R}_{\tau}\boldsymbol{\tau}\\ \ell_f(\boldsymbol{x})&=w \cdot e(\boldsymbol{\Theta}_d,\boldsymbol{\Theta}), \end{aligned} \end{equation} where the attitude error function $e(\cdot,\cdot)$ (as used in \cite{Taeyoung2010}) between the current body orientation and the desired orientation $\boldsymbol{\Theta}_d$ is defined as \begin{equation} e(\boldsymbol{\Theta}_d,\boldsymbol{\Theta}) = \frac{1}{2}\text{tr}(\boldsymbol{I}-\boldsymbol{R}^T(\boldsymbol{\Theta}_d)\boldsymbol{R}(\boldsymbol{\Theta})), \end{equation} where $\boldsymbol{R}(\boldsymbol{\Theta})$ is the rotation matrix corresponding to the quaternion and $w$ is the weight of the final orientation cost ($w=500$ in this paper). $\boldsymbol{Q}_{u_{f}}$ and $\boldsymbol{R}_{\tau}$ are positive semi-definite matrices for the regularization of the velocities and tail torques, respectively. As the translation of the body CoM is not of interest during reorientation, the diagonal elements of $\boldsymbol{Q}_{u_{f}}$ corresponding to $\boldsymbol{\dot{p}}$ are set to zero. \subsubsection{System Dynamics Constraint} The dynamical feasibility is enforced by forward Runge-Kutta (\textit{RK4}) integration of the system dynamics in the rollout of the DDP method. \subsubsection{Tail Joint Limitations} In the tailed quadrupedal system, the workspace of the tail is limited to a cone in Cartesian space to avoid self-collision with the body and legs. Hence the joint limits of the tail must be considered, \begin{equation} \boldsymbol{f}(\boldsymbol{q}_t) \in \mathcal{X}_t, \end{equation} where $\boldsymbol{f}$ is the forward kinematics of the tail and $\mathcal{X}_t$ is the feasible set of tail positions. \subsubsection{Tail Actuation Limitations} The motor torques of the tail are also limited, which leads to piece-wise box constraints on the inputs \begin{equation} \tau_{min} \leq \boldsymbol{\tau}_k \leq \tau_{max}. \end{equation} DDP is able to efficiently solve the trajectory optimization problem through the parameterized control trajectory. The constraints in the problem are handled with an Augmented Lagrangian approach and a relaxed barrier function sequentially in a hybrid framework \cite{HM-DDP}. The linear feedback policy along the optimal solution returned by the DDP solver can also be used to stabilize the trajectory tracking in the flight phase.
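As an aside, the attitude error $e(\boldsymbol{\Theta}_d,\boldsymbol{\Theta})$ defined above is cheap to evaluate; a minimal numerical sketch (assuming the quaternion convention $[q_x,q_y,q_z,q_w]$ of the state definition, with illustrative names, and not the solver implementation itself) is:
\begin{verbatim}
import numpy as np

def quat_to_rotmat(q):
    """Rotation matrix from a unit quaternion q = [x, y, z, w]."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)]])

def attitude_error(q_des, q):
    """e = 0.5 * tr(I - R(q_des)^T R(q)); zero iff orientations coincide."""
    Rd, R = quat_to_rotmat(q_des), quat_to_rotmat(q)
    return 0.5 * np.trace(np.eye(3) - Rd.T @ R)
\end{verbatim}
The error is bounded in $[0,2]$ and, unlike a direct quaternion difference, is free of the double-cover ambiguity ($\boldsymbol{\Theta}$ and $-\boldsymbol{\Theta}$ give the same error). \subsection{Flight Controller} In the flight phase, the optimized trajectory will be tracked with a time-varying linear feedback controller.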
The feedback tracking controller is of the form \begin{equation} \boldsymbol{\tau} = \boldsymbol{\tau}_{f}^{ref} + \boldsymbol{K}_{p}(\boldsymbol{q}_f-\boldsymbol{q}_{f}^{ref})+\boldsymbol{K}_{d}(\boldsymbol{u}_f-\boldsymbol{u}_{f}^{ref}), \end{equation} where $\boldsymbol{\tau}_{f}^{ref}$, $\boldsymbol{q}_{f}^{ref}$ and $\boldsymbol{u}_{f}^{ref}$ are the optimized reference torques and reference joint trajectories obtained in the offline TO stage. $\boldsymbol{K}_p$ and $\boldsymbol{K}_d$ are proportional and derivative gains obtained from the feedback terms returned by the DDP approach, as mentioned in the previous subsection. In addition, joint PD controllers are used to maintain the leg configuration while airborne. After the body orientation is adjusted into the neighborhood of the desired orientation, or once the body descends to a certain height, the tail is quickly retracted to its minimum length. During tail retraction, the robot legs are controlled by joint position controllers in preparation for landing. Compared with the extended tail, the retracted tail keeps the system CoM close to the geometric center of the support polygon, which alleviates the uneven force distribution over the feet in contact. Hence, the telescoping DoF turns out to be important for the practical usage of appendages in falling quadruped robots. \subsection{Stance Controller} Once contact is detected, the system switches to the stance controller. When the body orientation has been well adjusted near the horizontal, less effort is needed to design a stance control strategy. To verify the feasibility of the proposed control framework, we employ a simple compliant joint PD controller to maintain each leg's configuration and keep the system balanced in the stance phase. More advanced stance control (e.g., \cite{SHJ}) will be of interest in future work. \section{Simulation Validation} We first evaluated the proposed system integration and control framework in MuJoCo \cite{mujoco}, a high-fidelity physics engine. The simulator ran at $1000$ Hz, where the system forward dynamics was simulated and the contact between the feet and the ground was detected. The friction coefficient was set to $\mu=0.8$. In simulation, the tailed A1 robot reoriented its body and landed safely from various initial orientations with a falling height of $1.85$ m ($\boldsymbol{p}_z = 1.85$ m). We assumed that the tailed A1 robot started from a static state in all simulations. \subsection{Aerial Reorientation} \begin{figure}[tb] \centering \includegraphics[width=80mm]{Figures/bunch_sim_crop.pdf} \caption{(a-b) Simulation results of aerial reorientation in the flight phase within $0.4$ s with an initial orientation of $[15^{\degree},25^{\degree},35^{\degree}]$. (c-d) Body orientation and joint torques in a set of simulations with different initial orientations; curves in the same color correspond to the same simulation.} \label{fig:aerial_reorientation} \end{figure} In the offline trajectory optimization stage, the desired orientation $\boldsymbol{\Theta}_d$ was set to $[0,0,0,1]$ and the time budget for the reorientation task was $0.4$ s with $N=200$. The forward dynamics of the tSRBD in the TO was implemented using the \textit{spatial-v2} package \cite{spatialv2} in {MATLAB}.
The nonlinear optimization problem was then solved with a custom-made DDP solver \cite{HM-DDP}, where \textit{casadi} \cite{Andersson2019} was used as an auto-differentiation tool for computing the derivatives of the forward dynamics, objective functions, and constraints. Solving such an optimization problem with zero controls as the initial guess usually took between 100 and 200 iterations. The optimized results ($\boldsymbol{\tau}_f^{ref}$, $\boldsymbol{q}_f^{ref}$ and $\boldsymbol{u}_f^{ref}$) were then interpolated with polynomials to serve as reference inputs for the tracking controller. To verify the aerial reorientation capability, the tailed A1 robot was dropped with various initial body orientations in the simulator. To give an intuitive visualization, the orientation is plotted in Euler angles (in \textit{yaw-pitch-roll} order). The simulation results with an initial orientation of $[15^{\degree},25^{\degree},35^{\degree}]$ are shown in Fig.~\ref{fig:aerial_reorientation}(a-b). The optimized trajectory (dashed line) was well tracked and the body attitude was adjusted to the desired one, even though model errors (e.g., $1.4\times$ tail mass) were introduced manually. Simulation results with other initial body orientations are presented in Fig.~\ref{fig:aerial_reorientation}(c-d). These results demonstrate the robot's 3D reorientation capability. \subsection{Consecutive Motion} To validate the consecutive motion of aerial reorientation and safe landing, one trial is shown in Fig. \ref{fig:full_res}. The robot was dropped with an initial Euler angle of $[40^{\degree},40^{\degree},30^{\degree}]$. The tailed robot adjusted its body orientation by swinging the tail within the first $0.4$ s and then retracted the tail for landing. Contact was detected at $0.56$ s, and the system switched to the stance controller to keep balance. As seen in Fig.~\ref{fig:full_res}(b), there were body orientation errors at touch-down because of the disturbance caused by the tail retraction. The small errors in pitch and roll (several degrees) can be eliminated by the landing control after the robot settles down. To eliminate the error in yaw, all three DoFs of the tail can be activated together under proper control, which will be our future study. \begin{figure}[t] \centering \includegraphics[width=85mm]{Figures/video_snap_crop.pdf} \caption{(a) Snapshots of the consecutive motion in the MuJoCo environment. (b) Body orientation and tail motion over time.} \label{fig:full_res} \end{figure} \begin{table}[b] \centering \caption{Tailed A1 Robot and Test Platform Parameters}\label{table:a1param} \begin{tabular}{p{2.4cm} p{0.8cm} p{3.5cm}} \toprule \textbf{Parameters} &\textbf{Symbol} &\textbf{Value} \\ \midrule A1 Robot Mass &$m_b$ &$12.45$ kg\\ \midrule A1 Robot Inertia &$\boldsymbol{I}_b$ &$[0.12, 0.39, 0.45]$ kg$\cdot$m$^2$\\ \midrule Test Body Mass &$m^t_{b}$ &$11.5$ kg \\ \midrule Test Body Inertia &$\boldsymbol{I}^t_b$ &$[0.05,0.25,0.22]$ kg$\cdot$m$^2$\\ \midrule Tail Mass &$m_t$ &$1.25$ kg\\ \midrule Tail Length Range &$\ell_t$ &$[0.12,\;0.49]$ m\\ \bottomrule \end{tabular} \end{table} \section{Experimental Results} \subsection{Experimental Setup} A $2.3$ kg 3-DoF robotic tail prototype ($850$ g tail base package, $820$ g tail scissor linkages, and $630$ g tail end mass) was integrated into the Unitree A1 robot. The tail base was placed above the midpoint between the robot's two hind-leg hip actuators.
The tail can provide a large range of motion (${- 90^{\degree}\sim180^{\degree}}$ in pitch, $\pm 180^{\degree}$ in yaw, and $0.12$ $\sim$ $0.49$ m in length). The tail end mass includes a T-motor Antigravity 5008 KV170 ($128$ g including cable, with an open-source VESC as the motor driver) and a worm gearbox (gear ratio $10:1$), which are in charge of controlling the tail length. The tail's pitch/yaw motion is controlled by the tail base, which consists of two T-motor AK60-6 actuators ($315$ g) and a differential bevel-gear gearbox. The differential actuation mechanism can provide a large range of motion and a large output torque. Other electrical components (a $400$ g battery, a Raspberry Pi 3B+ control board, and an LPMS-BE1 IMU module) were placed on the bottom of the Unitree A1. \begin{figure}[bt] \centering \includegraphics[width=70mm]{Figures/experimental_setup_crop.pdf} \caption{Experimental platforms including a flight-phase test platform and a tailed A1 robot. (a) Flight-phase test platform. (b) The test platform suspended for drop tests. (c) The A1 robot with an extended tail. (d) The tailed A1 robot suspended for drop tests.} \label{fig:experimental_setup} \end{figure} To verify the tailed quadruped robot's aerial reorientation and landing capability safely and repeatably, we built an auxiliary truss-structured platform for hanging and releasing the robot, as shown in Fig.~\ref{fig:experimental_setup}(d). The tailed robotic system was suspended by four cables via electromagnetic holders. By adjusting the length of each cable, the initial body orientation and height can be set as desired. Once the start button (Fig.~\ref{fig:experimental_setup}(d)) was pressed, the electromagnets de-energized and the controller was activated at the same time. Since repeated dropping experiments may damage the motors of the quadrupedal robot, we designed a flight-phase test platform (Fig.~\ref{fig:experimental_setup}(a)) to first test the aerial reorientation function. The flight-phase test platform consists of a cuboid body and the same 3-DoF tail. The physical parameters of the test platform are given in Table \ref{table:a1param}. We mainly repeated the aerial reorientation experiments on the test platform and then transferred them to the tailed A1 robot with fine tuning. The body orientation and angular velocity were estimated from the internal IMU. Contact was detected by a sudden acceleration change in the vertical direction. A soft cushion was laid on the ground to protect the robots. \subsection{Flight-Phase Test Platform Experiments} To validate that the tail can increase the quadrupedal robot's 3D aerial righting capability for safe landing, we dropped the test platform with various initial orientations onto the cushion from $1.85$ m height. The initial body orientation was manually adjusted, and the tail was kept in its zero joint configuration at maximal length. Given the initial orientation, an optimized trajectory and tracking controller were planned offline, as discussed in Section III. To handle model uncertainties, an additional feedback PD controller was hand-tuned to improve the tracking performance. We show the experimental results of three trials in Fig.~\ref{fig:aerial_exp}(a). The platform fell from three completely different initial orientations, and the final orientations were successfully adjusted to the neighbourhood of the desired orientation at the end of the flight phase. The observed errors are tolerable (within $\pm10^{\degree}$) for quadrupedal robot landing.
Notably, in the third trial, the initial roll offset was up to $-50^{\degree}$, which is a challenging orientation for common quadrupedal robots to recover from while falling. The motion snapshots of trial 3 are shown in Fig.~\ref{fig:aerial_exp}(b). In the experimental results, we can see the tail started to retract at $0.35$ s while airborne and kept retracting until touch-down. The tail retraction speeds in the experiments were repeatable ($\sim 2$ m/s, estimated from Fig.~\ref{fig:aerial_exp}(a)). In the experiments, the tail retraction sometimes got stuck because the tail's fast swing speed created a large centrifugal force and the tail telescoping motor reached its torque limit. \begin{figure}[tb] \centering \includegraphics[width=85mm]{Figures/orientation_exp_data_crop.pdf} \caption{(a) Experimental results of three trials on the flight-phase test platform: Euler angles of the body orientation and tail length variation versus time. (b) Motion snapshots of trial $\#3$.} \label{fig:aerial_exp} \end{figure} \subsection{Tailed A1 Robot Experiments} To validate a consecutive large 3D reorientation and safe landing on a tailed quadruped robot, the tailed A1 robot was dropped from a non-negligible initial body angle (Fig.~1). The initial body angle was $[0^{\degree},30^{\degree},30^{\degree}]$ and the desired orientation was $[0^{\degree},0^{\degree},0^{\degree}]$. A relatively low height, 1 m, was selected for dropping, so that the robot would not suffer from excessively large motor currents. This safety-oriented height selection did not affect our goal of demonstrating feasibility. Limited by the flight duration, the tail control strategy was slightly different, and the tail retracted earlier as a trade-off between the functions of tail swinging and retraction. At $0.38$ s, the robot touched the ground and the tail had retracted to its minimum length (Fig. \ref{fig:full_exp}). The body orientation almost converged to the horizontal plane, although the yaw angle was around $10^{\degree}$. One of the reasons is that the tail retraction was not considered when reorienting the body. As shown in Fig. 1, the robot can land stably even with this small orientation error. The same trial without retracting the tail was conducted, and the robot fell over (see video). This further emphasizes the importance of the telescoping DoF. \begin{figure}[tb] \centering \includegraphics[width=85mm]{Figures/hardware_full_crop.pdf} \caption{Experimental results on the tailed A1 robot, including body Euler angles and tail length change. A snapshot is shown in Fig. 1.} \label{fig:full_exp} \end{figure} \section{Discussion} We have successfully used the 3-DoF tail for the quadruped robot's 3D aerial reorientation and subsequent safe landing. However, the tail usage in this paper is still straightforward. To show the proof of concept, the 3-DoF tail degenerated to a 2-DoF one during body self-righting, and the telescoping function was only used for landing preparation. Actually, the telescoping DoF can be involved throughout the whole flight phase, which may generate more effective reorientation trajectories towards more robust and safer landing. Although this practice requires a new model, it can eliminate the disturbance from tail retraction that currently occurs. In the landing control, we also used a simple PD controller since a small orientation offset was achieved.
More advanced landing planning/control, such as contact-aware trajectory optimization, can be introduced; it can make full use of the legs' control authority to improve the landing success. Lastly, we admit that the introduction of the tail increases the total mass and may affect robot walking, but, unlike \cite{KurtzMiniCheetah2022}, we did not directly change the foot design. The tail package weight can be further reduced by optimizing the tail scissor linkage structures. \section{Conclusion and Future Work} In this paper, we proposed to integrate a 3-DoF tail module into a falling quadruped robot, enabling a 3D aerial reorientation capability for safe landing. The simplified robot dynamic model was presented, and we also proposed a simple but effective control framework to demonstrate the feasibility of the system integration. A flight-phase test platform with inertial properties comparable to the quadrupedal robot (Unitree A1) was built for initial experimental verification, demonstrating the tail's effectiveness in 3D body reorientation and its fast retractability during falling. A consecutive large 3D reorientation and safe landing motion was successfully completed on the tailed A1 robot. In the future, besides addressing the concerns mentioned in the Discussion, we plan to investigate the advantages of the 3-DoF tail in assisting quadrupedal locomotion, such as accelerating/decelerating and turning sharply. Moreover, the tail's telescoping function can be used for simple interactions with the environment, facilitating deployment in the real world.
\section*{Background \& Summary} Raman spectroscopy is a widely used, powerful, and nondestructive tool for the analysis and identification of materials, as well as for assessing material quality. It is based on the characterization of the vibrational modes of materials and provides rich atom- or chemical-bond-specific information about the crystal structure and chemical composition. When used in assessing material quality, Raman spectra contain information about grain sizes, defect densities, and strain, among others \cite{DAS_2011,schrader2008infrared,parker1983applications,vavskova2011powerful}. In other fields, Raman spectroscopy has been used, e.g., to detect counterfeit medicines, to identify plastic types in recycling flows, to detect hazardous chemicals, or to measure temperature \cite{SCOTTER1997285,Bicchieri2006,Orlando2021,Adya2020,Taghizadeh2020}. A Raman spectrum provides a fingerprint of the material, but it is usually not possible to directly interpret, e.g., the material composition from the spectrum. In order to use Raman in the above-mentioned material classification and identification applications, a database of known reference spectra is needed. To this end, databases of experimental spectra have been collected, such as the RRUFF Project \cite{Lafuente2016}, which contains a large set of experimental Raman spectra of minerals (4112 public samples), the KnowItAll Raman Spectral Library \cite{KnowItAll}, which includes Raman spectra of different organic and inorganic compounds, polymers, and monomers (over 25000 records), and the Raman Open Database (ROD) \cite{El_Mendili_2019}, which complements the crystallographic information found in the Crystallographic Open Database (COD) \cite{cod} (1133 entries). A Raman spectrum database built via ab initio, density-functional theory (DFT) electronic structure calculations could be highly useful in providing supplementary information that is difficult to obtain from experiments. For instance, some materials can be difficult to synthesize in a pure form, or their purity or phase content may be unknown. The calculated results are also free of any instrumental contributions. Computational studies can also be faster and cheaper to carry out than experiments. Such a database would also be useful to computational researchers, e.g., by providing reference spectra. Moreover, large datasets can be used in materials informatics for material screening or for training models via machine learning. Still, compared to the experimental ones, the computational databases are of very limited size. This is due to the computational cost of these calculations, which makes them limited to small systems and/or a small number of materials. A few open-access libraries of computational Raman spectra already exist, such as: (i) the Computational 2D Materials Database (C2DB) \cite{Haastrup_2018, Taghizadeh2020}, which contains properties of a large number of 2D materials, but only 733 structures have Raman spectra, (ii) the WURM project \cite{wurm}, a database of computed Raman and infrared spectra for 461 minerals, and (iii) the set of 55 inorganic compounds calculated by Liang et al. in developing high-throughput computational methods \cite{Liang2019}. In this paper, we report on our research to develop an optimized high-throughput workflow to carry out these calculations and to build a large database of computational Raman spectra. For selected systems, the calculated spectra are compared to those obtained using previous computational methods as well as to the experimental ones reported in the literature.
The database of Raman spectra and vibrational properties reported along with this paper consists of 5099 compounds from many different material classes, far surpassing in size the previous computational databases and being comparable to the experimental ones. \section*{Methods} \subsection*{Simulation of Raman spectra} In Raman spectroscopy measurements, incident laser photons with a specific frequency $\omega_L$ interact with lattice vibrations, described in the form of phonons in crystalline materials, and the spectrum of inelastically scattered photons is recorded. Scattered photons exhibit either a decrease in frequency $\omega_S$ upon creation of a phonon or an increase in frequency upon annihilation of a phonon, denoted as Stokes or anti-Stokes shifts, respectively. The intensity of the peaks is related to the Raman scattering cross section, which can be challenging to calculate since the ion (and electron) dynamics in the material need to be described concurrently with the light-matter interaction \cite{Cardona82,Reichardt19_PRB}. There are several approaches for calculating Raman spectra: (i) from the scattering probability given by third-order perturbation theory (absorption, electron-phonon coupling, and emission) \cite{Lee_1979,Long2002-qo,Taghizadeh2020}, (ii) from the gradient of the electronic susceptibility (usually via finite differences) in the Placzek approximation \cite{placzek1934,Porezag1996,Long2002-qo}, and (iii) from the auto-correlation function of the time-dependent susceptibility \cite{GORDON19681,Thomas13}. Methods (i) and (ii) only yield the Raman tensor; the phonon eigenvectors and frequencies need to be determined first in a separate calculation step. In method (iii), the peak positions and intensities are obtained at once, but it is computationally highly demanding. Method (ii) is computationally the most affordable and easy to implement in a high-throughput setting \cite{Liang2019} and is thus adopted in this work. The method is briefly described below. In the first step, the phonons are calculated as described in depth in many previous publications \cite{YuCardona,BaroniRMP}. Within the harmonic approximation, the potential energy surface is written as a Taylor expansion $U=U_0 + \frac{1}{2}\sum \Phi_{\alpha\beta}(ki, lj) u_{\alpha}(ki) u_{\beta}(lj)$ (with an implied sum over all indices), where $U_0$ is the ground state energy and the force constant matrix $\Phi$ describes the second-order change in the potential energy, \begin{equation} \Phi_{\alpha\beta}(ki, lj) = \frac{\partial^2 U}{\partial u_{\alpha}(ki)\partial u_{\beta}(lj)} = -\frac{\partial F_\alpha(ki)}{\partial u_\beta(lj)} \label{FC} \end{equation} In Eq.~\eqref{FC}, $u_{\alpha}(ki)$ is the displacement of the $k$th atom in the $i$th unit cell in the Cartesian direction $\alpha$. $F_\alpha(ki)$ is the force on atom $ki$, and in the equation above its change is induced by the displacement of atom $lj$. After a harmonic ansatz for the temporal evolution of the vibrational modes $v$, the classical equations of motion for the atoms in unit cell "$0$" become \begin{equation} M_{k}\omega^2 v_\alpha(k0) = \sum_{l,j,\beta}\Phi_{\alpha\beta}(k0,lj)v_\beta(lj) \label{motions} \end{equation} where $M_k$ is the mass of atom $k$.
The infinite sums over unit cells $l$ in periodic crystals can be avoided by moving to reciprocal space and, after rescaling $v$ and $\Phi$ by $\sqrt{M}$, Eq.~\eqref{motions} is cast into an eigenvalue equation \begin{equation} \sum_{l\beta}D_{\alpha \beta}(kl,q)e_{\beta}(l,q\nu) = [\omega(q\nu)]^2 e_{\alpha}(k,q\nu) \label{eigenvalue} \end{equation} where $D$ is the mass-scaled, Fourier-transformed $\Phi$ (denoted the dynamical matrix), $q$ is the wave vector, $e$ is the eigenvector for band index $\nu$, and $\omega^2$ are the eigenvalues. To obtain $D$, the force constants $\Phi$ need to be evaluated from the forces induced at atoms $lj$ by displacing each atom $k0$ in the unit cell. To guarantee a sufficiently large distance between atoms $k0$ and $lj$, supercell calculations are usually required. If the crystal symmetry is not considered, the construction of the force constant matrix requires performing $3N$ DFT calculations, in which each of the $N$ atoms in the unit cell is displaced in each of the three Cartesian directions. The differential cross section for the Stokes component of Raman scattering from the $\nu$th eigenmode far from resonance is given as \cite{Cardona82,Porezag1996} \begin{equation} \frac{d\sigma_{\nu}}{d\Omega} = \frac{\omega_S^4 V^2}{(4\pi)^2c^4}\left| \hat{E}_{S}\frac{\partial \chi}{\partial \xi_{\nu}}\hat{E}_{L} \right|^2 \frac{\hbar(n+1)}{2\omega_{\nu}} \label{Stokes} \end{equation} where $\hat{E}_{S}$ and $\hat{E}_{L}$ are the unit vectors of the polarization of the scattered and the incident light, $V$ is the scattering volume, $\xi$ is a normal-mode coordinate along the mass-scaled eigenvector $e'_\alpha(k) = e_\alpha(k)/\sqrt{M_k} \sim v_\alpha(k)$, and $\chi$ is the electronic susceptibility tensor. The directional derivative can be written out as \begin{equation} \frac{\partial \chi}{\partial \xi} = \nabla \chi \cdot e' =\sum_{k,\alpha}^{\rm unit\,cell}\frac{\partial \chi}{\partial u_\alpha(k)} M_k^{-\frac{1}{2}}e_\alpha(k) \approx \frac{\chi(R_0+ h'e')-\chi(R_0-h'e')}{2h'} = \frac{\chi(R_0+ h\hat{e'})-\chi(R_0-h\hat{e'})}{2h}|e'| \label{susceptibility} \end{equation} The first two forms involve the calculation of derivatives of $\chi$ with respect to the displacement of each atom $u(k)$, whereas in the last two forms all atoms are displaced simultaneously along $e'$; the latter form is written out explicitly in the finite-difference approximation as implemented in the code (displacing the atoms in both the positive and negative directions). The normalized $\hat{e'}=e'/|e'|$ (and $h=h'|e'|$) is used in order to have a consistent step size $h$ (in units of {\AA}) in systems and modes with different masses. Specifically, the Raman tensor is defined as \cite{Porezag1996} \begin{equation} R_{\nu\beta\gamma} = \frac{V_c}{4\pi} \frac{\partial \chi_{\beta\gamma}}{\partial \xi_\nu} \label{tensor} \end{equation} incorporating $V^2/(4\pi)^2$ from Eq.~\eqref{Stokes}. To evaluate the change in $\chi$, we used the macroscopic dielectric constant $\varepsilon_{\beta\gamma}$ containing only the electronic contribution with clamped ions (sometimes denoted the high-frequency dielectric constant $\varepsilon_\infty$), which is readily provided by most DFT codes.
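Schematically, the finite-difference evaluation of Eqs.~\eqref{susceptibility} and \eqref{tensor} can be sketched as follows (a minimal illustration rather than the production workflow; \texttt{eps\_of} stands in for a DFT call returning the clamped-ion dielectric tensor, and Gaussian units with $\chi=(\varepsilon-1)/4\pi$ are assumed, so the constant term cancels in the difference):
\begin{verbatim}
import numpy as np

def raman_tensor(e_vec, masses, R0, eps_of, h=0.005, V_c=1.0):
    """Central finite difference of chi along one phonon eigenvector.

    e_vec  : (N, 3) Gamma-point eigenvector of the mode
    masses : (N,)   atomic masses
    R0     : (N, 3) equilibrium positions (Angstrom)
    eps_of : callable, positions -> (3, 3) clamped-ion dielectric tensor
    """
    ep = e_vec / np.sqrt(masses)[:, None]  # mass-scaled eigenvector e'
    norm = np.linalg.norm(ep)
    ep_hat = ep / norm                     # normalized pattern; step h in Angstrom
    d_eps = eps_of(R0 + h * ep_hat) - eps_of(R0 - h * ep_hat)
    dchi_dxi = d_eps / (4 * np.pi) / (2 * h) * norm  # Eq. (5)
    return V_c / (4 * np.pi) * dchi_dxi              # Eq. (6)
\end{verbatim}
While the expression in Eq.~\eqref{Stokes} yields complete information, quite often experimental results are obtained for polycrystalline mineral specimens or powdered samples, in which case the intensity must be averaged over all possible orientations of the crystals.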
When the direction of the incident light, its polarization, and the direction of the outgoing light are all mutually perpendicular, the Raman intensity becomes \cite{Porezag1996,Long2002-qo} \begin{equation} \frac{d\sigma_{\nu}}{d\Omega} = \frac{\omega_S^4}{c^4} \frac{\hbar(n+1)}{2\omega_{\nu}} \frac{I_{\rm Raman}}{45} \label{Stokes2} \end{equation} where \begin{align} I_{\rm Raman} &= 45 a^2 + 7\gamma^2 \\ a &= \frac{1}{3}(R_{\nu xx} +R_{\nu yy}+R_{\nu zz}) \\ \gamma^2 &= \frac{1}{2}[(R_{\nu xx}-R_{\nu yy})^2 +(R_{\nu xx}-R_{\nu zz})^2 + (R_{\nu yy}-R_{\nu zz})^2 +6 (R_{\nu xy}^{2} + R_{\nu xz}^{2} + R_{\nu yz}^{2})] \label{Ramanintensity} \end{align} $I_{\rm Raman}$ is the Raman activity, which is independent of experimental factors such as temperature and incoming photon energy and is thus used when comparing our results to other calculations, whereas Eq.~\eqref{Stokes2} is used (and must be used) when comparing to experimental spectra. \subsection*{Workflow} We now describe how the theory described above is turned into an efficient computational workflow. As mentioned, the computational procedure involves two sets of calculations: (i) the force constants to get the vibrational modes and (ii) the Raman tensors for each mode. While the phonons at the $\Gamma$-point can be calculated efficiently, we would like to have access to the full force constant matrix. This allows the calculation of the phonon dispersion and also, e.g., the estimation of isotope effects and of line broadening due to defects or grains via the phonon confinement model \cite{Cardona82,Hashemi_2019,Kou2020,Gillet2017}. Both steps can be computationally demanding for systems with a large number of atoms in the unit cell, which has hindered previous efforts to build such databases. The most important design decisions that distinguish our work from the previous ones are the following. First, we have decided to build our database on top of Atsushi Togo's Phonon database \cite{Togo2015}, which contains the calculated full force constant matrices, so that our work only focuses on calculating the Raman tensors. We use the same computational parameters, and thus our database is fully consistent with the Phonon database, which is further linked to the Materials Project database \cite{mp} via the material-IDs. Second, to reduce the calculation time and make the workflow more efficient compared to existing methods, the Raman-active modes are identified based on group theory, and the Raman tensors are calculated only for modes that are known to be active or whose activity could not be determined. Known inactive modes and the three zero-frequency acoustic modes are ignored. For this purpose, the symmetry information about Raman activity was implemented. The workflow developed for automatic Raman tensor calculations is illustrated in Fig.~\ref{fig:wf}. At the conceptual level, the workflow steps are the following: \begin{enumerate} \item Select a material from the Phonon database; read in the optimized structure, the computational parameters, and the force constant matrix. \item Calculate the eigenvectors and eigenvalues at the $\Gamma$-point. \item Determine the irreducible representation (irrep) of the modes and whether they are Raman and/or infrared active. \item Perform prescreening to check that the material is dynamically and thermodynamically stable and that it is not metallic or near-metallic. \item Calculate the Raman tensors for the Raman-active modes and the dielectric tensors for the optimized structure.
\item All the results (structure, eigenvalues, irreducible representations, Raman tensors, etc.) are collected in a database. \end{enumerate} The software used in each step is also indicated in Fig.~\ref{fig:wf}. Atsushi Togo's Phonon database contains the optimized structures, the calculated force constants, and all the computational parameters used to obtain them. These were calculated using the VASP software \cite{Kresse1996, Kresse_1999}. The eigenvalues and eigenvectors at the $\Gamma$-point, as well as the irreducible representations of the modes, are calculated using Phonopy \cite{Togo2015}. All of this information, together with selected material properties obtained from the Materials Project database, is collected in a database for prescreening. For this, we adopted the database tools in the Atomic Simulation Environment (ASE) \cite{Hjorth_Larsen_2017}. In the last step, the calculated Raman tensors are added to this database, which is then also served through a web app implemented in ASE. For automating the computationally intensive part, i.e., the calculation of the Raman tensors, we used Atomate \cite{Ceriotti2006}, a Python-based package for constructing complex materials science computational workflows. The workflow objects generated by Atomate are given to the Fireworks workflow software \cite{CPE:CPE3505} for managing, storing, and executing them, with the help of the Custodian package for error management \cite{ONG2013314}. As the DFT calculator we used VASP, with the parameters taken from the Phonon database. During these calculations, all the input parameters and results are stored in a Mongo database, and they are afterwards transferred to the final database (Computational Raman Database, CRD). \subsection*{Prescreening} Before the Raman tensor calculations we performed the following prescreening, also illustrated in Fig.~\ref{fig:selected}: (i) We check that the material has Raman-active mode(s) based on the symmetry analysis. (ii) We check that the material is dynamically stable, i.e., there are no modes with imaginary frequencies at the $\Gamma$-point. (iii) We check that the material is thermodynamically stable by requiring that the energy above the convex hull is less than 0.1 eV/atom, as materials with an energy $>0.1$ eV/atom above the hull are unlikely to be experimentally synthesized \cite{Sun16_SciAdv}. (iv) We check that the band gap is larger than 0.5 eV, since our computational approach is strictly valid only under non-resonant conditions (i.e., photon energy smaller than the band gap), and metallic systems require very large k-point meshes, which would increase the computational cost. For (iii) and (iv) we use information from the Materials Project database at the same material-ID \cite{mp}. Finally, we have 8382 (83.55\%) materials satisfying these conditions and flagged for calculation. It is also worth noting that the Phonon database contains only materials that are non-metallic, non-magnetic, and non-triclinic. The workflow first performs the calculation of the dielectric tensors of the optimized structure, which can be compared to those provided in the Phonon database. Additionally, the maximum forces are checked in this step and the calculation is terminated if the forces are $>0.001$ eV/{\AA}, but no such case was encountered. \subsection*{Computational parameters} All density-functional theory (DFT) calculations are carried out using VASP (Vienna Ab initio Simulation Package) \cite{Kresse1996,PhysRevB.54.11169} with the projector-augmented wave method \cite{PhysRevB.50.17953}.
The PBEsol exchange-correlation functional \cite{PhysRevLett.100.136406} and the other computational parameters were taken to be the same as used in the Phonon database. In particular, the plane-wave cutoff is set to 1.3 times the maximum cutoff listed in the PAW setups. In the Phonon database, the structures are given as standardized unit cells, whereas we use the primitive cell in the Raman tensor calculations to save computational time. The primitive cell can be readily obtained using Phonopy \cite{Togo2015}. In the calculation of the eigenvectors, non-analytic corrections are not included, as the eigenvectors would then also depend on the direction from which $q \to 0$ is approached, which would complicate the calculations significantly. Fortunately, this mostly affects the IR-active modes and less so the Raman-active modes. Moreover, the induced change in the eigenvectors and in the Raman tensors is expected to be small, and the splitting of the modes can be determined a posteriori. There are then only two parameters left to decide: the k-point mesh and the magnitude of the atomic displacements used when evaluating the Raman tensor by finite differences. In the Phonon database, the Brillouin zone of the unit cell is sampled by a mesh whose density is defined by the $R_k$ parameter in VASP. We adopt the same approach, but it is worth noting that since we use the primitive cell, the exact density and positions of the mesh points can be slightly different. Moreover, metals and small-gap semiconductors usually require a denser k-point mesh than large-gap insulators. All calculations in the Phonon database used $R_k=20$, which should be sufficient for the structural optimization of the materials included in the database (band gap > 0.5 eV). Determination of the Raman tensor may, however, require a higher value. To benchmark this, we selected two materials from the Phonon database with different band gaps: AlN, which has the largest band gap among the common III-V semiconductors (4.05 eV), and Si, a small band gap material (0.85 eV). As illustrated in Fig.~S1, $R_k = 40$ is needed to achieve converged results for the dielectric constant and Raman intensity of the small band gap material Si, whereas for the large band gap material AlN $R_k=20$ is sufficient. See the Benchmark section in the SI for more details. In our workflow, we have chosen the following values: $R_k=20$ for structures with a band gap larger than 2 eV, $R_k=30$ for band gaps in the range of 1--2 eV, and $R_k=40$ for band gaps smaller than 1 eV. To benchmark the displacement magnitude, we chose materials with heavy and light elements, PbO and Cd(HO)$_2$. As shown in Fig.~S2, when varying the displacement from 0.001 {\AA} to 0.04 {\AA} (the default value being 0.005 {\AA}), we found little change in the Raman tensors or the dielectric constants. Therefore, we chose to use the default value. Finally, we verified the computational workflow in Atomate by comparing the Raman spectra of a few structures to those obtained using the VASP\_Raman code \cite{vaspraman}. As shown in Fig.~S3, good agreement is found. We note that Atomate had a wrong normalization of the eigenvectors, which in some cases resulted in an overestimation of the Raman intensities, but this was fixed in the version used here. \section*{Data Records} \subsection*{Computational Raman Database} The final database contains the vibrational information and Raman tensors stored in a JSON document that can be downloaded directly from the Materials Cloud Archive \cite{mparchive} and queried with a simple Python script.
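As an illustration of such a query, the minimal Python sketch below loads the JSON dump and filters entries by band gap. The file name is a placeholder, the document is assumed to be a list of per-material records, and the keys ({\tt mpid}, {\tt bandgap\_mp}, {\tt frequencies}) follow Table~\ref{table:1}:
\begin{verbatim}
import json

# Load the CRD JSON dump downloaded from the Materials Cloud Archive
# (the file name is illustrative).
with open("crd.json") as f:
    entries = json.load(f)

# Example query: wide-gap materials and their Gamma-point frequencies (THz).
for entry in entries:
    if entry.get("bandgap_mp", 0.0) > 3.0:
        print(entry["mpid"], entry["frequencies"])
\end{verbatim}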
Table~\ref{table:1} shows all the database keys with their descriptions. The data can also be browsed online on the Computational Raman Database website (\hyperlink{http://ramandb.oulu.fi/}{ramandb.oulu.fi}). \subsection*{Database statistics} As shown in Fig.~\ref{fig:selected}, there were 10032 materials in the Phonon database and 8382 of them were flagged for calculation. Since each structure contains several vibrational modes, the total number of modes in our database is 725163, of which 428081 modes are Raman active or of unknown activity. Figs.~\ref{fig:mp}(a,b) show the number of materials in the database (before prescreening) grouped by the calculated band gaps and by the number of atoms in their structures, respectively. The histogram with respect to the number of atoms peaks at around 20--30. There are some materials with very large primitive cells containing more than 100 atoms, but many of these appear to be disordered/alloyed/defective variants of the small primitive cell systems and thus of limited interest. Since the Phonon database only includes non-metallic materials, the number of materials with a band gap smaller than 0.5 eV is small, and therefore neglecting those materials in our prescreening step has a small impact. We carried out the Raman tensor calculations in order of increasing number of atoms in the primitive cell. The database included here contains 5099 calculated structures. We calculated all materials with fewer than 10 atoms in the primitive cell and all experimentally observed materials (as indicated by MP) with fewer than 40 atoms in the primitive cell. For this, we used about 9.5 million CPU hours. We estimate that calculating the remaining 3283 structures would require more than 20 million CPU hours, owing to the much larger cell sizes. In Fig.~\ref{fig:mp}(c) we compare the number of materials considered in this work and in the Materials Project database, grouped by the type of compound (oxides, halides, etc.). "MP" denotes the full Materials Project database, whereas "MP*" applies the same conditions (band gap larger than 0.5 eV and energy above hull less than 0.1 eV/atom) as used for our material set (PhDB*). "CRD" refers to the calculated set of materials. In total, almost 20\% of the MP* structures are contained in the PhDB* dataset and about 12\% are calculated. Moreover, the different types of compounds are included in our database with similar statistics as in the Materials Project. As an example, the percentages of oxides and halides are 52\% and 27\% in our database, compared to 67\% and 26\% in MP*. Finally, we used the algorithm proposed by Larsen et al. \cite{Larsen_2019} to identify the dimensionality of the structures in our database: 4137 structures (more than 80\%) are three-dimensional, 385 structures are two-dimensional, 72 structures are one-dimensional, 277 structures are zero-dimensional, and the others are mixtures of different dimensionalities, such as 0D+1D, 0D+2D, 0D+3D, etc. This shows that our database covers a wide range of material classes. \section*{Technical Validation} \subsection*{Comparison to experiments} Selected computational benchmarks were already presented in the Computational parameters section. In this section, we compare the spectra calculated with our approach to experimental results extracted from the RRUFF database in order to validate our method and calculations. RRUFF contains only the (estimated) chemical formula and lattice parameters but not atomic positions, and thus we cannot guarantee an exact structural match.
Based on mineral names, 703 entries in the RRUFF database matched 288 structures in our database. Table S1 lists the mineral names, formulas, and RRUFF IDs for structures with the same formula as found in the Phonon database, 92 in total. 27 of these were found to have similar lattice parameters to the matched structure in our database and are thus very likely the same structure. Moreover, in most cases the energy above hull is zero or very small, the maximum being 40 meV/atom. Fig.~\ref{fig:intensity} shows a comparison between the calculated and experimental Raman spectra of a few selected minerals: HgO, MgCO$_3$, CaMg(CO$_3$)$_2$, and SiO$_2$. Overall, good agreement between computational and experimental results is found. In most cases, the frequencies (Raman shifts) differ from the experimental values by less than a few percent. The variation in peak intensities is somewhat larger but qualitatively correct. We note that the comparison to experiment is complicated by the varying linewidths in the experimental spectra, which in turn modify the peak maxima. The linewidth is related to the phonon lifetime, which is not evaluated in our calculations. Instead, in the simulated spectra we have only included a reasonable phonon lifetime-induced broadening of 8 cm$^{-1}$. The experimental spectra appear to also contain a Gaussian-type (instrumental) broadening, which we do not attempt to reproduce here. Also, while perfectly ordered bulk crystals are used in the calculations, in experiments the material purity or even the exact composition may be unknown, and the spectrum is affected by parameters such as temperature, pressure, and measurement geometry. While we rely on the harmonic approximation, phonon renormalization due to anharmonic effects can affect the frequencies as well as the linewidths. Also, we simulate non-resonant Raman spectra, while in resonant Raman the intensities may change depending on the electronic resonance conditions. Nevertheless, in the cases where the Raman tensors are affected by any of these effects, the Raman-active modes found from group theory can still be used to assist in the analysis of the experimental spectra. \section*{Usage Notes} We introduced an optimized workflow for performing high-throughput first-principles calculations of Raman tensors. The workflow takes full advantage of the crystal symmetry, adopts carefully benchmarked computational parameters, and avoids the calculation of vibrational modes by importing them from the existing Phonon database. We carried out such calculations for 5099 materials and the results are included in the dataset accompanying this paper. The database encompasses a wide variety of materials from different compound classes (oxides, halides, etc.) and of different dimensionality. The calculated spectra were also shown to compare favorably with experimental ones. The final database contains the Raman tensors and other vibrational information, such as phonon eigenmodes, Born charges, and symmetry information, stored in a JSON document that can be downloaded directly from the Materials Cloud Archive \cite{mparchive} and queried with a simple Python script. The whole dataset can also be browsed online on the Computational Raman Database website (\hyperlink{http://ramandb.oulu.fi/}{http://ramandb.oulu.fi}), where one can also find other relevant information, such as the atomic structure, phonon dispersion, and infrared spectrum.
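As a usage example, a simulated spectrum can be reconstructed from the stored frequencies and Raman activities. The sketch below applies the Lorentzian lifetime broadening of 8 cm$^{-1}$ used above and the $(n+1)/\omega_\nu$ weighting of Eq.~\ref{Stokes2}; the $\omega_S^4$ prefactor and absolute units are omitted, and the function names are illustrative:
\begin{verbatim}
import numpy as np

def simulated_spectrum(freqs, activities, grid, T=300.0, fwhm=8.0):
    """Stokes spectrum (arbitrary units) on `grid` (cm^-1) from
    Gamma-point frequencies `freqs` (cm^-1) and Raman activities."""
    kT = 0.6950 * T                 # k_B in cm^-1 per kelvin, times T
    gamma = 0.5 * fwhm              # Lorentzian half width
    spectrum = np.zeros_like(grid)
    for w, act in zip(freqs, activities):
        n = 1.0 / np.expm1(w / kT)  # Bose-Einstein occupation
        amp = (n + 1.0) / w * act   # (n+1)/omega weighting of the activity
        spectrum += amp * gamma / np.pi / ((grid - w) ** 2 + gamma ** 2)
    return spectrum
\end{verbatim}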
We hope that the vibrational properties and Raman spectra of the materials in the database will prove useful for computational and experimental researchers alike. \section*{Code availability} VASP \cite{Kresse1996,Kresse_1999}, used in all DFT calculations, is proprietary software. For the database, dimensionality analysis, and web app, we used the Atomic Simulation Environment (ASE) \cite{Hjorth_Larsen_2017}, released under the GNU Lesser General Public License (LGPL). Phonopy \cite{Togo2015}, used for calculating the eigenvectors and performing the symmetry analysis, is released under the New Berkeley Software Distribution (BSD) License. The workflow is defined as a part of the Atomate code package \cite{Ceriotti2006}, with FireWorks \cite{CPE:CPE3505} for defining, managing, and executing jobs; both are released under a modified BSD license and free to the public. Pymatgen (Python Materials Genomics), used for producing input parameters, and Custodian \cite{ONG2013314}, used for error checking, are both open-source packages under the Massachusetts Institute of Technology (MIT) license. To store results and task parameters, the MongoDB NoSQL database was used, released under the Server Side Public License (SSPL). The information used for prescreening and the phonon calculations, extracted from the Phonon database \cite{Togo2015,HINUMA2017140} and from the Materials Project \cite{mp, Ong_2015}, is released under the Creative Commons Attribution 4.0 International License. \section*{Benchmark} To benchmark our approach, we selected four materials with different band gaps and different atomic masses: Si, PbO, AlN, and Cd(HO)$_2$. We first verified that the standard approach of calculating Raman tensors for all modes agrees with the Raman-active modes identified using group theory. That is, all the Raman-inactive modes were found to have vanishingly small Raman tensors, although often nonzero due to numerical errors. \begin{figure}[ht] \centering \includegraphics[width=15cm]{S1.pdf} \caption{The effect of the k-point mesh parameter R$_k$. (a) and (b) show the unnormalized Raman activity of AlN and Si, respectively, with R$_k$ = 20 (blue line), 30 (orange line), 40 (green line), and 60 (red line). (c) and (d) show the effects of different R$_k$ on the maximum intensity of the AlN and Si spectra, respectively. (e) and (f) show the effects of different R$_k$ on the average dielectric constant of AlN and Si, respectively.} \label{fig:convergance} \end{figure} Next, we investigated the effect of the k-point mesh density on the Raman tensors and Raman spectra. We calculated Raman tensors for R$_k$ = 30, 40, and 60 and compared the results with the standard value (R$_k$ = 20) used in the Phonon database. Results for two (out of four) materials are shown in Fig.~\ref{fig:convergance}, where, to represent large and small band gap materials, we selected AlN (4.05 eV) and Si (0.85 eV), respectively. Fig.~\ref{fig:convergance}(a,b) shows the unnormalized Raman activity spectra for different R$_k$, which clearly illustrates that AlN is hardly affected whereas Si experiences significant changes, suggesting that R$_k$ should be increased. To better illustrate the magnitude of the changes, Fig.~\ref{fig:convergance}(c,d) shows how the maximum intensity changes with increasing R$_k$ and Fig.~\ref{fig:convergance}(e,f) shows the average dielectric constant. In the case of AlN, R$_k$ = 20 already yields Raman tensors within 10\% of the converged value and a dielectric constant within 1\%. In the case of Si, R$_k$ = 40 is required to reach similar accuracy.
Based on these results, we decided to use R$_k$=40 for materials with a band gap smaller than 1 eV, R$_k$=30 for materials with a band gap between 1 and 2 eV, and R$_k$=20 for materials with a band gap greater than 2 eV. \begin{figure}[ht] \centering \includegraphics[width=13cm]{S2.pdf} \caption{Changes of the dielectric constant of (a) PbO and (b) Cd(HO)$_2$ in different directions as a function of the displacement step size (0.005--0.04 {\AA}). } \label{fig:dielectric} \end{figure} In the third step, we investigated the effect of the step size by calculating the Raman tensors of PbO and Cd(HO)$_2$ with step sizes of 0.001, 0.02, and 0.04 {\AA} and comparing them with those obtained using the standard step size of 0.005 {\AA}. Fig.~\ref{fig:dielectric} shows the changes in the dielectric constants of PbO and Cd(HO)$_2$ in different directions, plotted for three step sizes: 0.005, 0.02, and 0.04 {\AA}. Since the dielectric tensor is symmetric (xy=yx, xz=zx, and yz=zy), we only plot the inequivalent components. As shown in Fig.~\ref{fig:dielectric}, whenever there are pronounced changes in the dielectric constant (corresponding to non-zero components of the Raman tensor), the dependence on the step size is close to linear. In some cases there is a small parabolic dependence, seen particularly well in the xy=yx component, which contains no linear dependence, but this will not affect the Raman tensor since we use a two-point finite-difference stencil. Moreover, in this range of step sizes there is no discernible noise, although some noise could be observed in the 0.001 {\AA} results (not shown). Thus, we consider the default value of 0.005 {\AA} a good choice. \begin{figure}[ht] \centering \includegraphics[width=15cm]{S3.pdf} \caption{Comparison of the Raman activity from our workflow and that from the vasp\_raman code. The spectra from an old version of Atomate with incorrect eigenvector normalization are also shown.} \label{fig:verify} \end{figure} As mentioned in the text, there was an error in the normalization of the eigenvectors in Atomate. We fixed the normalization error and changed the formulation to match the vasp\_raman code \cite{vaspraman,Porezag1996}. To verify our approach, we used the vasp\_raman code to calculate Raman tensors and compared them to Atomate with the fixed and old versions of the eigenvector normalization. Fig.~\ref{fig:verify} shows the Raman activity spectra of MoS$_2$, WS$_2$, SrGaSnH, and BaAlSiH. The revised normalization yields activities closely matching the vasp\_raman code. The incorrect normalization, on the other hand, tends to overestimate the Raman activities, particularly severely for modes that have very small Raman activity.
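Summarizing the parameter choices from these benchmarks, the band-gap-based $R_k$ selection rule can be expressed compactly; a minimal sketch (function name illustrative):
\begin{verbatim}
def rk_for_bandgap(gap_ev):
    """k-point density parameter R_k chosen from the band gap (eV)."""
    if gap_ev < 1.0:
        return 40
    if gap_ev < 2.0:
        return 30
    return 20
\end{verbatim}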
On the CRD website (\hyperlink{http://ramandb.oulu.fi/}{ramandb.oulu.fi}), the total Raman intensity is separated into depolarized ($I_\perp$) and polarized ($I_{||}$) components, $I$ = $I_\perp$ + $I_{||}$, with \begin{align} \frac{I_{||}}{(\omega_L - \omega_\nu)^4} &\sim \frac{\hbar(n+1)}{30\omega_\nu}(10G_{\nu}^{(0)} +4G_{\nu}^{(2)}) \label{polarized} \\ \frac{I_\perp}{(\omega_L - \omega_\nu)^4} &\sim \frac{\hbar(n+1)}{30\omega_\nu}(5G_{\nu}^{(1)} +3G_{\nu}^{(2)}) \label{depolarized} \end{align} where we have taken out the $(\omega_L - \omega_\nu)^4$ term that depends on the laser wavelength, since (i) this removes the dependence on one external parameter from our spectra, (ii) our calculations are for non-resonant conditions and one needs to be careful to compare only to wavelengths that are far from resonance, and (iii) the dependence on $\omega_\nu$, and thus the change in the spectra after this normalization, is usually small. The rotation invariants are \cite{Prosandeev2005,Long2002-qo} \begin{align} G_{\nu}^{(0)} &= \frac{1}{3}(R_{\nu xx} + R_{\nu yy} + R_{\nu zz})^2 \label{Gi0} \\ G_{\nu}^{(1)} &= \frac{1}{2}[(R_{\nu xy}-R_{\nu yx})^2+(R_{\nu xz}-R_{\nu zx})^2+(R_{\nu zy}-R_{\nu yz})^2] \label{Gi1} \\ G_{\nu}^{(2)} &= \frac{1}{2}[(R_{\nu xy}+R_{\nu yx})^2+(R_{\nu xz}+R_{\nu zx})^2+(R_{\nu zy}+R_{\nu yz})^2] \nonumber \\ &+ \frac{1}{3}[(R_{\nu xx}-R_{\nu yy})^2+(R_{\nu xx}-R_{\nu zz})^2+(R_{\nu zz}-R_{\nu yy})^2] \label{Gi2} \end{align} A minimal computational sketch of these quantities is provided after Table~\ref{table:2}. \begin{table}[ht] \centering \begin{tabular}{|l|l|l|} \hline Keys & Datatype & Description \\ \hline cell & array & Lattice parameters \\ \hline positions & array & Atomic positions \\ \hline numbers & array & Atomic numbers of the chemical elements \\ \hline mpid & string & Materials Project ID \\ \hline bandgap\_mp & float & Band gap of the structure from the Materials Project database \\ \hline diel\_mp & array & Dielectric tensor from the Materials Project database \\ \hline frequencies & array & $\Gamma$-point frequencies in THz \\ \hline pointgroup & string & Point group of the structure \\ \hline IRactive & array & IR-active modes \\ \hline IRbands & array & IR bands \\ \hline IRlabels & array & IR labels of the modes \\ \hline Ramanactive & array & Whether each mode is Raman active or not \\ \hline \end{tabular} \caption{\label{table:1}Database keys and their descriptions} \end{table} \begin{longtable}{|l|l|l|l|l|} \hline \multicolumn{1}{|c|}{Mineral name} & \multicolumn{1}{c|}{Formula} & \multicolumn{1}{c|}{mpid} & \multicolumn{1}{c|}{Energy above hull (eV)} & \multicolumn{1}{c|}{RRUFF ID}\\ \hline \endfirsthead \multicolumn{5}{c}% {{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\ \hline \multicolumn{1}{|c|}{Mineral name} & \multicolumn{1}{c|}{Formula} & \multicolumn{1}{c|}{mpid} & \multicolumn{1}{c|}{Energy above hull (eV)} & \multicolumn{1}{c|}{RRUFF ID}\\ \hline \endhead \hline \multicolumn{5}{|r|}{{Continued on next page}} \\ \hline \endfoot \endlastfoot Billingsleyite & Ag$_7$AsS$_6$ & mp-15077 & 0.003 & \textbf{R070350} \\ \hline Sanbornite & BaSi$_2$O$_5$ & mp-3031 & 0 & \textbf{R060489} \\ \hline Hardystonite & Ca$_2$ZnSi$_2$O$_7$ & mp-6227 & 0.015 & \textbf{R040026} \\ \hline Perovskite & CaTiO$_3$ & mp-4019 & 0 & \textbf{R050456} \\ \hline Greenockite & CdS & mp-672 & 0 & \textbf{R090045} \\ \hline Cobaltite & CoAsS & mp-4627 & 0.001 & \textbf{R070372} \\ \hline Cobaltite & CoAsS & mp-16363 & 0.004 & \textbf{R060907} \\ \hline Cuprite & Cu$_2$O & mp-361 & 0 & \textbf{R050374} \\
\hline Stromeyerite & CuAgS & mp-5014 & 0.024 & \textbf{R060908} \\ \hline Emplectite & CuBiS$_2$ & mp-22982 & 0 & \textbf{R070307} \\ \hline Chalcostibite & CuSbS$_2$ & mp-4468 & 0 & \textbf{R060262} \\ \hline Pyrite & FeS$_2$ & mp-226 & 0.008 & \textbf{R050070} \\ \hline Marcasite & FeS$_2$ & mp-1522 & 0 & \textbf{R060882} \\ \hline Langbeinite & K$_2$Mg$_2$(SO$_4$)$_3$ & mp-6299 & 0 & \textbf{R070285} \\ \hline Aphthitalite & K$_3$Na(SO$_4$)$_2$ & mp-22457 & 0 & \textbf{R050651} \\ \hline Goldschmidtite & KNbO$_3$ & mp-7375 & 0 & \textbf{R190009} \\ \hline Nordite-(La) & Na$_3$SrLaZnSi$_6$O$_{17}$ & mp-13726 & 0 & \textbf{R140310} \\ \hline Swedenborgite & NaBe$_4$SbO$_7$ & mp-8075 & 0 & \textbf{R060486} \\ \hline Leucophanite & NaCaBeSi$_2$O$_6$F & mp-560721 & 0 & \textbf{R050004} \\ \hline Neighborite & NaMgF$_3$ & mp-2955 & 0 & \textbf{R080108} \\ \hline Cotunnite & PbCl$_2$ & mp-23291 & 0.006 & \textbf{R060655} \\ \hline Matlockite & PbClF & mp-22964 & 0 & \textbf{R140538} \\ \hline Laurite & RuS$_2$ & mp-2030 & 0 & \textbf{R110120} \\ \hline Zincite & ZnO & mp-2133 & 0 & \textbf{R060027} \\ \hline Chrysoberyl & BeAl$_2$O$_4$ & mp-3081 & 0 & \textbf{R040073} \\ \hline Wurtzite & ZnS & mp-10281 & 0.002 & \textbf{R130069} \\ \hline Montroydite & HgO & mp-1224 & 0 & \textbf{R070235} \\ \hline Quartz & SiO$_2$ & mp-7000 & 0.011 & \textbf{R050125} \\ \hline Bromellite & BeO & mp-2542 & 0 & X050194 \\ \hline Litharge & PbO & mp-19921 & 0.001 & R060959 \\ \hline Romarchite & SnO & mp-2097 & 0 & R080006 \\ \hline Anatase & TiO$_2$ & mp-390 & 0.006 & R060277 \\ \hline Andalusite & Al$_2$SiO$_5$ & mp-4753 & 0 & R050258 \\ \hline Anglesite & Pb(SO$_4$) & mp-3472 & 0 & R040004 \\ \hline Aragonite & CaCO$_3$ & mp-4626 & 0.024 & R040078 \\ \hline Baryte & Ba(SO$_4$) & mp-3164 & 0 & R040036 \\ \hline Brenkite & Ca$_2$CO$_3$F$_2$ & mp-6246 & 0.028 & R060247 \\ \hline Calcite & CaCO$_3$ & mp-3953 & 0 & R040070 \\ \hline Cerussite & Pb(CO$_3$) & mp-19893 & 0 & R040069 \\ \hline Colquiriite & CaLiAlF$_6$ & mp-1224 & 0 & R070417 \\ \hline Dolomite & CaMg(CO$_3$)$_2$ & mp-6459 & 0 & R050129 \\ \hline Eitelite & Na$_2$Mg(CO$_3$)$_2$ & mp-6026 & 0 & R110214 \\ \hline Eulytine & Bi$_4$(SiO$_4$)$_3$ & mp-23331 & 0 & R060058 \\ \hline Farringtonite & Mg$_3$(PO$_4$)$_2$ & mp-14396 & 0 & R130127 \\ \hline Geikielite & MgTiO$_3$ & mp-3771 & 0 & R070479 \\ \hline Glauberite & Na$_2$Ca(SO$_4$)$_2$ & mp-6397 & 0 & R050350 \\ \hline Huntite & CaMg$_3$(CO$_3$)$_4$ & mp-6524 & 0.004 & R040126 \\ \hline Cristobalite & SiO$_2$ & mp-6945 & 0.003 & R070235 \\ \hline Leiteite & ZnAs$_2$O$_4$ & mp-29509 & 0.006 & R040011 \\ \hline Lithiophosphate & Li$_3$(PO$_4$) & mp-2878 & 0.001 & R100092 \\ \hline Magnesite & Mg(CO$_3$) & mp-5348 & 0 & R040114 \\ \hline Nahcolite & NaH(CO$_3$) & mp-696396 & 0 & R070237 \\ \hline Witherite & Ba(CO$_3$) & mp-5504 & 0 & R040040 \\ \hline Arsenolite & As$_2$O$_3$ & mp-2184 & 0.009 & R050383 \\ \hline Åkermanite & Ca$_2$MgSi$_2$O$_7$ & mp-6094 & 0.023 & R061085 \\ \hline Benitoite & BaTi(SiO$_3$)$_3$ & mp-6661 & 0 & R050320 \\ \hline Gahnite & ZnAl$_2$O$_4$ & mp-2908 & 0 & R070591 \\ \hline Rosiaite & PbSb$_2$O$_6$ & mp-20727 & 0 & R070384 \\ \hline Xanthoconite & Ag$_3$AsS$_3$ & mp-561620 & 0 & R070746 \\ \hline Topaz & Al$_2$SiO$_4$F$_2$ & mp-6280 & 0 & R040121 \\ \hline Imiterite & Ag$_2$HgS$_2$ & mp-9635 & 0.03 & R080014 \\ \hline Acanthite & Ag$_2$S & mp-610517 & 0.024 & R070578 \\ \hline Argyrodite & Ag$_8$GeS$_6$ & mp-9770 & 0 & R050437 \\ \hline Andalusite & Al$_2$SiO$_5$ & mp-4934 &
0.007 & R050258 \\ \hline Nitrobarite & Ba(NO$_3$)$_2$ & mp-4396 & 0 & R060622 \\ \hline Barylite & BaBe$_2$Si$_2$O$_7$ & mp-6383 & 0 & R060620 \\ \hline Barylite & BaBe$_2$Si$_2$O$_7$ & mp-12797 & 0 & R060606 \\ \hline Guanajuatite & Bi$_2$Se$_3$ & mp-23164 & 0.028 & R080140 \\ \hline Merwinite & Ca$_3$Mg(SiO$_4$)$_2$ & mp-558209 & 0.038 & R070195 \\ \hline Rankinite & Ca$_3$Si$_2$O$_7$ & mp-3932 & 0.009 & R140775 \\ \hline Hurlbutite & CaBe$_2$(PO$_4$)$_2$ & mp-6772 & 0 & R090048 \\ \hline Rynersonite & CaTa$_2$O$_6$ & mp-18229 & 0 & R080064 \\ \hline Arsenopyrite & FeAsS & mp-561511 & 0 & R050071 \\ \hline Gudmundite & FeSbS & mp-27904 & 0 & R060741 \\ \hline Cinnabar & HgS & mp-634 & 0.004 & R070532 \\ \hline Cinnabar & HgS & mp-9252 & 0.004 & R070532 \\ \hline Kalsilite & KAlSiO$_4$ & mp-8355 & 0.002 & R060801 \\ \hline Kalsilite & KAlSiO$_4$ & mp-9480 & 0.002 & R060030 \\ \hline Avogadrite & KBF$_4$ & mp-4929 & 0 & R110062 \\ \hline Kotoite & Mg$_3$(BO$_3$)$_2$ & mp-5005 & 0 & R060940 \\ \hline Natrosilite & Na$_2$Si$_2$O$_5$ & mp-3193 & 0 & R060855 \\ \hline Molybdomenite & PbSeO$_3$ & mp-20716 & 0 & R140388 \\ \hline Valentinite & Sb$_2$O$_3$ & mp-2136 & 0 & R120096 \\ \hline Stibnite & Sb$_2$S$_3$ & mp-2809 & 0 & R120137 \\ \hline Moissanite & SiC & mp-7631 & 0 & R150016 \\ \hline Tellurite & TeO$_2$ & mp-2125 & 0 & R070606 \\ \hline Rutile & TiO$_2$ & mp-2657 & 0.037 & R060745, R120008 \\ \hline Brookite & TiO$_2$ & mp-1840 & 0.02 & R050363, R050591, R130225 \\ \hline Lorándite & TlAsS$_2$ & mp-4988 & 0 & R110055 \\ \hline Tungstenite & WS$_2$ & mp-224 & 0 & R070616 \\ \hline Waimirite-(Y) & YF$_3$ & mp-2416 & 0 & R130714 \\ \hline Reinerite & Zn$_3$(AsO$_3$)$_2$ & mp-27580 & 0 & R080132 \\ \hline Baddeleyite & ZrO$_2$ & mp-2858 & 0 & R100171 \\ \hline \caption{Structures common to our database and the RRUFF database, matched by chemical formula and mineral name, together with the corresponding Materials Project IDs. The bold RRUFF IDs refer to structures that also have similar lattice parameters.} \label{table:2} \\ \end{longtable}
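The rotation invariants of Eqs.~\ref{Gi0}--\ref{Gi2} and the intensity components of Eqs.~\ref{polarized} and \ref{depolarized} can be evaluated directly from a $3\times 3$ Raman tensor; a minimal numpy sketch, with $\hbar$ and constant prefactors omitted (function names illustrative):
\begin{verbatim}
import numpy as np

def rotation_invariants(R):
    """G0, G1, G2 rotation invariants of a 3x3 Raman tensor R."""
    G0 = np.trace(R) ** 2 / 3.0
    G1 = 0.5 * ((R[0, 1] - R[1, 0]) ** 2 + (R[0, 2] - R[2, 0]) ** 2
                + (R[2, 1] - R[1, 2]) ** 2)
    G2 = 0.5 * ((R[0, 1] + R[1, 0]) ** 2 + (R[0, 2] + R[2, 0]) ** 2
                + (R[2, 1] + R[1, 2]) ** 2) \
         + ((R[0, 0] - R[1, 1]) ** 2 + (R[0, 0] - R[2, 2]) ** 2
            + (R[2, 2] - R[1, 1]) ** 2) / 3.0
    return G0, G1, G2

def intensity_components(R, omega, n_bose):
    """Polarized and depolarized intensities, up to a constant prefactor."""
    G0, G1, G2 = rotation_invariants(R)
    pref = (n_bose + 1.0) / (30.0 * omega)
    return pref * (10 * G0 + 4 * G2), pref * (5 * G1 + 3 * G2)
\end{verbatim}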
\section{Introduction} The problem of optimizing an expensive, stochastic, black-box function appears in many domains, such as simulation-based optimization \cite{ankenman2008stochastic}, machine learning hyperparameter tuning \cite{snoek2012practical}, and engineering optimization \cite{yamawaki2018multifunctional}. In such problems, the mapping between decision variables and outputs is not a simple mathematical expression but a complex computer simulation, a wet-lab biological experiment, or a machine learning training pipeline; from the perspective of an optimization algorithm, they are black boxes. Formally, given a point in a low-dimensional continuous space (typically $D\leq 10$), $x\in X\subset \mathbb{R}^D$, we aim to find the point with the highest expected output $$x^* = \argmax_{x\in X}\mathbb{E}[f(x)]$$ where $f:X\to \mathbb{R}$ is a stochastic black-box function and the expectation is over the stochasticity in repeated calls to $f(x)$, e.g., multiple simulation runs with different random number seeds. For such problems, Bayesian optimization (BO) methods have become a powerful and widely studied toolbox for finding the optimum using as few expensive black-box evaluations as possible. BO methods consist of two main components, a Gaussian process surrogate model and an acquisition function. The surrogate model is trained to predict $f(x)$, outputting both a prediction and an uncertainty/confidence. The acquisition function, $\alpha(x)$, quantifies the exploration--exploitation trade-off of evaluating the black box at a new point $x$, and the new point with the highest acquisition value is then passed to the black box for evaluation. There exist many acquisition functions; arguably the most commonly used is Expected Improvement (EI) \cite{jones1998efficient}, which measures the expected amount by which a new output $y=f(x)$ improves over the current best sampled output. \cite{srinivas2010} proposes to optimize an optimistic upper confidence bound (UCB) acquisition function where the benefit of a point $x$ is quantified using a quantile of the distribution of $f(x)$. EI and UCB are quick and easy to implement because there is an analytical solution. Thompson sampling (TS) \cite{Thompson1933} corresponds to using the GP to sample a set of predicted objective function values at a finite set of locations; the point with the largest sample realization is chosen for evaluation, so one may think of the sampled function as a randomly generated acquisition function to be maximized. This acquisition function is simple to implement, though it scales cubically with the discretization size and more involved tricks are required otherwise. In contrast to EI and UCB, there exist many acquisition functions with more sophisticated theoretical motivation that also come with much greater implementation difficulty. Entropy Search (ES) \cite{hennig2012entropy} considers an information-based acquisition function where a GP over outputs generates a distribution over the input domain that models the probability of any input being the maximizer. This acquisition criterion involves computing the expected entropy reduction of the global solution and may be used with noisy observations. However, computing the distribution over the input domain and its entropy, and then the same distribution and entropy for each possible future outcome (an average of one-step look-ahead entropies), introduces extensive mathematical and implementation challenges.
Predictive Entropy Search \cite{ES3_hernandez2014predictive} proposed a quicker implementation, albeit with more sophisticated approximations. Max-value Entropy Search \cite{wang2017max} methods instead aim to reduce the entropy of the predictive distribution of the optimal output value, for which cheaper methods have been proposed. Knowledge Gradient (KG) \cite{frazier2009knowledge} is derived from Bayesian decision theory and samples locations that provide the greatest increase in the peak of the GP posterior mean. However, as with the entropy-based methods, numerical approximations are required, and several implementations using different approximations have been proposed in the literature, varying in their complexity, accuracy, and computational cost; we provide a detailed review of these methods in Section \ref{sec:kg_background}. Some are simple to implement and find optimal points in simple, convenient cases while struggling on more challenging problems. Other KG implementations are more involved to implement (open-source versions are recommended) and perform very well across a broad range of use cases, but incur a larger computational overhead. With this work, we aim to take steps towards an algorithm that has a well-founded theoretical motivation (like KG and entropy methods) while also being easy to implement and cheap to compute (like EI and UCB methods). We in particular focus on KG methods. In this manuscript, we give a detailed review of the major technical milestones in the development of KG implementations and then merge these enhancements to provide an implementation of KG that is simple and cheap to compute, yet practical and well performing across a broad range of problems. We hope this can make KG a far more accessible Bayesian optimization acquisition function for the average user and newcomers to the field and, due to its computational efficiency, also broaden the scope of problems for which KG is a preferred choice. In Sections~\ref{sec:BO_background} and \ref{sec:kg_background}, we provide the mathematical background on BO and the major milestones of KG implementation. In Section~\ref{sec:one-shot-hybrid-KG} we describe a natural novel implementation, One-Shot Hybrid KG, and discuss its complexity, theoretical properties, and practical implementation. In Section~\ref{sec:numerical} we present numerical ablation studies comparing the methods in terms of time and opportunity cost, and we conclude in Section~\ref{sec:conclusion}. \section{Bayesian Optimization} \label{sec:BO_background} Bayesian optimization sequentially collects data from the black-box function and builds a surrogate model, most often a Gaussian process (GP) model. Let the $n$ collected inputs be denoted $X^n=(x_1, x_2, \ldots, x_n) \subset X$ with outputs $Y^n\in\mathbb{R}^n$, and let $\mathcal{D}^n$ be the dataset of pairs. A Gaussian process is specified by a prior mean $\mu^0(x)$, typically constant, $\mu^0(x) = \text{mean}(Y^n)$, and a prior kernel $k(x, x')\in \mathbb{R}$ that gives the covariance (expected similarity) between the output values at $x$ and $x'$.
Common choices of kernel are the squared exponential (RBF) and Mat\'ern kernels \begin{eqnarray*} r_l^2 &=& \sum_{i=1}^D\frac{(x_i-x_i')^2}{l_i^2} \\ k_{RBF}(x, x'; \sigma_0, l) &=& \sigma_0\exp\left(-\frac{r_l^2}{2}\right) \\ k_{Mat}(x, x'; \sigma_0, l) &=& \sigma_0\left(1 + \sqrt{5}r_l + 5r_l^2/3 \right)\exp\left(-\sqrt{5}r_l\right) \end{eqnarray*} where $\theta = \{\sigma_0, l_1,...,l_D\}$ are hyperparameters estimated by maximum marginal likelihood \cite{rasmussen2003gaussian}. In our experiments, we adopt the squared exponential kernel. After observing $n$ points, the \emph{posterior} mean and covariance functions are given by \begin{eqnarray} \mu^n(x) &=& k(x, X^n)\big(k(X^n, X^n) + \sigma^2I\big)^{-1}Y^n,\\ k^n(x, x') &=& k(x, x') - k(x, X^n)\big(k(X^n, X^n) + \sigma^2I\big)^{-1}k(X^n, x'). \end{eqnarray} In each BO iteration, the latest data is used to build a new model; then, given a new \emph{candidate} point $x^{n+1}$, an acquisition function $\alpha(x^{n+1}, \mu(\cdot), k(\cdot, \cdot))$ quantifies the expected benefit of evaluating the black box at $x^{n+1}$, accounting for both exploration and exploitation. The acquisition function is optimized over $X$ to find the most beneficial next point ${x^{n+1}}^* =\argmax_{x^{n+1}}\alpha(x^{n+1},\cdot)$, which is then passed to the expensive black-box function, $y^{n+1} = f({x^{n+1}}^*)$. The dataset is updated to $\mathcal{D}^{n+1}$ and the next iteration starts; see the pseudocode in Algorithm~\ref{alg:BO_framework}. In this work, we focus exclusively on the Knowledge Gradient acquisition function and its many implementations. \begin{algorithm} \label{alg:BO_framework} \caption{The Bayesian Optimization Algorithm. An initial dataset of $n_0$ points is collected over the domain $X$. Then new points are sequentially determined and evaluated for the rest of the budget $N$. In each round, a Gaussian process regression model is fit to the current dataset and the acquisition function is optimized to find the next point to evaluate. Finally, the best predicted point is returned.} \begin{algorithmic} \Require blackbox objective $f:X\to\mathbb{R}$, budget $N$, initialisation budget $n_0$, GP kernel $k(x, x'|\theta)$, acquisition function $\alpha(x, \mu(\cdot), k(\cdot, \cdot))$ \State $X^{n_0}\gets \text{LHC}(X, n_0)$ \Comment{initial inputs, Latin hypercube over $X$} \State $\mathcal{D}^{n_0} \gets \big\{(x^i, f(x^i)) \big| x^i \in X^{n_0}\big\}$ \Comment{make initial dataset} \For{$n = n_0,\dots, N-1$} \State $\mu^n(\cdot), k^n(\cdot, \cdot) \gets \mathcal{GP}(\mathcal{D}^n, k(\cdot, \cdot))$ \Comment{construct GP model} \State ${x^{n+1}}^* \gets \argmax_{x^{n+1}}\alpha(x^{n+1}, \mu^n(\cdot), k^n(\cdot, \cdot))$ \Comment{optimize the acquisition function} \State $y^{n+1} \gets f({x^{n+1}}^*)$ \Comment{evaluate black box at next point} \State $\mathcal{D}^{n+1}\gets \mathcal{D}^n\cup\{({x^{n+1}}^*, y^{n+1})\}$ \Comment{update dataset} \EndFor \State $\mu^N(\cdot), k^N(\cdot, \cdot) \gets \mathcal{GP}(\mathcal{D}^N, k(\cdot, \cdot))$ \Comment{construct final GP model} \State \textbf{return} $x^* = \argmax_x \mu^N(x)$ \Comment{return best predicted point} \end{algorithmic} \end{algorithm} \section{A Tour of Knowledge Gradient Implementations} \label{sec:kg_background} We aim to create a simple, easy-to-use Knowledge Gradient implementation. In the following, we provide a mathematical review of existing methods, after which we present our new method as the natural next step in Section~\ref{sec:one-shot-hybrid-KG}.
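All of the implementations reviewed below only require the posterior quantities defined in the previous section. For concreteness, a minimal numpy sketch of the posterior mean and covariance equations is given here, assuming a zero prior mean (i.e., $Y^n$ has been centred) and fixed hyperparameters; names are illustrative:
\begin{verbatim}
import numpy as np

def rbf_kernel(A, B, sigma0=1.0, lengthscale=0.1):
    """Squared exponential kernel with a shared lengthscale."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / lengthscale**2
    return sigma0 * np.exp(-0.5 * d2)

def gp_posterior(X, Y, Xq, kernel=rbf_kernel, noise_var=1e-4):
    """Posterior mean mu^n(Xq) and covariance k^n(Xq, Xq).

    X: (n, D) sampled inputs, Y: (n,) centred outputs, Xq: (m, D) queries.
    """
    K = kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)            # stable alternative to inverting K
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))
    KqX = kernel(Xq, X)
    V = np.linalg.solve(L, KqX.T)
    mu = KqX @ alpha                     # posterior mean
    cov = kernel(Xq, Xq) - V.T @ V       # posterior covariance
    return mu, cov
\end{verbatim}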
Given a set of past observations $\mathcal{D}^n$ and a proposed new sampling point location $x^{n+1}$, Knowledge Gradient (KG) quantifies the value of a new hypothetical observation $y^{n+1}=f(x^{n+1})$ by the expected increase in the peak of the posterior mean \begin{equation}\label{eq:KG_global} \text{KG}(x^{n+1}) = \mathbb{E}_{y^{n+1}}\big[\max_{x'} \mu^{n+1}(x')\big| x^{n+1}\big] - \max_{x''\in X} \mu^n(x''). \end{equation} where we suppress the arguments $\mu^n(\cdot)$, $k^n(\cdot, \cdot)$ and $\mathcal{D}^n$ for brevity. Unfortunately, $\max_{x'\in X} \mu^{n+1}(x')$ and the enclosing expectation have no explicit formula and approximations are required. We emphasize here that \emph{accurate approximation is the central challenge in implementing Knowledge Gradient methods}, and it has been the focus of many prior works. These methods rely on the following ``reparameterization trick'': at time $n$, the new posterior mean is an unknown random function; however, it may be written as \begin{equation}\label{eq:mu_n1_Z} \mu^{n+1}(x) = \mu^n(x) + \tilde\sigma(x; x^{n+1})Z \end{equation} where $\tilde\sigma: X \times X \to \mathbb{R}$ is a deterministic scalar-valued function and the scalar random variable $Z\sim\mathcal{N}(0,1)$ captures the posterior predictive randomness of the yet-unobserved $y^{n+1}$; see Appendix~\ref{apndx:one_step_post_mean}. Hence one may also write \begin{equation}\label{eq:KG_global_Z} \text{KG}(x^{n+1}) = \mathbb{E}_{Z}\big[\max_{x'} \mu^n(x') + \tilde\sigma(x'; x^{n+1})Z\big] - \max_{x''\in X} \mu^n(x''). \end{equation} Moreover, by Jensen's inequality and the convexity of the $\max(\cdot)$ operator, it is easily shown that $\text{KG}(x^{n+1}) \geq 0$: there is never an \emph{expected} disadvantage to collecting more data. \subsection{Discrete Knowledge Gradient} The early KG methods for continuous spaces \cite{Frazier2009, scott2011correlated} approximated $\text{KG}(x^{n+1})$ by replacing the domain of the inner maximization from the continuous space $X$ with a finite discretization of $d$ points, $X_d\subset X$. $X_d$ may simply be a Latin hypercube design over $X$, the past sampled points $X^n$, or both. Denoting vectors $\underline{\mu}=\mu^n(X_d)\in\mathbb{R}^d$ and $\underline{\tilde\sigma}(x^{n+1}) = \tilde\sigma(X_d; x^{n+1})\in\mathbb{R}^d$, then $$ \text{KG}_d(x^{n+1}, X_d) = \mathbb{E}_Z\left[ \max \{\underline \mu + \underline{\tilde\sigma}(x^{n+1})Z \} \right] - \max \underline \mu. $$ The term $\max \{\underline \mu + \underline{\tilde\sigma}(x^{n+1})Z \}$ is a piece-wise linear function of $Z$; thus the expectation over Gaussian $Z$, and therefore $\text{KG}_d(\cdot)$, is analytically tractable. The algorithm was proposed in \cite{Frazier2008} and is provided in Appendix~\ref{apndx:discrete_KG_epigraph} for completeness. If the current best predicted point is in the discretization, $\argmax \mu^n(x) \in X_d$, the discrete Knowledge Gradient is a \emph{lower bound} of the true Knowledge Gradient $$ 0 \leq \text{KG}_d(x^{n+1}, X_d) \leq \text{KG}(x^{n+1})$$ and increasing the density of points in $X_d$ such that $X_d\to X$ tightens the bound. The REVI \cite{pearce2018continuous} and MISO \cite{poloczek2017multi} algorithms used $\text{KG}_d(\cdot)$ with 3000 uniformly distributed random points. While this method provides an analytic lower bound, it suffers from the curse of dimensionality: to be space filling, the number of points in $X_d$ must grow exponentially with the input dimension $D$.
Further, a totally random discretization is highly likely to contain many useless points in uneventful regions of the space $X$, resulting in wasted computation, while a sparse $X_d$ results in a loose, ineffective lower bound; see Figure~\ref{fig:KG_demo}, centre-left plot. \subsection{Monte-Carlo Knowledge Gradient} To avoid the curse of dimensionality, the expectation over $Z$ in Equation~\ref{eq:KG_global_Z} may be stochastically approximated by Monte-Carlo sampling \cite{wu2017discretization,wu2017bayesian}. Given $x^{n+1}$, the method samples $n_z$ standard Gaussian values, $Z_{MC}\in\mathbb{R}^{n_z}$. For each sample $Z_j \in Z_{MC}$, it constructs a corresponding posterior mean realisation, $$\mu^{n+1}_j(x) = \mu^n(x) + \tilde\sigma(x; x^{n+1})Z_j,$$ and finds the maximum with a continuous numerical {\tt Optimizer()} such as L-BFGS \cite{liu1989limited} or conjugate gradient \cite{shewchuk1994introduction} with multiple restarts. We use {\tt Optimizer()} to denote a functional taking an arbitrary function $g:X\to \mathbb{R}$ as input and returning $\max_{x\in X} g(x)$ as output. The Monte-Carlo KG is then defined as the average of the maxima over all $Z_j$ as follows \begin{align*} \text{KG}_{MC}(x^{n+1}, Z_{MC}) = \frac{1}{n_z}\sum_j \underset{x'}{\text{\tt{Optimizer}}}\big(\mu^{n+1}_j(x')\big) - \underset{x''}{\tt{Optimizer}}\big(\mu^n(x'')\big). \end{align*} Assuming {\tt Optimizer()} converges, the result is an unbiased, consistent stochastic estimate of the true Knowledge Gradient. Slightly abusing the $Z_{MC}$ notation, we have \begin{eqnarray} \mathbb{E}_{Z_{MC}|n_z}\left[\text{KG}_{MC}(x^{n+1},Z_{MC})\right] = \text{KG}(x^{n+1}), \\ \lim_{n_z\to \infty} \text{KG}_{MC}(x^{n+1},Z_{MC}) = \text{KG}(x^{n+1}). \end{eqnarray} For larger input dimension $D$, the {\tt Optimizer()} over $X\subset \mathbb{R}^D$ may simply be run for more steps (linear in $D$) to converge, thus avoiding the curse of dimensionality. Compared with Discrete KG, which discretizes the optimization over $X$ and continuously integrates over the (one-dimensional) $Z$, Monte-Carlo KG instead continuously optimizes over $X$ and discretely integrates over $Z$ with Monte-Carlo samples; see Figure~\ref{fig:KG_demo}, centre-right. However, for a good estimate, $n_z$ must be large, e.g., $n_z=1000$, and many {\tt Optimizer()} calls are required. Furthermore, if $Z_j\approx Z_{j'}$, the optimal values may be near identical and the optimizer need not be called twice. Finally, to optimize $\text{KG}_{MC}(x^{n+1}, Z_{MC})$ over $x^{n+1}$, a stochastic gradient ascent optimizer is required, e.g., Adam \cite{kingma2014adam}, and it must be set up correctly to ensure convergence. A small choice of $n_z$ or a poor inner {\tt Optimizer()} increases the bias and variance of the KG estimate. Further, repeated calls to $\text{KG}_{MC}(\cdot)$ for different values of $x^{n+1}$ can be expensive, as all the $Z_{MC}$ values are resampled and the {\tt Optimizer()} calls must be executed from scratch. \subsection{Hybrid Knowledge Gradient} The Hybrid Knowledge Gradient, first proposed in \cite{pearce2020practical}, aims to combine the best of both Discrete KG (analytic tractability, speed) and Monte-Carlo KG (scalability to higher input dimensions). Given $x^{n+1}$, a set of $n_z=5$ unique, deterministic $Z$ values is constructed from uniformly spaced Gaussian quantiles $$Z_h = \{\Phi^{-1}(0.1), \Phi^{-1}(0.3), \Phi^{-1}(0.5), \Phi^{-1}(0.7),\Phi^{-1}(0.9)\} \subset \mathbb{R}$$ where $\Phi^{-1}:[0, 1]\to \mathbb{R}$ is the inverse Gaussian cumulative distribution function.
Following Monte-Carlo KG, for each $Z_j\in Z_h$, the posterior mean realisation $\mu^{n+1}_j(x)$ is constructed and optimized with {\tt Optimizer()}; however, the resulting optimal input $x^*_j$ is stored in a set $X_{MC}$, \begin{eqnarray} X_{MC} &=& \big\{x^*_j \big| \mu^{n+1}_j(x^*_j) = \text{{\tt Optimizer}}(\mu^{n+1}_j(x')), j \in \{1,\dots,n_z\}\big\}. \end{eqnarray} Finally, following Discrete Knowledge Gradient, the optimal inputs $X_{MC}$ form the discretization used in Hybrid KG, i.e., \begin{eqnarray} \text{KG}_h(x^{n+1}) &=& \text{KG}_d(x^{n+1}, X_{MC}). \end{eqnarray} Thus Hybrid Knowledge Gradient is a deterministic, analytic, \emph{maximized} lower bound on the true Knowledge Gradient. Hybrid KG scales to higher-dimensional inputs like Monte-Carlo KG while reducing computation by using only $n_z=5$. Similar to Monte-Carlo KG, repeated calls to $\text{KG}_h(x^{n+1})$ for different $x^{n+1}$ still require executing all the {\tt Optimizer()} calls from scratch. \subsection{One-Shot Knowledge Gradient} With the goal of reducing the computation of Monte-Carlo KG, we next show, with a few changes, how to derive One-Shot KG \cite{balandat2020botorch}. If we assume we are given a set $Z_{MC}$ with corresponding optimal points $X_{MC}$, where each $Z_j\in Z_{MC}$ is paired with an $x_j^* \in X_{MC}$, the One-Shot estimate of KG is as follows, \begin{eqnarray} \text{KG}_{\text{OS}} (x^{n+1}, Z_{MC}, X_{MC}) &=& \frac{1}{n_z}\sum_j \mu^{n+1}_j(x^*_j) - \max_{x'}\mu^n(x'). \end{eqnarray} $\text{KG}_{\text{OS}}$ would be a very poor underestimate if the $x^*_j\in X_{MC}$ points were random, and when the points are all fully optimized it recovers $\text{KG}_{\text{MC}}$. In One-Shot KG, the random samples $Z_{MC}$ are fixed for each BO iteration, hence $\text{KG}_{\text{OS}}$ is deterministic. Next, in the search for $x^{n+1}$, we may \emph{simultaneously} search over $X_{MC}$; hence the $\text{KG}_{\text{OS}}$ estimate improves over the course of the search for the next candidate point $x^{n+1}$, \begin{eqnarray} {x^{n+1}}^* &=& \argmax_{x^{n+1}} \max_{X_{MC}} \text{KG}_{\text{OS}}(x^{n+1}, X_{MC}, Z_{MC}). \end{eqnarray} Equivalently, this acquisition function may be optimized with the same deterministic optimizer \begin{equation} \underset{x^{n+1},X_{MC}}{\tt Optimizer}\big(\text{KG}_{\text{OS}}(x^{n+1}, X_{MC}, Z_{MC})\big), \end{equation} where $Z_{MC}$ are frozen constant values and all the $x$ points are optimized over the same domain $X$; the final optimal ${x^{n+1}}^*$ is used as the next sample (the final optimized $X_{MC}$ is no longer explicitly required). In Monte-Carlo KG and Hybrid KG, one optimizer searches for $x^{n+1}$, and at each candidate $x^{n+1}$, nested optimizers are applied to find $X_{MC}$, even if subsequent $x^{n+1}$ are very close and the optimization of $X_{MC}$ is largely repeated. One-Shot KG optimizes both $x^{n+1}$ and $X_{MC}$ at the same time in a single optimizer, significantly reducing the computational effort to find $X_{MC}$. However, freezing $Z_{MC}$ and not ensuring $X_{MC}$ is fully converged introduces bias.
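Since the analytic expectation of the piece-wise linear epigraph is the primitive shared by Discrete KG, Hybrid KG, and the method proposed in the next section, we sketch a direct numpy transcription of the epigraph computation (Algorithm~\ref{alg:KGdisc} in the appendix); handling of duplicate gradients is added for robustness:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def discrete_kg(mu, sigma):
    """Analytic E_Z[max_i(mu_i + sigma_i Z)] - max_i(mu_i), Z ~ N(0,1).

    mu, sigma: (d,) arrays of intercepts mu^n(X_d) and gradients
    sigma-tilde(X_d; x_new)."""
    order = np.lexsort((mu, sigma))        # sort by gradient, then intercept
    mu, sigma = mu[order], sigma[order]
    keep = np.append(np.diff(sigma) > 1e-12, True)
    mu, sigma = mu[keep], sigma[keep]      # keep highest line per gradient

    I, Z = [0], [-np.inf]                  # epigraph lines and breakpoints
    for i in range(1, len(mu)):
        while True:
            j = I[-1]
            z = (mu[j] - mu[i]) / (sigma[i] - sigma[j])  # intersection
            if z <= Z[-1]:                 # line j is dominated: drop it
                I.pop(); Z.pop()
            else:
                I.append(i); Z.append(z)
                break
    Z = np.array(Z + [np.inf])
    I = np.array(I)
    A = norm.pdf(Z[1:]) - norm.pdf(Z[:-1])
    B = norm.cdf(Z[1:]) - norm.cdf(Z[:-1])
    return B @ mu[I] - A @ sigma[I] - mu.max()
\end{verbatim}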
\begin{figure} \centering \begin{tabular}{ccc} \includegraphics[height=3.5cm]{Pics/oneshot_2.pdf}& \includegraphics[height=3.5cm]{Pics/oneshot_9.pdf}& \includegraphics[height=3.5cm]{Pics/oneshot_32.pdf}\\ (a) & (b) & (c)\\ \includegraphics[height=3.5cm]{Pics/oneshot_epigraph_2.pdf}& \includegraphics[height=3.5cm]{Pics/oneshot_epigraph_9.pdf}& \includegraphics[height=3.5cm]{Pics/oneshot_epigraph_32.pdf}\\ (d) & (e) & (f)\\ \end{tabular} \caption{Illustration of the $\text{KG}_{\text{OS}}$ acquisition function optimization. (a) shows an initial sample $x^{n+1}$ (red) and set $X_{MC}$ over a black-box function landscape, with brighter colors indicating higher function values. (b) shows the resulting $x^{n+1}$ and $X_{MC}$ after applying {\tt Optimizer()} to the acquisition function, where both $x^{n+1}$ and $X_{MC}$ are optimized at the same time. (c) shows the final ${x^{n+1}}^*$ and $X_{MC}$ achieved by the optimizer. (d--f) show the surface of the maximum posterior over the set $X_{MC}$ given the values of $Z_{MC}$ (blue dots). } \end{figure} \section{One Shot Hybrid Knowledge Gradient} \label{sec:one-shot-hybrid-KG} In this work we propose a simple unification of the aforementioned innovations. We take Discrete KG and make the discretization an explicit variable to be optimized along with the next sample point, that is \begin{eqnarray} {x^{n+1}}^* = \argmax_{x^{n+1}}\max_{X_d} \text{KG}_{OSH}(x^{n+1}, X_d) \end{eqnarray} where $\text{KG}_{OSH}() = \text{KG}_d()$, but we use separate notation for clarity here. The optimization is performed over the joint domain $(x^{n+1}, X_d)\in X^{1+n_z}$. Note that neither $Z_h$ nor $Z_{MC}$ is required. This method may be viewed as Discrete KG and One-Shot KG where both tricks have been applied simultaneously: the hybrid trick, enabling $n_z=|X_d|=5$ and a tight lower-bound estimate of the true KG, and the one-shot trick, simultaneous optimization drastically reducing execution time. As Discrete KG is analytically tractable, the gradients with respect to both arguments are also analytically tractable, and hence the acquisition function may be optimized with any \emph{deterministic} gradient ascent algorithm. For a given discretization size, One-Shot Hybrid KG has almost exactly the same computational cost as Discrete KG. Both methods compute $\text{KG}_d()$ and $\nabla_{x^{n+1}}\text{KG}_d()$ for gradient ascent over $x^{n+1}$; One-Shot Hybrid KG additionally computes $\nabla_{X_d}\text{KG}_d()$ for gradient ascent over $X_d$. In practice we use PyTorch, which supports automatic differentiation via the back-propagation algorithm and GPU acceleration. \begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{Pics/KG_demo-crop.pdf} \caption{ \label{fig:KG_demo} Methods for computing $\text{KG}(x^{n+1})$ at $x^{n+1}=7$. Left: $\mu^n(x)$ and samples of $\mu^{n+1}(x)$ determined by a scalar $Z\sim N(0,1)$. Centre-left: $\text{KG}_d$ replaces $X$ with up to 3000 points $x_i\in X_d$ and $\mu^{n+1}(x_i)$ is linear in $Z$. Centre-right: $\text{KG}_{MC}$ samples up to 1000 $\mu^{n+1}(x)$ functions and maximises each of them numerically. Right: $\text{KG}_h$ samples up to 5 functions $\mu^{n+1}(x)$ and maximizes them numerically; the $\argmax$ points $x^*_1,..,x^*_5$ are used as $X_d$ in $\text{KG}_d$.
} \vspace{-0.3cm} \end{figure*} \begin{figure} \centering \begin{tabular}{ccc} \includegraphics[height=3.7cm]{Pics/hybridoneshot_2.pdf}& \includegraphics[height=3.7cm]{Pics/hybridoneshot_9.pdf}& \includegraphics[height=3.7cm]{Pics/hybridoneshot_38.pdf}\\ (a) & (b) & (c)\\ \includegraphics[height=3.7cm]{Pics/hybridoneshot_epigraph_2.pdf}& \includegraphics[height=3.7cm]{Pics/hybridoneshot_epigraph_9.pdf}& \includegraphics[height=3.7cm]{Pics/hybridoneshot_epigraph_38.pdf}\\ (d) & (e) & (f)\\ \end{tabular} \caption{Illustration of the One-Shot Hybrid KG acquisition function optimization. (a) shows an initial sample $x^{n+1}$ (red) and set $X_{MC}$ over a black-box function landscape, with brighter colors indicating higher function values. (d) shows the surface of the maximum posterior over the set $X_{MC}$ as a function of $Z$; One-Shot Hybrid KG aims to maximize the expectation of the piece-wise linear function (red). (b) shows the resulting $x^{n+1}$ and $X_{MC}$ after applying {\tt Optimizer()} to Discrete KG, where both $x^{n+1}$ and $X_{MC}$ are optimized at the same time. (c) shows the final ${x^{n+1}}^*$ and $X_{MC}$ achieved by the optimizer, with the optimized epigraph shown in (f). } \end{figure} \subsection{Theoretical Properties} As One-Shot Hybrid KG is simply an extension of Discrete KG, it inherits the theoretical properties of Discrete KG in continuous spaces previously proven in \cite{Scott2011a}. The algorithm converges in the limit of infinite budget: with infinitely many BO iterations and calls to the expensive black box, the true optimal input will be found. We only require that $\text{KG}_d(x^{n+1}, X_d) \geq 0$ for all $x^{n+1} \in X$, which is trivially satisfied by enforcing that $x^*_n = \argmax \mu^n(x)$ is included in the discretization, and thus \begin{eqnarray} \max_{X_{d}}\text{KG}_d(x^{n+1}, X_{d})&=& \max_{X_{d}} \mathbb{E}_Z\left[\max \mu^n(X_{d}\cup\{x_n^*\}) + Z \tilde\sigma(x^{n+1}, X_{d}\cup\{x_n^*\})\right] - \max\mu^n(x) \\ &\geq& \mathbb{E}_Z\left[\mu^n(x^*_n) + Z \tilde\sigma(x^{n+1}, x^*_n)\right] - \max \mu^n(x) \\ &=& \mu^n(x^*_n) + \mathbb{E}[Z]\tilde\sigma(x^{n+1}, x^*_n) - \max \mu^n(x) \\ &=& 0. \end{eqnarray} The equality holds when $k^n(x^{n+1}, x) = c$ (typically $c=0$), in which case there is no benefit in sampling $x^{n+1}$; if this equality holds for all $x^{n+1}\in X$, it can be shown that the true optimal input is known. Further details can be found in \cite{poloczek2017multi, pearce2022bayesian}. Also inherited from Discrete KG is the consistency of the One-Shot Hybrid KG estimator as the discretization size increases to infinity: increasing the discretization size increases the accuracy of the KG estimate. Let $X_d = \{x_i\,|\,x_i\sim U(X), i=1,\dots,d\}$ be a uniformly randomly generated discretization of $X$ with $d$ points; then we have that \begin{eqnarray} \lim_{d\to\infty} \text{KG}_{OSH}(x^{n+1}, X_d) = \lim_{d\to\infty} \text{KG}_{d}(x^{n+1}, X_d) = \text{KG}(x^{n+1}). \end{eqnarray} While this result may be clear, its practical implications are twofold. Firstly, $d$ is an algorithm hyperparameter. One may choose to increase $d$ and improve the accuracy (and cost) of each $\text{KG}_{OSH}()$ call, or alternatively, one may run {\tt Optimizer()} for more iterations so that even for small $d$ the sparse $X_d$ converges towards an optimum.
In contrast, for One-Shot KG, where the \emph{first} call to $\text{KG}_{OS}()$ with random $X_{MC}$ is a poor estimate of the true KG regardless of $n_z$, increasing this hyperparameter will not increase the accuracy of the KG estimate; the algorithm requires {\tt Optimizer()}~to be run for multiple iterations for the KG estimate to become more accurate. Hence One-Shot Hybrid KG may be somewhat less sensitive to hyperparameter settings. In our experiments, we run the methods for a range of hyperparameter settings and compare final performance; however, creating a strictly controlled experiment for comparison is a non-trivial task which we leave to future work. \section{Numerical Experiments}\label{sec:numerical} In this section we compare all KG implementations under the following acquisition function parameters: \begin{itemize} \item Discrete Knowledge Gradient (DISC): We test this approach with 3, 10, and 1000 quasi-random uniformly distributed points. \item Monte-Carlo Knowledge Gradient (MC): We generate $n_{z}=$ 3 and 10 quasi-random standard Gaussian values. \item Hybrid Knowledge Gradient (HYBRID): We generate $n_{z}=$ 3 and 10 uniformly spaced Gaussian quantiles. \item One-Shot Knowledge Gradient (ONESHOT): We generate $n_{z}=$ 3, 10, 128, and 500 quasi-random standard Gaussian values. \item One-Shot Hybrid Knowledge Gradient (ONESHOT-HYBRID): We optimize over a discretization size of 3 and 10. \end{itemize} For each method that depends on quasi-random samples, we fix the samples at each BO iteration. The resulting acquisition function is an entirely deterministic optimization problem and may be optimized using a deterministic optimizer. For One-Shot Knowledge Gradient, we used the implementation available in BoTorch \cite{balandat2020botorch}. The remaining algorithms were implemented from scratch. \subsection{GP-Generated Experiments} We consider 100 test functions generated from a Gaussian process with a squared exponential kernel and hyperparameters $l_{X} = 0.1$, $\sigma^2_{0}=1$. All functions are generated on a continuous space $X = [0,1]^{D}$ with dimensionality $D\in\{2,6\}$ and without observation noise. The total budget of evaluations is set to $B=100$ and the results over the 100 test functions are aggregated to obtain confidence intervals (CIs). To obtain the wall-clock time, we measure the acquisition function evaluation time for each generated test function immediately after the initial design is evaluated. We initially train the Gaussian process model on a set of $2(D + 1)$ initial black-box evaluations from the overall budget, using a Latin hypercube (LHS) `space-filling' experimental design. Furthermore, we assume that the hyperparameters are known throughout the whole run of the algorithm to avoid the issue of model mismatch. Fig.~\ref{fig:OC_vs_ClockTime} shows the opportunity cost (OC) once the budget $B$ is depleted and the evaluation time on a logarithmic scale. In both figures, DISC presents performance close to random sampling when sparse discretizations are employed. A moderately large discretization size (1000) must be used to obtain results competitive with the other methods. Notably, MC avoids the curse of dimensionality and drastically reduces the discretization size required compared to DISC. However, a small discretization size ($n_{z} =3$) produces high-variance estimates of KG, which reduces its performance.
Furthermore, optimizing the discretization requires solving $n_{z}$ sequential inner optimization problems at each acquisition function call, which drastically increases the wall-clock time. The HYBRID approximation improves over MC by generating a low-variance approximation of KG, which results in superior performance when a small discretization is considered. However, HYBRID shows a similar evaluation time, driven by solving all inner optimization problems sequentially. On the other hand, ONESHOT avoids this problem by jointly optimizing the discretization and the new solution. This results in a considerable decrease in the acquisition evaluation time; however, similar to DISC, ONESHOT relies on a moderately large discretization size to achieve competitive results. Lastly, the newly proposed ONESHOT-HYBRID achieves a computational time comparable to DISC with competitive performance for both small and larger discretization sizes. \begin{figure} \centering \begin{tabular}{cc} Dim: 2 & Dim: 6 \\ \includegraphics[height=5cm]{Pics/OC_vs_time_dim2_ls_0.1.pdf}& \includegraphics[height=5cm]{Pics/OC_vs_time_dim6_ls_0.1.pdf}\\ (a) & (b)\\ \end{tabular} \begin{tabular}{c} \includegraphics[height=0.7cm]{Pics/OC_vs_time_legend.pdf}\\ \end{tabular} \caption{Final log OC vs log wall-clock time (seconds) for (a) 2 design dimensions and (b) 6 design dimensions. In both plots, the mean performance of random sampling is shown as a grey horizontal line. All results are averaged over 100 independent test functions and both figures show the mean and 95\% confidence intervals for the OC.} \label{fig:OC_vs_ClockTime} \end{figure} \section{Conclusion}\label{sec:conclusion} In this paper we considered the problem of implementing a fast and accurate approximation of KG. We proposed One-Shot Hybrid Knowledge Gradient, a fast method to compute KG that scales to higher dimensions. We empirically demonstrated the effectiveness of the proposed approach: One-Shot Hybrid KG is both fast to compute and preserves its performance even with small discretization sizes in higher dimensions. As future work, we plan to extend the algorithm to handle constraints and batch acquisition, i.e., settings where several solutions are to be selected in every iteration. \section*{Acknowledgements} The first author would like to acknowledge funding from ESTECO SpA and EPSRC through grant EP/L015374/1. \section{Gaussian Process Hyperparameter Estimation} \label{apndx:gp_hyperparams} \section{One Step Look-Ahead Posterior Mean Derivation}\label{apndx:one_step_post_mean} At iteration $n$ of the optimization, let the training inputs be $X^n=\left(x^1,...,x^n\right)$ and the training outputs $Y^n = (y^1,...,y^n)$, and let the prior mean and kernel functions be $\mu^0(x):X\to \mathbb{R}$ and $k^0(x,x'): X \times X \to \mathbb{R}$. Finally, let the new sample point be $x^{n+1}$. Updating the mean function with data from step $0$ to step $n$ gives \begin{eqnarray} \mu^n(x) &=& \mu^0(x) + k^0(x, X^n) K^{-1}\left(Y^n - \mu^0(X^n)\right ) \end{eqnarray} where $K=k^0(X^n, X^n)+\sigma_\epsilon^2I$. A simple change of indices from $0\to n$ and $n \to n+1$ yields the one-step updated posterior mean \begin{equation} \label{eqn:one-mean} \mu^{n+1}(x) = \mu^n(x) + \frac{k^n(x, x^{n+1})}{k^n(x^{n+1}, x^{n+1})+\sigma_\epsilon^2} \left(y^{n+1} - \mu^n(x^{n+1})\right ).
\end{equation} which contains the random $y^{n+1}$ with predictive distribution \begin{equation} \mathbb{P}[y^{n+1}|x^{n+1}, X^n, Y^n] = N(\mu^n(x^{n+1}), k^n(x^{n+1}, x^{n+1})+\sigma_\epsilon^2). \end{equation} Hence we may factorise the one-step look-ahead posterior mean expression as follows \begin{eqnarray} \mu^{n+1}(x) &=& \mu^n(x) + % k^n(x, x^{n+1})\frac{1}{\underbrace{\sqrt{ k^n(x^{n+1}, x^{n+1})+\sigma_\epsilon^2} }_{\text{standard deviation of $y^{n+1}$}}} % \underbrace{\frac{\left(y^{n+1} - \mu^n(x^{n+1})\right )} {\sqrt{ k^n(x^{n+1}, x^{n+1})+\sigma_\epsilon^2}} }_{\text{Z-score of $y^{n+1}$}} \\ % &=& \mu^n(x) + \frac{ k^n(x, x^{n+1}) }{ \sqrt{ k^n(x^{n+1}, x^{n+1})+\sigma_\epsilon^2} }Z \label{eq:factorized_new_post_mean} \\ &=& \mu^n(x) + \tilde\sigma(x, x^{n+1}) Z \end{eqnarray} where the left factor is deterministic and the right factor is the (at time $n$) stochastic Z-score of the new $y^{n+1}$ value. One may simply sample $Z\sim N(0,1)$ values and compute Equation \ref{eq:factorized_new_post_mean} to generate posterior mean functions. \section{Discrete KG Algorithm}\label{apndx:discrete_KG_epigraph} \begin{algorithm}[!h] \caption{Knowledge Gradient by discretization. This algorithm takes as input a set of linear functions parameterised by a vector of intercepts $\underline{\mu}$ and a vector of gradients $\underline{\sigma}$. It then computes the intersections of the piece-wise linear epigraph (ceiling) of the functions and the expectation of the output of the function given Gaussian input. Vector indices are assumed to start from 0. \label{alg:KGdisc}} \begin{algorithmic} \Require $\underline{\mu}$, $\underline{\sigma} \in \mathbb{R}^{n_A}$ \State $O \gets \text{order}(\underline{\sigma})$ \Comment{get sorting indices of increasing $\underline{\sigma}$} \State $\underline{\mu} \gets \underline{\mu}[O]$, $\underline{\sigma} \gets \underline{\sigma}[O]$ \Comment{arrange elements} \State $I\gets[0,1]$ \Comment{indices of elements in the epigraph} \State $\underline{\tilde{Z}} \gets [-\infty, \frac{\mu_0 - \mu_1}{\sigma_1 - \sigma_0}]$ \Comment{z-scores of intersections on the epigraph} \For{$i=2$ \textbf{to} $n_A-1$} \State ($\star$) \State $j\gets last(I)$ \State $z\gets \frac{\mu_i - \mu_j}{\sigma_j - \sigma_i} $ \If {$z<last(\underline{\tilde{Z}})$} \State Delete last element of $I$ and of $\underline{\tilde{Z}}$ \State Return to ($\star$) \EndIf \State Add $i$ to end of $I$ and $z$ to $\underline{\tilde{Z}}$ \EndFor \State $\underline{\tilde{Z}}\gets [\underline{\tilde{Z}},\infty]$ \State $\underline{A} \gets \phi(\underline{\tilde{Z}}[1:]) - \phi(\underline{\tilde{Z}}[:-1])$ \Comment{assuming python indexing} \State $\underline{B} \gets \Phi(\underline{\tilde{Z}}[1:]) - \Phi(\underline{\tilde{Z}}[:-1])$ \State $\text{KG} \gets \underline{B}^T\underline{\mu}[I] - \underline{A}^T\underline{\sigma}[I] - \max \underline{\mu}$ \Comment{compute expectation} \State \Return KG \end{algorithmic} \end{algorithm} \section{Related Work} \label{sec:related_work} Bayesian optimization (BO) has gained wide popularity, especially for problems involving expensive black-box functions. BO constructs a surrogate model, usually a Gaussian process (GP), after collecting some initial data.
Then, it iteratively uses an acquisition function to decide what data would be most valuable to collect next, explicitly balancing exploration (collecting more information about yet unexplored areas) and exploitation (evaluating solutions that are predicted to be good). After sampling the next solution, the Gaussian process model is updated with the new information and the process is repeated until the available budget of evaluations has been consumed. For a comprehensive introduction see \cite{frazier2018tutorial} and \cite{deFreitas2016}. Several acquisition functions have been proposed in the literature for single-objective problems. The most popular is the Expected Improvement (EI) of \cite{jones1998efficient}, which measures the expected amount by which a new output $f(x)$ improves over the current best sampled design output. \cite{srinivas2010} proposes to optimize an optimistic upper-confidence bound (UCB) acquisition function, where the benefit of each point $x$ is quantified using a quantile of the distribution of $f(x)$. Thompson sampling (TS) \cite{Thompson1933} is a randomized strategy which samples a reward function from the posterior and selects the arm with the highest simulated reward. In a continuous design space, TS corresponds to sampling the objective function from the GP posterior and then obtaining the sample that maximizes this GP realization. This acquisition function is simple to implement and may be applied to noisy objective functions. \cite{Hernandez-Lobato2014} considers an information-based acquisition function where a GP over outputs generates a distribution over the input domain that models the probability of any input being the maximizer. This acquisition criterion involves computing the expected entropy reduction of the global solution and may be used for noisy observations. The Knowledge Gradient (KG) acquisition function (\cite{frazier2009knowledge}) is another myopic acquisition function that aims to maximize the GP-predicted optimal performance after one new sample. KG is derived by reconsidering the assumption made by EI that the Bayesian optimization algorithm returns only a previously evaluated design as a final solution. This can be considered a sensible assumption if the decision maker is highly risk-averse and evaluations are noise-free; but if the decision-maker is willing to tolerate some risk, then we may report a design that has uncertainty attached to it. Moreover, if evaluations have noise, then the final recommended solution is necessarily uncertain, and any returned solution would require a substantial number of re-evaluations to determine its quality. Therefore, KG replaces this assumption by allowing the algorithm to return any solution according to the GP-predicted performance, even if it has not been previously evaluated. Although KG has demonstrated superior empirical performance compared to other acquisition functions (\cite{Picheny2013}), especially for larger levels of noise, KG is not available in closed form. Therefore, many different approximations have been proposed. \cite{frazier2009knowledge} propose discretising the design space and solving a series of linear problems. A more recent approach involves Monte-Carlo sampling of the observed value at design $x^{n+1}$, and solving an inner optimization problem for each sample to identify the best posterior mean \cite{wu2017discretization}.
\cite{pearce2020practical} proposed a hybrid of both approaches that consists of obtaining high-value points from the predictive posterior GP mean to serve as a discretization. Combining both approaches allows one to leverage the scalability of the Monte-Carlo based acquisition function and the computational performance of discretising the design space. Lastly, the ``one-shot'' formulation (\cite{balandat2020botorch}) treats optimizing KG as a deterministic optimization problem. It involves drawing fixed base samples for the inner optimization inside the expectation. Then, the discretization and the sample decision are jointly optimized. In the end, the sample decision is recommended and the discretization is discarded as an optimization by-product. In summary, EI and UCB are fast but comparatively simple, whereas KG, entropy search (ES), and max-value entropy search (MES) are more sophisticated yet slow to compute; the hybrid KG proposed here aims to narrow this gap by combining the sophistication of KG with a computational cost much closer to that of the simpler acquisition functions.
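As a concrete companion to Algorithm~\ref{alg:KGdisc}, a minimal NumPy/SciPy sketch of the epigraph computation could read as follows; the function name and the assumption of strictly distinct gradients after sorting are ours (ties in $\underline{\sigma}$ would require a deduplication step that we omit):

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def discrete_kg(mu, sigma):
    # E[max_i (mu_i + sigma_i * Z)] - max_i mu_i, with Z ~ N(0, 1).
    order = np.argsort(sigma)              # sort by increasing gradient
    mu, sigma = mu[order], sigma[order]
    I, Z = [0], [-np.inf]                  # epigraph lines, left z-scores
    for i in range(1, len(mu)):
        while True:
            j = I[-1]
            z = (mu[j] - mu[i]) / (sigma[i] - sigma[j])   # intersection
            if z <= Z[-1]:                 # line j drops off the epigraph
                I.pop(); Z.pop()
            else:
                break
        I.append(i); Z.append(z)
    Z = np.asarray(Z + [np.inf])
    A = norm.pdf(Z[1:]) - norm.pdf(Z[:-1])
    B = norm.cdf(Z[1:]) - norm.cdf(Z[:-1])
    return B @ mu[I] - A @ sigma[I] - mu.max()
\end{verbatim}

For instance, \texttt{discrete\_kg(np.zeros(2), np.array([0., 1.]))} returns $\mathbb{E}[\max(0,Z)] = 1/\sqrt{2\pi} \approx 0.3989$, matching the closed-form value for two lines.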
\section{Introduction} Real-world engineering systems are unavoidably subject to uncertainty, arising from various sources: material properties, geometric parameters, external perturbations and so on. In practice, it is vital to characterise and quantify the impact of the uncertainties on the system performance or reliability, which constitutes a central task in the field of uncertainty quantification (UQ). Mathematical models and simulations are important tools to assess how engineering systems are impacted by uncertainty. Within these, the system performance or reliability is often characterised by a scalar parameter $y$, which we will refer to as the \emph{performance variable}. This performance variable can be expressed by a performance function $y = g(\mathbf{x})$, where $\mathbf{x}$ is a multi-dimensional random variable representing all the uncertainty factors affecting the system; the performance function is usually not of analytical form, and needs to be evaluated by simulating the underlying mathematical model. A typical example is in structural engineering, where the performance variable $y$ is the deformation of some key components. The distribution of this performance variable is important in many UQ problems, ranging from risk management to utility optimisation. A challenge here is that these UQ problems may demand various statistical information about the performance $y$: for example, in robust optimisation, the interests are predominantly in the mean and variance~\cite{du2004sequential}; in risk management, one is interested in the tail probability as well as some extreme quantiles~\cite{rockafellar2000optimization}; and in utility optimisation, the complete distribution of the performance parameter is required~\cite{hazelrigg1998framework}. To this end, methods that can efficiently reconstruct the probability distribution of the performance variable directly are strongly desirable. In principle the distribution of $y$ can be estimated by standard Monte Carlo (MC) simulations; however, MC can be prohibitively expensive for systems with complex mathematical models. In our previous works~\cite{wu2016surrogate,chen2017subset}, we proposed using the Multicanonical Monte Carlo (MMC) method for computing the distribution of $y$. The MMC method is a special adaptive importance sampling (IS) scheme, which was initially developed by Berg and Neuhaus~\cite{berg1991multicanonical,berg1992multicanonical} to explore the energy landscape of a given physical system. In the MMC method, one splits the state space of the performance parameter of interest into a set of small bins and then iteratively constructs a so-called flat-histogram distribution that assigns equal probability to each of the bins. This allows for the construction of the entire distribution function of the performance parameter, significantly more efficiently than using standard MC. There are other advanced MC techniques developed in reliability engineering, such as the cross-entropy method~\cite{li2011efficient}, subset simulation~\cite{AuB99}, and sequential Monte Carlo~\cite{cerou2012sequential}. These methods are designed to provide a variance-reduced estimator for a specific quantity associated with the distribution of $y$, such as the probability of a given random event, rather than reconstructing the distribution itself. A key characteristic of MMC is that, within each iteration, samples are drawn from an IS distribution in a nonstandard form, which is usually done via Markov chain Monte Carlo (MCMC).
MCMC is inherently serial \cite{hafych2022parallelizing}, in that it relies on the convergence of a single Markov chain to its stationary distribution, and therefore often struggles with parallelism. As a result, the MMC method implemented with MCMC (referred to as MMC-MCMC hereafter) cannot take advantage of high-powered parallel computing. There are further limitations to MCMC - detailed in Section \ref{MCMCvSMCS} - which reduce the overall efficiency of the MMC-MCMC method. We propose using an alternative sampling method, namely the Sequential Monte Carlo sampler (SMCS), to draw samples from the IS distributions. The SMCS method, first developed in \cite{del2006sequential}, can fulfil the same role as MCMC in that, by conducting sequential IS over a sequence of intermediate distributions, it can generate (weighted) samples from an arbitrary target distribution. The reason that we choose to implement MMC with the SMCS method is two-fold: first, since SMCS is essentially an IS scheme, it is easily parallelisable; second, SMCS can take advantage of a sequence of intermediate distributions, allowing it to be effectively integrated into the MMC scheme. Both points will be elaborated on later. The rest of the paper is organized as follows. In Section \ref{Section:MMC}, we present the Multicanonical Monte Carlo method and, in Section \ref{SMCSDetail}, the Sequential Monte Carlo sampler. We bring these techniques together in Section \ref{Section:MMCSMCS} to present the proposed \emph{Multicanonical Sequential Monte Carlo Sampler} and then apply this to various numerical examples in Section \ref{Section:NumEx}. Finally, Section \ref{Section:Conclusion} provides concluding remarks. \section{Multicanonical Monte Carlo method}\label{Section:MMC} \subsection{Problem setup and the Monte Carlo estimation} We start with a generic setup of the problems considered here. Let $\mathbf{x}$ be a $d$-dimensional random vector following the distribution $p(\mathbf{x})$, and let $y$ be a scalar variable characterised by a function $y = g(\mathbf{x})$. We want to determine the probability density function (PDF) of $y$, given by $\pi(y)$, where we assume that both $\mathbf{x}$ and $y$ are continuous random variables. We now discuss how to estimate the PDF using the standard MC simulation. For the sake of convenience, we assume that $\pi(y)$ has a bounded support $R_y=[a,b]$; if the support of $\pi(y)$ is not bounded, we choose an interval $[a,b]$ sufficiently large that $\mathbb{P}(y\in[a,b])\approx1$. We first decompose $R_y$ into $M$ bins of equal width $\Delta$ centred at the discrete values $\{b_1,...,b_M\}$, and define the $i$-th bin as the interval $B_i = [b_i - \Delta/2, b_i + \Delta/2]$. This binning implicitly defines a partition of the input space $X$ into $M$ domains $\{D_i\}_{i=1}^{M}$, where \begin{equation} D_i = \{\mathbf{x} \in X : g(\mathbf{x}) \in B_i\} \end{equation} is the domain in $X$ that maps into the $i$-th bin $B_i$ (see Fig.~\ref{fig:MultiBin}). While the $B_i$ are simple intervals, the domains $D_i$ are multidimensional regions with possibly tortuous topologies. Therefore, an indicator function is used to classify whether a given $\mathbf{x}$-value is in the domain $D_i$ or not. Formally, the indicator function is defined as \begin{equation} I_{D_i}(\mathbf{x}) = \begin{cases} 1, & \text{if $\mathbf{x} \in D_i$};\\ 0, & \text{otherwise} \end{cases} \end{equation} where the condition $\mathbf{x} \in D_i$ is equivalent to $y = g(\mathbf{x}) \in B_i$.
By using this indicator function, the probability that $y$ is in the $i$-th bin, i.e. $P_i = \PR\{y \in B_i\}$, can be written as an integral over the input space: \begin{equation} P_i = \int_{D_i}p(\mathbf{x})d\textbf{x} = \int I_{D_i}(\mathbf{x})p(\mathbf{x})d\textbf{x} = \E[I_{D_i}(\mathbf{x})]. \label{Pi_estimator} \end{equation} We can estimate $P_i$ via a standard MC simulation. Namely, we draw $N$ i.i.d. samples $\{\mathbf{x}^1,...,\mathbf{x}^N\}$ from the distribution $p(\mathbf{x})$, and calculate the MC estimator of $P_i$ as \begin{equation} \hat{P}_i^{MC} = \frac{1}{N} \sum_{j=1}^{N} I_{D_i}(\mathbf{x}^j) = \frac{N_i}{N},\quad \mathrm{for}\,\, i=1,\,...,M, \label{MC_estimator} \end{equation} where $N_i$ is the number of samples that fall in bin $B_i$. Once we have obtained the estimates of $\{P_i\}_{i=1}^M$, the PDF of $y$ at a point $y_i\in B_i$ - for a sufficiently small $\Delta$ - can be calculated as $\pi(y_i) \approx P_i / \Delta$. \begin{figure} \centering \includegraphics[width=100mm]{MulticanonicalMapping} \caption{Schematic illustration of the connection between $B_i$ and $D_i$. This figure is reprinted from \cite{wu2016surrogate}.} \label{fig:MultiBin} \end{figure} \subsection{Flat Histogram Importance Sampling} The MC approach can be improved through the use of importance sampling. Here IS is used to artificially increase the number of samples falling in the tail bins of the histogram. Given an IS distribution $q(\mathbf{x})$, Eq. \ref{Pi_estimator} can be re-written as \begin{equation} P_i = \int I_{D_i}(\mathbf{x})\left[\frac{p(\mathbf{x})}{q(\mathbf{x})}\right]q(\mathbf{x})d\textbf{x} = \E_q[I_{D_i}(\mathbf{x})w(\mathbf{x})] \end{equation} where $w(\mathbf{x}) = p(\mathbf{x})/q(\mathbf{x})$ is the IS weight and $\E_q$ indicates expectation with respect to the IS distribution $q(\mathbf{x})$. The IS estimator for $P_i$ can then be written as \begin{equation} \hat{P}_i^{IS} = \frac{1}{N} \sum_{j=1}^{N} I_{D_i}(\mathbf{x}^j) w(\mathbf{x}^j) \label{IS_estimator} \end{equation} for each bin $i=1,...,M$. As is well known, key to the successful implementation of IS is identifying a good IS distribution $q(\mathbf{x})$, which is particularly challenging for the present problem, as we are interested in multiple estimates (i.e. $P_1,\,...,\,P_M$) rather than a single one, as in conventional IS problems. The solution provided by MMC is to use the so-called \emph{uniform weight flat-histogram (UW-FH)} IS distribution. The UW-FH IS distribution is designed to achieve the following two goals. First, it should allocate the same probability to each bin, i.e. assuming $\mathbf{x}\sim q(\mathbf{x})$, $$P_i^{\ast}:=\mathbb{P}(y=g(\mathbf{x})\in B_i) = 1/M,$$ for all $i$. Intuitively, this property allows all bins to be equally visited by the samples generated from the IS distribution. Second, it should assign a constant weight to all samples falling in the same bin, that is, $w(\mathbf{x}) = \Theta_{i}$ for all $\mathbf{x} \in D_i$, where $\Theta_i$ is a positive constant. Loosely speaking, the second property ensures that all samples falling in the same bin are equally good. The UW-FH distribution can be expressed in the form \begin{equation}\label{e:uwfh} q(\mathbf{x}) \propto \begin{cases} \frac{p(\mathbf{x})}{c_\Theta\Theta(\mathbf{x})}, & \mathbf{x} \in D,\\ 0, & \mathbf{x} \notin D, \end{cases} \end{equation} where $\Theta(\mathbf{x}) = \Theta_{i}$ for $ \mathbf{x} \in D_i,\; i = 1,...,M$, and $c_\Theta$ is a normalizing constant.
It is easy to see that \begin{equation} P_i^{\ast} = \int_{D_i} q(\mathbf{x}) d\mathbf{x} =\frac{\int_{D_i} p(\mathbf{x})d\mathbf{x}}{c_\Theta\Theta_i} = \frac{P_i}{c_\Theta\Theta_i}. \label{biasing_PMF} \end{equation} Recall that $P_i^*=1/M$ for all $i$, so it follows that $\Theta_i \propto P_i$, i.e. $\Theta_i$ is proportional to the sought probability $P_i$, and $c_{\Theta} = \sum_{i=1}^{M} \frac{P_i}{\Theta_i}$. \subsection{Multicanonical Monte Carlo} The UW-FH distribution, given by Eq.~\eqref{e:uwfh}, cannot be used directly as $\Theta_{i}$ depends on the sought-after unknown $P_i$. The MMC method iteratively addresses this, starting from the original input PDF $p(\mathbf{x})$. Simply put, starting with $q_{0}(\mathbf{x})$ and $\Theta_{0,i} = p$ for all $i = 1,...,M$, where $p = \sum_{i=1}^{M}P_i$, the MMC method iteratively constructs a sequence of distributions (for $t\geq1$) \begin{equation}\label{e:qt} q_t(\mathbf{x}) \propto \begin{cases} \frac{p(\mathbf{x})}{c_t\Theta_{t}(\mathbf{x})}, & \mathbf{x} \in D;\\ 0, & \mathbf{x} \notin D. \end{cases} \end{equation} where $\Theta_t(\mathbf{x}) = \Theta_{t,i}$ for $\mathbf{x} \in D_i$ and $c_t$ is the normalizing constant for $q_t$. Ideally, we want to construct $q_t$ in such a way that it converges to the actual UW-FH distribution as $t$ increases. The key here is to estimate the values of $\{ \Theta_{t,i}\}_{i=1}^M$. It is easy to see that when $q_t$ is used as the IS distribution, we have $P_i = c_t P_{i}^{\ast}\Theta_{t,i}$. That is, in the $t$-th iteration, one draws $N$ samples $\{\mathbf{x}^j\}^N_{j = 1}$ from the current IS distribution $q_t(\mathbf{x})$, then updates $\{\Theta_{t+1,i}\}_{i=1}^M$ using the following formulas, \begin{subequations} \label{e:params} \begin{gather} \hat{H}_{t,i} = \frac{N_{t,i}^{\ast}}{N}\label{e:Hti}\\ P_{t,i} = \hat{H}_{t,i} \; \Theta_{t,i}\label{e:Pti}\\ \Theta_{t+1,i} = P_{t,i} \end{gather} \end{subequations} where $N_{t,i}^{\ast}$ is the number of samples falling into region $D_i$ in the $t$-th iteration. Note that in Eq.~\eqref{e:Pti} we neglect the normalizing constant $c_t$ as it is not needed in the algorithm, which will become clear later. The process is then repeated until the resulting histogram is sufficiently ``flat'' (see e.g. \cite{iba2014multicanonical}). \subsection{The limitation of MCMC} \label{MCMCvSMCS} To implement the MMC method, one must be able to generate samples from the IS distribution $q_t(\cdot)$ at each iteration. Typically, this is done using Markov chain Monte Carlo (MCMC). Simply speaking, MCMC constructs a Markov chain that converges to the target distribution. It is convenient to use as it only requires the ability to evaluate the target PDF up to a normalizing constant (and therefore the knowledge of $c_t$ in Eq.~\eqref{e:qt} is not needed). The core of MCMC is to construct a single Markov chain converging to its stationary distribution, which often takes a very large number of iterations (known as the burn-in period) to be achieved. The process cannot be easily accelerated by parallel processing. We note here that there are some MCMC variants, e.g. \cite{vanderwerken2013parallel}, that attempt to exploit parallel implementation; however, to the best of our knowledge, none of these methods can take full advantage of modern parallel computing power. For example, multi-chain MCMC algorithms can be implemented in parallel, but each single chain still requires a long burn-in period before it converges to the target distribution.
As a result, MMC-MCMC cannot fully exploit the potential provided by the high-performance parallel computing available nowadays. In this work, we provide an alternative implementation of MMC, based on the sequential Monte Carlo sampler. \section{Sequential Monte Carlo sampler} \label{SMCSDetail} First proposed in \cite{del2006sequential}, SMCS is an IS method for drawing samples from a sequence of distributions $\{q_t(\cdot)\}_{t=1}^{T}$. It is a generalisation of the particle filter \cite{arulampalam2002tutorial}, where weighted samples are generated in a sequential manner. Several extensions to this method have been proposed, e.g. \cite{beskos2017multilevel,heng2020controlled,green2022increasing,south2019sequential}, with the latest advances being summarised in two recent reviews \cite{chopin2020introduction, dai2020invitation}. Suppose we have samples following the distribution $q_{t-1}(\cdot)$ but want them to follow $q_t(\cdot)$ instead; SMCS achieves precisely this. First, a forward kernel is applied to each of the current samples - sometimes with an acceptance criterion - and then a weight is calculated for each new sample. Finally, if the effective sample size across all the samples is below a certain threshold (usually less than half the total number of samples), the proposed samples are resampled. These new weighted samples follow the distribution $q_t(\cdot)$. We present the SMCS method in a recursive formulation, largely following the presentation of \cite{del2006sequential} and \cite{wu2020ensemble}. Suppose that at time $t-1$, we have an IS distribution $\gamma_{t-1}(\mathbf{x}_{t-1})$, from which we have or can generate an ensemble of $N$ samples $\{\mathbf{x}^j_{t-1}\}_{j=1}^N$. To implement SMCS, we first choose two conditional distributions $K_t(\cdot|\mathbf{x}_{t-1})$ and $L_{t-1}(\cdot|\mathbf{x}_t)$, referred to as the forward and backward kernels, respectively. Using $L_{t-1}(\cdot|\mathbf{x}_{t})$, we are able to construct a joint distribution of $\mathbf{x}_{t-1}$ and $\mathbf{x}_t$ in the form of \begin{equation} r_t(\mathbf{x}_{t-1}, \mathbf{x}_t) = q_{t}(\mathbf{x}_t)L_{t-1}(\mathbf{x}_{t-1}|\mathbf{x}_{t}) \end{equation} such that the marginal distribution of $r_t(\mathbf{x}_{t-1}, \mathbf{x}_t)$ over $\mathbf{x}_{t-1}$ is $q_t(\mathbf{x}_t)$. Now, using $\gamma_{t-1}(\mathbf{x}_{t-1})$ and the forward kernel $K_t(\mathbf{x}_t|\mathbf{x}_{t-1})$, we can construct an IS distribution for $r_t(\mathbf{x}_{t-1}, \mathbf{x}_t)$ in the form of \begin{equation} \label{SMCS_3.2} \gamma(\mathbf{x}_{t-1}, \mathbf{x}_t) = \gamma_{t-1}(\mathbf{x}_{t-1})K_t(\mathbf{x}_t|\mathbf{x}_{t-1}). \end{equation} One can draw samples from this joint IS distribution $\gamma(\mathbf{x}_{t-1}, \mathbf{x}_t)$ using $\{\mathbf{x}^j_{t-1}\}_{j=1}^N$ and the forward kernel $K_t$; let $\{(\mathbf{x}^{j}_{t-1},\mathbf{x}^j_t)\}^{N}_{j=1}$ be an ensemble drawn from $\gamma(\mathbf{x}_{t-1}, \mathbf{x}_t)$.
The corresponding weights are computed as \begin{subequations} \label{SMCS_weight_std} \begin{equation} \begin{aligned} \label{SMCS_weight_full} w_t(\mathbf{x}_{t-1:t}) &= \frac{r_t(\mathbf{x}_{t-1}, \mathbf{x}_t)}{\gamma(\mathbf{x}_{t-1}, \mathbf{x}_t)} = \frac{q_{t}(\mathbf{x}_t)\;L_{t-1}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}{\gamma_{t-1}(\mathbf{x}_{t-1})\;K_t(\mathbf{x}_t|\mathbf{x}_{t-1})}\\ &= w_{t-1}(\mathbf{x}_{t-1})\alpha_t(\mathbf{x}_{t-1},\mathbf{x}_t) \end{aligned} \end{equation} where \begin{equation} \begin{aligned} \label{SMCS_weight_split} w_{t-1}(\mathbf{x}_{t-1}) &= \frac{q_{t-1}(\mathbf{x}_{t-1})}{\gamma_{t-1}(\mathbf{x}_{t-1})},\\ \alpha_t(\mathbf{x}_{t-1},\mathbf{x}_t) &= \frac{q_{t}(\mathbf{x}_t)\;L_{t-1}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}{q_{t-1}(\mathbf{x}_{t-1})\;K_t(\mathbf{x}_t|\mathbf{x}_{t-1})}. \end{aligned} \end{equation} \end{subequations} Thus the weighted ensemble $\{\mathbf{x}^{j}_{t-1:t},w^{j}_{t}\}^{N}_{j=1}$ follows the joint distribution $r_t(\mathbf{x}_{t-1:t})$, and consequently $\{\mathbf{x}^{j}_{t},w^{j}_{t}\}^{N}_{j=1}$ follows the marginal distribution $q_t$. By repeating this procedure we can obtain weighted samples from the sequence of distributions $\{q_t\}_{t=1}^T$. For the SMCS method, the choice of forward and backward kernels is essential. While noting that there are a number of existing methods for determining the forward kernel, we adopt the MCMC kernel proposed in \cite{del2006sequential}, which, as the name suggests, is closely related to the Metropolis step in MCMC. Specifically, the forward kernel (more precisely, the process for generating samples from the forward kernel) is constructed as follows. We choose a proposal distribution $k(\-x_t|\-x_{t-1})$; with a sample from the previous iteration $\-x^j_{t-1}$, we draw a sample $\-x^*_t$ from $k(\-x_t|\-x^j_{t-1})$, and then accept (or reject) $\-x^*_t$ according to the following acceptance probability: \begin{equation} a_t(\mathbf{x}^{\ast}_{t}|\mathbf{x}^j_{t-1}) = \text{min}\left\{\frac{q_t(\mathbf{x}^{\ast}_{t})}{q_t(\mathbf{x}^j_{t-1})} \frac{k(\-x^j_{t-1}|\-x_t^*)}{k(\-x^*_{t}|\-x^j_{t-1})},1\right\}. \label{e:ap} \end{equation} That is, we set \begin{equation}\label{e:assign} \mathbf{x}_{t}^j = \left\{ \begin{aligned} &\mathbf{x}^{\ast}_{t}, ~\text{with probability} ~ a_t(\mathbf{x}^{\ast}_{t}|\mathbf{x}^{j}_{t-1}) \\ &\mathbf{x}^{j}_{t-1}, ~\text{otherwise.} \end{aligned} \right. \end{equation} Once a forward kernel $K_t(\mathbf{x}_t|\mathbf{x}_{t-1})$ is chosen, one can determine an optimal choice of $L_{t-1}$ by \begin{equation} \begin{aligned}\label{L_opt} L_{t-1}^{opt}(\mathbf{x}_{t-1}|\mathbf{x}_{t}) &= \frac{q_{t-1}(\mathbf{x}_{t-1})K_{t}(\mathbf{x}_t|{\mathbf{x}_{t-1}})}{q_{t}(\mathbf{x}_{t})}\\ &= \frac{q_{t-1}(\mathbf{x}_{t-1})K_{t}(\mathbf{x}_t|{\mathbf{x}_{t-1}})}{\int q_{t-1}(\mathbf{x}_{t-1})K_{t}(\mathbf{x}_t|{\mathbf{x}_{t-1}})d\mathbf{x}_{t-1}}, \end{aligned} \end{equation} where optimality is understood in the sense of minimising the variance of the resulting estimator \cite{del2006sequential}. In reality, this optimal backward kernel usually cannot be used directly, as the integral in the denominator cannot be calculated analytically.
However, when the MCMC kernel is used, an approximate optimal kernel can be derived from Eq.~\eqref{L_opt}: \begin{equation} \label{L_mcmc} L_{t-1}(\mathbf{x}_{t-1}|\mathbf{x}_{t}) = \frac{q_{t}(\mathbf{x}_{t-1})K_{t}(\mathbf{x}_t|{\mathbf{x}_{t-1}})}{q_{t}(\mathbf{x}_{t})}; \end{equation} the detailed derivation can be found in \cite{del2006sequential}. When Eq.~\eqref{L_mcmc} is used, the incremental weight function $\alpha_t(\mathbf{x}_{t-1},\mathbf{x}_t)$ in Eq.~\eqref{SMCS_weight_split} reduces to the following: \begin{equation} \alpha_t(\mathbf{x}_{t-1},\mathbf{x}_t)=\frac{q_t(\mathbf{x}_{t-1})}{q_{t-1}(\mathbf{x}_{t-1})}. \label{eqAlphaMCMC} \end{equation} Note that, interestingly, only the previous sample is used in the weight calculation when Eq.~\eqref{L_mcmc} is used. In our method, we use the MCMC kernel and Eq.~\eqref{L_mcmc} as the forward and backward kernels, respectively. To alleviate sample degeneracy, a key step in SMCS is the resampling of samples according to their associated weights. The resampling algorithms are well documented, e.g. \cite{douc2005comparison}, and are not discussed here. In SMCS, resampling is typically conducted when the effective sample size (ESS) \cite{doucet2009tutorial} is lower than a prescribed threshold value $ESS_{\min}$. To conclude, we provide the complete procedure of SMCS in Algorithm \ref{alg:SMCS}, used to generate $N$ samples from the target distribution $q_{t}(\cdot)$. \begin{algorithm} \caption{Sequential Monte Carlo Sampler} \label{alg:SMCS} \textbf{input}: weighted ensemble $\{(x_{t-1}^j,w_{t-1}^j)\}_{j=1}^N$ \textbf{for} {$j=1$ to $N$} \begin{algorithmic} \State (a) draw $\mathbf{\textbf{x}}^*_{t}$ from $k(\cdot|\mathbf{\textbf{x}}^j_{t-1})$ \State (b) calculate the acceptance probability $a(\-\textbf{x}^*_t,\-\textbf{x}_{t-1}^j)$ using Eq.~\eqref{e:ap} \State (c) determine $\textbf{x}^j_t$ using Eq.~\eqref{e:assign} and $a(\-\textbf{x}^*_t,\-\textbf{x}_{t-1}^j)$ \State (d) calculate $\alpha_t^j$ using Eq.~\eqref{eqAlphaMCMC} \State (e) compute $w^{j}_{t} = w^j_{t-1}\alpha^j_t$ \end{algorithmic} \textbf{end for} normalize the calculated weights and compute the ESS \textbf{if} {$ESS < ESS_{\min}$} \begin{algorithmic} \State resample the ensemble and set $w^j_t= 1/N$ for $j=1,...,N$ \end{algorithmic} \textbf{end if} \end{algorithm} As one can see from Algorithm~\ref{alg:SMCS}, the SMCS algorithm is easily parallelizable, which is its main advantage over MCMC for our purposes. In addition, since SMCS is designed for sampling from a sequence of target distributions, it can naturally take advantage of the similarity between two successive target distributions, like the warped distributions in two consecutive iterations of MMC, as will be further demonstrated in Section~\ref{Section:MMCSMCS}. \section{Multicanonical Sequential Monte Carlo Sampler} \label{Section:MMCSMCS} Our proposed algorithm, termed the \emph{Multicanonical Sequential Monte Carlo Sampler} (MSMCS), uses SMCS to generate the samples in each MMC iteration. As has been shown in Section~\ref{SMCSDetail}, SMCS can naturally be used to generate samples from a sequence of target distributions and is therefore well suited for MMC, where the biasing distributions across the MMC iterations can be considered as such a sequence. Though the implementation seems straightforward, there are still some issues that need to be addressed in the proposed MSMCS method.
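As a concrete illustration of one pass of Algorithm~\ref{alg:SMCS}, a minimal NumPy sketch of a single SMCS update is given below. The Gaussian random-walk proposal (for which the proposal ratio in Eq.~\eqref{e:ap} cancels), the step size, and the vectorised log-density interface are our assumptions rather than prescriptions of the algorithm; the densities may be unnormalised, since the normalising constants cancel in the weights.

\begin{verbatim}
import numpy as np

def smcs_step(x, logw, log_qt, log_qtm1, step=0.5, rng=None):
    # One SMCS update: the ensemble x (N x d) with log-weights logw,
    # currently targeting q_{t-1}, is moved towards q_t.
    rng = rng if rng is not None else np.random.default_rng()
    N = x.shape[0]
    # steps (a)-(c): Metropolis move with a symmetric proposal
    prop = x + step * rng.standard_normal(x.shape)
    accept = np.log(rng.uniform(size=N)) < log_qt(prop) - log_qt(x)
    x_new = np.where(accept[:, None], prop, x)
    # steps (d)-(e): the incremental weight uses the previous sample,
    # alpha_t = q_t(x_{t-1}) / q_{t-1}(x_{t-1})
    logw = logw + log_qt(x) - log_qtm1(x)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # resample if the effective sample size falls below N/2
    if 1.0 / np.sum(w**2) < N / 2:
        idx = rng.choice(N, size=N, p=w)
        x_new, w = x_new[idx], np.full(N, 1.0 / N)
    return x_new, np.log(w)
\end{verbatim}

Within MSMCS, one such update is performed for each (possibly tempered) target distribution produced by the MMC iterations, and all $N$ samples can be propagated in parallel.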
In the standard MMC method using MCMC (denoted by MMC-MCMC), the generated samples are unweighted, and the update procedure for the $\Theta$'s - determined by the proportion of samples landing in each bin - relies on the samples being unweighted. However, as SMCS produces weighted samples, we need to adapt the MMC procedure to account for this, by altering the update procedure for the $\Theta$-functions. Specifically, we change how the value of $\hat{H}_{t,i}$ - the estimated bin probability under the IS distribution - is determined. The update procedure, when using unweighted samples, is given by Eq.~\eqref{e:params}. When SMCS is used, the update procedure needs to be modified; specifically, Eq.~\eqref{e:Hti} becomes \begin{equation} \hat{H}_{t,i} = \sum^{N}_{j = 1} I_{D_i}(\mathbf{x}^j)\;w(\mathbf{x}^{j}). \end{equation} Another issue is that, for SMCS to be effective, two successive distributions cannot be too far apart from each other; otherwise, the samples are very likely to be rejected in the Metropolis step. Within the MMC method, there is no guarantee that the IS distributions obtained in two successive iterations are close to each other. For example, in our numerical experiments, we have observed that, for high-dimensional problems, such an issue appears frequently in the first MMC step, due to the difference between the initial distribution $q_{0}(\textbf{x})$ and the subsequent target distribution $q_{1}(\textbf{x})$. To address this issue, we propose including a simulated tempering process in the method. Namely, we introduce a set of intermediate distributions in between $q_t$ and $q_{t+1}$, to which we can apply SMCS. Note that the difference in the IS distributions can be attributed to differences in the $\Theta$-functions (i.e. $\Theta_{t}(\-x)$ and $\Theta_{t+1}(\-x)$), as per Eq.~\eqref{e:qt}. We choose a strictly increasing sequence of scalars $\{\alpha_k\}_{k=0}^K$ with $\alpha_0=0$ and $\alpha_K=1$, such that the intermediate $\Theta$-functions are \begin{equation} \Theta_{k}(\textbf{x}) = \alpha_k\; \Theta_{t+1}(\textbf{x}) + (1-\alpha_k) \; \Theta_{t}(\textbf{x}). \end{equation} It follows that the sequence of intermediate distributions $\{q_k\}_{k=0}^K$ can be defined accordingly via Eq.~\eqref{e:qt}, and we apply SMCS to this sequence of distributions, ultimately yielding samples from the target distribution $q_{t+1}(\-x)$. One can see that when $q_t$ and $q_{t+1}$ are close to each other, SMCS can efficiently generate samples from $q_{t+1}$ via the forward kernel and the samples from $q_{t}$, so this tempering process is not needed. However, for two consecutive IS distributions that are far apart, we found that whilst introducing more intermediate steps increases the computational time for generating samples according to the next target distribution $q_{t+1}(\textbf{x})$, overall the MMC converges faster, offsetting this increased cost. Therefore, in our algorithm, tempering is only triggered when certain prescribed conditions are satisfied (e.g. when $\|\Theta_t(\-x)-\Theta_{t+1}(\-x)\|$ exceeds a threshold value). We have presented the proposed MSMCS method in an MMC framework: namely, we want to implement MMC for a given problem, where the samples are drawn from the target distribution $q_t$ using SMCS. Alternatively, we can also understand the method from an SMCS perspective: that is, the SMCS method is used in a particular problem where the sequence of distributions is constructed via MMC.
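To make the modified update and the tempering step concrete, a minimal NumPy sketch is given below; the function names are ours, and a practical implementation would additionally guard against bins that receive zero total weight.

\begin{verbatim}
import numpy as np

def weighted_theta_update(y, w, edges, theta):
    # Weighted MMC update: y holds g(x^j) for the N samples, w their
    # normalised weights, edges the M+1 bin edges of R_y, and theta
    # the current Theta_{t,i}.  Returns Theta_{t+1,i} = H_{t,i} * Theta_{t,i}.
    H, _ = np.histogram(y, bins=edges, weights=w)
    return H * theta

def tempered_thetas(theta_t, theta_t1, K):
    # Intermediate Theta-functions between consecutive MMC iterations:
    # Theta_k = alpha_k * Theta_{t+1} + (1 - alpha_k) * Theta_t.
    return [a * theta_t1 + (1.0 - a) * theta_t
            for a in np.linspace(0.0, 1.0, K + 1)]
\end{verbatim}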
\section{Numerical Examples} \label{Section:NumEx} In this section, we provide four numerical examples of increasing complexity to demonstrate the performance of the proposed MSMCS algorithm. By complexity, we are referring to the dimensionality of the problem and the rarity of the performance parameter values. Each numerical example also demonstrates a different aspect of the advantages our proposed method has over MMC-MCMC. \subsection{Chi-Square Distribution} In the first example, we consider the Chi-square distribution, a continuous distribution with $k$ degrees of freedom, describing the distribution of a sum of squared random variables. In this example, we demonstrate that MMC can be used to reconstruct the Chi-square distribution with very low error compared to the true analytical distribution, using both MCMC and SMCS. If $x_1,...,x_k$ are independent zero-mean Gaussian random variables with unit variance, then the sum of their squares, \begin{equation} y = \sum_{i=1}^{k} x^2_i, \end{equation} is distributed according to the Chi-square distribution with $k$ degrees of freedom, where we use the notation $y \sim \chi^{2}(k)$. In this example, we construct the Chi-square distribution for $k=20$ degrees of freedom, where the analytical form of the PDF is available. In both MMC-MCMC and MSMCS, we use $20$ iterations with $5\times10^{3}$ samples per iteration, to allow for a fair comparison. Within each MMC-MCMC iteration, a single long chain of $5\times10^{3}$ samples with no burn-in period is used, so all samples are utilised. The results are shown in Figure \ref{fig:CSplot}, on both the linear and logarithmic scales. We also show the absolute and relative errors compared to the true analytical solution. The results demonstrate that the MMC method can reconstruct the Chi-square PDF with a low relative error compared to the true analytical solution, and that the MMC method can effectively explore the low-probability events with a relatively small total sample size. In addition, the results show that, in this relatively simple example, both the MSMCS and MMC-MCMC methods obtain comparable performance with regard to the error measures. \begin{figure} \centering \includegraphics[width=1\textwidth]{ChiSquaredAbsSMCS.png} \caption{Chi-square distribution with 20 degrees of freedom computed by MSMCS and MMC-MCMC, compared to the analytical solution. The results are plotted on both the linear scale (left column) and the logarithmic scale (right column). The first row contains the approximated and analytical PDFs of y. The second and third rows show the absolute and relative errors of MMC compared to the analytical solution, respectively.} \label{fig:CSplot} \end{figure} \subsection{Cantilever Beam Problem} We now consider a real-world engineering example: a cantilever beam model studied in \cite{li2011efficient,wu1990advanced}. In this example, we impose a burn-in period on MCMC, as is often required, to ensure all the samples generated by MCMC follow the MMC distribution in each iteration. As outlined previously, this is not required for SMCS, where all samples can be utilised. As illustrated in Figure \ref{fig:CantBeam}, we define our beam with width $w$, height $t$, length $L$, and elasticity $E$. We are interested in the beam's reliability when subjected to transverse load $Y$ and horizontal load $X$.
This is a widely adopted testbed problem in reliability analysis, where the failure of the system relates to the maximum deflection of the beam $(y)$, as determined by the following equation: \begin{equation} y = \frac{4L^{3}}{Ewt} \sqrt{\left(\frac{Y}{t^{2}}\right)^{2}+\left(\frac{X}{w^{2}}\right)^{2}} \end{equation} \begin{figure} \centering \includegraphics[width=0.55\textwidth]{CantileverV1} \caption{Cantilever Beam Problem} \label{fig:CantBeam} \end{figure} Following the problem set up of \cite{li2011efficient,wu1990advanced}, we assume that the beam is of fixed length $L = 100$, with beam width $w$, height $t$, applied loads $X$ and $Y$, and elastic modulus of the material $E$ being random parameters, all independently following normal distributions. The mean and variance of each normally distributed parameter are provided in Table \ref{CantileverTable}. \begin{table}[ht] \caption{The mean and variance of the random parameters} \centering \begin{tabular}{ cccccc } \hline Parameter & $w$ & $t$ & $X$ & $Y$ & $E$ \\ \hline Mean & $4$ & $4$ & $500$ & $1000$ & $2.9 \times 10^{6}$ \\ Variance & $0.001$ & $0.0001$ & $100$ & $100$ & $1.45 \times 10^{6}$ \\ \hline \end{tabular} \label{CantileverTable} \end{table} We compute the PDF of $y$ with three methods: plain MC, MMC-MCMC and MSMCS. In the MC simulation, we use $10^{8}$ full model evaluations. In both MMC-MCMC and MSMCS, we use $20$ iterations with $5\times10^{4}$ samples in each iteration, to allow for a fair comparison. Within each MMC-MCMC iteration, we use a single long MCMC chain, and as such it cannot be implemented in parallel. Also in this example, we impose a burn-in period of $15\%$. We set $R_y = [5.35,6.80]$, divided into 145 bins, each of width 0.01. To compare the results, we plot the PDF obtained by the three methods in Fig. \ref{fig:CBplot}. \begin{figure} \centering \includegraphics[width=115mm]{CantileverBeamSMCS.png} \caption{Cantilever Beam PDF computed by MC, MSMCS and MMC-MCMC. The results are shown on both the linear scale (left column) and logarithmic scale (right column).} \label{fig:CBplot} \end{figure} First, one can see that the results of the three methods agree very well in the high-probability region, indicating that all the methods can correctly reproduce the sought PDF. The two MMC-based methods are substantially more effective in the low-probability regions -- the plain MC cannot reach the same level of rarity (e.g. at $y=6.6$) even while using 100 times more samples. The two MMC methods yield comparable results in this example, but, as has been mentioned, MSMCS has the advantage of parallel implementation. \subsection{Quarter Car Model} In our third example, we consider a further real-world example: a quarter car model studied by Wong et al. \cite{wong2008theory}. In this example, we implement MMC-MCMC in two alternate ways, to demonstrate the computational efficiency gained by using MSMCS - \emph{see implementation details}. \subsubsection*{Problem Set Up} The quarter-car model is used for vehicle suspension systems to investigate how they respond under a random road profile. As illustrated in Figure \ref{fig:QC}, we set up our model following \cite{wong2008theory}, such that the sprung mass $m_{s}$ and the unsprung mass $m_{u}$ are connected by a non-linear spring (with stiffness $k_{s}$) and a linear damper (with damping coefficient $c$). The unsprung mass interacts with the road surface via a non-linear spring (with stiffness $k_{u}$).
The displacement of the wheel $z(t)$ represents the interaction of the quarter car system with the road surface. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{QuartCarV1} \caption{Quarter Car Model} \label{fig:QC} \end{figure} The displacements of the sprung and the unsprung masses are denoted by $x_{1}$ and $x_{2}$ respectively. Mathematically, the model is described by a two-degree-of-freedom ordinary differential equation (ODE) system: \begin{subequations} \begin{eqnarray} m_{s}\frac{d^{2}x_1}{dt^2} = -k_{s}(x_{1}-x_{2})^{3} - c\left(\frac{dx_1}{dt}-\frac{dx_2}{dt}\right), \end{eqnarray} \begin{equation} m_{u}\frac{d^{2}x_2}{dt^2} = k_{s}(x_{1}-x_{2})^{3} + c\left(\frac{dx_1}{dt}-\frac{dx_2}{dt}\right) +k_{u}(z(t)-x_{2}). \end{equation} \label{e:qcm} \end{subequations} In our problem, the uncertainty arises through the random road profile $z(t)$, which is modelled as a zero-mean white Gaussian random force with standard deviation $\sigma = 1$. All other parameters are assumed to be fixed, taking the values given in Table \ref{t:qcm}. \begin{table}[ht] \caption{The parameter values of the quarter car model} \centering \begin{tabular}{ccccc} \hline $m_{s}$ & $m_{u}$ & $k_{s}$ & $k_{u}$ & $c$ \\ \hline $20$ & $40$ & $400$ & $2000$ & $600$ \\ \hline \end{tabular} \label{t:qcm} \end{table} We are interested in the maximum difference between the displacements of the sprung and unsprung masses in a given interval $[0, T]$, as calculated by: \begin{equation} y = \underset{0 \leq t \leq T}{\mbox{max}} \{|x_{1}(t) - x_{2}(t)|\}. \end{equation} In extreme scenarios, when this displacement exceeds a certain value, say $y^{\ast}$, the car's suspension would break. We want to reconstruct the entire PDF of $y$. With the PDF, we can estimate the probability $\PR(y > y^{\ast})$ for any value of $y^{\ast}$ in the range of interest. \subsubsection*{Implementation Details} We solve Eqs. \ref{e:qcm} numerically using the fourth-order Runge-Kutta method, where the step size is taken to be $\Delta t = T/100$, so the random variable in this problem is effectively of 100 dimensions. We take $T=1$ and set the initial conditions of Eqs. \ref{e:qcm} to be \begin{equation} x_{1}(0) = \frac{dx_1}{dt}(0) = 0,\;\; x_{2}(0) = \frac{dx_2}{dt}(0) = 0. \end{equation} We conduct a standard MC simulation with $10^{6}$ samples. In both MSMCS and MMC-MCMC, we use $20$ iterations with $2\times10^{4}$ samples in each iteration. The MSMCS method is easily parallelisable, meaning that within each MMC iteration, one can update the new samples completely in parallel according to the target MMC distribution, rather than forming a single long chain - significantly improving the computational efficiency. To provide a fair computational comparison, for this example we conduct MMC-MCMC in two ways. In the first case, we use a single long chain of length $2\times10^{4}$ - the most typical implementation of MCMC, and the way MCMC is implemented in the first two examples. In the second case, within each iteration we use $10$ chains, each of length $2\times10^{3}$, to provide a fairer comparison to the parallel implementation of MSMCS.
MMC-MCMC with a single chain (referred to as MMC-MCMC-SC) also accurately reconstructed the performance parameter PDF; however, MMC-MCMC with multiple chains (referred to as MMC-MCMC-MC), which enables parallel implementation, significantly underestimated the PDF for values $y>1.8$. The results indicate that, due to the sequential nature of MCMC, running multiple short chains substantially undermines the performance of the method. Therefore, on the basis of parallel implementation, the MSMCS method clearly outperforms MMC-MCMC. \begin{figure} \centering \includegraphics[width=.85\textwidth]{QuarterCarModelSMCS.png} \caption{Quarter Car Model PDF computed by MC, MSMCS and MMC-MCMC. MMC-MCMC-SC uses a single long chain. MMC-MCMC-MC uses ten shorter chains in parallel. The results are shown on both the linear scale (left column) and the logarithmic scale (right column).} \label{fig:QCplot} \end{figure} \subsection{Copula Model} The development of rare event simulation techniques is also critical for risk management in financial markets. Therefore, the final application we investigate is applying the MMC method to a copula model - one of the most widely used portfolio risk models. A copula model allows one to separate the dependence structure of the portfolio from the marginal densities of each variable - representing the individual risks of each obligor - which can have different probability distributions. We consider the Student's t-copula model, proposed by Bassamboo \textit{et al.} \cite{bassamboo2008portfolio}. \subsubsection*{Problem Set Up} We follow the problem set up of \cite{bassamboo2008portfolio} and \cite{chan2010efficient}. Consider a portfolio of loans consisting of $n$ obligors; we aim to find the distribution of losses from defaults over a fixed time horizon, from which we can determine large loss probabilities. Suppose the probability of default for the $i$th obligor over the time horizon is $p_i \in (0,1)$, for $i=1,...,n,$ and that in the event that the $i$th obligor defaults, a fixed and given loss of $c_i$ monetary units occurs. We begin by introducing a vector of underlying latent variables $\textbf{X} = (X_{1},...,X_{n})$ such that the $i$th obligor defaults if $X_i$ exceeds a given threshold level $x_i$. This threshold $x_i$ is set according to the marginal default probability of the $i$th asset, so that $\PR(X_i > x_i) = p_i$. The portfolio loss from defaults is given by \begin{equation} L(\textbf{X}) = c_{1}I_{\{X_1>x_1\}} + ... + c_{n}I_{\{X_n>x_n\}} \end{equation} where $I_{\{X_i>x_i\}}$ denotes the indicator function, which is equal to $1$ if $X_i > x_i$ and 0 otherwise. We let the common risk factor and the individual idiosyncratic risks be independent normally distributed random variables, that is, \begin{equation} Z \sim N(0,1)\;\text{and}\; \eta_i \sim N(0,\sigma^{2}_{\eta})\text{, for }i=1,...,n. \end{equation} We choose $0<p<1$ and let \begin{equation} X_i = \frac{pZ + \sqrt{1-p^2}\eta_i}{T}, i=1,...,n, \end{equation} where $T$ is a non-negative random variable, independent of the other risk factors. For a positive integer $k$, let $T = \sqrt{k^{-1}{\Gamma}(1/2,k/2)}$, where ${\Gamma}(1/2,k/2)$ denotes a Gamma-distributed random variable \cite{bassamboo2008portfolio}. Therefore, our latent variables follow a multivariate t-distribution, whose dependence structure is given by a t-copula with $k$ degrees of freedom.
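For reference, the generative model above is straightforward to simulate. A plain-MC sketch of the large-loss probability is given below; here we read ${\Gamma}(1/2,k/2)$ as a Gamma variable with shape $k/2$ and rate $1/2$ (i.e. a $\chi^2_k$ variable), so that $T=\sqrt{\chi^2_k/k}$ and $\textbf{X}$ has the multivariate-$t$ dependence structure. The function and argument names are ours, and the parameter values follow the implementation details below.

\begin{verbatim}
import numpy as np

def large_loss_prob_mc(n=250, k=4, p=0.25, b=0.1,
                       n_samples=10**5, rng=None):
    # P(L(X) > b*n) by plain MC, with sigma_eta^2 = 9,
    # thresholds x_i = 0.5*sqrt(n) and unit losses c_i = 1.
    rng = rng if rng is not None else np.random.default_rng()
    x_thresh = 0.5 * np.sqrt(n)
    Z = rng.standard_normal(n_samples)                  # common factor
    eta = 3.0 * rng.standard_normal((n_samples, n))     # idiosyncratic risks
    T = np.sqrt(rng.chisquare(k, size=n_samples) / k)   # T = sqrt(chi2_k / k)
    X = (p * Z[:, None] + np.sqrt(1.0 - p**2) * eta) / T[:, None]
    L = (X > x_thresh).sum(axis=1)                      # unit losses c_i = 1
    return (L > b * n).mean()
\end{verbatim}

Such a plain-MC estimate requires very large sample sizes for the rarer thresholds, which is precisely the regime where the MMC-based methods pay off.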
\subsubsection*{Implementation Details} We use the same set up as Chan \textit{et al.} \cite{chan2010efficient}; that is, we set $\sigma^{2}_{\eta} = 9$, $x_i = 0.5\sqrt{n}$, $p = 0.25$, and $c_i=1$. We conduct a standard MC simulation with different sample sizes, as detailed in the results tables. In both MMC-MCMC and MSMCS, we use $20$ iterations with $1\times10^4$ samples in each iteration. We implement MMC-MCMC in two forms: one with a single long chain - as it would typically be implemented - and one with parallel chains (100 chains, each of length 100), which provides a fairer comparison to the parallel implementation of MSMCS. Neither MCMC case uses a burn-in period. \subsubsection*{Results} We are interested in the probability of large losses, defined as the event that the loss function value $L(\textbf{X}) > l$, where $l = bn$, for different portfolio sizes $n$ and different threshold values $b$. We vary either the degrees of freedom $k$ or the portfolio size $n$, and for each of these scenarios, we determine the probability that the loss exceeds $l = b \times n$, for $b=0.1,0.2,0.25,0.3$. The results are presented in Table \ref{Table:CopRes}. \begin{table} \caption{Copula Results using MC; MSMCS; and MMC-MCMC.} \label{Table:CopRes} \begin{subtable}[t]{\textwidth} \caption{$k=4$ \& $n=250$} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|llll|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Large Loss \\ Threshold (b)\end{tabular}}} & \multicolumn{1}{c|}{Sample Size} & \multicolumn{4}{c|}{Probability Estimate} \\ \cline{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MMC-MCMC-SC} & \multicolumn{1}{c|}{MMC-MCMC-MC} & \multicolumn{1}{c|}{MSMCS} \\ \hline $0.1$ & $5 \times 10^{5}$ & \multicolumn{1}{l|}{$7.36 \times 10^{-2}$} & \multicolumn{1}{l|}{$7.27 \times 10^{-2}$} & \multicolumn{1}{l|}{$1.69 \times 10^{-1}$} & \multicolumn{1}{l|}{$7.31 \times 10^{-2}$} \\ \hline 0.2 & $5 \times 10^{5}$ & \multicolumn{1}{l|}{$1.72 \times 10^{-2}$} & \multicolumn{1}{l|}{$1.63 \times 10^{-2}$} & \multicolumn{1}{l|}{$5.96 \times 10^{-2}$} & $1.71 \times 10^{-2}$ \\ \hline 0.25 & $5 \times 10^{5}$ & \multicolumn{1}{l|}{$8.08 \times 10^{-3}$} & \multicolumn{1}{l|}{$8.13 \times 10^{-3}$} & \multicolumn{1}{l|}{$3.29 \times 10^{-2}$} & $8.05 \times 10^{-3}$ \\ \hline 0.3 & $5 \times 10^{5}$ & \multicolumn{1}{l|}{$3.21 \times 10^{-3}$} & \multicolumn{1}{l|}{$3.24 \times 10^{-3}$} & \multicolumn{1}{l|}{$1.71 \times 10^{-2}$} & $3.28 \times 10^{-3}$ \\ \hline \end{tabular}% } \vspace{0.1cm} \end{subtable} \begin{subtable}[t]{\textwidth} \caption{$k=8$ \& $n=250$} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|llll|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Large Loss \\ Threshold (b)\end{tabular}}} & \multicolumn{1}{c|}{Sample Size} & \multicolumn{4}{c|}{Probability Estimate} \\ \cline{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MMC-MCMC-SC} & \multicolumn{1}{c|}{MMC-MCMC-MC} & \multicolumn{1}{c|}{MSMCS} \\ \hline 0.1 & $5 \times 10^{6}$ & \multicolumn{1}{l|}{$1.45 \times 10^{-2}$} & \multicolumn{1}{l|}{$1.39 \times 10^{-2}$} & \multicolumn{1}{l|}{$2.24 \times 10^{-3}$} & \multicolumn{1}{l|}{$1.42 \times 10^{-2}$} \\ \hline 0.2 & $5 \times 10^{6}$ & \multicolumn{1}{l|}{$9.49 \times 10^{-4}$} & \multicolumn{1}{l|}{$9.43 \times 10^{-4}$} & \multicolumn{1}{l|}{$1.66 \times 10^{-4}$} & $9.49 \times 10^{-4}$ \\ \hline 0.25 & $5 \times 10^{6}$ &
\multicolumn{1}{l|}{$2.38 \times 10^{-4}$} & \multicolumn{1}{l|}{$2.49 \times 10^{-4}$} & \multicolumn{1}{l|}{$4.29 \times 10^{-5}$} & $2.46 \times 10^{-4}$ \\ \hline 0.3 & $5 \times 10^{6}$ & \multicolumn{1}{l|}{$4.04 \times 10^{-5}$} & \multicolumn{1}{l|}{$3.98 \times 10^{-5}$} & \multicolumn{1}{l|}{$1.04 \times 10^{-5}$} & $4.01 \times 10^{-5}$ \\ \hline \end{tabular}% } \vspace{0.1cm} \end{subtable} \begin{subtable}[t]{1\textwidth} \caption{$k=12$ \& $n=250$} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|llll|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Large Loss \\ Threshold (b)\end{tabular}}} & \multicolumn{1}{c|}{Sample Size} & \multicolumn{4}{c|}{Probability Estimate} \\ \cline{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MMC-MCMC-SC} & \multicolumn{1}{c|}{MMC-MCMC-MC} & \multicolumn{1}{c|}{MSMCS} \\ \hline 0.1 & $5 \times 10^{7}$ & \multicolumn{1}{l|}{$9.77 \times 10^{-3}$} & \multicolumn{1}{l|}{$9.82 \times 10^{-3}$} & \multicolumn{1}{l|}{$5.96 \times 10^{-5}$} & $9.78 \times 10^{-3}$ \\ \hline 0.2 & $5 \times 10^{7}$ & \multicolumn{1}{l|}{$7.49 \times 10^{-3}$} & \multicolumn{1}{l|}{$7.63 \times 10^{-3}$} & \multicolumn{1}{l|}{$1.04 \times 10^{-6}$} & $7.53 \times 10^{-3}$ \\ \hline 0.25 & $5 \times 10^{7}$ & \multicolumn{1}{l|}{$1.05 \times 10^{-5}$} & \multicolumn{1}{l|}{$1.02 \times 10^{-5}$} & \multicolumn{1}{l|}{$1.22 \times 10^{-7}$} & $1.03 \times 10^{-5}$ \\ \hline 0.3 & $5 \times 10^{7}$ & \multicolumn{1}{l|}{$1.12 \times 10^{-6}$} & \multicolumn{1}{l|}{$1.34 \times 10^{-6}$} & \multicolumn{1}{l|}{$1.65 \times 10^{-8}$} & $1.21 \times 10^{-6}$ \\ \hline \end{tabular}% } \vspace{0.1cm} \end{subtable} \begin{subtable}[t]{1\textwidth} \caption{$k=16$ \& $n=250$} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|llll|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Large Loss \\ Threshold (b)\end{tabular}}} & \multicolumn{1}{c|}{Sample Size} & \multicolumn{4}{c|}{Probability Estimate} \\ \cline{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MMC-MCMC-SC} & \multicolumn{1}{c|}{MMC-MCMC-MC} & \multicolumn{1}{c|}{MSMCS} \\ \hline 0.1 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$9.40 \times 10^{-4}$} & \multicolumn{1}{l|}{$9.36 \times 10^{-4}$} & \multicolumn{1}{l|}{$2.50 \times 10^{-6}$} & $9.43 \times 10^{-4}$ \\ \hline 0.2 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$6.91 \times 10^{-6}$} & \multicolumn{1}{l|}{$6.90 \times 10^{-6}$} & \multicolumn{1}{l|}{$9.58 \times 10^{-9}$} & $6.86 \times 10^{-6}$ \\ \hline 0.25 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$6.22 \times 10^{-7}$} & \multicolumn{1}{l|}{$6.18 \times 10^{-7}$} & \multicolumn{1}{l|}{$6.04 \times 10^{-10}$} & $6.19 \times 10^{-7}$ \\ \hline 0.3 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$4.40 \times 10^{-8}$} & \multicolumn{1}{l|}{$4.37 \times 10^{-8}$} & \multicolumn{1}{l|}{$3.67 \times 10^{-11}$} & $4.51 \times 10^{-8}$ \\ \hline \end{tabular}% } \vspace{0.1cm} \end{subtable} \begin{subtable}[t]{1\textwidth} \caption{$k=20$ \& $n=250$} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|llll|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Large Loss \\ Threshold (b)\end{tabular}}} & \multicolumn{1}{c|}{Sample Size} & \multicolumn{4}{c|}{Probability Estimate} \\ \cline{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MMC-MCMC-SC} & \multicolumn{1}{c|}{MMC-MCMC-MC} & 
\multicolumn{1}{c|}{MSMCS} \\ \hline 0.1 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$2.83 \times 10^{-4}$} & \multicolumn{1}{l|}{$2.88 \times 10^{-4}$} & \multicolumn{1}{l|}{$1.39 \times 10^{-7}$} & $2.76 \times 10^{-4}$ \\ \hline 0.2 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$7.98 \times 10^{-7}$} & \multicolumn{1}{l|}{$7.61 \times 10^{-7}$} & \multicolumn{1}{l|}{$1.35 \times 10^{-10}$} & $7.73 \times 10^{-7}$ \\ \hline 0.25 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$5.40 \times 10^{-8}$} & \multicolumn{1}{l|}{$4.92 \times 10^{-8}$} & \multicolumn{1}{l|}{$2.99 \times 10^{-12}$} & $5.32 \times 10^{-8}$ \\ \hline 0.3 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{0} & \multicolumn{1}{l|}{$5.72 \times 10^{-9}$} & \multicolumn{1}{l|}{$1.02 \times 10^{-13}$} & $5.63 \times 10^{-9}$ \\ \hline \end{tabular}% } \vspace{0.1cm} \end{subtable} \begin{subtable}[t]{1\textwidth} \caption{$k=12$ \& $n=500$} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|llll|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Large Loss \\ Threshold (b)\end{tabular}}} & \multicolumn{1}{c|}{Sample Size} & \multicolumn{4}{c|}{Probability Estimate} \\ \cline{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MMC-MCMC-SC} & \multicolumn{1}{c|}{MMC-MCMC-MC} & \multicolumn{1}{c|}{MSMCS} \\ \hline 0.1 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$9.61 \times 10^{-5}$} & \multicolumn{1}{l|}{$9.42 \times 10^{-5}$} & \multicolumn{1}{l|}{$5.08 \times 10^{-12}$} & $9.52 \times 10^{-5}$ \\ \hline 0.2 & $5 \times 10^{8} $& \multicolumn{1}{l|}{$1.34 \times 10^{-6}$} & \multicolumn{1}{l|}{$1.39 \times 10^{-6}$} & \multicolumn{1}{l|}{$7.15 \times 10^{-13}$} & $1.38 \times 10^{-6}$ \\ \hline 0.25 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$1.36 \times 10^{-7}$} & \multicolumn{1}{l|}{$1.57 \times 10^{-7}$} & \multicolumn{1}{l|}{$4.37 \times 10^{-13}$} & $0.84 \times 10^{-7}$ \\ \hline 0.3 & $5 \times 10^{8}$ & \multicolumn{1}{l|}{$1.00 \times 10^{-8}$} & \multicolumn{1}{l|}{$1.29 \times 10^{-8}$} & \multicolumn{1}{l|}{$2.54 \times 10^{-13}$} & $1.27 \times 10^{-8}$ \\ \hline \end{tabular}% } \vspace{0.1cm} \end{subtable} \begin{subtable}[t]{1\textwidth} \caption{$k=12$ \& $n=1000$} \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|llll|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Large Loss \\ Threshold (b)\end{tabular}}} & \multicolumn{1}{c|}{Sample Size} & \multicolumn{4}{c|}{Probability Estimate} \\ \cline{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MC} & \multicolumn{1}{c|}{MMC-MCMC-SC} & \multicolumn{1}{c|}{MMC-MCMC-MC} & \multicolumn{1}{c|}{MSMCS} \\ \hline 0.1 & $3 \times 10^{8}$ & \multicolumn{1}{l|}{$1.96 \times 10^{-6}$} & \multicolumn{1}{l|}{$1.88 \times 10^{-6}$} & \multicolumn{1}{l|}{$2.54 \times 10^{-13}$} & $1.91 \times 10^{-6}$ \\ \hline 0.2 & $3 \times 10^{8}$ & \multicolumn{1}{l|}{$3.67 \times 10^{-8}$} & \multicolumn{1}{l|}{$3.58 \times 10^{-8}$} & \multicolumn{1}{l|}{$6.29 \times 10^{-14}$} & $3.72 \times 10^{-8}$ \\ \hline 0.25 & $3 \times 10^{8}$ & \multicolumn{1}{l|}{$2.39 \times 10^{-9}$} & \multicolumn{1}{l|}{$2.24 \times 10^{-9}$} & \multicolumn{1}{l|}{$4.18 \times 10^{-14}$} & $2.28 \times 10^{-9}$ \\ \hline 0.3 & $3 \times 10^{8}$ & \multicolumn{1}{l|}{0} & \multicolumn{1}{l|}{$3.25 \times 10^{-10}$} & \multicolumn{1}{l|}{$7.24 \times 10^{-15}$} & $3.19 \times 10^{-10}$ \\ \hline \end{tabular}% } \end{subtable} \end{table} As the MMC method reconstructs the whole loss distribution, we 
only require seven simulations to be performed, from which the loss probability for any $b$-value can be obtained. This is a significant computational saving compared with other existing methods, such as the Conditional-MC of \cite{chan2010efficient}, which would require a new simulation for each $b$-value. Our results show that the MMC method, with either MCMC or SMCS, produces significant computational savings for estimating large loss probabilities under a Copula model. Both MMC-MCMC with a single long chain (denoted MMC-MCMC-SC) and MSMCS are very effective here, whereas MMC-MCMC with multiple parallel chains (denoted MMC-MCMC-MC) performs poorly, particularly in the high-dimensional setting, clearly illustrating the advantage of MSMCS in a parallel implementation. Finally, as shown by the comparison with standard MC, MMC is a very effective method for estimating large loss probabilities under a Copula model. \section{Conclusion} \label{Section:Conclusion} In summary, we consider UQ problems where the full distribution of a performance parameter is sought, and we propose a method that combines the MMC and SMCS methods. Specifically, the method uses SMCS instead of MCMC to draw samples from the warped distributions in each iteration of MMC. We have demonstrated that the proposed MSMCS method can outperform the standard MMC-MCMC: SMCS is easily parallelisable and can therefore take full advantage of high-powered parallel computing, while MCMC, due to its sequential nature, requires an (often very long) burn-in period, which is precisely why the implementation with multiple short chains does not perform well. We believe that our proposed algorithm has wide applicability, improving the computational efficiency of estimating failure probabilities or of reconstructing the whole probability distribution of interest. One weakness of the proposed method is that MCMC is easier to implement than SMCS and involves simpler computations, so MMC-MCMC is marginally faster to run than MSMCS. However, if a parallel implementation is available, MSMCS significantly outperforms MMC-MCMC, as shown in the numerical examples. More importantly, both approaches to MMC can struggle in high-dimensional settings, where a newly proposed sample is likely to be rejected; this should be addressed by developing and utilising more effective proposal distributions, for example ones based on Hamiltonian dynamics~\cite{neal2011mcmc}.
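To make the structure of the method concrete, the following toy sketch illustrates a multicanonical-style loop of the kind described above: each iteration samples from the current warped distribution, histograms the loss, and re-weights so that the next iteration explores the tails more evenly. This is a minimal illustration under stated assumptions, not the implementation used for the experiments: the two-dimensional Gaussian input, the loss $Y=\lVert X\rVert/10$, the bin grid, and the plain random-walk Metropolis sampler (standing in for the MCMC or SMCS step) are all illustrative choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): X ~ N(0, I_2), loss Y = ||X|| / 10,
# so P(Y > 0.5) = exp(-12.5) ~ 3.7e-6 is a genuinely rare event.
def log_pi(x):
    return -0.5 * np.dot(x, x)

def loss(x):
    return np.linalg.norm(x) / 10.0

edges = np.linspace(0.0, 1.0, 51)            # histogram bins for Y

def bin_of(y):
    return int(np.clip(np.searchsorted(edges, y) - 1, 0, len(edges) - 2))

def sample_warped(log_w, n, x0, step=0.7):
    """Random-walk Metropolis targeting pi(x) * w(Y(x)); in the paper
    this sampling step is performed by MCMC or, preferably, SMCS."""
    x = x0.copy()
    lp = log_pi(x) + log_w[bin_of(loss(x))]
    xs = np.empty((n, x.size))
    for i in range(n):
        y = x + step * rng.standard_normal(x.size)
        lq = log_pi(y) + log_w[bin_of(loss(y))]
        if np.log(rng.random()) < lq - lp:
            x, lp = y, lq
        xs[i] = x
    return xs

def mmc(n_iter=7, n_samples=20000):
    n_bins = len(edges) - 1
    log_w = np.zeros(n_bins)                 # first iteration = plain MC
    log_p = np.full(n_bins, -np.log(n_bins)) # running estimate of P(bin j)
    x0 = np.zeros(2)
    for _ in range(n_iter):
        xs = sample_warped(log_w, n_samples, x0)
        h, _ = np.histogram([loss(x) for x in xs], bins=edges)
        seen = h > 0
        log_p[seen] = np.log(h[seen]) - log_w[seen]  # un-warp: P_j ~ H_j / w_j
        log_p -= np.logaddexp.reduce(log_p)          # normalise
        log_w = -log_p                               # flatten next iteration
        x0 = xs[-1]
    return np.exp(log_p)

p = mmc()
print("P(Y > 0.5) ~", p[edges[:-1] >= 0.5].sum())
\end{verbatim}

As in the experiments above, a single run reconstructs the whole distribution of $Y$, so tail probabilities for every threshold are read off the final histogram rather than re-simulated per threshold.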
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Active galactic nuclei (AGN) are the most powerful non-transient sources in the Universe. AGN properties are defined by the accretion of matter onto a supermassive black hole, one manifestation of which is the production of bipolar outflows. The jets are effectively studied by very long baseline interferometry (VLBI) observations. Developed in the 1960s by \citet{Matveenko65}, the VLBI technique enabled record angular resolution at centimetre wavelengths, further improved with the VSOP/HALCA \citep[e.g., ][]{Hirabayashi98, Hirabayashi00a, Hirabayashi00b, Gurvits20} and RadioAstron \citep{Kardashev13, KovalevYuA14, Bruni20, KovalevKardashev20} ground-space interferometers that provided the longest baseline projections. On the other hand, an improvement in angular resolution can also be reached by increasing the observing frequency. This principle is implemented in the Event Horizon Telescope project \citep[e.g.,][]{EHT14}. The RadioAstron and EHT projects allow us to investigate the innermost jet regions of active galaxies and improve our understanding of the processes operating there. To study the evolution of the morphological structure of the jets, systematic long-term monitoring is needed. To date, the VLBA program MOJAVE\footnote{\url{https://www.cv.nrao.edu/MOJAVE}}, focused on the full Stokes monitoring of bright AGN jets in the northern sky, has accumulated the longest-ever observational series. Supplemented by observations made within the framework of its predecessor, the VLBA 2-cm Survey \citep{2cmVLBA,Zensus02}, it covers the period from 1994 to the present and contains $\approx$450 AGN jets, each observed at no fewer than five epochs \citep{Lister21}. Analysis of these data in total intensity $I$ has revealed a number of parsec-scale jet properties: (i) the velocity distribution of jet features and the detection of accelerated motion along curved trajectories \citep[e.g.,][and references therein]{Homan15,Lister21}; (ii) changes in the position angle (PA) of the inner jet \citep{Lister13, Lister19, Lister21}; (iii) the apparent and intrinsic opening angles and the shape of the jets \citep{Pushkarev09, PushkarevKLS17}; (iv) the spectral index and brightness temperature, and their changes along the outflow \citep{Hovatta14,MOJAVE_XIX}; (v) the frequency-dependent synchrotron opacity in the VLBI core \citep{Pushkarev12}. The magnetic field plays a primary role in the processes of jet formation, acceleration, and collimation \citep[e.g., ][]{BlandfordZnajek77, BlandfordPayne82, Nakamura01, Lovelace02}. Its azimuthal component, naturally generated by the rotation of the accretion disc or black hole, is required to form and then hold the jet. Thus, a helical \textbf{B}-field is widely expected in jets. Based on polarimetric-sensitive VLBA observations at 15~GHz for a sample of over 450 sources, \citet{Pushkarev17} found that the linear fractional polarization $m=\sqrt{Q^2+U^2}/I$ (where $Q$ and $U$ are the Stokes parameters) typically increases towards the jet edges. This was later confirmed by the analysis of stacked polarization images for a comparable source sample \citep{MOJAVE_XXI}. The stacked maps showed a much more complete cross-sectional coverage of a jet in polarization compared to the patchy patterns detected in single-epoch maps due to the limited sensitivity of the observations. Stacked maps strongly indicate an ordered \textbf{B}-field in jets.
Its helical configuration can naturally explain the dip in polarization degree closer to the jet axis due to the partial cancelling of the P-signal from regions with different electric vector position angles (EVPAs). Moreover, \citet{Clausen-BrownLyutikov11, Gabuzda18, Gabuzda21} obtained additional observational evidence for the helical magnetic field by detecting significant gradients of Faraday rotation across the jet. On the other hand, \citet{Laing1980} proposed a model of the spine-sheath structure of the \textbf{B}-field, in which the spine and sheath contain a toroidal and poloidal field, respectively. \citet{Attridge99,Pushkarev05} found observational evidence supporting this scenario in several AGN jets. Developing this model, \citet{Ghisellini05} suggested that the plasma speed in the sheath is lower than in the spine. A decrease of the jet flow speed towards the edges was obtained both in analytical models \citep[e.g.,][]{Beskin17} and in the analysis of observational data for the nearby active galaxy M87 \citep{MertensLobanov16}. Therefore, in our simulation of the jet polarization properties, we consider two configurations of the magnetic field: the helical and the ``spine-sheath'' structure. For the latter configuration, we analyse two cases: the sheath speed equal to and lower than that of the spine. To account for Doppler factor changes caused by the motion of a jet feature along a curved path, we introduce the geometric model in \autoref{sec:jetmodel}. In its framework, the jet component velocity vector, in the general case, coincides with neither the local jet axis nor the radial direction. Section~\ref{sec:simul} contains a description of the performed simulation. The results, their discussion, and conclusions are presented in \autoref{sec:res}, \ref{sec:discus}, and \ref{sec:conc}, respectively. \section{Jet model} \label{sec:jetmodel} Typically, VLBI maps of AGN jets show the brightest compact feature, called the VLBI core, and weaker extended regions tracing the outflow. The core is partially opaque \citep{Hovatta12}, and its position is frequency-dependent \citep[e.g., ][]{Pushkarev12}. Downstream from the core, the jet is optically thin. For the simulations, we considered only this case. The nature of the bright jet components is still actively debated. These can be regions with an increased density of radiating particles formed initially by the central engine \citep[e.g., ][]{Stawarz04} or by the development of hydrodynamic instabilities \citep[see, e.g., ][]{Perucho12}. Alternatively, the features can be regions of recollimation or of some jet disturbance, for example, a shock wave \citep{Marscher08}. Also, they may be regions where the jet bends so that the viewing angle decreases and, due to relativistic beaming, the jet radiation increases for the observer. In addition, it is not known whether the observed features represent the entire jet flow or only some part of it characterised by enhanced emission. Therefore, we analyze the polarization properties transverse to the local jet axis and model these properties for the jet radius $R_j=1$. We use the geometric model of a helical jet introduced by \citet{But18a}. Due to the importance of the used geometric parameters for our simulations, we reproduce their description here. It is assumed that the jet axis forms a helix located on the surface of an imaginary cone with a half-opening angle $\xi$. The tangent to each point of the helix makes up an angle $\rho$ to the cone generatrix.
The cone axis is at a constant angle $\theta_0$ with the line of sight. The azimuthal angle $\varphi$ characterizes the position of the jet segment on the helix relative to the observer. By a segment, we mean the part of the modelled jet formed by two cross-sections, within which the local jet axis can be considered a straight line. Adjacent jet segments have different $\varphi$ (\autoref{fig:jetmod}). The angle $\varphi$ is measured in a secant plane perpendicular to the cone axis and counted counterclockwise, starting from the point on the cone surface lying at the intersection of the secant plane and the plane containing the jet axis and the line of sight. Note that the transition to a particular case --- a straight jet --- can be carried out by setting $\rho=0^\circ$ and an arbitrary constant value of $\varphi$. \begin{figure} \includegraphics[width=\columnwidth]{helical_jet_parameters.pdf} \caption{Scheme of the helical jet with the designated geometric and kinematic parameters. The thick line denotes the jet. Its parts located on the side of the cone opposite to the observer are marked by dots.} \label{fig:jetmod} \end{figure} To maintain the helical shape, the speed of the jet segments must be almost the same. The segment axis $z$ coincides with the tangent to the jet helix at a given point. The segment velocity vector $\boldsymbol{\beta}$ forms an angle $p$ with the cone generatrix at a given point. If $p=\rho$, the segments move along the jet helix, and we have a jet that is constant in space. If $p=0^\circ$, the jet segments have radial motion, which manifests itself through the outward motion of the jet helix on the surface of the imaginary cone. If $p\neq\rho$, during the outward motion the jet helix turns around its axis. Our model is significantly more complicated than the standard representation of a straight jet moving at a constant angle to the line of sight. Namely, the velocity vector and the local jet axis do not coincide with each other or with the radial direction and, in general, do not lie in the same plane. Different jet segments have a different angle $\theta$ between the velocity vector and the line of sight because of their different values of $\varphi$, while the other geometrical parameters of the helix are constant. The change in $\theta$ leads to a change in the Doppler factor $\delta$. Additionally, for $p\neq0^\circ$, the angle $\theta$ has a wider range of possible values than $\theta_0\pm \xi$ \citep{But18a}. However, only the geometrical model introduced here allows one to describe self-consistently the following observed facts: 1) the conical shape of the jets on radio maps stacked over many epochs of observations \citep{PushkarevKLS17}; 2) the quasi-periodic changes in the position angle of the inner (closest to the core) part of a jet detected for more than a dozen sources \citep{Lister13, Lister21}; 3) the established radial and non-radial trajectories of jet features \citep{Lister13, Homan15}. \section{Simulations} \label{sec:simul} To calculate the polarization degree (PD) and the direction of the electric vector (EV) in a wave, we used the expressions for the Stokes parameters accounting for relativistic effects derived by \citet{LyutikovPG05} and reproduced in \autoref{sec:appendix}. Unlike \citet{LyutikovPG05}, we integrated the expressions for the Stokes parameters along the line of sight to construct transverse distributions of the polarization properties.
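As a minimal numerical illustration of this construction (and not the code used for the results below), the following sketch shows how transverse Stokes profiles can be assembled by integrating an optically thin emissivity along parallel lines of sight through a cylindrical jet segment of unit radius. The callable \texttt{emissivity} is a placeholder for the full relativistic expressions of \citet{LyutikovPG05} reproduced in \autoref{sec:appendix}; the constant test emissivity, the sampling densities, and the neglect of the inclination of the line of sight to the segment axis are simplifying assumptions made here for brevity.

\begin{verbatim}
import numpy as np

def transverse_cuts(emissivity, n_cut=61, n_los=201):
    """Transverse profiles of I, Q, U for an optically thin cylindrical
    jet segment of unit radius. `emissivity(s, y)` must return arrays
    (j_I, j_Q, j_U) per unit length at depth s along the ray with
    sky-plane offset y from the segment axis."""
    ys = np.linspace(-0.9, 0.9, n_cut)     # offsets across the jet
    I, Q, U = np.zeros(n_cut), np.zeros(n_cut), np.zeros(n_cut)
    for k, y in enumerate(ys):
        half = np.sqrt(1.0 - y * y)        # half-length of the chord
        s = np.linspace(-half, half, n_los)
        jI, jQ, jU = emissivity(s, y)
        ds = s[1] - s[0]
        I[k], Q[k], U[k] = jI.sum() * ds, jQ.sum() * ds, jU.sum() * ds
    m = np.hypot(Q, U) / I                 # fractional polarization
    evpa = 0.5 * np.degrees(np.arctan2(U, Q))
    return ys, I, Q, U, m, evpa

# Toy usage with a constant test emissivity (assumption): j_Q = -0.5 j_I.
toy = lambda s, y: (np.ones_like(s), -0.5 * np.ones_like(s),
                    np.zeros_like(s))
ys, I, Q, U, m, evpa = transverse_cuts(toy)
\end{verbatim}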
The jet model introduced here contains several parameters, namely, the velocity vector of the segments $\boldsymbol{\beta}$, the angle $\theta_\rho$ between the local jet axis and the line of sight, and the angle $\theta_p$ between $\boldsymbol{\beta}$ and the line of sight. In previous studies, including that of \citet{LyutikovPG05}, these two angles were assumed to be the same, and no distinction was made between them. We consider these two angles separately and allow their values to differ. This is a significant difference from the previously considered models. The angle $\theta_\rho$ defines the orientation of a rectangular right-handed coordinate system (introduced to specify the \textbf{B}-field in a jet segment) relative to an observer. We chose this coordinate system in such a way that the $z$-axis coincides with the local jet axis, and the unit vector $\boldsymbol{n}$ directed along the line of sight to the observer lies in the $x$--$z$ plane. The $y$-axis lies in the plane of the sky. The angle $\theta_p$ is used for the Doppler factor calculation. Different jet segments have different values of $\theta_\rho$ and $\theta_p$, which are specified by Eqs.~11--13 of \citet{But18a}: \begin{equation} \sin \theta_i \left(i, \xi, \theta_0,\varphi \right)=\sqrt{f_1^2 (i, \xi, \varphi)+f_2^2(i, \xi,\theta_0, \varphi)}\,, \label{eq:sinTHpro} \end{equation} where \begin{equation*} \begin{split} f_1(i, \xi, \varphi)&=\cos i \sin \xi \sin \varphi+\sin i \cos \varphi \,,\\ f_2(i,\xi, \theta_0, \varphi)&= \cos i (\cos \xi \sin \theta_0+ \sin \xi \cos \theta_0 \cos \varphi ) \\ &\quad -\sin i \cos \theta_0 \sin \varphi\,, \end{split} \end{equation*} $\varphi$ is the azimuthal angle of a segment, $i=p$ if we calculate $\theta_p$, and $i=\rho$ if $\theta_\rho$ is calculated. We found the components of the vector $\boldsymbol{\beta}$ by introducing an auxiliary angle $\varkappa$ and using the scheme displayed in \autoref{fig:forBeta}. The angle $\text{BAK}=|\rho-p|$ because the local jet axis $z$, $\boldsymbol{\beta}$, and the cone generatrix passing through a given segment lie in the same plane, since the distance of the segment from the cone apex is significantly larger than the distance that the segment travels during a unit time interval. We found $\beta_x$ and $\beta_z$ from an examination of the ABK and AHK triangles, and $\beta_y$ from the BHK and ABK triangles: \begin{equation} \begin{split} \beta_x=&\beta \cos (\rho-p) \tan \varkappa\,,\\ \beta_y=&-\beta \sqrt{\sin^2(\rho-p)-\cos^2(\rho-p) \tan^2 \varkappa}\,,\\ \beta_z=&\beta \cos(\rho-p)\,. \end{split} \end{equation} We obtained $\tan \varkappa$ by comparing the expressions for AH derived from the triangles AHN and AHK: \begin{equation} \tan \varkappa=\frac{\cos \theta_p-\cos(\rho-p) \cos \theta_\rho}{\cos(\rho-p) \sin \theta_\rho}\,. \end{equation} \begin{figure*} \includegraphics[scale=1]{sbeta.pdf} \caption{a) Scheme of the coordinate system associated with a separate jet segment, with the model parameters indicated. The rectangle shows the jet segment located on the imaginary cone surface. The cone generatrix passing through it makes up the angles $p$ and $\rho$ with the velocity vector and the $z$-axis of the segment, respectively. The vector $\boldsymbol{n}$ denotes the line of sight and lies in the $x$--$z$ plane.
b) The velocity vector $\boldsymbol{\beta}$ and its components (highlighted in purple) in the adopted coordinate system.} \label{fig:forBeta} \end{figure*} For the numerical simulations of the polarization properties, we adopted values of the angle between the cone axis and the line of sight of $\theta_0=2^\circ$, 5$^\circ$, and 10$^\circ$. For the cone half-opening angle, we used only the typical value $\xi=1^\circ$. Our choice of the angle values is based on long-term VLBI observational data for several hundred sources \citep[see, e.g.,][]{Pushkarev09,Lister13,Lister16,PushkarevKLS17, Lister21}. For the jet segment speed, we chose the most common value $\beta=0.995$ (in units of the speed of light $c$), which corresponds to a Lorentz factor of 10. The choice of values for the angle between the jet segment velocity vector and the radial direction is based on the study of the movements of several hundred features identified over several epochs of observations \citep{Homan15, Lister21}, which showed that the transverse speed is lower than the radial one. Thus, the values of $p$ are small, and we took them to be equal to 0$^\circ$, 2$^\circ$, 3$^\circ$, 5$^\circ$, and 10$^\circ$. The largest uncertainty is in the choice of values for the angle $\rho$ between the local jet axis and the cone generatrix. To reduce the number of free parameters of the model and to allow a qualitative comparison of the obtained results, we selected the values of $\rho$ fulfilling the conditions $\rho/p=0$, 1, 2, 3, 5, 15, 25 and $\rho<90^\circ$. We performed calculations for a uniform distribution of emitting electrons with a power-law energy spectrum $N(E)\propto E^{-s}$ (where $s=2.5$) filling the jet segment. We assumed that the magnetic field strength decreases as $1/R$, where $R$ is the transverse distance from the local jet axis. In the case of the helical \textbf{B}-field in a jet segment, we introduced the angle $\psi^\prime$ between the magnetic field direction and the local jet axis in the jet reference frame. We assumed $\psi^\prime=0^\circ$, 10$^\circ$, 25$^\circ$, 45$^\circ$, 55$^\circ$, 65$^\circ$, 75$^\circ$, and 90$^\circ$, which correspond to different configurations of the \textbf{B}-field ranging from purely longitudinal to toroidal, respectively. To parameterise the ``spine-sheath'' magnetic field configuration, we used the distance $R_t$ from the jet axis at which the transition from the spine to the sheath occurs. The spine and sheath contain the toroidal and poloidal magnetic fields, respectively. In the simulation, all azimuthal magnetic field components are directed away from the observer at positive $y$ and towards the observer at negative $y$. We considered different speeds of the sheath ($\beta_s$), setting it equal to the spine speed ($\beta$), to 0.95, and to 0.745. As a result, we obtained 528 and 990 parameter sets for the helical field and the spine-sheath structure, respectively (see \autoref{tab:setnumber}). For each parameter set, we performed calculations changing the azimuthal angle $\varphi$ from 1$^\circ$ to 351$^\circ$ in increments of 10$^\circ$. In each parameter set, the Doppler factor $\delta$ changes due to changes in $\varphi$ (see \autoref{eq:sinTHpro}). The interval of $\delta$ changes is different for each case. Additionally, real jets differ in intrinsic intensity, which decreases with outward distance along the jet due to energy losses.
Therefore, to refer to the observed intensity in each particular jet, we divided the interval of $\delta$ changes into three parts with low, intermediate, and high values of $\delta$. When calculating the Stokes parameters, we integrated along the line of sight at 61 equidistant points on the cross-section of the jet projection on the sky plane. These points span the range from $-0.9$ to $0.9$ of the jet radius with a step of 0.03. To avoid the influence of any edge effects, we considered the part of the jet at distances from the local axis $\leq 0.9$. For an adequate comparison of the theoretical and observed distributions of polarization properties, we convolved the former with a one-dimensional Gaussian of FWHM equal to one-third of the jet width. There are two reasons for the one-dimensional convolution. First, the transverse distributions change smoothly with a gradual change in $\varphi$. Second, we constructed the transverse distributions with a $\varphi$ increment of 10$^\circ$, so the distance between the jet segments for which we calculate transverse distributions is sufficiently large. For example, for a jet length of 10 times its width, a circular two-dimensional Gaussian with the specified FWHM would occupy 1--3 simulated transverse distributions for the whole considered range of $\rho$. Points on the final simulated transverse distributions of polarization properties are colour-coded by the Doppler factor of the jet segment for which they were obtained: red, green, and blue refer to the high, intermediate, and low values of $\delta$, respectively, which are achieved with a given set of parameters. \begin{table*} \caption{Model parameter variations. } \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{Jet Geometry and Kinematics} & \multirow{2}{*}{Number of sets} \\ \cline{1-6} ~ & $\xi$, $^\circ$ & $\theta_0$, $^\circ$ & $p$, $^\circ$ & $\rho$, $^\circ$ & $\beta$ &~ \\ \hline Linear & 1 & 2, 5, 10 & 0 & 0 & 0.995 & 3 \\ Helical & 1 & 2, 5, 10 & 2, 3, 5, 10 & $\{1, 2, 3, 5, 15, 25\}\cdot p$; $\rho<90^\circ$ & 0.995 & 63 \\ \hline \multicolumn{6}{|c|}{Magnetic field configuration} & ~\\ \hline Helical & \multicolumn{5}{l|}{$\psi^\prime=0^\circ$, 10$^\circ$, 25$^\circ$, 45$^\circ$, 55$^\circ$, 65$^\circ$, 75$^\circ$, 90$^\circ$} & 8 \\ Spine-sheath & \multicolumn{5}{l|}{$R<R_t$, $\psi^\prime=90^\circ$} & 15 \\ ~ & \multicolumn{5}{l|}{$R>R_t$, $\psi^\prime=0^\circ$}& ~\\ ~ & \multicolumn{5}{l|}{$R_t=0.25$, 0.33, 0.5, 0.7, 0.9 of jet radius} & ~ \\ ~ & \multicolumn{5}{l|}{Sheath speed: $\beta_s=0.995$, 0.95, 0.745} & ~ \\ \hline \end{tabular} \end{center} \label{tab:setnumber} \end{table*} The final transverse distributions of the polarization properties are the stacked distributions obtained at different values of $\varphi$ for each parameter set. Therefore, these plots can be compared with distributions obtained from both (i) slices at different distances from the core for single-epoch data and (ii) a fixed distance interval on stacked multi-epoch maps. The latter is valid because, during the stacking epoch interval, jet parts characterized by different azimuthal angles passed through the fixed interval of distances from the core. Therefore, for comparison with the simulated results, we use the transverse profiles of stacked maps constructed with a large enough number ($>20$) of observing epochs and a wide enough time interval ($>15$~yrs) covered by them.
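To make the geometry and kinematics described above concrete, the following sketch directly transcribes \autoref{eq:sinTHpro} and the expressions for $\tan\varkappa$ and the components of $\boldsymbol{\beta}$, and evaluates $\theta_p$, $\theta_\rho$, and the Doppler factor over the grid of azimuthal angles used in the simulations. The standard relation $\delta=[\Gamma(1-\beta\cos\theta_p)]^{-1}$ is assumed for the Doppler factor; the specific parameter values are one of the sets listed in \autoref{tab:modparobj}. For this set, the computed $\delta$ spans roughly 6.6--18, consistent with the intervals quoted in the caption of \autoref{fig:distribs1}.

\begin{verbatim}
import numpy as np

deg = np.pi / 180.0

def sin_theta(i, xi, theta0, phi):
    """Eq. (1): sine of the angle between the line of sight and a
    direction making an angle i with the cone generatrix, for a
    segment at azimuthal angle phi."""
    f1 = np.cos(i) * np.sin(xi) * np.sin(phi) + np.sin(i) * np.cos(phi)
    f2 = (np.cos(i) * (np.cos(xi) * np.sin(theta0)
                       + np.sin(xi) * np.cos(theta0) * np.cos(phi))
          - np.sin(i) * np.cos(theta0) * np.sin(phi))
    return np.hypot(f1, f2)

def beta_xyz(beta, p, rho, th_p, th_rho):
    """Velocity components in the segment frame (z = local jet axis)."""
    c = np.cos(rho - p)
    tan_k = (np.cos(th_p) - c * np.cos(th_rho)) / (c * np.sin(th_rho))
    bx = beta * c * tan_k
    # the clip guards against tiny negative arguments from round-off
    by = -beta * np.sqrt(np.clip(np.sin(rho - p)**2
                                 - c**2 * tan_k**2, 0.0, None))
    bz = beta * c
    return bx, by, bz

xi, theta0, p, rho, beta = 1 * deg, 5 * deg, 3 * deg, 45 * deg, 0.995
gamma = 1.0 / np.sqrt(1.0 - beta**2)          # Lorentz factor ~ 10
phis = np.arange(1.0, 352.0, 10.0) * deg      # the grid used in the text
th_p = np.arcsin(sin_theta(p, xi, theta0, phis))
th_rho = np.arcsin(sin_theta(rho, xi, theta0, phis))
bx, by, bz = beta_xyz(beta, p, rho, th_p, th_rho)
delta = 1.0 / (gamma * (1.0 - beta * np.cos(th_p)))  # Doppler factor
print("delta range: %.1f - %.1f" % (delta.min(), delta.max()))
\end{verbatim}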
\section{Results and comparison with observational data} \label{sec:res} Typically, the observed linear polarization degree shows a U-shaped transverse profile, i.e., it is low near the local jet axis and increases towards the jet edges \citep{Pushkarev17, MOJAVE_XXI}. In some cases, a W-shaped profile is observed, e.g., in BL~Lac and TXS~1611+343 \citep{MOJAVE_XXI}. The cuts of polarized intensity ($P=\sqrt{Q^2+U^2}$) have one or, strikingly, two peaks shifted off the jet ridgeline in total intensity. As for the EV distribution over a source, there are several typical patterns, including predominantly parallel, perpendicular, or the so-called ``spine-sheath'' configuration with the EV nearly aligned with the local jet direction near the jet axis and transverse at the edge(s) \citep{Attridge99,Gabuzda00,Pushkarev05,ListerHoman05}. \subsection{Linear jet with radial outward motion} In our geometrical model, the transition to the linear jet case occurs by setting a fixed $\varphi$. We use the values of $\varphi$ at which the jet viewing angle is 2$^\circ$, 5$^\circ$, and 10$^\circ$ (as follows from \autoref{eq:sinTHpro}). Thus, $\varphi=101^\circ$ for $\theta \approx 2^\circ$ and $\varphi=91^\circ$ for the other $\theta$ values. The jet motion is radial if $p=0^\circ$. In this case, the Stokes $U$ is always 0 for any \textbf{B}-field configuration. Therefore, the EVs are exactly perpendicular ($Q<0$) or parallel ($Q>0$) to the local jet axis. \begin{figure*} \includegraphics[scale=0.5]{helical_field_linear_jet_0+Q.PDF} \caption{Transverse distributions of total and linearly polarized intensity, polarization degree, and the Stokes $Q$ (from top to bottom) for a helical magnetic field with a pitch angle $\psi^\prime$ of 25$^\circ$, 55$^\circ$, 75$^\circ$, and 90$^\circ$ (from left to right). Dotted, dashed, and solid lines correspond to jet viewing angles of 2$^\circ$, 5$^\circ$, and 10$^\circ$, respectively.} \label{fig:linear_helical} \end{figure*} \begin{figure*} \includegraphics[scale=1.1]{spine-sheath_field_linear_jet_0+Q.pdf} \caption{Transverse distributions of total and linearly polarized intensity, polarization degree, and the Stokes $Q$ (from top to bottom) for the ``spine-sheath'' magnetic field configuration with $R_t$ of 0.25, 0.5, and 0.9 (from left to right). Dotted, dashed, and solid lines correspond to jet viewing angles of 2$^\circ$, 5$^\circ$, and 10$^\circ$, respectively. Blue and red lines indicate sheath speeds $\beta_s=\beta=0.995$ and $\beta_s=0.745$, respectively. For $Q>0$ or $Q<0$, the EVs are parallel or orthogonal to the local jet axis, respectively.} \label{fig:linear_spine-sheath} \end{figure*} For the longitudinal \textbf{B}-field ($\psi^\prime=0^\circ$), the PD reaches its maximum theoretical value \citep[$(\alpha+1)/(\alpha+5/3)\approx0.72$ for $\alpha=(s-1)/2=0.75$, ][]{Pachol}, and its transverse profile is flat. The corresponding cuts of total and polarized intensity have one peak and a slight asymmetry. With an increase of $\psi^\prime$, a central concavity appears and becomes deeper. \autoref{fig:linear_helical} shows the transverse distributions of total and polarized intensity and polarization degree for the different viewing angles and magnetic field pitch angles $\psi^\prime\! \geqslant \! 25^\circ$. The polarized intensity distributions have a pronounced asymmetry. Only at $\psi^\prime=55^\circ$ does this distribution have a two-peaked shape, with relatively low peak intensities. Additionally, the PD profiles have a strongly asymmetric W-shape.
With a further increase in $\psi^\prime$, the polarized intensity distributions become more symmetrical; their peaks are approximately half the total intensity. The $m$-profile has a central peak, which is very high and does not correspond to the observational data of any object. Let us consider the case of the ``spine-sheath'' \textbf{B}-field configuration (\autoref{fig:linear_spine-sheath}). For a thick sheath (the transition distance $R_t=0.25$ in units of the jet radius) moving at the same speed as the spine, the polarized intensity distribution has two peaks, and there is a concavity in the distribution of the polarization degree. The Stokes $Q$ is typically negative. The thinner and slower the sheath, the higher the value of the Stokes $Q$. As \autoref{fig:linear_spine-sheath} shows, with a decrease in the sheath thickness and/or its speed, the concavity in the fractional polarization profile becomes deeper for the cases $R_t\leq 0.5$. At the same time, the peak values of the polarized intensity and the polarized intensity near the local jet axis first decrease and then increase. This behaviour is caused by the growth of the Stokes $Q$ from negative values to zero. Once the Stokes $Q$ exceeds zero, the polarization degree and polarized intensity profiles acquire a central peak, which increases with further decreasing thickness and speed of the sheath. Thus, a W-shape appears in the polarization degree distributions. With a further increase in the Stokes $Q$ in the case of a thin sheath ($R_t=0.9$), the central peak in the polarization degree distribution increases. It is accompanied by the appearance, growth, and eventual absolute predominance of the central peak in the polarized intensity profile. A spine-sheath structure also shows up in the EV profile: inside the jet, the EV is directed along the axis and becomes transverse near the jet edges. Thus, the simulated profiles of $P$, $m$, and EV agree well with observations for both the helical and ``spine-sheath'' magnetic field configurations. However, the model distributions have no point scatter, since the jet viewing angle is constant. A point spread in the simulated distributions can be obtained through \textbf{B}-field parameter fluctuations or by assuming a varying degree of magnetic field disordering along the jet. Also, variations of the electron number density and of the spectral index of the power-law energy distribution of emitting electrons can create the point spread. However, the steady pattern of transverse cuts detected in the observational data \citep{MOJAVE_XXI} would then require the fine-tuning of parameters needed to reproduce the observed characteristics in theoretical profiles to occur in at least several jets, which casts doubt on this assumption. Further, we will show that it is possible to reproduce the observed point spread naturally through the change in the angle between the jet segment velocity vector and the line of sight as the segments move along curved (helical, for a complete rotation cycle) trajectories. \subsection{Non-radial motion in helical jet} Here we investigate how the transverse distributions of the polarization properties change qualitatively with an increase in the pitch angle $\psi^\prime$ of the helical magnetic field. \subsubsection{Poloidal field} For an entirely longitudinal \textbf{B}-field, the distribution of the polarization degree is flat at a value near the maximum theoretical limit. The polarized intensity distribution has one central peak.
The distributions of EV deviations from the local jet axis ($|\text{PA}_\text{jet}-\text{EVPA}|$) are flat, with values near $90^\circ$, or vary throughout the available interval. With an increase in the deviation from the poloidal \textbf{B}-field, an asymmetry in all distributions arises and a central concavity appears in the polarization degree distribution (\autoref{fig:heljetnonrad}). \subsubsection{Helical field} For $\psi^\prime=45^\circ$ and 65$^\circ$, the distributions are strongly asymmetric. The polarized intensity distribution has two peaks of different magnitudes. Not only the direction of the \textbf{B}-field twist but also the value of $p$ determines which peak dominates. The PD profiles are asymmetric, showing U or W shapes with high peak values. There are three types of $|\text{PA}_\text{jet}-\text{EVPA}|$ distributions: (i) an unsystematic spread of points in the entire allowable range of values; (ii) EVs mainly perpendicular to the jet axis at one edge of the jet and longitudinal at the other; (iii) small values of $|\text{PA}_\text{jet}-\text{EVPA}|$ at the axis and large ones at the edges of the jet for large pitch angles exceeding $65^\circ$. It is important to note that in case (ii), for the same sign of the magnetic field pitch angle but various other parameters, positive values of $R$ can correspond to both longitudinal and transverse jet EVs (\autoref{fig:heljetnonrad}, $|\text{PA}_\text{jet}-\text{EVPA}|$ distributions for $\psi^\prime=45^\circ$). This fact indicates the ambiguity of determining the direction of the \textbf{B}-field based only on the transverse EV profile, since in the model profiles, at $R>0$, the magnetic field lines twirl away from the observer, and at $R<0$, they point towards the observer. The skewness of the distributions of $I$, $P$, and PD is opposite for $p=2^\circ$ and $p=5^\circ$ and thus cannot help to determine the twirl direction of the magnetic field. The key to solving this problem may be that (1) jet components with different Doppler factors have different profiles; (2) for fixed $\psi^\prime$ and different $p$, the profiles corresponding to low and high values of the Doppler factor are different. With a further increase of $\psi^\prime$, the asymmetry dilutes. One peak of $P$ begins to dominate, while the other has a small peak value or disappears altogether in some distributions for $\psi^\prime=65^\circ$. The polarization degree cut is symmetrically bell-shaped, rarely W-shaped. \subsubsection{Toroidal field} For the toroidal magnetic field $(\psi^\prime=90^\circ)$, all distributions become symmetrical. The polarized intensity profile has one central peak. The polarization degree distribution is mainly bell-shaped with a high maximum value. The EVs are predominantly longitudinal on the axis and transverse towards the edges of the jet. The width of the region occupied by the longitudinal EVs depends on both $\theta_0$ and $p$. In some cases, the $|\text{PA}_\text{jet}-\text{EVPA}|$ profiles lie mainly within $<20^\circ$ or show an unsystematic spread of points over the entire range of values. \begin{figure*} \centering \includegraphics[scale=0.7]{shelical_th0=2_p=2-5_rhop=15.pdf} \caption{Transverse distributions of total and linearly polarized intensity, polarization degree, and EV deviations from the local jet axis (from top to bottom) for different angles $\psi^\prime$ between the magnetic field and the local jet axis.
The distributions are given for different angles between the velocity vector of the jet segments and the radial direction: $p=2^\circ$ (upper panel), $p=5^\circ$ (lower panel). Red, green, and blue colours correspond to high, medium, and low Doppler factor values lying in the range possible for the given geometrical and kinematic parameters.} \label{fig:heljetnonrad} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.5]{scol_th0=2_p=2_rho=30.pdf} \caption{Transverse distributions of total intensity and linear polarization properties in the case of the spine-sheath \textbf{B}-field topology. Model parameters are shown. The left and right sides of the plot correspond to sheath velocities of $\beta_s=0.995$ (the same as the spine velocity) and 0.745, respectively. The spine radius $R_t$ is 0.25, 0.5, and 0.9 (from top to bottom) of the jet radius. We associate red, green, and blue points with high, medium, and low Doppler factor values corresponding to the given model parameters.} \label{fig:spine-sheath_heljet} \end{figure*} \subsubsection{Spine-sheath configuration} The ``spine-sheath'' magnetic field structure produces mainly U- and W-shaped transverse profiles of the polarization degree (\autoref{fig:spine-sheath_heljet}). The $m$-profiles are rarely bell-shaped if the sheath is relatively thin. If the sheath and spine velocities are equal, U-shaped $m$-cuts appear for $R_t\leq 0.33$. In this case, the polarized intensity profile has two peaks equidistant from the jet axis. The EV distributions are flat and almost transverse to the local jet axis, but $|\text{PA}_\text{jet}-\text{EVPA}|$ changes in a wide range, down to 0, for some viewing angles. For $R_t=0.5$, W-shaped $m$-cuts appear in some part of the parameter space, and a third peak of relatively small value arises at the jet axis in the $P$-profiles. A ``spine-sheath'' structure shows up in the $|\text{PA}_\text{jet}-\text{EVPA}|$ distribution, in which the EVs are longitudinal near the axis and transverse at the edges of the jet. Note that the jet width with longitudinal EVs is about two times smaller than the width occupied by the toroidal \textbf{B}-field. With a further increase in $R_t$, the central peak values in the $P$- and $m$-cuts increase. The central peak in the $P$-profiles begins to emerge and dominate in some parameter sets. The width of the region with longitudinal EVs increases too. With a further increase in $R_t$, the $P$- and $m$-cuts become bell- and W-shaped, respectively. The speed of the sheath with the longitudinal \textbf{B}-field influences the transverse cuts of the polarization properties. Namely, the slower and thinner the sheath, the faster the transition described above from two- to one-peaked $P$-profiles, from U-shaped to W-shaped $m$-cuts, and from flat to spine-sheath distributions of $|\text{PA}_\text{jet}-\text{EVPA}|$. We emphasize that a particular qualitative profile of, e.g., $m$ can correspond to qualitatively different cuts of the other polarization parameters within the considered space of model parameters. For example, the U-shaped $m$-cut (for $R_t=0.25$ in the top and bottom panels of \autoref{fig:spine-sheath_heljet}) corresponds to different profiles of EV deviations from the local jet axis. The W-shaped $m$-profiles for $R_t=0.5$, $\beta_s=0.995$ and for $R_t=0.9$, $\beta_s=0.995$ correspond to $P$-cuts having two and one peaks, respectively.
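For clarity, the ``spine-sheath'' parameterisation used in this subsection can be summarised by the following sketch, which returns the field direction, a $1/R$ field strength, and the flow speed at a transverse position within a segment. The sign convention (azimuthal component pointing away from the observer at positive $y$) and the parameter values follow the set-up of \autoref{sec:simul}; the normalisation of the field strength and the softening of the $1/R$ divergence at the axis are arbitrary choices made only for this sketch.

\begin{verbatim}
import numpy as np

def spine_sheath(x, y, R_t, beta_spine=0.995, beta_sheath=0.745):
    """B-field (unit direction times a 1/R strength) and flow speed at
    transverse position (x, y); z is along the local jet axis and the
    observer lies in the x-z plane. Spine (R < R_t): toroidal field,
    psi' = 90 deg; sheath (R >= R_t): poloidal field, psi' = 0 deg."""
    R = np.hypot(x, y)
    if R < R_t:
        # toroidal: away from the observer at y > 0, towards it at y < 0
        b_dir = np.array([-y, x, 0.0]) / max(R, 1e-12)
        speed = beta_spine
    else:
        b_dir = np.array([0.0, 0.0, 1.0])   # longitudinal field
        speed = beta_sheath
    strength = 1.0 / max(R, 0.05)           # B ~ 1/R, softened at the axis
    return strength * b_dir, speed

# Sample the configuration across a transverse cut at x = 0:
for y in np.linspace(-0.9, 0.9, 7):
    B, v = spine_sheath(0.0, y, R_t=0.5)
    print("y=%+.2f  B=%s  beta=%.3f" % (y, np.round(B, 2), v))
\end{verbatim}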
\begin{figure*} \centering \includegraphics[scale=0.4]{sI2peaks.pdf} \caption{Examples of the obtained two-peaked intensity profiles with the corresponding transverse distributions of polarization properties. Red, green, and blue colours correspond to high, middle, and low values of the Doppler factor within the permissible range.} \label{fig:twoIpeaks} \end{figure*} A serendipitous finding of our modelling is the two-peaked total intensity profile. We obtained such cuts for different sheath speeds and $\rho/p$ values, but only for $\theta_0=10^\circ$ and a spine width $R_t \geq 0.5$. We show examples of the transverse two-peaked $I$-profiles and the corresponding polarization property cuts in \autoref{fig:twoIpeaks}. There is no evident combination of parameters for which the two-peaked total intensity distribution arises. Most likely, with an increase in $\theta_0$, such profiles will occur more often. The search for the conditions producing two-peaked $I$-profiles seems to be an attractive problem, but it is the subject of future research. Note that two-peaked transverse $I$-cuts were also obtained in simulations for the reverse pinch field \citep{PriorGourg19} and the toroidal \textbf{B}-field \citep{KramerMcD21}. For $\psi^\prime=90^\circ$, we overwhelmingly obtained a jet that is bright on the axis. There are several sets of parameters at $3\leq \rho/p \leq 15$, for each of which both single-peaked and two-peaked transverse $I$-profiles occur. \subsection{Comparison with observations} To compare the simulated distributions of polarization properties with the observed ones, we selected three objects with significantly different transverse distributions of the polarization degree. Although \citet{Hovatta12} found that the Faraday rotation in AGN jets from the MOJAVE sample amounts to only several degrees and occurs mainly in the region close to the 15~GHz VLBI core, we selected sources with low detected Faraday rotation to be confident in our conclusions. \autoref{fig:threeobj} shows the stacked VLBI maps of the jets 0333+321 (NRAO~140), 0836+710 (4C~+71.07), and 1611+343 (DA~406) in total and polarized intensity. The PD and the EV direction are also presented. All maps are taken from \citet{MOJAVE_XXI}. The black rectangles denote the jet parts used to construct the distributions transverse to the jet ridgeline. Then, by visual comparison with the simulation results, we searched for a simultaneous correspondence of all four observed distributions with the theoretical ones. We found matches for all three considered objects (\autoref{tab:modparobj}, Figures~\ref{fig:distribs1}-\ref{fig:distribs3}). The matching model parameters are unique in the case of 0836+710 and lie in narrow intervals for the other two sources. \begin{figure*} \includegraphics[scale=0.4]{squasars.pdf} \caption{Stacked maps of the quasars 0333+321, 0836+710, and 1611+343 (from left to right). Black contours show the total intensity. The degree of linear polarization is represented by colour. Blue contours of polarized intensity are constructed relative to the shifted lowest contour of the total intensity. Sticks display the EV directions.
Black rectangles indicate the regions in which we took the observed transverse distributions for further analysis.} \label{fig:threeobj} \end{figure*} \begin{table*} \caption{Model parameters that reproduce qualitatively well the character of the observed transverse distributions of polarization properties.} \label{tab:modparobj} \begin{tabular}{|c|c|c|c|c|l|} \hline Object & $z$ & $\theta_0$ ($^\circ$) & $p$ ($^\circ$) & $\rho$ ($^\circ$) & Magnetic field configuration: parameters \\ \hline 0333+321 & 1.3 & 5 & 3 & 45 & ``spine-sheath'': $R_t=0.33$, $\beta_s=0.95$, $\beta=0.995$ \\ ~ & ~ & 2 & 2 & 50 & ``spine-sheath'': $R_t=0.33$ or $R_t=0.25$, $\beta_s=0.95$, $\beta=0.995$ \\ ~ & ~ & 2 & 2 & 30 & ``spine-sheath'': $R_t=0.25$, $\beta_s=0.745$, $\beta=0.995$ \\ 0836+710 & 2.2 & 10 & 2 & 4 & helical: $\psi^\prime=25^\circ$, $\beta=0.995$ \\ 1611+343 & 1.4 & 2 & 5 & 25 & ``spine-sheath'': $R_t=0.9$, $\beta_s=\beta=0.995$ \\ ~ & ~ & 2 & 10 & 30 & helical: $\psi^\prime=90^\circ$, $\beta=0.995$ \\ \hline \end{tabular} \end{table*} \begin{figure*} \includegraphics[scale=0.45]{s0333p321.pdf} \caption{Observed (black, top panel) and simulated (colour, bottom panel) transverse distributions of total and polarized intensity, polarization degree, and deviations of the EVs from the local jet axis (from left to right) for the quasar 0333+321. We took the observed data within 4--8~mas of the VLBI core. The model parameters belong to the ``spine-sheath'' magnetic field configuration and are $R_t=0.33$, $\beta_s=0.95$, $\theta_0=5^\circ$, $p=3^\circ$, $\rho=45^\circ$. Red, green, and blue points correspond to Doppler factors within the intervals 14.3--18.1, 10.5--14.3, and 6.6--10.5, respectively.} \label{fig:distribs1} \end{figure*} \begin{figure*} \includegraphics[scale=0.45]{s0836p710.pdf} \caption{Observed (black, top panel) and simulated (colour, bottom panel) transverse distributions of total and polarized intensity, polarization degree, and deviations of the EVs from the local jet axis (from left to right) for the quasar 0836+710. We took the observed data within 4--10~mas of the VLBI core. The helical magnetic field with $\psi^\prime=25^\circ$ reproduces the observed data under the model parameters $\theta_0=10^\circ$, $p=2^\circ$, $\rho=4^\circ$. Red, green, and blue points correspond to Doppler factors within the intervals 5.9--7.1, 4.8--5.9, and 3.6--4.8, respectively. } \label{fig:distribs2} \end{figure*} \begin{figure*} \includegraphics[scale=0.45]{s1611p343.pdf} \caption{Observed (black, top panel, within 2--4~mas of the VLBI core) and simulated (colour, bottom panel) transverse distributions of total and polarized intensity, polarization degree, and deviations of the EVs from the local jet axis (from left to right) for the quasar 1611+343. The model parameters belong to the ``spine-sheath'' magnetic field configuration without a difference between the spine and sheath speeds: $R_t=0.9$, $\theta_0=2^\circ$, $p=5^\circ$, $\rho=25^\circ$. Red, green, and blue points correspond to Doppler factors within the intervals 13.0--15.5, 10.4--13.0, and 7.9--10.4, respectively. } \label{fig:distribs3} \end{figure*} The polarization properties of the 0836+710 jet are well reproduced only by a helical field (\autoref{fig:distribs2}). In this case, even the asymmetry of the theoretical and observed distributions is consistent for all polarization properties. This fact establishes that the helical \textbf{B}-field is directed toward the observer on the brightest side of the jet.
The ``spine-sheath'' structure is unsuitable for this source because, in this case, either a W- or a bell-shaped PD distribution corresponds to a single maximum in the distribution of the polarized intensity, whereas the observed U-shaped PD distribution corresponds to a two-peaked distribution of the polarized intensity. On the other hand, only the ``spine-sheath'' \textbf{B}-field structure is suitable for interpreting the observed transverse distributions of the 0333+321 jet (\autoref{fig:distribs1}). In the case of the helical field, the trend of transverse distribution shape changes with $\psi^\prime$ obtained from our simulations indicates that two comparable peaks in the distributions of polarized intensity are realised for $\psi^\prime=45^\circ-65^\circ$\footnote{Fig.~\ref{fig:heljetnonrad} shows a small part of the obtained distributions; all of them are available at \url{ftp://jet.asc.rssi.ru/outgoing/pushkarev/transverse_cuts}}. The PD distributions associated with them have a strongly asymmetric W-shape. Additionally, the variation of the EV deviations from the local jet axis occurs over the entire possible range of values. Among the model distributions for the ``spine-sheath'' \textbf{B}-field topology, we found distributions qualitatively corresponding to those observed for 0333+321 for several parameter sets (see \autoref{tab:modparobj}). The observed distributions of the quasar 0333+321 exhibit a weak asymmetry, which we can interpret if the field in the spine is not toroidal but helical with a high value of $\psi^\prime$. Previously, \citet{Asada08} performed detailed VLBI studies of the jet in 0333+321 at frequencies of 5 and 8~GHz. From the analysis of the EV distribution, the authors concluded that in the radiating region, the \textbf{B}-field direction is at an angle $\gtrsim 80^\circ$ to the local jet axis, while in the surrounding jet sheath, containing thermal electrons and thus acting as an external Faraday screen, the \textbf{B}-field inclination is $<10^\circ$. If we assume that the considered sheath of the \textbf{B}-field extends to the regions with thermal plasma at larger distances from the jet axis, the conclusions of \citet{Asada08} agree well with our simulation results. Moreover, \citet{Asada08} estimated the maximum jet angle with the line of sight at $10\fdg4$ based on the kinematics data of \citet{Kellermann04}. Recent kinematic data \citep{Lister21} also confirm this value. From \autoref{eq:sinTHpro} it follows that for $\theta_0=5^\circ$, $p=3^\circ$ and for $\theta_0=2^\circ$, $p=2^\circ$, the angle between the velocity vector of the jet segment and the line of sight does not exceed $8\fdg2$ and $4\fdg2$, respectively. The transverse distributions of the jet 1611+343 correspond to the case of a thin sheath with a speed equal to that of the spine. Perhaps, owing to the insignificant influence of such a sheath, the observed profiles also correspond to some theoretical ones for the toroidal magnetic field (\autoref{tab:modparobj}). If we account only for the modelled points corresponding to large values of the Doppler factor, then for 1611+343 there is a correspondence at $\psi^\prime=75^\circ$. Thus, we obtained a good qualitative correspondence with the observed data even using a rough parameter grid. Figures~\ref{fig:distribs1}-\ref{fig:distribs3} show that the values in the simulated distributions of the PD and the polarized intensity are in all cases slightly larger than in the corresponding observed distributions.
As we discuss in the next section, a better correspondence can perhaps be achieved by varying the model parameters without introducing disorder of the magnetic field, which, as an additional free parameter, would noticeably simplify the fitting of theoretical distributions to the observed ones. \section{Discussion} \label{sec:discus} Currently, long-term polarimetric VLBI monitoring of several hundred jets of active galactic nuclei has been performed within the framework of the MOJAVE project at the observing frequency of 15~GHz. The analysis of these data revealed a tendency for the fractional polarization to increase towards the jet edges, which is present at various distances from the VLBI core \citep{MOJAVE_XXI}. This finding indicates a well-ordered magnetic field on the probed parsec scales. For example, \citet{Clausen-BrownLyutikov11} showed that an increase in the PD towards the jet edge, accompanied by a spectral flattening, is due to a helical \textbf{B}-field. Therefore, it is necessary to investigate the polarization properties created by a completely ordered magnetic field, covering the widely discussed topologies (namely, the helical field and the ``spine-sheath'' structure), before drawing conclusions about the degree of disorder or the presence of a turbulent component of the \textbf{B}-field. The latter is often used to interpret sudden jumps in the EVPA observed in the optical domain for blazars \citep[for example, ][ for 0836+710]{Raiteri19}. However, \citet{LyutikovKrav17} showed that variations of the orientation and velocity of the jet's radiating region can reproduce the observed behaviour of the EVPA (smooth swing rotations and sharp jumps by $90^\circ$), accompanied by random changes in the PD and intensity, even in the presence of a strictly ordered helical \textbf{B}-field. Changes in the jet PA and feature speeds are confirmed by long-term observations of several hundred sources carried out within the framework of the MOJAVE project \citep{Lister13,Lister21}. On the other hand, in a well-ordered \textbf{B}-field, an inhomogeneous distribution of synchrotron photons affects the spectral energy distribution (SED) of both the synchrotron and self-Compton radiation \citep{Joshi20}. Thus, the variations in the ratio of the SED synchrotron and Compton peaks and the frequency shifts of these peaks observed during EVPA variations \citep[e.g., those observed for the blazar 0836+710, ][]{Raiteri19} can also be interpreted within a scenario of a well-ordered magnetic field in a jet. Here we consider two generally accepted topologies of the \textbf{B}-field, namely, the helical one and the ``spine-sheath'' structure. The helical magnetic field is a natural consequence of the jet formation and collimation models \citep[e.g., ][]{BlandfordZnajek77, BlandfordPayne82, Nakamura01, Lovelace02}. The helical \textbf{B}-field explains significant gradients of the Faraday rotation measure (RM) across the jets \citep[e.g., ][ and references therein]{Gabuzda21}. Moreover, while earlier the presence of different RM signs was considered a necessary indication of a helical \textbf{B}-field in the surrounding jet environment, this is no longer the case. Investigating the temporal changes of the transverse RM gradient in the jet of 3C~273, \citet{LisakovKrav21} have shown that even an RM with the same sign at the two jet edges can indicate a helical \textbf{B}-field in the surrounding jet medium. This occurs because the jet ``highlights'' different parts of the field at different epochs.
However, the Faraday rotation occurs outside the synchrotron emitting region, whereas only the observed EVs can determine the magnetic field configuration within a relativistic jet. From the analysis of single-epoch VLBI observations of individual jet features, \citet{ListerHoman05} identified the preferred directions of the EV. Namely, the EV is usually either aligned with or orthogonal to the local jet axis. \citet{LyutikovPG05} explained this by relativistic effects with an initially helical \textbf{B}-field in the jet reference frame. In their formalism, the EV is strictly parallel or perpendicular to the jet because the Stokes $U$ is zero under integration both along the line of sight and across the jet projection. Deviations from $U=0$ could be interpreted, for example, by \textbf{B}-field disordering or by introducing another additional parameter into the model. However, \citet{LyutikovPG05}, like many other researchers, considered the velocity vector to be co-directional with the local axis of the jet. The MOJAVE program results revealed changes in the inner jet PA \citep{Lister13, Lister21} and the motion of jet features with acceleration \citep{Homan15, Lister21} and along curved trajectories \citep{Lister16, Lister19}, yet confined within an angle in the plane of the sky that is fixed for each source \citep{PushkarevKLS17}. For one of the closest AGNs, M~87, using high angular resolution data, \citet{MertensLobanov16} found a rotational component in the jet flow motion. A similar azimuthal component of the velocity vector is expected from theoretical models \citep[e.g., ][]{Hardee82, Beskin17}. All the abovementioned points oblige us to use a more complex geometric and kinematic jet model for the simulations. For this purpose, we considered a jet whose axis forms a helix on the imaginary cone surface. The segments of this jet not only move outward but also rotate around its axis \citep{But18a}. This model has proven successful in matching the quasi-periods of long-term variability in the radio and optical ranges and in interpreting the changes in the inner jet position angle of the S5~0716+714 \citep{But18b} and OJ~287 \citep{ButP20} blazars. A similar curved jet rotating around its axis was also considered by \citet{VilRait99} for Mrk~501 and by \citet{Raiteri17} for CTA~102. For the first time, we performed simulations under the assumption that the segment motion does not occur along its axis ($p\neq \rho$) and, moreover, that the axis does not coincide with the radial direction. Only in this case is the integral of the Stokes $U$ over the line of sight different from zero, which results in a variety of angles of EV deviation from the local jet axis. Note that the integral of the Stokes $U$ over the line of sight is also zero for $p=\rho>0^\circ$. The distributions of the remaining polarization properties for the helical \textbf{B}-field are in some cases qualitatively consistent with previously obtained simulation results. For example, like \citet{Murphy2013}, we reproduced: 1) parallel and perpendicular orientations of the EVs relative to the jet axis, or their combinations; 2) significant changes in the polarization degree; 3) one- and two-peaked distributions of the polarized intensity. \citet{Murphy2013} concluded that a change of the pitch angle by several degrees (from 41$^\circ$ to 53$^\circ$) is necessary to obtain different families of EV distributions.
Our results indicate that changes in the pitch angle of the field by several tens of degrees are needed to qualitatively change the transverse distribution of the EVs (see \autoref{fig:heljetnonrad}, lines~4 and 8). However, in some sets of model parameters, different transverse distributions of the EVs are realized depending on $\varphi$. For example, at $\psi^\prime=25^\circ$ and $p=2^\circ$, jet segments with relatively low total intensity (blue and green points) have EVs approximately orthogonal to the jet (\autoref{fig:heljetnonrad}). In contrast, for the jet segments that are bright due to a high Doppler factor, the EVs are orthogonal on one jet side and longitudinal on the other. Moreover, the character of the EV distribution varies depending on $p$. Namely, with an increase of the angle between the jet segment velocity vector and the cone generatrix ($p$) to $5^\circ$, the transverse EVs appear on the side of the jet that in the previous case had longitudinal EVs, and vice versa. In contrast to the previous case, such a longitudinal-transverse distribution of the EVs is present in the low-intensity segments, whereas in the high-intensity ones the EVs are orthogonal to the jet. The dependence of the EV transverse distribution on both $\psi^\prime$ and the geometric and kinematic parameters of the jet segments does not allow us to uniquely determine the direction of the \textbf{B}-field twist based only on the EV distribution. \autoref{fig:heljetnonrad} shows that longitudinal EVs on the axis and transverse EVs at the edge exist at a constant $\psi^\prime$, whose value can lie in a wide range. This fact is consistent with the conclusions of \citet{LyutikovPG05, Murphy2013, Clausen-BrownLyutikov11}, while \citet{Gabuzda21} emphasized the necessity of a change in $\psi^\prime$ across the jet for such EV transverse distributions. It is interesting to note the changes in the total and polarized intensity depending on the model parameters (see \autoref{fig:heljetnonrad}). For $p=2^\circ$, the distributions of total intensity at $\psi^\prime=45^\circ$ and $65^\circ$ are very similar and have a peak shifted to the right side. The corresponding distributions of polarized intensity have two peaks of different magnitudes. Whether the dominant peak is on the left or the right depends on $\psi^\prime$. Comparing these distributions with those at $p=5^\circ$ and the corresponding $\psi^\prime$s, we can see that the dominant peaks switch sides. Thus, all the transverse distributions of the polarization properties depend not only on the angle and direction of the magnetic field twist but also on the geometrical and kinematic parameters of the jet segments. This fact indicates the necessity of comparing simulations with observations simultaneously for all polarization parameters. To compare the MOJAVE observational data with the simulation results, we took the angle of the cone axis with the line of sight equal to $2^\circ$, $5^\circ$, and $10^\circ$ in the observer's reference frame. The jet viewing angle for the blazars in the MOJAVE flux-density-limited sample is typically $<10^\circ$ \citep{PushkarevKLS17,MOJAVE_XIX}. Since the $\varphi$ of each feature observed on a single-epoch VLBI map is unknown, we used stacked total intensity and linear polarization maps to compare the simulation results with the observational data \citep{MOJAVE_XXI}. Thus, we reduced the influence of individual short-term events and increased the sensitivity.
Using data in a fixed range of distances from the VLBI core, we can be confident that these data cover the entire range of $\varphi$, because even over a period shorter than that considered here the jet components, on average, completely fill a region of fixed opening angle on stacked maps \citep{PushkarevKLS17}. In addition to the helical \textbf{B}-field with different pitch angles, we examined the ``spine-sheath'' \textbf{B}-field configuration. Note that we did not initially associate it with the ``spine-sheath'' structure of the EV distribution observed in some sources \citep{Pushkarev05}, which can itself be interpreted by a helical field. A ``spine-sheath'' \textbf{B}-field topology can arise in various ways. For example, a longitudinal field at the jet edge and the deceleration of the outer layers of the outflow can result from the jet's interaction with the environment \citep{Laing1980, Ghisellini05}. Alternatively, the Blandford-Znajek \citep{BlandfordZnajek77} and Blandford-Payne \citep{BlandfordPayne82} mechanisms may form the spine and sheath, respectively. Different formation processes can result in a different twist of the magnetic field: almost toroidal in the spine and close to longitudinal at the jet edges, as, for example, in 0333+321 \citep{Asada08}. Analyzing the transverse gradient of the rotation measure, \citet{Gabuzda14} found evidence of a helical \textbf{B}-field in jets showing a change in the EV direction across the jet. Considering that the Faraday rotation mainly occurs in the external screen, this finding, together with our conclusion about the ``spine-sheath'' \textbf{B}-field topology, can indicate a \textbf{B}-field configuration similar to that in the ``cosmic battery'' model \citep[e.g.,][and references therein]{Contopoulos09}, but extending far beyond the boundaries of the detected jet. We have obtained simulated transverse distributions of total and polarized intensity, polarization degree, and $|\text{PA}_\text{jet}-\text{EVPA}|$ for a wide range of model parameters, and we have identified the general patterns of changes in the shape of the transverse distributions as the model parameters are varied. When comparing the simulation results with the observational data for individual sources, we found a good qualitative correspondence, which holds for all three considered \textbf{B}-field configurations. We emphasize that we considered a homogeneous and isotropic distribution of emitting electrons in the jet and a constant jet velocity. The exception is the case of the ``spine-sheath'' structure with a slow sheath, but even there the speed was constant within the spine and within the sheath. As follows from \autoref{eq:parstok}, the Stokes parameters depend on the Doppler factor, which, in turn, depends on the angle between the velocity vector and the line of sight $\theta_p$ and on the velocity modulus $\beta$; the former dependence is stronger. We have considered wide ranges of $\theta_p$, and hence of $\delta$, for constant $\beta$; a small change in $\beta$ would be compensated by changes in $\theta_p$ and would not affect our results qualitatively. On the other hand, \citet{Beskin17} analytically obtained transverse jet distributions of the electron number density and flow velocity. Using them in simulations of the transverse polarization properties could affect particular parts of the distributions, but owing to the limited angular resolution such features may remain undetected in VLBI observations.
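The relative strength of the two dependences of the Doppler factor can be seen from a short numerical sketch (our own illustration; the values of $\beta$ and $\theta_p$ are assumptions):

\begin{verbatim}
import numpy as np

# Doppler factor: delta = 1 / (Gamma * (1 - beta * cos(theta_p))).
# At the small viewing angles typical of blazars, delta reacts much
# more strongly to theta_p than to comparable changes in beta.

def doppler(beta, theta_p_deg):
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(np.deg2rad(theta_p_deg))))

for th in (1.0, 2.0, 5.0, 10.0):      # vary the viewing angle
    print(f"beta = 0.995, theta_p = {th:4.1f} deg:"
          f" delta = {doppler(0.995, th):5.2f}")
for b in (0.990, 0.995, 0.998):       # vary the speed at a fixed angle
    print(f"beta = {b:.3f}, theta_p =  5.0 deg:"
          f" delta = {doppler(b, 5.0):5.2f}")
\end{verbatim}

Varying $\theta_p$ from $1^\circ$ to $10^\circ$ changes $\delta$ by a factor of a few, whereas even a noticeable change in $\beta$ at a fixed angle shifts $\delta$ only slightly.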
The strong influence of the Doppler factor on the polarization properties can be traced both within a particular parameter set and between distributions for different $\theta_0$. When matching the simulation results to a specific case, the beam size must be taken into account. Moreover, when comparing model and observed transverse distributions, it is important to bear in mind that we cannot determine whether we detect the entire jet width or only its central, brightest part. It is noteworthy that, under the simplest assumptions, namely a homogeneous distribution of emitting electrons and a constant velocity of matter in the jet, all four observed transverse distributions, different for each considered object, are qualitatively well reproduced. The only discrepancy is an overestimation of the polarized intensity and PD by a factor of 1.5--2 for all three sources. It could be a consequence of the assumption of the same electron number density in the jet at any distance from its axis, or it could be caused by changes in other parameters, for example, a sheath width and velocity different from those considered here, or some deviation of the field from the toroidal and poloidal configurations in the spine and sheath, respectively. All of the above is the subject of future research.

\section{Conclusions}
\label{sec:conc}

We have simulated the transverse distributions of the linear polarization properties of parsec-scale AGN jets. We used various configurations of a strictly ordered magnetic field and accounted for the curved shape of jets and the non-radial motion of their segments. The main conclusions are as follows.

1) The Stokes $U$ is zero only if the local jet axis coincides with the direction of motion. In this case, the EV is strictly either perpendicular or parallel to the jet. Otherwise, the deviations of EVs from the local jet axis span the range from $0^\circ$ to $90^\circ$.

2) Both the helical field and the ``spine-sheath'' structure can reproduce the basic forms of the observed transverse distributions, namely, one- and two-peaked polarized intensity, U- and W-shaped distributions of the polarization degree, and longitudinal and transverse EV directions. At the same time, both quantitative and qualitative changes in the transverse distributions are possible solely by changing the Doppler factor of the jet segments.

3) Longitudinal EVs on one jet edge and transverse ones on the other can only be reproduced with the helical \textbf{B}-field. In this case, the angle between the \textbf{B}-field and the local jet axis in the reference frame of the source, $\psi^\prime$, can lie in a wide range of values. For fixed $\psi^\prime$ and \textbf{B}-field rotation direction, the jet sides on which the longitudinal and transverse EVs occur also depend on the geometric and kinematic parameters of the outflow.

4) To determine the \textbf{B}-field configuration reliably, an analysis of the distributions of all polarization parameters is necessary.

5) The model parameters at which the observed and theoretical transverse distributions of total intensity and linear polarization properties agree for the 0333+321, 0836+710, and 1611+343 jets lie in narrow ranges of values. This indicates that the study of polarization is a powerful tool for probing and determining the physical, kinematic, and geometric parameters of AGN jets.
The model, though, has difficulty reproducing one type of transverse profile often observed in BL Lacertae objects: the quasi-constant cuts of fractional polarization.

6) The obtained agreement between the model and observed transverse distributions of the polarization properties indicates a well-ordered global magnetic field associated with parsec-scale AGN jets.

\section*{Acknowledgements}

This study was supported by the Russian Science Foundation: project 21-12-00241. This research has made use of data from the MOJAVE database that is maintained by the MOJAVE team \citep{Lister18}.

\section*{Data Availability}

All simulated distributions are available at \url{ftp://jet.asc.rssi.ru/outgoing/pushkarev/transverse_cuts}.

\bibliographystyle{mnras}